Preconfigured edge_ngram tokenizer has incorrect defaults #43582
Labels: :Search Relevance/Analysis, Team:Search Relevance
The docs state that the edge_ngram tokenizer defaults to min_gram: 1 and max_gram: 2.

This is correct if you define a new tokenizer of type edge_ngram, like so:
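A minimal index definition of that form might look like the following sketch (the index, tokenizer, and analyzer names are illustrative, not from the original report):

```
PUT my-index
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "my_edge_ngram": {
          "type": "edge_ngram"
        }
      },
      "analyzer": {
        "my_analyzer": {
          "tokenizer": "my_edge_ngram"
        }
      }
    }
  }
}
```

With a tokenizer defined this way, the defaults documented above (min_gram: 1, max_gram: 2) apply.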
However, if you instead use the pre-configured edge_ngram tokenizer, you only get ngrams of size 1.

We should change the preconfigured tokenizer to correspond to the documentation.
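The discrepancy can be observed with the _analyze API (the sample text "quick" is illustrative):

```
POST _analyze
{
  "tokenizer": "edge_ngram",
  "text": "quick"
}
```

Per this report, the pre-configured tokenizer emits only the single-character gram "q" here, whereas the documented defaults (min_gram: 1, max_gram: 2) would also produce the two-character gram "qu".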