Merge branch 'main' into correct-json-extension-for-sample-ingest-data
hainenber committed Sep 22, 2024
2 parents 02c993c + 7d3a3f5 commit b9138d9
Showing 104 changed files with 3,247 additions and 357 deletions.
5 changes: 4 additions & 1 deletion .github/vale/styles/Vocab/OpenSearch/Words/accept.txt
@@ -77,8 +77,9 @@ Levenshtein
[Mm]ultivalued
[Mm]ultiword
[Nn]amespace
[Oo]versamples?
[Oo]ffline
[Oo]nboarding
[Oo]versamples?
pebibyte
p\d{2}
[Pp]erformant
@@ -102,8 +103,10 @@ p\d{2}
[Rr]eenable
[Rr]eindex
[Rr]eingest
[Rr]eprovision(ed|ing)?
[Rr]erank(er|ed|ing)?
[Rr]epo
[Rr]escor(e|ed|ing)?
[Rr]ewriter
[Rr]ollout
[Rr]ollup
1 change: 1 addition & 0 deletions _about/version-history.md
@@ -9,6 +9,7 @@ permalink: /version-history/

OpenSearch version | Release highlights | Release date
:--- | :--- | :---
[2.17.0](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.17.0.md) | Includes disk-optimized vector search, binary quantization, and byte vector encoding in k-NN. Adds asynchronous batch ingestion for ML tasks. Provides search and query performance enhancements and a new custom trace source in trace analytics. Includes application-based configuration templates. For a full list of release highlights, see the Release Notes. | 17 September 2024
[2.16.0](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.16.0.md) | Includes built-in byte vector quantization and binary vector support in k-NN. Adds new sort, split, and ML inference search processors for search pipelines. Provides application-based configuration templates and additional plugins to integrate multiple data sources in OpenSearch Dashboards. Includes an experimental Batch Predict ML Commons API. For a full list of release highlights, see the Release Notes. | 06 August 2024
[2.15.0](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.15.0.md) | Includes parallel ingestion processing, SIMD support for exact search, and the ability to disable doc values for the k-NN field. Adds wildcard and derived field types. Improves performance for single-cardinality aggregations, rolling upgrades to remote-backed clusters, and more metrics for top N queries. For a full list of release highlights, see the Release Notes. | 25 June 2024
[2.14.0](https://github.com/opensearch-project/opensearch-build/blob/main/release-notes/opensearch-release-notes-2.14.0.md) | Includes performance improvements to hybrid search and date histogram queries with multi-range traversal, ML model integration within the Ingest API, semantic cache for LangChain applications, low-level vector query interface for neural sparse queries, and improved k-NN search filtering. Provides an experimental tiered cache feature. For a full list of release highlights, see the Release Notes. | 14 May 2024
135 changes: 135 additions & 0 deletions _analyzers/token-filters/asciifolding.md
@@ -0,0 +1,135 @@
---
layout: default
title: ASCII folding
parent: Token filters
nav_order: 20
---

# ASCII folding token filter

The `asciifolding` token filter converts non-ASCII characters to their closest ASCII equivalents. For example, *é* becomes *e*, *ü* becomes *u*, and *ñ* becomes *n*. This process is known as *transliteration*.


The `asciifolding` token filter offers a number of benefits:

- **Enhanced search flexibility**: Users often omit accents or special characters when entering queries. The `asciifolding` token filter ensures that such queries still return relevant results.
- **Normalization**: Standardizes the indexing process by ensuring that accented characters are consistently converted to their ASCII equivalents.
- **Internationalization**: Particularly useful for applications that involve multiple languages and character sets.

While the `asciifolding` token filter can simplify searches, it may also lead to the loss of specific information, particularly if the distinction between accented and non-accented characters in the dataset is significant.
{: .warning}

## Parameters

You can configure the `asciifolding` token filter using the `preserve_original` parameter. Setting this parameter to `true` keeps both the original token and its ASCII-folded version in the token stream. This can be particularly useful when you want to match both the original (with accents) and the normalized (without accents) versions of a term in a search query. Default is `false`.

## Example

The following example request creates a new index named `example_index` and defines an analyzer with the `asciifolding` filter and `preserve_original` parameter set to `true`:

```json
PUT /example_index
{
  "settings": {
    "analysis": {
      "filter": {
        "custom_ascii_folding": {
          "type": "asciifolding",
          "preserve_original": true
        }
      },
      "analyzer": {
        "custom_ascii_analyzer": {
          "type": "custom",
          "tokenizer": "standard",
          "filter": [
            "lowercase",
            "custom_ascii_folding"
          ]
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /example_index/_analyze
{
"analyzer": "custom_ascii_analyzer",
"text": "Résumé café naïve coördinate"
}
```
{% include copy-curl.html %}

The response contains the generated tokens:

```json
{
  "tokens": [
    {
      "token": "resume",
      "start_offset": 0,
      "end_offset": 6,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "résumé",
      "start_offset": 0,
      "end_offset": 6,
      "type": "<ALPHANUM>",
      "position": 0
    },
    {
      "token": "cafe",
      "start_offset": 7,
      "end_offset": 11,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "café",
      "start_offset": 7,
      "end_offset": 11,
      "type": "<ALPHANUM>",
      "position": 1
    },
    {
      "token": "naive",
      "start_offset": 12,
      "end_offset": 17,
      "type": "<ALPHANUM>",
      "position": 2
    },
    {
      "token": "naïve",
      "start_offset": 12,
      "end_offset": 17,
      "type": "<ALPHANUM>",
      "position": 2
    },
    {
      "token": "coordinate",
      "start_offset": 18,
      "end_offset": 28,
      "type": "<ALPHANUM>",
      "position": 3
    },
    {
      "token": "coördinate",
      "start_offset": 18,
      "end_offset": 28,
      "type": "<ALPHANUM>",
      "position": 3
    }
  ]
}
```
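
Because `preserve_original` is set to `true`, both the folded and the original tokens are kept, so a search can match either form. As an illustrative sketch (the `title` field is an assumption and is not part of the preceding example), you could map a text field to the custom analyzer:

```json
PUT /example_index/_mapping
{
  "properties": {
    "title": {
      "type": "text",
      "analyzer": "custom_ascii_analyzer"
    }
  }
}
```
{% include copy-curl.html %}

With this mapping, a document whose `title` contains `café` should be returned by a match query for either `cafe` or `café` because both token forms are present in the index.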


160 changes: 160 additions & 0 deletions _analyzers/token-filters/cjk-bigram.md
@@ -0,0 +1,160 @@
---
layout: default
title: CJK bigram
parent: Token filters
nav_order: 30
---

# CJK bigram token filter

The `cjk_bigram` token filter is designed specifically for processing East Asian languages, such as Chinese, Japanese, and Korean (CJK), which typically don't use spaces to separate words. A bigram is a sequence of two adjacent elements in a string of tokens, which can be characters or words. For example, the string 東京タワー yields the character bigrams 東京, 京タ, タワ, and ワー. For CJK languages, bigrams help approximate word boundaries and capture significant character pairs that can convey meaning.


## Parameters

The `cjk_bigram` token filter can be configured with two parameters: `ignored_scripts` and `output_unigrams`.

### `ignored_scripts`

The `cjk_bigram` token filter ignores all non-CJK scripts (writing systems like Latin or Cyrillic) and tokenizes only CJK text into bigrams. Use this option to specify CJK scripts that the filter should also ignore. This option takes the following valid values:

- `han`: The `han` script processes Han characters. [Han characters](https://simple.wikipedia.org/wiki/Chinese_characters) are logograms used in the written languages of China, Japan, and Korea. The filter can help with text processing tasks like tokenizing, normalizing, or stemming text written in Chinese, Japanese kanji, or Korean Hanja.

- `hangul`: The `hangul` script processes Hangul characters, which are unique to the Korean language and do not exist in other East Asian scripts.

- `hiragana`: The `hiragana` script processes hiragana, one of the two syllabaries used in the Japanese writing system.
Hiragana is typically used for native Japanese words, grammatical elements, and certain forms of punctuation.

- `katakana`: The `katakana` script processes katakana, the other Japanese syllabary.
Katakana is mainly used for foreign loanwords, onomatopoeia, scientific names, and certain Japanese words.


### `output_unigrams`

This option, when set to `true`, outputs both unigrams (single characters) and bigrams. Default is `false`.

## Example

The following example request creates a new index named `cjk_bigram_example` and defines an analyzer with a `cjk_bigram` filter whose `ignored_scripts` parameter is set to `katakana`:

```json
PUT /cjk_bigram_example
{
  "settings": {
    "analysis": {
      "analyzer": {
        "cjk_bigrams_no_katakana": {
          "tokenizer": "standard",
          "filter": [ "cjk_bigrams_no_katakana_filter" ]
        }
      },
      "filter": {
        "cjk_bigrams_no_katakana_filter": {
          "type": "cjk_bigram",
          "ignored_scripts": [
            "katakana"
          ],
          "output_unigrams": true
        }
      }
    }
  }
}
```
{% include copy-curl.html %}

## Generated tokens

Use the following request to examine the tokens generated using the analyzer:

```json
POST /cjk_bigram_example/_analyze
{
  "analyzer": "cjk_bigrams_no_katakana",
  "text": "東京タワーに行く"
}
```
{% include copy-curl.html %}

The sample text "東京タワーに行く" consists of the following parts:

- 東京 (Kanji for "Tokyo")
- タワー (Katakana for "Tower")
- に行く (Hiragana and Kanji for "go to")

The response contains the generated tokens:

```json
{
  "tokens": [
    {
      "token": "東",
      "start_offset": 0,
      "end_offset": 1,
      "type": "<SINGLE>",
      "position": 0
    },
    {
      "token": "東京",
      "start_offset": 0,
      "end_offset": 2,
      "type": "<DOUBLE>",
      "position": 0,
      "positionLength": 2
    },
    {
      "token": "京",
      "start_offset": 1,
      "end_offset": 2,
      "type": "<SINGLE>",
      "position": 1
    },
    {
      "token": "タワー",
      "start_offset": 2,
      "end_offset": 5,
      "type": "<KATAKANA>",
      "position": 2
    },
    {
      "token": "に",
      "start_offset": 5,
      "end_offset": 6,
      "type": "<SINGLE>",
      "position": 3
    },
    {
      "token": "に行",
      "start_offset": 5,
      "end_offset": 7,
      "type": "<DOUBLE>",
      "position": 3,
      "positionLength": 2
    },
    {
      "token": "行",
      "start_offset": 6,
      "end_offset": 7,
      "type": "<SINGLE>",
      "position": 4
    },
    {
      "token": "行く",
      "start_offset": 6,
      "end_offset": 8,
      "type": "<DOUBLE>",
      "position": 4,
      "positionLength": 2
    },
    {
      "token": "く",
      "start_offset": 7,
      "end_offset": 8,
      "type": "<SINGLE>",
      "position": 5
    }
  ]
}
```
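
For comparison, you can run the built-in `cjk_bigram` token filter with its default settings (no ignored scripts and `output_unigrams` set to `false`) on the same text. The following request is a minimal sketch that calls the `_analyze` API directly; with the defaults, the response should contain only bigrams, including bigrams formed from the katakana characters:

```json
POST /_analyze
{
  "tokenizer": "standard",
  "filter": [ "cjk_bigram" ],
  "text": "東京タワーに行く"
}
```
{% include copy-curl.html %}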

