Description
Elasticsearch version 7.7.0
Steps to reproduce:
When I try to create an index with the following request body:

```json
{
  "settings": {
    "index": {
      "number_of_shards": 3,
      "number_of_replicas": 0
    },
    "analysis": {
      "analyzer": {
        "text": {
          "tokenizer": "my_tokenizer",
          "filter": [ "lowercase", "trim", "snowball", "unicode_truncate" ],
          "char_filter": [ "html_strip", "quotes" ]
        },
        "custom_analyzer": {
          "tokenizer": "my_tokenizer",
          "filter": [ "lowercase", "trim", "unicode_truncate" ],
          "char_filter": [ "html_strip", "quotes" ]
        },
        "analyzer_keyword": {
          "tokenizer": "keyword",
          "filter": [ "lowercase", "trim", "unicode_truncate" ]
        }
      },
      "normalizer": {
        "keyword_normalizer": {
          "type": "custom",
          "filter": [ "lowercase", "trim", "unicode_truncate" ]
        }
      },
      "filter": {
        "unicode_truncate": {
          "type": "length",
          "max": 8191
        }
      },
      "char_filter": {
        "quotes": {
          "mappings": [ "\\u0091=>'", "\\u0092=>'", "\\u2018=>'", "\\u2019=>'", "\\uf0b7=>\\u0020" ],
          "type": "mapping"
        }
      },
      "tokenizer": {
        "my_tokenizer": {
          "type": "pattern",
          "pattern": "[.;:\\s]*[ ,!?;\n\t\r]"
        }
      }
    }
  },
  "mappings": {
    "dynamic": false,
    "properties": {
      "Status": {
        "type": "text",
        "analyzer": "text",
        "fields": {
          "raw": { "type": "text", "analyzer": "analyzer_keyword" }
        }
      },
      "Ts": { "type": "long" },
      "AddedBy": { "type": "integer" },
      "AddedOn": {
        "type": "date",
        "format": "strict_date_optional_time||epoch_millis"
      },
      "Address1": { "type": "text", "analyzer": "analyzer_keyword" },
      "Address2": { "type": "text", "analyzer": "analyzer_keyword" },
      "City": {
        "type": "text",
        "analyzer": "custom_analyzer",
        "fields": {
          "raw": { "type": "text", "analyzer": "analyzer_keyword" }
        }
      }
    }
  }
}
```
it fails with the following error:

```json
{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "Custom normalizer [keyword_normalizer] may not use filter [unicode_truncate]"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "Custom normalizer [keyword_normalizer] may not use filter [unicode_truncate]"
  },
  "status": 400
}
```
I believe there is no reason why the `length` filter shouldn't work in a normalizer: a normalizer emits a single token, so a token-length limit still makes sense there.
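For reference, the mappings and the custom analyzers are not needed to trigger the rejection; a stripped-down settings body with just the normalizer and the `length` filter should reproduce the same 400 response (the index name and the `keyword_normalizer`/`unicode_truncate` names below are only illustrative):

```json
{
  "settings": {
    "analysis": {
      "filter": {
        "unicode_truncate": {
          "type": "length",
          "max": 8191
        }
      },
      "normalizer": {
        "keyword_normalizer": {
          "type": "custom",
          "filter": [ "lowercase", "trim", "unicode_truncate" ]
        }
      }
    }
  }
}
```

Sending this via `PUT /test_normalizer` returns the same `Custom normalizer [keyword_normalizer] may not use filter [unicode_truncate]` error, which suggests the restriction is on the `length` filter type itself rather than anything else in the original settings.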