These tokenizers use Lucene's fast DFA implementation for creating tokens, and were added in 6.5.0 (specifically, in the latest Lucene upgrade: #23362).
I won't have time to work on this, but I wanted to get the issue up so we don't forget.
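For context on what "DFA-based tokenization" means here: a minimal, illustrative sketch (not Lucene's actual implementation, which compiles the configured pattern into an automaton) is a hand-written two-state machine that scans the input once and emits maximal runs of word characters as tokens.

```python
# Illustrative sketch only: Lucene builds a DFA from the configured
# pattern; this hand-written two-state automaton stands in for that
# machinery, emitting maximal runs of [A-Za-z0-9_] as tokens.

def dfa_tokenize(text):
    """Single pass over the input; state tracks whether we are inside a token."""
    tokens = []
    state = "OUTSIDE"  # OUTSIDE a token, or INSIDE one
    start = 0
    for i, ch in enumerate(text):
        is_token_char = ch.isalnum() or ch == "_"
        if state == "OUTSIDE" and is_token_char:
            state, start = "INSIDE", i      # token begins here
        elif state == "INSIDE" and not is_token_char:
            tokens.append(text[start:i])    # token ends before ch
            state = "OUTSIDE"
    if state == "INSIDE":
        tokens.append(text[start:])         # flush a trailing token
    return tokens

print(dfa_tokenize("fast DFA-based token_izers!"))
# → ['fast', 'DFA', 'based', 'token_izers']
```

The point of the DFA approach is this single linear scan with constant work per character, which is what makes these tokenizers fast compared to regex engines that may backtrack.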