Request for a pre-tokenizer that creates words based on length alone #1697

@filbeofITK

Description

Hello! I would like to request a fast pre-tokenizer that only splits the input into contiguous segments of a pre-defined length. I know this is not a common need in NLP, but it is necessary for my use case: I'm processing DNA data, which has no spaces or separators of any kind, so I want to use fixed-length tokens.

For someone who actually knows Rust and the backend, implementing this would probably take less than half an hour, but I don't want to learn a new language for this.
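For reference, the requested behavior is simple to sketch in plain Python (illustrative only; the function name and the 6-mer chunk size are my own choices, not part of the `tokenizers` API):

```python
def fixed_length_pretokenize(sequence: str, k: int) -> list[str]:
    """Split a string into contiguous chunks of length k.
    The final chunk may be shorter when len(sequence) is not
    a multiple of k."""
    return [sequence[i:i + k] for i in range(0, len(sequence), k)]

# Example: 6-mer "tokens" from a DNA fragment
print(fixed_length_pretokenize("ACGTACGTACGT", 6))  # ['ACGTAC', 'GTACGT']
```

As a possible workaround in the meantime, the existing `tokenizers.pre_tokenizers.Split` pre-tokenizer accepts a `Regex` pattern, so a pattern such as `.{1,6}` with `behavior="isolated"` may already produce fixed-length segments without any Rust changes (untested suggestion, not an officially documented recipe).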

Biggest thanks!

Metadata

Assignees

No one assigned

    Labels

    No labels

    Type

    No type

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests
