Detokenizer fixes #8039
base: master
Conversation
Initial detokenizer state:
Real initial state:
- Add detokenizer checks.
- New generator: ascii_lr_strip.
- New generator: apostrophe.
- Add more vocabs files.
Brute force encoding and decoding tests (number of errors):
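The brute-force roundtrip testing described above can be sketched as follows. This is a hypothetical harness, not the PR's actual test code; `toy_tokenize`/`toy_detokenize` are deliberately lossy stand-ins for the real llama.cpp bindings:

```python
def brute_force_roundtrip(tokenize, detokenize, texts, max_errors=10):
    """Collect inputs whose encode/decode roundtrip does not reproduce the text."""
    errors = []
    for text in texts:
        out = detokenize(tokenize(text))
        if out != text:
            errors.append((text, out))
            if len(errors) >= max_errors:  # report at most N errors, as in the tables
                break
    return errors

# Toy tokenizer to exercise the harness: split() collapses whitespace,
# so any run of multiple spaces is lost on the roundtrip.
vocab = {}
def toy_tokenize(s):
    return [vocab.setdefault(w, len(vocab)) for w in s.split()]
def toy_detokenize(ids):
    inv = {i: w for w, i in vocab.items()}
    return ' '.join(inv[i] for i in ids)

errs = brute_force_roundtrip(toy_tokenize, toy_detokenize, ['a b', 'a  b'])
# 'a b' roundtrips cleanly; 'a  b' comes back as 'a b' and is flagged
```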
Improvements:
I gave it a test run using phi3-mini-instruct
Now I ran the same through llama.cpp tokenization:
Update: I mimicked the Python tokenizer by adding that into llama.cpp:
right before
Update 2:
That would result in identical tokenization:
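The behavior being mimicked above is SentencePiece-style input normalization, which prepends a "dummy prefix" space before matching pieces. A minimal sketch, assuming the standard SPM convention of mapping spaces to U+2581 ('▁'); this is an illustration, not the llama.cpp implementation:

```python
def spm_pretok(text):
    """Sketch of SentencePiece-style input normalization: prepend a dummy
    prefix space so the first word tokenizes like a mid-sentence ' word',
    then map spaces to U+2581 ('▁') before piece matching."""
    if not text.startswith(' '):
        text = ' ' + text
    return text.replace(' ', '\u2581')
```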
Useful when automating tests:
- If you don't know the vocab type in advance.
- Differentiate other loading errors.

Using exit() throws random exceptions.
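One way to get both behaviors (distinguishable loading errors, no stray `SystemExit` from library code) is to raise a dedicated exception instead of calling `exit()`. The names and metadata keys below are illustrative assumptions, not the PR's actual API:

```python
class VocabLoadError(Exception):
    """Raised instead of calling exit(), so an automated test harness can
    tell 'unknown vocab type' apart from other loading failures."""

def detect_vocab_type(metadata):
    # `metadata` is a hypothetical dict of GGUF key/value pairs.
    vocab_type = metadata.get('tokenizer.ggml.model')
    if vocab_type not in ('llama', 'gpt2', 'bert'):  # SPM, BPE, WPM
        raise VocabLoadError(f'unknown vocab type: {vocab_type!r}')
    return vocab_type
```

A harness can then catch `VocabLoadError` specifically and skip the model, while letting genuine failures propagate.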
UNKNOWN and CONTROL are 'special pieces'.
- Remove space after UNKNOWN and CONTROL.
- Refactor llama_token_to_piece().
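The idea behind the refactor can be sketched in Python as a toy model (the real `llama_token_to_piece()` is C++ and handles more attribute types; the attribute values here are made up):

```python
NORMAL, UNKNOWN, CONTROL = 0, 1, 2  # toy attribute values, not llama.cpp's enum

def token_to_piece(text, attr, special=False):
    """Toy sketch of the behavior described above: UNKNOWN and CONTROL
    tokens are 'special pieces' rendered verbatim with no space handling;
    normal pieces map '▁' (U+2581) back to a space."""
    if attr in (UNKNOWN, CONTROL):
        return text if special else ''  # only rendered when special tokens are requested
    return text.replace('\u2581', ' ')
```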
The models baichuan, falcon and mpt have tokenization errors, so detokenization fails too.
Not all special tokens, see the attributes:

```json
{
  "id": 32007,
  "content": "<|end|>",
  "single_word": false,
  "lstrip": false,
  "rstrip": true,
  "normalized": false,
  "special": true
}
```

You can see lines 5202 to 5217 in e112b61.
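The `rstrip: true` attribute above means whitespace immediately after `<|end|>` is consumed at encode time, so it cannot be recovered on detokenization. A sketch of how such attributes act (illustrative only; HF tokenizers implement this internally via `AddedToken`):

```python
def split_on_special(text, token, lstrip=False, rstrip=False):
    """Split `text` around a special `token`, applying the token's
    lstrip/rstrip attributes: adjacent whitespace is consumed at encode
    time, so the detokenizer cannot restore it."""
    parts = text.split(token)
    out = []
    for i, part in enumerate(parts):
        if lstrip and i < len(parts) - 1:  # token eats whitespace before it
            part = part.rstrip()
        if rstrip and i > 0:               # token eats whitespace after it
            part = part.lstrip()
        out.append(part)
    return out
```

With `rstrip=True`, `'Hi <|end|> there'` splits into `['Hi ', 'there']`: the space after the token is gone, so rejoining gives `'Hi <|end|>there'`.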
I tried your example and got another result!
@jaime-m-p
I'll repeat the tests after fixing those issues and reverting my changes. Given your results, that's promising. It's troublesome that such errors can sneak into a model so easily and are very hard to notice, and even harder to fix without blindly recreating the model from the originals.
Hi @jaime-m-p and @cmp-nct, really grateful you both are looking into this! I'm traveling without reliable access to a computer at the moment, but wanted to ask if these fixes now keep stability on retokenization with Phi-3 (i.e. the roundtrip of text -> tokens -> text -> tokens results in the same tokens). The constant whitespace insertion on each cycle was causing serious kv-cache reuse issues on our side and I'm really hopeful that this update resolves it!
- Detokenize special tokens.
- Replace errors with '\uFFFD' when detokenizing to 'utf-8'.
- More edge cases.
- Better detokenization results check.
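The '\uFFFD' replacement above matches Python's `errors='replace'` decoding behavior, which is useful when comparing detokenizer output against a reference:

```python
# A token boundary can split a multi-byte UTF-8 sequence; decoding the
# partial bytes with errors='replace' yields U+FFFD instead of raising.
bad = b'hello \xe2\x82'  # truncated 3-byte sequence (first bytes of a symbol)
text = bad.decode('utf-8', errors='replace')
assert text == 'hello \ufffd'
```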
Overall current tokenize and detokenize state. WPM models (bert-bge, jina-v2-en) are still broken, probably due to the unicode NFD normalization. BPE models qwen2, olmo and mpt are probably failing due to the missing unicode NFC normalization. All BPE and SPM models seem to detokenize properly. Each cell shows the number of tokenization and detokenization errors (up to 10). An empty cell means 0 errors.
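The NFC/NFD distinction blamed above is easy to see with the standard library; a tokenizer that skips the normalization its vocab was trained with sees a different byte sequence and tokenizes differently:

```python
import unicodedata

s_nfc = 'caf\u00e9'                          # 'café' with precomposed U+00E9
s_nfd = unicodedata.normalize('NFD', s_nfc)  # 'e' followed by combining U+0301
assert s_nfd == 'cafe\u0301'
assert unicodedata.normalize('NFC', s_nfd) == s_nfc
# Same rendered text, different code points: byte-level merges diverge
# unless encode applies the same normalization the vocab was built with.
```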
AutoTokenizer is not completing this roundtrip either for some models.

llama-bpe:

```
' \x00z \x07z \x0ez \x15z \x1cz z !z "z $z %z &z (z )z *z +z ,z -'  # input text
'<|begin_of_text|> \x00z \x07z \x0ez \x15z \x1cz z!z "z $z %z &z (z )z *z +z,z -'  # AutoTokenizer
'<|begin_of_text|> \x00z \x07z \x0ez \x15z \x1cz z!z "z $z %z &z (z )z *z +z,z -'  # Llama.cpp
```

phi-3:

```
' \x00z \x07z \x0ez \x15z \x1cz z !z "z $z %z &z (z )z *z +z ,z -'  # input text
'<s> \x00z \x07z \x0ez \x15z \x1cz z !z "z $z %z &z (z )z *z +z ,z -'  # AutoTokenizer
'<s> \x00z \x07z \x0ez \x15z \x1cz z !z "z $z %z &z (z )z *z +z ,z -'  # Llama.cpp
```

llama-bpe removes spaces before some punctuation characters, so re-tokenization is different. Probably a few models can achieve this, but information can be lost in tokenization (normalization, lstrip, rstrip, etc).
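Even when normalization loses information on the first pass, the roundtrip can stabilize afterwards, because the detokenized text is already in normalized form. A toy sketch of that idempotence (the lossy normalization here is a stand-in, not any model's actual rule):

```python
def normalize(text):
    # Toy lossy normalization: collapse whitespace runs (stands in for the
    # space-before-punctuation handling described above).
    return ' '.join(text.split())

def tokenize(text):
    return normalize(text).split(' ')

def detokenize(tokens):
    return ' '.join(tokens)

t1 = tokenize('a  b ,c')   # first cycle: normalization changes the text
text2 = detokenize(t1)
t2 = tokenize(text2)       # second cycle: already normalized, stable
assert t1 == t2            # idempotent after the first pass
```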
Hmm, great point. I think what I'm really hoping for is eventual stability on the second or third tokenize/detokenize cycles -- before your PR, Phi-3 had the problem of constantly changing the token_id at index 1 (due to growing spaces), which really caused issues. I think this set of changes is good enough to solve most of our problems :).
This PR tries to solve the most common problems with detokenization (i.e. spaces after special tokens).
Related issues: #8023, #7938.