Cohee | fedc3b887f | Add llama2 tokenizer for OpenRouter models | 2023-11-05 21:54:19 +02:00
Cohee | c2ba3a773a | Delayed tokenizers initialization | 2023-10-25 00:32:49 +03:00
Cohee | b167eb9e22 | Add raw token ids support to OAI logit bias. Fix token counting for turbo models | 2023-10-19 13:37:08 +03:00
Cohee | bfdd071001 | Move tokenizer endpoint and functions to separate file | 2023-09-16 18:48:06 +03:00
Cohee | 853736fa93 | Remove legacy NovelAI models | 2023-09-06 14:32:06 +03:00
Cohee | 267d0eb16f | Fix API tokenizers usage with kcpp | 2023-09-01 02:57:35 +03:00
Cohee | 3b4e6f0b78 | Add debug functions menu | 2023-08-27 23:20:43 +03:00
Cohee | 0844374de5 | Remove old GPT-2 tokenizer. Redirect to tiktoken's tokenizer | 2023-08-27 22:14:39 +03:00
Cohee | 9660aaa2c2 | Add NovelAI hypebot plugin | 2023-08-27 18:27:34 +03:00
Cohee | c91ab3b5e0 | Add Kobold tokenization to best match logic. Fix not being able to stop group chat regeneration | 2023-08-24 21:23:35 +03:00
Cohee | ab52af4fb5 | Add support for Koboldcpp tokenization endpoint | 2023-08-24 20:19:57 +03:00
Cohee | e77da62b85 | Add padding to cache key. Fix Safari display issues. Fix 400 on empty translate. Reset bias cache on changing model. | 2023-08-23 10:32:48 +03:00
Cohee | bc5fc67906 | Put tokenizer functions to a separate file. Cache local models token counts | 2023-08-23 02:38:43 +03:00