Commit Graph

1220 Commits

Author SHA1 Message Date
henk717 025db3bd04
Merge pull request #138 from VE-FORBRYDERNE/lazy-loader
Fix for lazy loader in PyTorch 1.12
2022-07-12 23:02:58 +02:00
henk717 836759d826
Merge pull request #137 from VE-FORBRYDERNE/jaxlib
TPU Colab hotfix
2022-07-12 23:02:40 +02:00
vfbd 39d48495ce Fix for lazy loader in PyTorch 1.12
There is no `torch._StorageBase` in PyTorch 1.12, but otherwise it still
works.
2022-07-12 16:48:01 -04:00
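For context, a minimal sketch of the kind of version guard such a fix typically needs, assuming the lazy loader only uses the storage class for isinstance-style checks; the fallback attribute names below are guesses, not the exact upstream change:

```python
import torch

# Resolve a storage base class across PyTorch versions.  `torch._StorageBase`
# exists in older releases but is gone in PyTorch 1.12, so fall back to a
# newer storage type if one is present (candidate names are assumptions).
_STORAGE_CANDIDATES = ("_StorageBase", "TypedStorage", "_TypedStorage")

STORAGE_BASE = None
for _name in _STORAGE_CANDIDATES:
    STORAGE_BASE = getattr(torch, _name, None) or getattr(torch.storage, _name, None)
    if STORAGE_BASE is not None:
        break

def is_torch_storage(obj) -> bool:
    # The lazy loader only needs this for isinstance checks.
    return STORAGE_BASE is not None and isinstance(obj, STORAGE_BASE)
```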
vfbd 70aa182671 Restrict jaxlib version in TPU Colabs 2022-07-12 16:30:26 -04:00
henk717 dd6da50e58
Merge pull request #136 from VE-FORBRYDERNE/opt
Fix base OPT-125M and finetuned OPT models in Colab TPU instances
2022-07-05 21:35:39 +02:00
vfbd 2a78b66932 Fix base OPT-125M and finetuned OPT models in Colab TPU instances 2022-07-05 15:28:58 -04:00
vfbd c94f875608 Fix Z algorithm in basic phrase bias script 2022-07-05 14:43:58 -04:00
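For reference, a standard Z algorithm in Python (a generic implementation, not the repository's phrase bias script): z[i] is the length of the longest substring starting at i that matches a prefix of the string.

```python
def z_array(s: str) -> list[int]:
    # z[i] = length of the longest common prefix of s and s[i:].
    n = len(s)
    z = [0] * n
    if n:
        z[0] = n
    l = r = 0
    for i in range(1, n):
        if i < r:
            z[i] = min(r - i, z[i - l])
        while i + z[i] < n and s[z[i]] == s[i + z[i]]:
            z[i] += 1
        if i + z[i] > r:
            l, r = i, i + z[i]
    return z

assert z_array("aabxaab") == [7, 1, 0, 0, 3, 1, 0]
```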
Henk d041ec0921 Safer defaults and more flexibility
There have been a lot of reports from newer users who experience AI breakdown because not all models properly handle 2048 max tokens. 1024 is the only value that all models support and was the original value KoboldAI used. This commit reverts the decision to increase the default to 2048; existing configurations are not affected. Users who wish to increase the max tokens can still do so themselves. Most models handle up to 1900 well (the GPT2 models are excluded), and for many you can go all the way. (It is currently not known why some finetunes cause a decrease in max token support.)

In addition, this commit addresses a request for more consistent slider behavior, allowing the sliders to be adjusted in 0.01 increments instead of some sliders being capped to 0.05 steps.
2022-07-03 15:07:54 +02:00
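A minimal sketch of the described defaults, assuming a simple settings dictionary; the names below are illustrative, not the exact identifiers in the KoboldAI source:

```python
# Illustrative defaults only; KoboldAI's real setting names differ.
GENERATION_DEFAULTS = {
    "max_tokens": 1024,          # value every supported model handles (was 2048)
    "max_tokens_ceiling": 2048,  # users can still push the slider higher themselves
    "slider_step": 0.01,         # uniform step instead of capping some sliders at 0.05
}
```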
Henk e2f7fed99f Don't turn gamestarted off 2022-07-02 12:59:14 +02:00
henk717 90c5cebb6d
Merge pull request #134 from VE-FORBRYDERNE/editor
Editor fixes
2022-07-01 20:32:18 +02:00
vfbd c336a43544 Fix some remaining editor whitespace-fixing issues 2022-07-01 13:45:57 -04:00
vfbd c3eade8046 Fix editor bug in iOS when adding newline at end of the last action
Not only does iOS have the same issue that Chromium-based browsers
have, but it also has a different issue where it selects all of the
text in the last chunk of your story, so I added some code to deselect
the text in that case.
2022-07-01 13:12:57 -04:00
vfbd 1ff0a4b9a9 Submit button now waits for inlineedit/inlinedelete commands 2022-06-30 12:23:06 -04:00
vfbd cccf8296fc Fix enter key sometimes putting two newlines in editor
This happens when, in a Chromium-based browser, you try to insert a
newline at the end of the last action of your story.
2022-06-30 12:03:39 -04:00
vfbd ce5f4d3dda Click on blank part of editor to defocus in Chromium-based browsers
In Chromium-based browsers you can now click the blank part of the
editor to submit changes. This maintains consistency with the editor
behaviour in Firefox, which already did this when you clicked on the
blank part of the editor.
2022-06-30 11:31:54 -04:00
vfbd accbaea991 Fix a problem where a story with only the prompt cannot be edited 2022-06-30 11:08:22 -04:00
henk717 856e8d5c86
Merge pull request #133 from VE-FORBRYDERNE/whitespace
Fix the cleanupChunkWhitespace function
2022-06-28 18:22:24 +02:00
vfbd d55b4d9bbc Fix the cleanupChunkWhitespace function 2022-06-28 12:18:57 -04:00
henk717 ad9ec6eaba
Merge pull request #132 from VE-FORBRYDERNE/whitespace-cleanup
Story whitespace cleanup backport
2022-06-26 20:35:50 +02:00
vfbd 6576f5c01d Make sure editor changes are applied before submitting
(cherry picked from commit ae41ad298c)
2022-06-26 14:06:57 -04:00
vfbd ebba79fed6 Remove trailing whitespace from submissions
(cherry picked from commit b99d1449c9)
2022-06-26 14:06:34 -04:00
vfbd 6ba7429eea Don't add sentence spacing if submission is empty
When you retry, it actually sends an empty submission, so if you have
"add sentence spacing" turned on, retrying could add an extra action
containing a single space.

(cherry picked from commit 151407a001)
2022-06-26 14:06:18 -04:00
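A minimal sketch of the guard this commit describes, with hypothetical names for the submission handler and the option flag:

```python
def prepare_submission(submission: str, add_sentence_spacing: bool) -> str:
    # A retry arrives as an empty submission; without the emptiness check,
    # "add sentence spacing" would turn it into an extra action containing
    # only a single space.
    if add_sentence_spacing and submission:
        return " " + submission
    return submission
```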
vfbd 9d09ae5fea Clean up whitespace in the editor as well
(cherry picked from commit 6e138db1c0)
2022-06-26 14:05:04 -04:00
vfbd 2a4d37ce60 Clean up whitespace at the end of actions when loading story
Specifically, we merge blank actions into the next action and we move
whitespace at the end of non-blank actions to the beginning of the next
action.

(cherry picked from commit 4b16600e49)
2022-06-26 14:04:36 -04:00
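A minimal sketch of the rule described above, operating on a plain list of action strings rather than KoboldAI's actual story structure:

```python
def cleanup_action_whitespace(actions: list[str]) -> list[str]:
    cleaned: list[str] = []
    for action in actions:
        # Merge a blank action into the action that follows it.
        if cleaned and cleaned[-1].strip() == "":
            action = cleaned.pop() + action
        cleaned.append(action)
    # Move trailing whitespace of each non-final action to the start of the next one.
    for i in range(len(cleaned) - 1):
        stripped = cleaned[i].rstrip()
        cleaned[i + 1] = cleaned[i][len(stripped):] + cleaned[i + 1]
        cleaned[i] = stripped
    return cleaned
```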
vfbd e30abd209f Preserve whitespace in the editor
(cherry picked from commit 793d788706)
2022-06-26 14:04:16 -04:00
Henk 9e7eb80db4 Nerys V2 part 2 2022-06-25 14:03:19 +02:00
Henk ecc6ee9474 Nerys V2 2022-06-25 13:47:49 +02:00
henk717 a2d2ea0735 Typo Fix 2022-06-25 13:14:27 +02:00
henk717 2495b9380d Nerys V2 2022-06-25 13:08:30 +02:00
Henk 8be0964427 AIDG Import Fix 2022-06-24 18:29:06 +02:00
henk717 27e16aecf2 Model Cleaner (suggested by Shrinkarom) 2022-06-24 14:03:58 +02:00
henk717 c3af92e9af Koboldai.org | New NeoX | Cloudflare 2022-06-24 10:43:06 +02:00
henk717 ec0bc1cc17
Merge pull request #130 from VE-FORBRYDERNE/neox-badwords
GPT-NeoX HF model badwords fix
2022-06-23 21:05:42 +02:00
vfbd 3da885d408 GPT-NeoX HF model badwords fix 2022-06-23 15:02:43 -04:00
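For context, this is how a badwords list is usually applied through the Hugging Face generate() API; the model name and token IDs below are placeholders, not the actual GPT-NeoX badwords list:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neox-20b"  # placeholder; any causal LM works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder badwords: token sequences that generate() must never emit.
bad_words_ids = [tokenizer("<|endoftext|>", add_special_tokens=False).input_ids]

inputs = tokenizer("KoboldAI is", return_tensors="pt")
output = model.generate(**inputs, bad_words_ids=bad_words_ids, max_new_tokens=20)
print(tokenizer.decode(output[0]))
```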
henk717 d94f29a68a
Merge pull request #127 from VE-FORBRYDERNE/tracer
Fix JAX UnexpectedTracerError
2022-06-23 19:29:51 +02:00
henk717 1d41966d88
Merge pull request #129 from VE-FORBRYDERNE/budget
Account for lnheader in budget calculation
2022-06-23 12:39:20 +02:00
vfbd 0eb9f8a879 Account for lnheader in budget calculation 2022-06-22 19:16:24 -04:00
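A minimal sketch of a context-budget calculation that reserves room for the newline header token(s); `lnheader` mirrors the commit message, while the other field names are illustrative:

```python
def calc_story_budget(max_length: int, genamt: int, lnsp: int, lnmem: int,
                      lnanote: int, lnheader: int) -> int:
    # Tokens left for story text after the generation length, soft prompt,
    # memory, author's note, and newline header are reserved.  Forgetting
    # lnheader makes the budget slightly too optimistic.
    return max_length - genamt - lnsp - lnmem - lnanote - lnheader
```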
henk717 5f9a116052
Merge pull request #128 from VE-FORBRYDERNE/neox
TPU support for HF GPT-NeoX model
2022-06-22 01:39:53 +02:00
Gnome Ann 8c594c6869 Correct the padding token for GPT-NeoX 2022-06-21 19:37:43 -04:00
Gnome Ann a7f667c34c Use NeoX badwords when loading from HF GPT-NeoX model 2022-06-21 19:33:25 -04:00
Gnome Ann 5e3c7c07ae Merge branch 'main' into neox 2022-06-21 19:30:51 -04:00
Henk 75bc472a9f Transformers bump to 4.20.1
Transformers issued an important change for the OPT models that breaks their compatibility with all older versions. In order for people to be able to use all the models on the menu they need 4.20.1, so this is now forced in the dependencies, making the update easier.
2022-06-21 23:33:38 +02:00
henk717 2be1f5088f
Merge pull request #126 from VE-FORBRYDERNE/opt
Update OPT models and fix 20B model on TPU
2022-06-21 23:19:03 +02:00
Gnome Ann 33a2a318db Fix 20B TPU model 2022-06-21 17:16:01 -04:00
Gnome Ann a7e3ef71aa Add final layer norm to OPT 2022-06-21 16:36:26 -04:00
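A small sketch of the step this adds, assuming the usual pattern of applying the decoder's final LayerNorm to the hidden states once after the last decoder layer, before the LM head:

```python
import torch
from torch import nn

hidden_size = 768                        # toy size; OPT-125M uses 768
final_layer_norm = nn.LayerNorm(hidden_size)

hidden_states = torch.randn(1, 5, hidden_size)   # (batch, seq_len, hidden)
hidden_states = final_layer_norm(hidden_states)  # applied once after the last decoder layer
```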
henk717 a10446f258
Merge pull request #123 from VE-FORBRYDERNE/tokenizer
Fix OPT tokenization problems
2022-06-18 11:38:14 +02:00
Gnome Ann 5e71f7fe97 Use slow tokenizer if fast tokenizer is not available 2022-06-17 21:08:37 -04:00
Gnome Ann f71bae254a Fix OPT tokenization problems 2022-06-17 13:29:42 -04:00
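A minimal sketch of the fallback described above, assuming the only requirement is to prefer the fast tokenizer and drop back to the slow Python implementation when no fast one is available:

```python
from transformers import AutoTokenizer

def load_tokenizer(model_name: str):
    # Prefer the fast (Rust) tokenizer; fall back to the slow implementation
    # for models that don't ship one.
    try:
        return AutoTokenizer.from_pretrained(model_name, use_fast=True)
    except Exception:
        return AutoTokenizer.from_pretrained(model_name, use_fast=False)
```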
Henk 49a3cf132e Require accelerate
Transformers 4.20 now requires accelerate to be installed for some of the features we use in KoboldAI. This is now a required dependency for updated users.
2022-06-16 20:38:58 +02:00
Henk 3504581015 Transformers dependency bump
Makes transformers 4.20 mandatory in the dependency lists, not because the old versions are no longer supported, but because it contains fixes that benefit our users, and this makes it easier for them to update to it. If you stick to an older version, the OPT and XGLM workarounds we have in place will remain functional, but you miss out on the enhancements newer transformers versions bring.
2022-06-16 19:52:04 +02:00