Commit Graph

36 Commits

Author SHA1 Message Date
Gnome Ann 2d3db7b4ba Implement support for sampler order in the backend code 2022-06-13 19:12:23 -04:00
Gnome Ann 6e82f205b4 Aria2 bug fix for Windows users 2022-05-14 11:44:28 -04:00
Gnome Ann 0c5ca5261e Loading a sharded model will now display only one progress bar 2022-05-13 23:32:16 -04:00
Gnome Ann 91d3672446 Proper progress bar for aria2 downloads 2022-05-13 17:00:10 -04:00
henk717 7ea0c49c1a Merge pull request #128 from VE-FORBRYDERNE/opt
OPT breakmodel and TPU support
2022-05-13 18:07:02 +02:00
Henk 8376f12e21 Add NS mode
OPT supports newlines, but it also needs some of the behavior we use in S mode. NS mode is a more limited version of S mode that still handles the </s> token, but instead of replacing it with a newline we replace it with an empty string, and newlines are not converted.

In the future, if your Fairseq-style model has newline support, use NS mode; if it needs artificially inserted newlines, use S mode. This also means that people fine-tuning Fairseq models to include newlines might benefit from testing their models in NS mode.
2022-05-13 10:44:12 +02:00
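The S/NS distinction described above can be sketched roughly as follows. This is an illustrative reconstruction, not KoboldAI's actual code; the function and mode names are hypothetical.

```python
def postprocess(text: str, mode: str) -> str:
    """Hypothetical sketch of S-mode vs. NS-mode output handling."""
    if mode == "s":
        # S mode: the model has no real newline support, so the </s>
        # token is converted into an artificial newline.
        return text.replace("</s>", "\n")
    if mode == "ns":
        # NS mode: the model already emits real newlines, so </s> is
        # replaced with an empty string and newlines pass through
        # unchanged.
        return text.replace("</s>", "")
    return text
```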
Gnome Ann defbb53b68 OPT breakmodel 2022-05-13 01:03:38 -04:00
Gnome Ann b1d8797a54 Allow TPU Colab to load sharded HF models 2022-05-12 23:51:40 -04:00
Gnome Ann 580dd0b2a3 Handle aria2 properly when it exits with nonzero exit code 2022-05-11 16:23:24 -04:00
Gnome Ann 2ebba9488b Change `force_download` back to False
This is to prevent fully downloaded models from being re-downloaded in
Colab.
2022-05-11 15:51:48 -04:00
Gnome Ann 6d481ca57e Merge branch 'united' into aria2 2022-05-11 15:51:11 -04:00
Gnome Ann c65272052a aria2 now downloads to different filename and renames afterwards
This is to match the behaviour of the original transformers downloader
in order to deal with the rare case of someone downloading a model using
aria2, cancelling before it finishes, and then attempting to resume the
download with the normal transformers downloader.
2022-05-11 15:45:38 -04:00
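The download-then-rename pattern this commit describes can be sketched as below. This is a minimal illustration, not the actual utils.py code; `fetch` is a stand-in for the real aria2 invocation and the `.part` suffix is hypothetical.

```python
import os

def fetch(url: str, path: str) -> None:
    # Stand-in for the real aria2 download; just writes placeholder data.
    with open(path, "wb") as f:
        f.write(b"model weights")

def download_with_rename(url: str, dest: str) -> None:
    # Download to a temporary name first, then rename once complete, so
    # a partial download is never mistaken for a finished file by a
    # different downloader trying to resume it later.
    tmp = dest + ".part"  # hypothetical suffix for illustration
    fetch(url, tmp)
    os.replace(tmp, dest)  # atomic on POSIX: all-or-nothing
```

Because the final filename only ever appears once the file is complete, the transformers downloader can safely treat its presence as "fully downloaded".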
Henk 6d27084e8a Better Aria2 Defaults
Trunc prevents slow allocation on Windows, and force_download=True has proven to be a more reliable default. Since models are converted to local formats, it does not impact local users. And because -c is used, the overhead of checking whether the model is correct is minimal and desirable.
2022-05-11 21:38:33 +02:00
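An invocation along the lines these defaults describe might look like the following. This is a hedged sketch of the flags discussed above; the exact command line built in utils.py may differ, and the URL and output name are placeholders.

```shell
# -c: continue/verify already-downloaded pieces instead of restarting
# --file-allocation=trunc: avoid slow full preallocation on Windows
aria2c -c \
  --file-allocation=trunc \
  --out=pytorch_model.bin \
  "https://example.com/resolve/main/pytorch_model.bin"
```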
Gnome Ann 7a3f865e3f Prevent aria2 from resuming cancelled downloads
Resumed downloads tend to be very slow.

The original transformers downloader didn't allow resuming downloads
either.
2022-05-11 15:14:37 -04:00
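One way to prevent resuming, sketched below for illustration: aria2 keeps its resume state in a `<file>.aria2` control file next to the download, so discarding both the partial file and the control file forces the next attempt to start from scratch. The function name is hypothetical.

```python
import os

def discard_partial_download(path: str) -> None:
    # aria2 stores resume bookkeeping in "<file>.aria2"; removing the
    # partial file and its control file means a later download cannot
    # resume (slowly) from the cancelled state.
    for p in (path, path + ".aria2"):
        if os.path.exists(p):
            os.remove(p)
```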
Gnome Ann c81f3bd084 Use `--file-allocation=trunc` instead of `--file-allocation=none` 2022-05-11 14:51:43 -04:00
Gnome Ann f96c878d83 Use aria2 even when all model files are already in cache
This allows aria2 to continue downloading a pytorch_model.bin after a
cancelled download.
2022-05-11 14:43:56 -04:00
Gnome Ann f60c7d8492 Fix the behaviour of `aria2_hook()` when using `force_download` 2022-05-11 14:41:34 -04:00
Gnome Ann 5732a8f15a Don't use `aria2_hook()` if `force_download=True` is used 2022-05-11 14:40:31 -04:00
Gnome Ann 22b4f3c9df Bug fixes for `aria2_hook()` when running on Windows 2022-05-11 00:14:00 -04:00
Gnome Ann 82205722af Fix logic of `aria2_hook()` 2022-05-10 23:46:29 -04:00
Gnome Ann 4b693b4858 Fix the logic of `force_download` in utils.py 2022-05-10 22:47:03 -04:00
Gnome Ann c1ef20bcff Also enable aria2 downloading for non-sharded checkpoints 2022-05-10 22:43:41 -04:00
Gnome Ann e115bb68e4 aria2 downloads in utils.py now use correct user agent 2022-05-10 22:22:46 -04:00
Gnome Ann a388c63023 Use aria2 to download split checkpoints 2022-05-10 21:28:13 -04:00
Gnome Ann f682c1229a Fix fairseq newline handling issues 2022-02-12 13:23:59 -05:00
Gnome Ann 8742453f95 Add safeguards for token budget and text formatting
* Error messages are now shown when memory, author's note, etc. exceeds
  budget by itself
* Formatting options no longer break if there are empty chunks in the
  story (although there shouldn't be any in the first place)
* Number of generated tokens is now kept track of from Python
2021-12-26 18:29:54 -05:00
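The first safeguard above, an error when a single component exhausts the token budget by itself, could look roughly like this. Names and the error message are illustrative, not the actual implementation.

```python
def check_budget(tokens: list, budget: int, name: str) -> None:
    # Raise a clear error when one component (memory, author's note,
    # etc.) exceeds the whole token budget on its own, instead of
    # failing obscurely later during generation.
    if len(tokens) > budget:
        raise ValueError(
            f"{name} is {len(tokens)} tokens, which exceeds the "
            f"budget of {budget} tokens by itself"
        )
```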
henk717 7b73d7cfdd Single Line Mode
Adds Single Line mode, optimized for things like chatbot testing and other cases where you want to have control over what happens after a paragraph.

This can also be used as a foundation for a chatbot optimized interface mode.
2021-10-23 17:30:48 +02:00
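The core idea of Single Line mode can be sketched in one line: truncate the generated text at the first newline so nothing past the current paragraph survives. A minimal illustration (not the actual implementation):

```python
def apply_single_line_mode(text: str) -> str:
    # Keep only the output up to the first newline, giving the user
    # control over what happens after the paragraph ends.
    return text.split("\n", 1)[0]
```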
Gnome Ann c276220a35 Allow deleting and renaming stories in the browser 2021-08-31 18:22:30 -04:00
Gnome Ann 2f4f7ac92a Fix error when you use "Add sentence spacing" 2021-08-28 18:54:10 -04:00
henk717 00414d26e2 Integrated VE_FORBRYDERNE's Adventure Mode + Cleanup
Adventure Mode allows you to play this like AID, perfect for Choose Your Own Adventure models.
2021-08-19 13:18:01 +02:00
KoboldAI Dev f9bbb174a6 Added OpenAI API support
Added in-browser Save/Load/New Story controls
(Force a full refresh in your browser!)
Fixed adding InferKit API key if client.settings already exists
Added cmd calls to bat files so they'll stay open on error
Wait animation now hidden on start state/restart
2021-05-22 05:28:40 -04:00
KoboldAI Dev 4996e0ff46 Bugfixes:
Improvements to pruning context from text returned from the AI
Colab errors should no longer throw JSON decode errors in client
Improved logic for World Info scanning
Fix for index error in addsentencespacing
2021-05-18 17:59:59 -04:00
KoboldAI Dev 0e038b8727 Bugfix for Add Sentence Spacing format option 2021-05-14 02:24:05 -04:00
KoboldAI Dev c0736a8ec7 Added World Info
Added additional punctuation triggers for Add Sentence Spacing format
Added better screen reset logic when refreshing screen or restarting server
2021-05-13 01:26:42 -04:00
KoboldAI Dev b55266a7c8 Added Formatting options
Added Bootstrap toggle library for UI
Added injection points for input/output modification
2021-05-10 19:17:10 -04:00
KoboldAI Dev d632976fbf Settings menu modularized.
Help text added to settings items.
Settings now saved to client file when changed.
Separated transformers settings and InferKit settings.
Reorganized model select list.
2021-05-07 14:32:10 -04:00