Commit Graph

685 Commits

Author SHA1 Message Date
ebolam dd07b10b73 Fix for model loading on web ui and removing AI menu when using remote/colab methods 2022-06-06 13:57:19 -04:00
ebolam c984f4412d Fix for web based model loading 2022-06-06 12:49:40 -04:00
ebolam 1e139594a9 Merge commit 'refs/pull/7/head' of https://github.com/ebolam/KoboldAI into HEAD 2022-06-06 09:49:46 -04:00
Henk e5dcf91a08 Defaults Support
This adds support for loading settings from the defaults folder. Settings are loaded in the following order, and a higher-numbered source overrides the ones before it:

1. The model config file.
2. The defaults folder.
3. The user-defined settings file.

With this support we can begin to ship better defaults for models we do not manage. Our community tuners have been most helpful at adding good defaults to their configuration files, but for other models, such as the base models, this gives us the flexibility to define better settings for each model without overriding a user's desired settings if they already exist.
2022-06-01 10:34:16 +02:00
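For illustration, a minimal sketch of the layered loading described above, assuming hypothetical file locations and a flat JSON merge (not KoboldAI's actual code):
```
import json
import os

def load_layered_settings(model_config_path, model_name):
    """Merge settings so that later (higher-numbered) sources override earlier ones."""
    settings = {}
    layers = [
        model_config_path,                                   # 1. the model config file
        os.path.join("defaults", model_name + ".settings"),  # 2. the defaults folder
        os.path.join("settings", model_name + ".settings"),  # 3. the user-defined settings file
    ]
    for path in layers:
        if os.path.exists(path):
            with open(path) as f:
                settings.update(json.load(f))  # later layer wins on conflicting keys
    return settings
```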
Gnome Ann 707316de31 Kaggle TPU support 2022-05-31 12:20:16 -04:00
Henk 1a1f2f6428 30B ram requirements 2022-05-31 13:17:06 +02:00
Gnome Ann 69da5b7bc2 Update list of transformers versions that have broken OPT 2022-05-28 23:44:19 -04:00
Henk 4b65ce9c76 1.18 version bump 2022-05-28 19:39:05 +02:00
Henk b30370bf4b 2048 maxtoken default
Almost everyone prefers 2048 max tokens because of the superior coherency. It should only be lower due to RAM limits, but the menu already shows the optimal RAM for 2048. Negatively affected users can turn it down themselves; for everyone else, especially on rented machines or Colab, 2048 is a better default.
2022-05-27 01:23:48 +02:00
Gnome Ann c692987e40 Fix an error that occurs when loading GPT-2 models
I forgot that this new_rebuild_tensor function's first argument's type is different when loading GPT-2 models.
2022-05-20 14:54:49 -04:00
Julius ter Pelkwijk 6ae7b48b69 Adding Nerys model 13B 2022-05-18 13:50:57 +02:00
Julius ter Pelkwijk f0df3de610 Adding Nerys model 2.7B 2022-05-16 09:50:45 +02:00
Gnome Ann d5ab3ef5b1 Fix `no attribute get_checkpoint_shard_files` 2022-05-14 11:49:04 -04:00
Gnome Ann 1476e76cfc Copy fp16 model files instead of resaving them 2022-05-14 00:45:43 -04:00
Gnome Ann 0c5ca5261e Loading a sharded model will now display only one progress bar 2022-05-13 23:32:16 -04:00
Gnome Ann f9f1a5f3a9 Make sure tqdm progress bars display properly in Colab 2022-05-13 17:37:45 -04:00
Gnome Ann 91d3672446 Proper progress bar for aria2 downloads 2022-05-13 17:00:10 -04:00
henk717 7ea0c49c1a Merge pull request #128 from VE-FORBRYDERNE/opt
OPT breakmodel and TPU support
2022-05-13 18:07:02 +02:00
Gnome Ann 1200173386 Custom badwords for OPT
Generated using:
```
import transformers

# Load the slow tokenizer (use_fast=False) so tokenizer.vocab is available
tokenizer = transformers.AutoTokenizer.from_pretrained("facebook/opt-350m", use_fast=False)
# Ban every token whose text contains angle or square brackets
badwordsids_opt = [[v] for k, v in tokenizer.vocab.items() if any(c in k for c in "<>[]")]
```
2022-05-13 10:45:28 -04:00
Henk d5fa782483 NS Mode (comment fix) 2022-05-13 10:53:19 +02:00
Henk 8376f12e21 Add NS mode
OPT supports newlines, but it also needs some of the behavior we use in S mode. NS mode is a more limited version of S mode that still handles the </s> token, but instead of replacing it with a newline we replace it with nothing, and newlines are not converted.

In the future, if your Fairseq-style model has newline support, use NS mode; if it needs artificially inserted newlines, use S mode. This also means that people finetuning Fairseq models to include newlines might benefit from testing their models in NS mode.
2022-05-13 10:44:12 +02:00
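A hedged sketch of the difference between the two modes (the function and names are illustrative, not the actual KoboldAI implementation):
```
def handle_eos(text: str, mode: str) -> str:
    """Post-process model output according to the S/NS mode rules above."""
    if mode == "s":
        # S mode: </s> becomes a newline (for models without newline support)
        return text.replace("</s>", "\n")
    if mode == "ns":
        # NS mode: </s> is removed outright; the model's own newlines pass through
        return text.replace("</s>", "")
    return text
```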
Gnome Ann 55079f672a Fix typo in soft prompt patching code 2022-05-13 01:51:55 -04:00
Gnome Ann 29bb3f569b Fix a bug in OPTForCausalLM where self.lm_head is the wrong size 2022-05-13 01:37:17 -04:00
Gnome Ann defbb53b68 OPT breakmodel 2022-05-13 01:03:38 -04:00
Gnome Ann b1d8797a54 Allow TPU Colab to load sharded HF models 2022-05-12 23:51:40 -04:00
Gnome Ann 4fa5f1cd6a Add TPU support for OPT-350M
The 350M model seems to have a different structure than the other ones.
2022-05-12 22:21:15 -04:00
Henk 5c4a087970 Disable S mode for OPT 2022-05-13 01:47:59 +02:00
Henk e98cc3cb16 OPT models 2022-05-12 23:55:21 +02:00
Henk 376e76f5da S mode for OPT 2022-05-12 02:18:14 +02:00
Gnome Ann 46cfa1367f Add `--no_aria2` command line flag 2022-05-11 00:44:56 -04:00
Gnome Ann f09959f9be Fix patching code of `PreTrainedModel.from_pretrained()` 2022-05-11 00:41:53 -04:00
Gnome Ann 4b49d1c464 Make sure `vars.revision` is defined 2022-05-10 22:51:36 -04:00
Gnome Ann b97b2a02d6 Add `--revision` command line flag 2022-05-10 22:14:56 -04:00
Gnome Ann 937d9ee06a Change default `model.save_pretrained` shard size to 500 MiB 2022-05-10 22:04:25 -04:00
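For reference, a hedged example of the transformers argument involved, assuming an already-loaded `model` (the path is illustrative):
```
# Save the checkpoint in shards of at most 500 MiB each;
# max_shard_size is a standard save_pretrained() argument.
model.save_pretrained("models/my-model", max_shard_size="500MiB")
```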
Gnome Ann a388c63023 Use aria2 to download split checkpoints 2022-05-10 21:28:13 -04:00
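A hedged illustration of the kind of aria2 invocation this enables (the URL list file is hypothetical; the flags are standard aria2c options):
```
import subprocess

# Fetch all checkpoint shards listed in urls.txt with aria2c,
# using several parallel connections per file.
subprocess.run([
    "aria2c",
    "--input-file=urls.txt",           # one shard URL per line
    "--dir=models/opt-13b",            # download destination
    "--max-connection-per-server=16",  # parallel connections per file
    "--split=16",                      # segments per file
], check=True)
```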
subtlewave 9c83ef7fa9 Replaced Adventure 125M and added C1-1.3B 2022-04-28 22:35:04 +00:00
Gnome Ann ea82867e4d Merge branch 'united' into settings 2022-04-26 13:58:01 -04:00
Henk 11280a6e66 LocalTunnel Linux Fix 2022-04-19 14:41:21 +02:00
Henk b8e79afe5e LocalTunnel support 2022-04-19 13:47:44 +02:00
Gnome Ann c7b03398f6 Merge 'nolialsea/patch-1' into settings without Colab changes 2022-04-17 12:15:36 -04:00
henk717 372eb4c981 Merge pull request #119 from VE-FORBRYDERNE/scripting-sp
Allow userscripts to change the soft prompt
2022-04-14 21:33:20 +02:00
henk717 78d6ee491d Merge pull request #117 from mrseeker/patch-7
Shinen FSD 13B (NSFW)
2022-04-14 21:33:08 +02:00
henk717 e180db88aa Merge pull request #118 from VE-FORBRYDERNE/lazy-loader
Fix lazy loader in aiserver.py
2022-04-14 21:33:00 +02:00
Gnome Ann bd6f7798b9 Fix lazy loader in aiserver.py 2022-04-14 14:33:10 -04:00
Julius ter Pelkwijk ad94f6c01c Shinen FSD 13B (NSFW) 2022-04-14 08:23:50 +02:00
Julius ter Pelkwijk 945c34e822 Shinen FSD 6.7B (NSFW) 2022-04-13 14:47:22 +02:00
Henk eeff126df4 Memory Sizes 2022-04-13 12:41:21 +02:00
Gnome Ann a3a52dc9c3 Add support for changing soft prompt from userscripts 2022-04-12 15:59:05 -04:00
Henk 26909e6cf3 Model Categories 2022-04-10 20:53:15 +02:00
Julius ter Pelkwijk 6fcb0af488 Adding Janeway 13B 2022-04-10 15:03:39 +02:00
Gnome Ann 359a0a1c99 Copy Python 3.6 compatible lazy loader to aiserver.py 2022-04-08 19:40:12 -04:00
Julius ter Pelkwijk 1974761f70 Releasing Janeway 6.7B 2022-04-08 08:13:36 +02:00
Wes Brown 09fee52abd Add `num_seqs` support to GooseAI/OpenAI client handler. 2022-04-07 14:50:23 -04:00
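A hedged sketch of how `num_seqs` maps onto an OpenAI-style completions request; the `n` field is the public API's parameter for multiple completions, and the endpoint shape and variable names here are illustrative, not KoboldAI's actual client code:
```
import requests

api_key = "sk-..."               # hypothetical key
prompt = "Once upon a time"
num_seqs = 3                     # number of candidate continuations to request

resp = requests.post(
    "https://api.openai.com/v1/completions",
    headers={"Authorization": f"Bearer {api_key}"},
    json={"model": "davinci", "prompt": prompt, "max_tokens": 80, "n": num_seqs},
)
texts = [choice["text"] for choice in resp.json()["choices"]]
```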
Henky!! 5feda462fb OAI - Fixes last commit 2022-04-07 02:39:37 +02:00
Henky!! 34b6c907f0 OAI Max Token Slider 2022-04-07 02:26:15 +02:00
Henky!! b568e31381 OAI Path Support 2022-04-06 05:15:25 +02:00
Henky!! 699b3fc10b OAI Redo Fixes 2 2022-04-06 04:54:27 +02:00
Henky!! b5a633e69b OAI Redo Fix 2022-04-06 04:45:01 +02:00
henk717 ee682702ee Merge branch 'KoboldAI:main' into united 2022-04-05 01:35:22 +02:00
Henky!! 8153f21d5c Convo 6B 2022-04-05 01:33:51 +02:00
Henky!! e644963564 OpenAI Fixes 2022-03-28 02:02:37 +02:00
Gnome Ann 20e48b11d7 Typical sampling 2022-03-27 16:25:50 -04:00
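A minimal sketch of the typical sampling idea (Meister et al.), not KoboldAI's exact code: keep the tokens whose surprisal is closest to the distribution's entropy until their cumulative probability reaches the threshold:
```
import numpy as np

def typical_filter(logits: np.ndarray, typical_p: float = 0.9) -> np.ndarray:
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    surprisal = -np.log(probs + 1e-12)
    entropy = float(np.sum(probs * surprisal))
    # Rank tokens by how close their surprisal is to the entropy ("typicality")
    order = np.argsort(np.abs(surprisal - entropy))
    # Keep the most typical tokens until their mass reaches typical_p
    cutoff = int(np.searchsorted(np.cumsum(probs[order]), typical_p)) + 1
    filtered = np.full_like(logits, -np.inf)  # everything else is masked out
    filtered[order[:cutoff]] = logits[order[:cutoff]]
    return filtered
```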
Noli aa8de64aa4 fix default port 2022-03-25 23:26:27 +01:00
Noli 3e003d3b42 add port to the command options 2022-03-25 22:18:28 +01:00
Gnome Ann 0348970b19 Make sure AI is not busy when using retry to regenerate random story 2022-03-23 22:09:35 -04:00
Gnome Ann 4832dd6f37 Allow regenerating random story using Retry button
Commit b55e5a8e0b removed this feature, so this commit adds it back.
2022-03-23 13:39:46 -04:00
henk717 cf99f02ca5 Merge branch 'main' into united 2022-03-20 19:22:53 +01:00
henk717 20eab085dd Fix AutoSave Toggle 2022-03-20 19:12:11 +01:00
henk717 5c795609e4 KML Fix 2022-03-20 13:10:56 +01:00
Gnome Ann b1125a6705 Add EOS and padding token to default NeoX badwords 2022-03-19 01:30:02 -04:00
Gnome Ann 85a4959efa Merge branch 'united' into neox 2022-03-18 11:19:03 -04:00
henk717 a3e5e052b3 Newer umamba + slope tweak 2022-03-16 18:34:02 +01:00
Gnome Ann 95c4251db9 Print two newlines before loading HF models 2022-03-15 13:58:53 -04:00
Gnome Ann 9dc48b15f0 Add custom badwords and pad token ID for GPT-NeoX 2022-03-14 23:31:49 -04:00
Gnome Ann 88f247d535 GPT-NeoX-20B support in Colab TPU instances 2022-03-14 23:14:20 -04:00
henk717 4892556059 Model saving for colab mode 2022-03-13 11:22:44 +01:00
Gnome Ann 2b8c46338e Change current working directory to KoboldAI folder 2022-03-13 01:22:11 -05:00
ebolam 8ae0a4a3e7 Online services working now (untested, as I don't have accounts) 2022-03-12 14:21:11 -05:00
ebolam b55e5a8e0b Retry Bug Fix 2022-03-12 10:32:27 -05:00
ebolam ae854bab3d Fix for retry causing issues for future redo actions 2022-03-11 11:40:55 -05:00
ebolam 772ae2eb80 Added model info to show model load progress in UI 2022-03-11 11:31:41 -05:00
henk717 b02d5e8696 Allows missing model_config again 2022-03-10 19:59:10 +01:00
henk717 172a548fa1 Fallback to generic GPT2 Tokenizer 2022-03-10 19:52:15 +01:00
henk717 9dee9b5c6d Ignore incorrect problems 2022-03-09 12:03:37 +01:00
henk717 a28e553412 Remove unused gettokenids 2022-03-09 11:59:33 +01:00
ebolam 0943926f6a Fix for lazy loading 2022-03-07 19:52:44 -05:00
ebolam bfc07073e3 layer count fix 2022-03-07 19:33:24 -05:00
ebolam d8ab58892d saved layer value fix 2022-03-07 19:21:55 -05:00
ebolam da53d7edb3 Custom Path Load fix 2022-03-07 18:54:11 -05:00
ebolam d1a64e25da Custom Model Load Fix 2022-03-07 18:44:37 -05:00
ebolam 70f1c2da9c Added stub for model name feedback 2022-03-07 14:20:25 -05:00
ebolam d0553779ab Bug Fix 2022-03-07 12:33:35 -05:00
ebolam c50fe77a7d Load Fix 2022-03-07 11:57:33 -05:00
ebolam 49fc854e55 Added saving of breakmodel values so that it defaults to it on next load 2022-03-07 11:49:34 -05:00
ebolam 2cf6b6e650 Merge branch 'henk717:united' into united 2022-03-07 11:31:14 -05:00
ebolam 123cd45b0e Breakmodel working now with the web UI 2022-03-07 11:27:23 -05:00
henk717 7434c9221b Expand OAI Setting Compatibility 2022-03-07 08:56:47 +01:00
ebolam 5e00f7daf0 Next evolution of web ui model selection. Custom Paths not working quite right. 2022-03-06 20:55:11 -05:00
ebolam 2ddf45141b Initial UI-based model loading. Includes all parameters except breakmodel chunks, engine # for OAI, and the ngrok URL for Google Colab 2022-03-06 19:51:35 -05:00
ebolam f6c95f18fa Fix for Redo (#94)
* Corrected redo to skip blank steps (blank from "deleting" the chunk with the edit function)

* Removed debug code
2022-03-06 23:18:14 +01:00
henk717 f857696224 OAI ConfigName Bugfix 2022-03-06 20:18:42 +01:00
henk717 3ddc9647eb Basic GooseAI Support 2022-03-06 20:10:30 +01:00
henk717 daea4b8d15 Fix Breakmodel RAM Regression 2022-03-06 08:26:50 +01:00
henk717 105d3831b5 Lazy Load Float32 for CPU 2022-03-06 07:56:04 +01:00
Gnome Ann 373f7b9bd5 Don't convert tensors to float16 if using CPU-only mode 2022-03-05 14:30:26 -05:00
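The rule in these two commits, as a hedged sketch (the function name is illustrative): tensors headed for the GPU are converted to float16 before the move, while CPU-only mode keeps float32:
```
import torch

def place_tensor(tensor: torch.Tensor, use_gpu: bool) -> torch.Tensor:
    if use_gpu:
        # Convert before moving so the full-precision copy never hits VRAM
        return tensor.to(torch.float16).to("cuda")
    # CPU-only mode: float16 is slow/poorly supported on CPU, so stay in float32
    return tensor.to(torch.float32)
```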
Gnome Ann 579e85820c Resolve merge conflict 2022-03-05 14:13:56 -05:00
Gnome Ann 2e19ea1bb6 Auto detect if we're in a Colab TPU instance 2022-03-05 14:07:23 -05:00
ebolam 4a8d7f5e0b Merge branch 'henk717:united' into united 2022-03-05 13:25:10 -05:00
Gnome Ann 0a258a6282 Support for loading HF models on TPU with `--colab_tpu` 2022-03-05 12:33:33 -05:00
Gnome Ann 86ac562b0c Lazy loader should convert model tensors to float16 before moving them 2022-03-05 11:31:34 -05:00
ebolam 4dd119c38d Redo no longer goes through the formatting function (which was altering the text) 2022-03-05 11:15:33 -05:00
ebolam 353817b4da Remove debug print statements 2022-03-05 10:35:06 -05:00
ebolam 221f264fa7 Redo fix. Fix for the actions structure so it does not error out when asking for next_id while the actions list is empty. 2022-03-05 10:31:28 -05:00
Gnome Ann a00dede610 Put the XGLM embedding patch behind a version check 2022-03-04 19:10:15 -05:00
Gnome Ann 5674516f0c Merge branch 'united' into lazy-loader 2022-03-04 18:27:51 -05:00
ebolam 5f92cbc231 Merge branch 'united' of https://github.com/ebolam/KoboldAI into united 2022-03-04 15:37:34 -05:00
ebolam 321f45ccad Fix debug to never crash (would on some initialization steps) 2022-03-04 15:36:13 -05:00
ebolam ee883fc4da Merge branch 'henk717:united' into united 2022-03-04 14:15:16 -05:00
ebolam 26b9268391 Redo bug fix 2022-03-04 14:14:44 -05:00
henk717 eb247d69c3 Merge branch 'KoboldAI:main' into united 2022-03-04 18:24:56 +01:00
Gnome Ann a1fedca2c8 Use lazy loading automatically if a config file exists for the model 2022-03-04 11:11:33 -05:00
MrReplikant ae143e896c Fixed unnecessary spacing in chatmode
This makes it go from "john :" to "John:", as it's supposed to be. As simple as it is, it can easily throw a chatbot model for a loop.
2022-03-04 08:46:00 -06:00
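A hedged sketch of that normalization (the pattern is illustrative, not the actual patch):
```
import re

def fix_speaker(line: str) -> str:
    # Collapse the space before the colon and capitalize the speaker's name
    return re.sub(r"^(\w+)\s*:", lambda m: m.group(1).capitalize() + ":", line)

print(fix_speaker("john : hello"))  # -> "John: hello"
```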
Gnome Ann f0629958b1 Merge branch 'united' into lazy-loader 2022-03-04 00:37:25 -05:00
Gnome Ann 58a2c18821 Add lazy torch loading support to transformers backend 2022-03-04 00:33:10 -05:00
henk717 e033b04f87 Restore United 2022-03-02 11:40:50 +01:00
henk717 f9ac23ba4e Add Janeway and Shinen 2022-03-02 09:51:25 +01:00
ebolam 3f73f84b69 bug fix 2022-02-28 19:04:12 -05:00
ebolam 6003b2369b Debug and load story fix for actions_metadata variable 2022-02-28 10:39:36 -05:00
ebolam 47d102635e Merge branch 'united' into united 2022-02-28 08:37:45 -05:00
ebolam 7803fbb137 Fixed error in redo action when editing previous entries and/or editing right after a redo 2022-02-28 08:31:26 -05:00
henk717 13fe472264 Menu Polish 2022-02-28 02:47:15 +01:00
henk717 f628929401 Merge pull request #85 from VE-FORBRYDERNE/sp
Fix a bug with soft prompts when using transformers XGLM
2022-02-28 02:33:18 +01:00
henk717 4849a30d88 Merge pull request #84 from mrseeker/patch-3
Added KoboldAI/fairseq-dense-2.7B-Janeway
2022-02-28 02:33:07 +01:00
henk717 a466e13c00 Model List Support 2022-02-26 12:34:07 +01:00
Gnome Ann a22d59e191 Fix a bug with soft prompts when using transformers XGLM 2022-02-25 12:35:23 -05:00
Julius ter Pelkwijk 0a7376a711 Added KoboldAI/fairseq-dense-2.7B-Janeway
With pleasure I am introducing KoboldAI/fairseq-dense-2.7B-Janeway.
2022-02-24 09:00:56 +01:00
Gnome Ann 072ca87977 Load soft prompt at the end instead of inside `loadsettings()` 2022-02-23 21:15:08 -05:00
Gnome Ann 8120e4dfa2 Need to set `vars.allowsp` to True before calling `loadsettings()` 2022-02-23 21:09:31 -05:00
Gnome Ann c45ba497c9 Load settings earlier to avoid TPU badwords issues 2022-02-23 20:39:11 -05:00
henk717 ac59e55d62 Smaller optimizations 2022-02-24 01:14:26 +01:00
henk717 8e9d9faa97 Merge pull request #82 from VE-FORBRYDERNE/tpu-config
Allow TPU models to specify settings/config in config.json
2022-02-24 00:53:40 +01:00
Gnome Ann ad10ac8871 Allow TPU models to specify settings/config in config.json 2022-02-23 18:22:18 -05:00
henk717 7de3311000 Fix sentencepiece model saving 2022-02-23 22:04:41 +01:00
henk717 fd7ba9f70e Also check for Config in models/ 2022-02-22 19:22:08 +01:00
henk717 4ace11f5b8 Merge pull request #80 from VE-FORBRYDERNE/xglm-position-ids
Temporary fix for XGLM positional embedding issues
2022-02-21 00:47:20 +01:00
henk717 300db651de Open models folder by default 2022-02-21 00:46:18 +01:00
Gnome Ann da10e2dc1d Don't crash if `XGLMSinusoidalPositionalEmbedding` doesn't exist 2022-02-20 17:41:00 -05:00
Gnome Ann 5dc4969173 Temporary fix for XGLM positional embedding issues 2022-02-20 14:17:24 -05:00
Gnome Ann a63fa3b067 Prevent transformers XGLM from stopping generation on `</s>` token 2022-02-19 23:15:16 -05:00
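One hedged way to get this effect through the transformers generate API (this illustrates the idea; the actual patch may differ):
```
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")
model = AutoModelForCausalLM.from_pretrained("facebook/xglm-564M")

input_ids = tokenizer("It was a dark and stormy night", return_tensors="pt").input_ids
# Ban the EOS token so generation cannot terminate early on </s>;
# bad_words_ids is a standard generate() argument.
output = model.generate(input_ids, max_new_tokens=40,
                        bad_words_ids=[[tokenizer.eos_token_id]])
```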
henk717 a47e93cee7 Separate Low Memory Mode
In 1.16 we had significantly faster loading speeds because we did not do as much memory conservation; it's time to give users the choice. If you want the original, faster behavior and have the memory, run KoboldAI as usual. Otherwise, run play-lowmem.bat or aiserver.py with --lowmem. For Colab this is still the default behavior, to avoid breaking models that would otherwise load fine.
2022-02-18 16:21:28 +01:00
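A hedged sketch of wiring such a flag with argparse (KoboldAI's actual parser may differ):
```
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--lowmem", action="store_true",
                    help="Enable aggressive memory conservation during model loading")
args = parser.parse_args()

low_memory_load = args.lowmem  # Colab keeps this behavior on by default
```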