Commit Graph

589 Commits

SHA1 Message Date
df76bc4b41 Fix for Colab 2022-06-06 21:29:14 -04:00
edbf36a632 Web UI functional for GooseAI (and presumably OpenAI).
Fix for Breakmodel layer info saving
2022-06-06 19:21:10 -04:00
d9480ec439 Fix for lazy loading 2022-06-06 14:27:47 -04:00
60b70bdf8a Fix 2022-06-06 14:02:17 -04:00
dd07b10b73 Fix for model loading on the web UI and for removing the AI menu when using remote/Colab methods 2022-06-06 13:57:19 -04:00
c984f4412d Fix for web based model loading 2022-06-06 12:49:40 -04:00
1e139594a9 Merge commit 'refs/pull/7/head' of https://github.com/ebolam/KoboldAI into HEAD 2022-06-06 09:49:46 -04:00
e5dcf91a08 Defaults Support
This adds support for loading settings from the defaults folder. Settings are loaded in the following order, with each later source overriding the earlier ones as needed:

1. The model config file.
2. The defaults folder.
3. The user's defined settings file.

With this support we can begin to ship better defaults for models we do not manage. Our community tuners have been most helpful at adding good defaults to their configuration files, but for other models, such as the base models, this gives us the flexibility to define better settings for each model without overriding a user's desired settings if they already exist (see the sketch after this entry).
2022-06-01 10:34:16 +02:00
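A minimal sketch of the three-layer loading order described in the commit above; the paths and the load_json helper are illustrative assumptions, not the project's actual API:
```
import json
import os

def load_json(path):
    # Return the settings dict from path, or an empty dict if the file is absent
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)

settings = {}
settings.update(load_json("models/mymodel/config.json"))  # 1. model config file
settings.update(load_json("defaults/mymodel.settings"))   # 2. defaults folder
settings.update(load_json("settings/mymodel.settings"))   # 3. user's settings file
```
Each update() overwrites only the keys present in the later source, so a user's saved settings always win over shipped defaults.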
707316de31 Kaggle TPU support 2022-05-31 12:20:16 -04:00
1a1f2f6428 30B ram requirements 2022-05-31 13:17:06 +02:00
69da5b7bc2 Update list of transformers versions that have broken OPT 2022-05-28 23:44:19 -04:00
4b65ce9c76 1.18 version bump 2022-05-28 19:39:05 +02:00
b30370bf4b 2048 maxtoken default
Almost everyone prefers 2048 max tokens because of the superior coherence. It should only be lower due to RAM limits, but the menu already shows the optimal RAM for 2048. Negatively affected users can turn it down themselves; for everyone else, especially on rented machines or Colab, 2048 is a better default.
2022-05-27 01:23:48 +02:00
c692987e40 Fix an error that occurs when loading GPT-2 models
I forgot that the type of this new_rebuild_tensor function's first argument
is different when loading GPT-2 models.
2022-05-20 14:54:49 -04:00
6ae7b48b69 Adding Nerys model 13B 2022-05-18 13:50:57 +02:00
f0df3de610 Adding Nerys model 2.7B 2022-05-16 09:50:45 +02:00
d5ab3ef5b1 Fix no attribute get_checkpoint_shard_files 2022-05-14 11:49:04 -04:00
1476e76cfc Copy fp16 model files instead of resaving them 2022-05-14 00:45:43 -04:00
0c5ca5261e Loading a sharded model will now display only one progress bar 2022-05-13 23:32:16 -04:00
f9f1a5f3a9 Make sure tqdm progress bars display properly in Colab 2022-05-13 17:37:45 -04:00
91d3672446 Proper progress bar for aria2 downloads 2022-05-13 17:00:10 -04:00
7ea0c49c1a Merge pull request #128 from VE-FORBRYDERNE/opt
OPT breakmodel and TPU support
2022-05-13 18:07:02 +02:00
1200173386 Custom badwords for OPT
Generated using:
```
import transformers

# Load the slow tokenizer so tokenizer.vocab maps token strings to IDs
tokenizer = transformers.AutoTokenizer.from_pretrained("facebook/opt-350m", use_fast=False)
# Ban every token whose surface form contains angle or square brackets
badwordsids_opt = [[v] for k, v in tokenizer.vocab.items() if any(c in k for c in "<>[]")]
```
2022-05-13 10:45:28 -04:00
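The resulting list of single-token ID lists matches the format Hugging Face's generate() expects for its bad_words_ids argument. A hedged usage sketch follows; the prompt is illustrative, and KoboldAI's own sampler may consume these IDs differently:
```
import transformers

# Assumes badwordsids_opt was built as in the snippet above
tokenizer = transformers.AutoTokenizer.from_pretrained("facebook/opt-350m", use_fast=False)
model = transformers.AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
inputs = tokenizer("Once upon a time", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20, bad_words_ids=badwordsids_opt)
print(tokenizer.decode(out[0]))
```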
d5fa782483 NS Mode (comment fix) 2022-05-13 10:53:19 +02:00
8376f12e21 Add NS mode
OPT supports newlines, but it also needs some of the behavior we use in S mode. NS mode is a more limited version of S mode that still handles the </s> token, but instead of replacing it with a newline we replace it with nothing, and newlines are not converted.

In the future, if your Fairseq-style model has newline support, use NS mode; if it needs artificially inserted newlines, use S mode. This also means that people finetuning Fairseq models to include newlines might benefit from testing their models in NS mode (see the sketch after this entry).
2022-05-13 10:44:12 +02:00
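A minimal sketch of the S/NS difference described above; the function name and mode strings are illustrative assumptions, not KoboldAI's actual code:
```
def handle_eos(text, mode):
    # S mode: the model has no real newline support, so </s> stands in for newlines
    if mode == "s":
        return text.replace("</s>", "\n")
    # NS mode: the model emits real newlines, so </s> is simply stripped
    if mode == "ns":
        return text.replace("</s>", "")
    # Otherwise leave the text untouched
    return text
```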
55079f672a Fix typo in soft prompt patching code 2022-05-13 01:51:55 -04:00
29bb3f569b Fix a bug in OPTForCausalLM where self.lm_head is the wrong size 2022-05-13 01:37:17 -04:00
defbb53b68 OPT breakmodel 2022-05-13 01:03:38 -04:00
b1d8797a54 Allow TPU Colab to load sharded HF models 2022-05-12 23:51:40 -04:00
4fa5f1cd6a Add TPU support for OPT-350M
The 350M model seems to have a different structure than the other ones ???
2022-05-12 22:21:15 -04:00
5c4a087970 Disable S mode for OPT 2022-05-13 01:47:59 +02:00
e98cc3cb16 OPT models 2022-05-12 23:55:21 +02:00
376e76f5da S mode for OPT 2022-05-12 02:18:14 +02:00
46cfa1367f Add --no_aria2 command line flag 2022-05-11 00:44:56 -04:00
f09959f9be Fix patching code of PreTrainedModel.from_pretrained() 2022-05-11 00:41:53 -04:00
4b49d1c464 Make sure vars.revision is defined 2022-05-10 22:51:36 -04:00
b97b2a02d6 Add --revision command line flag 2022-05-10 22:14:56 -04:00
937d9ee06a Change default model.save_pretrained shard size to 500 MiB 2022-05-10 22:04:25 -04:00
a388c63023 Use aria2 to download split checkpoints 2022-05-10 21:28:13 -04:00
9c83ef7fa9 Replaced Adventure 125M and added C1-1.3B 2022-04-28 22:35:04 +00:00
ea82867e4d Merge branch 'united' into settings 2022-04-26 13:58:01 -04:00
11280a6e66 LocalTunnel Linux Fix 2022-04-19 14:41:21 +02:00
b8e79afe5e LocalTunnel support 2022-04-19 13:47:44 +02:00
c7b03398f6 Merge 'nolialsea/patch-1' into settings without Colab changes 2022-04-17 12:15:36 -04:00
372eb4c981 Merge pull request #119 from VE-FORBRYDERNE/scripting-sp
Allow userscripts to change the soft prompt
2022-04-14 21:33:20 +02:00
78d6ee491d Merge pull request #117 from mrseeker/patch-7
Shinen FSD 13B (NSFW)
2022-04-14 21:33:08 +02:00
e180db88aa Merge pull request #118 from VE-FORBRYDERNE/lazy-loader
Fix lazy loader in aiserver.py
2022-04-14 21:33:00 +02:00
bd6f7798b9 Fix lazy loader in aiserver.py 2022-04-14 14:33:10 -04:00
ad94f6c01c Shinen FSD 13B (NSFW) 2022-04-14 08:23:50 +02:00
945c34e822 Shinen FSD 6.7B (NSFW) 2022-04-13 14:47:22 +02:00