Commit Graph

766 Commits

SHA1 Message Date
5c81374a48 Top-A for GooseAI 2022-06-11 22:04:37 +02:00
fdb2a7fa4c Top-A sampling 2022-06-10 22:28:20 -04:00
cfd1147d5a Bug fix for loading a model after a model was already loaded, which duplicated the settings menu until the website was refreshed
Fixed escaping warnings
Added back/redo unit test
2022-06-10 14:47:52 -04:00
ed428f2e73 Merge branch 'Web-UI' of https://github.com/ebolam/KoboldAI into Web-UI 2022-06-10 09:12:18 -04:00
4a920724d9 Fix for folder paths on Linux 2022-06-10 09:12:04 -04:00
6200908582 Merge pull request #10 from henk717/overhaul
Overhaul
2022-06-10 08:40:15 -04:00
13f17d3eca Changed unit tests so that they run with a simple pytest command 2022-06-10 08:39:15 -04:00
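For context, a plain pytest invocation discovers any test_*.py file that defines test_* functions; a minimal, hypothetical illustration (not the repository's actual suite):

```
# test_example.py -- hypothetical; run with a bare `pytest` from the repository root.
def test_placeholder():
    # pytest collects this automatically because of the test_ prefix.
    assert 2 + 2 == 4
```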
ce582f188f Merge branch 'united' into overhaul-merge 2022-06-09 23:48:28 -04:00
fe619d4677 Update list of versions with broken OPT again
They released another version of transformers that still doesn't have
the OPT patch, so I decided it would be safer to just mark all 4.19
transformers versions as needing the OPT patch.
2022-06-09 17:42:46 -04:00
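As a rough illustration of the entry above, a hedged sketch of treating every 4.19.x transformers release as needing the OPT patch (this helper is hypothetical, not KoboldAI's actual code):

```
from packaging import version

def needs_opt_patch(transformers_version: str) -> bool:
    # Assumption for illustration: every 4.19.x release still lacks the OPT fix.
    v = version.parse(transformers_version)
    return version.parse("4.19.0") <= v < version.parse("4.20.0")

# needs_opt_patch("4.19.2") -> True, needs_opt_patch("4.20.0") -> False
```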
663dee784d Unit tests using pytest and minor modifications to allow unit testing 2022-06-09 13:16:32 -04:00
606c276f9d Potential fix for tokenizer using a fallback 2022-06-09 09:01:40 -04:00
db9a94ca2a Added GPU name to the UI when using breakmodel.
Added total layers to the UI
Added favicon
2022-06-09 08:42:35 -04:00
c565978fff Fix for multi-GPU not showing appropriately
Slight visual improvement for custom model load breadcrumbs
2022-06-08 19:39:04 -04:00
4548dcf1b0 Fix for --model with custom paths 2022-06-08 18:53:56 -04:00
001439be45 Merge pull request #9 from henk717/overhaul
Overhaul
2022-06-08 18:44:21 -04:00
622a3fc8db Fix for model loading by moving monkey patching functions into a run-once function
Added folder navigation to custom model loading (needs prettying)
2022-06-08 18:42:44 -04:00
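The first line of the entry above moves the monkey patching into a run-once function; a minimal sketch of that pattern, assuming hypothetical names rather than the repository's actual functions:

```
_patches_applied = False

def apply_monkey_patches():
    # Apply the loader monkey patches exactly once, no matter how many
    # times a model is (re)loaded through the web UI.
    global _patches_applied
    if _patches_applied:
        return
    # ... wrap or replace the relevant loading functions here ...
    _patches_applied = True
```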
1a46d97ad5 Send correct settings after load 2022-06-08 13:26:30 +02:00
461cd04932 Fix Essential Code + selectfolder fix
As part of the restructuring, essential code that handled the --path parameter correctly was removed. This has now been restored. Selectfolder was also updated to use its NeoCustom counterpart instead of specifying a model, so the underlying code that corrects model names is hit again.
2022-06-08 11:30:00 +02:00
190869f0d3 Fix for selectfolder model to force old style folder select on startup. 2022-06-07 20:24:31 -04:00
88f5ed7b3c --model selectfolder 2022-06-07 21:32:58 +02:00
66ba165b4c --noaimenu as separate parameter 2022-06-07 20:44:14 +02:00
6fd2496d94 Fix for green opening text showing OAI and/or OAI/GooseAI model name rather than the appropriate name. 2022-06-07 13:47:10 -04:00
1df88e1696 TPU fix attempt 2022-06-07 09:05:51 -04:00
bf4af94abb Hopefully a fix for InferKit 2022-06-07 08:22:10 -04:00
afb894f5a0 TPU Fix 2022-06-06 21:47:15 -04:00
1b35b55d86 Fix TPU 2022-06-06 21:39:17 -04:00
ae1aed0916 TPU Fix 2022-06-06 21:37:35 -04:00
df76bc4b41 Fix for Colab 2022-06-06 21:29:14 -04:00
edbf36a632 Web UI functional for GooseAI (and presumably OpenAI).
Fix for Breakmodel layer info saving
2022-06-06 19:21:10 -04:00
d9480ec439 Fix for lazy loading 2022-06-06 14:27:47 -04:00
60b70bdf8a Fix 2022-06-06 14:02:17 -04:00
dd07b10b73 Fix for model loading on web ui and removing AI menu when using remote/colab methods 2022-06-06 13:57:19 -04:00
c984f4412d Fix for web based model loading 2022-06-06 12:49:40 -04:00
1e139594a9 Merge commit 'refs/pull/7/head' of https://github.com/ebolam/KoboldAI into HEAD 2022-06-06 09:49:46 -04:00
e5dcf91a08 Defaults Support
This adds support for loading settings from the defaults folder. Settings are loaded in the following order, with higher-numbered sources overriding lower-numbered ones where needed.

1. The model config file.
2. The defaults folder.
3. The user's defined settings file.

With this support we can begin to ship better defaults for models we do not manage. Our community tuners have been most helpful at adding good defaults to their configuration files, but for other models, such as the base models, this gives us the flexibility to define better settings for each model without messing with a user's desired settings if they already exist.
2022-06-01 10:34:16 +02:00
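As a rough illustration of the layered loading described in the entry above, a minimal sketch assuming JSON settings files and hypothetical paths (not KoboldAI's actual loader):

```
import json
import os

def load_layered_settings(model_config_path, defaults_path, user_settings_path):
    # Merge settings so later sources override earlier ones:
    # model config -> defaults folder -> user-defined settings file.
    settings = {}
    for path in (model_config_path, defaults_path, user_settings_path):
        if path and os.path.isfile(path):
            with open(path) as f:
                settings.update(json.load(f))
    return settings
```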
707316de31 Kaggle TPU support 2022-05-31 12:20:16 -04:00
1a1f2f6428 30B RAM requirements 2022-05-31 13:17:06 +02:00
69da5b7bc2 Update list of transformers versions that have broken OPT 2022-05-28 23:44:19 -04:00
4b65ce9c76 1.18 version bump 2022-05-28 19:39:05 +02:00
b30370bf4b 2048 maxtoken default
Almost everyone prefers 2048 max tokens because of the superior coherency. It should only be lower due to RAM limits, but the menu already shows the optimal RAM for 2048. Negatively affected users can turn it down themselves; for everyone else, especially on rented machines or Colab, 2048 is a better default.
2022-05-27 01:23:48 +02:00
c692987e40 Fix an error that occurs when loading GPT-2 models
I forgot that the new_rebuild_tensor function's first argument has a
different type when loading GPT-2 models.
2022-05-20 14:54:49 -04:00
6ae7b48b69 Adding Nerys model 13B 2022-05-18 13:50:57 +02:00
f0df3de610 Adding Nerys model 2.7B 2022-05-16 09:50:45 +02:00
d5ab3ef5b1 Fix no attribute get_checkpoint_shard_files 2022-05-14 11:49:04 -04:00
1476e76cfc Copy fp16 model files instead of resaving them 2022-05-14 00:45:43 -04:00
0c5ca5261e Loading a sharded model will now display only one progress bar 2022-05-13 23:32:16 -04:00
f9f1a5f3a9 Make sure tqdm progress bars display properly in Colab 2022-05-13 17:37:45 -04:00
91d3672446 Proper progress bar for aria2 downloads 2022-05-13 17:00:10 -04:00
7ea0c49c1a Merge pull request #128 from VE-FORBRYDERNE/opt
OPT breakmodel and TPU support
2022-05-13 18:07:02 +02:00
1200173386 Custom badwords for OPT
Generated using:
```
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("facebook/opt-350m", use_fast=False)
badwordsids_opt = [[v] for k, v in tokenizer.vocab.items() if any(c in k for c in "<>[]")]
```
2022-05-13 10:45:28 -04:00
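For context, a hedged sketch of how such a list is typically consumed: bad_words_ids is a standard argument of transformers' generate(), but the surrounding snippet is illustrative rather than KoboldAI's actual code.

```
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("facebook/opt-350m", use_fast=False)
model = transformers.AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# Same construction as above: single-token sequences for tokens containing bracket characters.
badwordsids_opt = [[v] for k, v in tokenizer.vocab.items() if any(c in k for c in "<>[]")]

inputs = tokenizer("Once upon a time", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20, bad_words_ids=badwordsids_opt)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```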