ebolam
cc3ccb7f36
Text Token Length Fix
2022-07-04 15:05:20 -04:00
ebolam
b79ec8b1c5
Fix
2022-07-01 19:54:33 -04:00
ebolam
63f44f8204
Fix for select option
2022-07-01 19:40:34 -04:00
ebolam
516564ef6c
Initial Load Story dialog
2022-07-01 19:24:20 -04:00
ebolam
0161966cea
Env Fix
2022-07-01 17:24:06 -04:00
ebolam
6e841b87eb
Fix for env
2022-07-01 16:53:51 -04:00
ebolam
73ad11c6d7
Fix for env variables
2022-07-01 16:49:47 -04:00
ebolam
ca07fdbe44
Added ability to use env variables instead of argparse (command argument = docker env variable)
2022-07-01 16:09:13 -04:00
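A minimal sketch of the mapping this commit describes, assuming each command-line argument falls back to a same-named Docker environment variable; the flag and variable names here are illustrative, not actual KoboldAI arguments:

```python
import argparse
import os

parser = argparse.ArgumentParser()
# Hypothetical flag for illustration only.
parser.add_argument("--model", default=None)
args = parser.parse_args()

# Command argument = docker env variable: --model falls back to $MODEL.
model = args.model if args.model is not None else os.environ.get("MODEL")
```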
ebolam
40bcf893d5
Preset Updates
2022-07-01 14:54:40 -04:00
ebolam
a56ef086e4
Estimate of chunks to be generated
2022-07-01 11:27:43 -04:00
ebolam
9170aa7a4e
Model Loading functional
...
Fix for mobile display
2022-07-01 08:09:10 -04:00
ebolam
16c5c580db
Checkin
2022-06-30 13:40:47 -04:00
ebolam
ce1bff1b84
TPU fixes
2022-06-29 17:56:25 -04:00
ebolam
72827ed149
Colab fix and send_to_ui fix
2022-06-29 17:44:22 -04:00
ebolam
de73aa2364
Single vars working, with the framework for the multi-story multi-user environment disabled (Lua breaks)
2022-06-29 14:15:06 -04:00
ebolam
0ffaa1bfcf
Presets and Remaining time updates
2022-06-27 18:36:22 -04:00
ebolam
057f3dd92d
back, redo, retry functional
2022-06-26 21:06:06 -04:00
ebolam
b906742f61
Working options.
2022-06-26 16:36:07 -04:00
ebolam
4c357abd78
metadata merged with actions
2022-06-24 09:22:59 -04:00
ebolam
b0ac4581de
UI v2 Initial Commit
2022-06-22 18:39:09 -04:00
ebolam
86553d329c
Merge United
2022-06-22 14:32:58 -04:00
ebolam
83c0b9ee1e
Vars Migration Fix for back/redo
...
Fix for pytest for back/redo and model loading with disk caching
2022-06-22 14:13:44 -04:00
ebolam
13fcf462e9
Moved VARS to koboldai_settings and broke it into model, story, user, and system variables. The story class was also re-written to include options (actions_metadata); actions_metadata will be removed in UI2.
2022-06-22 11:14:37 -04:00
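One possible shape of the split this commit describes, as a sketch only; the class and field names beyond koboldai_settings, actions, and actions_metadata are assumptions:

```python
# koboldai_settings.py (sketch): the old monolithic vars object broken
# into grouped settings classes, as the commit message describes.
class model_settings:
    def __init__(self):
        self.model = ""        # loaded model name/path (illustrative field)

class story_settings:
    def __init__(self):
        self.actions = []      # story actions, now carrying their options

class user_settings:
    def __init__(self):
        self.username = ""     # illustrative field

class system_settings:
    def __init__(self):
        self.aibusy = False    # illustrative field

class koboldai_vars:
    """Facade bundling the four settings groups (name is an assumption)."""
    def __init__(self):
        self.model = model_settings()
        self.story = story_settings()
        self.user = user_settings()
        self.system = system_settings()
```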
8593bf339b
Another typo fix
2022-06-21 15:36:25 -04:00
7e0ded6b47
Typo fix
2022-06-21 15:12:55 -04:00
91643be10a
Change soft prompt implementation to a more universal one
2022-06-21 15:03:43 -04:00
0ea4fa9c87
Automatically calculate badwords and pad_token_id
2022-06-21 14:35:52 -04:00
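A hedged sketch of what "automatically calculate" could look like here: derive bad-word token ids from the tokenizer's own vocabulary instead of hardcoding them, and reuse the eos id when no pad id is set. The bracket heuristic is an assumption, not necessarily the commit's exact rule:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # placeholder model

# Treat special-looking vocabulary entries (e.g. <|endoftext|>) as bad
# words so generation never emits them; derive the list from the vocab.
bad_words_ids = [
    [token_id]
    for token, token_id in tok.get_vocab().items()
    if token.startswith("<") and token.endswith(">")
]

# Many causal LMs define no pad token; fall back to eos as pad.
pad_token_id = tok.pad_token_id if tok.pad_token_id is not None else tok.eos_token_id
```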
6b172306f6
move_model_to_devices no longer crashes if you don't have accelerate
2022-06-21 13:15:46 -04:00
ff69e9fbfe
Put layers_module_names, module_names and named_buffers in utils.py
2022-06-20 17:17:42 -04:00
1620ac4148
Lazy loader needs to cache named buffers of layers in the disk cache
2022-06-20 17:08:52 -04:00
ab5ab79003
Set primary device to CPU if in CPU-only mode
2022-06-20 16:25:01 -04:00
bd7d7b41a1
Don't enable accelerate if no layers are in disk cache or GPUs
2022-06-20 16:21:44 -04:00
90fd8b1845
Disk cache support in CPU-only mode
2022-06-20 16:06:09 -04:00
af07d7a15f
Disk cache support for computers with at least one GPU
2022-06-20 14:49:54 -04:00
47a58a36b8
Add disk cache slider
2022-06-19 22:53:30 -04:00
4dd59e0a9d
Correct the type hint for lazy_load_callback
2022-06-19 17:17:41 -04:00
21de36c4b0
Lazy loader now moves all non-layer weights to primary device
2022-06-19 16:44:23 -04:00
26c319519e
Lazy loader now attempts to pin layers if accelerate is enabled
2022-06-19 16:35:23 -04:00
042cf3e560
Automatically support soft prompts for all transformers models
2022-06-19 13:11:58 -04:00
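A sketch of the universal approach this commit alludes to, under assumptions (the function and tensor names are illustrative): rather than patching each model class, prepend the soft-prompt vectors to the token embeddings and feed the result through inputs_embeds, which transformers models generally accept:

```python
import torch

def apply_soft_prompt(model, input_ids, soft_prompt):
    """Prepend learned soft-prompt vectors (shape: n_tokens x hidden) to
    the token embeddings; the result is passed via inputs_embeds."""
    embeds = model.get_input_embeddings()(input_ids)  # (batch, seq, hidden)
    soft = soft_prompt.unsqueeze(0).expand(embeds.size(0), -1, -1)
    return torch.cat([soft, embeds], dim=1)
```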
cc56718a7e
Fix lazy loader putting too many layers on CPU
2022-06-19 00:29:35 -04:00
1380eb0bb0
Disable lazy loader when using GPT-2
2022-06-18 23:54:11 -04:00
f9732eb143
Always enable breakmodel if accelerate is available
2022-06-18 23:46:09 -04:00
8b4efc5d0a
Use accelerate.dispatch_model() instead of breakmodel if possible
2022-06-18 23:41:36 -04:00
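A sketch of the accelerate path this commit refers to; the model name and memory limits are placeholders:

```python
from accelerate import dispatch_model, infer_auto_device_map
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

# Let accelerate decide which layers live on GPU 0 and which stay in CPU
# RAM, then scatter the model accordingly instead of using breakmodel.
device_map = infer_auto_device_map(model, max_memory={0: "6GiB", "cpu": "12GiB"})
model = dispatch_model(model, device_map=device_map)
```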
f7ffdd7b6b
Add more model querying utilities
2022-06-18 18:16:56 -04:00
e143963161
Merge branch 'united' into accelerate
2022-06-18 13:47:38 -04:00
henk717
b209cf9868
NS mode as default
...
Experimental change that makes NS the default. More and more models seem to require it as megatron-based models gain traction, and it does not appear to break the original models (the exception being that </s> cannot appear in generated outputs; in the extremely rare case someone is affected by this, they can manually switch the mode by editing their settings file).
If this breaks nothing, ns will remain the default; however, the n mode should remain a choice for those who need it. In case this does get reversed, I have also added the bloom model type to the ns list, since its models require this.
2022-06-18 19:46:16 +02:00
0eedc541c8
Merge branch 'main' into united-merge
2022-06-18 13:39:23 -04:00
5e71f7fe97
Use slow tokenizer if fast tokenizer is not available
2022-06-17 21:08:37 -04:00
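The fallback this commit describes, as a minimal sketch (the model name is a placeholder):

```python
from transformers import AutoTokenizer

model_name = "facebook/opt-350m"  # placeholder

# Prefer the fast (Rust) tokenizer; fall back to the slow Python
# implementation when a fast version can't be built for the model.
try:
    tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
except Exception:
    tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
```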
f71bae254a
Fix OPT tokenization problems
2022-06-17 13:29:42 -04:00
ebolam
2964175d8b
Fix for flaskwebgui
2022-06-17 08:17:22 -04:00