Commit Graph

685 Commits

Author SHA1 Message Date
vfbd 048bd0ff3b Add support for setting the RNG seed and full determinism 2022-06-28 13:21:05 -04:00
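(The determinism commit above does not show its mechanics here. As a rough illustration only, seeding every RNG a PyTorch generation path touches typically looks like the sketch below; the `set_seed` helper name is hypothetical and this is not the repository's actual implementation.)

```python
import random

import numpy as np
import torch

def set_seed(seed: int) -> None:
    """Seed every RNG the generation path may draw from."""
    random.seed(seed)        # Python's built-in RNG
    np.random.seed(seed)     # NumPy RNG
    torch.manual_seed(seed)  # seeds both CPU and all CUDA RNGs
    # Opt into deterministic kernels; some ops get slower or unsupported.
    torch.use_deterministic_algorithms(True)
```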
ebolam edd6dd7cd7 Fix for saved breakmodel settings on custom models 2022-06-27 10:12:54 -04:00
    Fix for unit tests with new disk breakmodel
Henk 46678931b2 Better sentence spacing 2022-06-26 20:27:21 +02:00
vfbd b99d1449c9 Remove trailing whitespace from submissions 2022-06-26 13:15:55 -04:00
Henk fa97d28cb3 Nerys V2 for United 2022-06-25 14:06:51 +02:00
Henk 9e7eb80db4 Nerys V2 part 2 2022-06-25 14:03:19 +02:00
Henk ecc6ee9474 Nerys V2 2022-06-25 13:47:49 +02:00
henk717 10e85db89d Merge pull request #162 from VE-FORBRYDERNE/whitespace-cleanup 2022-06-25 13:36:03 +02:00
    Story whitespace cleanup
Henk d3fce44095 Merge branch 'main' into united 2022-06-24 18:31:45 +02:00
Henk 8be0964427 AIDG Import Fix 2022-06-24 18:29:06 +02:00
vfbd 4b16600e49 Clean up whitespace at the end of actions when loading story 2022-06-24 12:03:35 -04:00
    Specifically, we merge blank actions into the next action and we move
    whitespace at the end of non-blank actions to the beginning of the
    next action.
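(As an illustration of the cleanup that commit message describes — a sketch under assumed types, not the repository's code; `clean_story_actions` is a hypothetical name and actions are treated as plain strings:)

```python
def clean_story_actions(actions):
    """Merge blank actions into the next action, and move trailing
    whitespace of non-blank actions to the start of the next action."""
    cleaned = []
    carry = ""  # blank actions / trailing whitespace carried forward
    for action in actions:
        if action.strip() == "":
            carry += action           # blank action merges into the next
            continue
        body = action.rstrip()
        cleaned.append(carry + body)
        carry = action[len(body):]    # trailing whitespace moves forward
    if carry and cleaned:
        cleaned[-1] += carry          # nothing follows the last action
    return cleaned
```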
vfbd 3da885d408 GPT-NeoX HF model badwords fix 2022-06-23 15:02:43 -04:00
henk717 8098f4ec8f Merge branch 'KoboldAI:main' into united 2022-06-23 17:20:48 +02:00
vfbd 0eb9f8a879 Account for lnheader in budget calculation 2022-06-22 19:16:24 -04:00
vfbd 53034ee533 Delete all torch tensors before loading model 2022-06-22 12:07:36 -04:00
vfbd 922394c68f Don't blacklist </s> token in "s" newline mode 2022-06-22 11:23:03 -04:00
Gnome Ann 8c594c6869 Correct the padding token for GPT-NeoX 2022-06-21 19:37:43 -04:00
Gnome Ann a7f667c34c Use NeoX badwords when loading from HF GPT-NeoX model 2022-06-21 19:33:25 -04:00
Gnome Ann 8593bf339b Another typo fix 2022-06-21 15:36:25 -04:00
Gnome Ann 7e0ded6b47 Typo fix 2022-06-21 15:12:55 -04:00
Gnome Ann 91643be10a Change soft prompt implementation to a more universal one 2022-06-21 15:03:43 -04:00
Gnome Ann 0ea4fa9c87 Automatically calculate badwords and pad_token_id 2022-06-21 14:35:52 -04:00
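(The automatic badwords/pad_token_id commit suggests deriving both from the tokenizer rather than hardcoding them per model family; the `</s>` commit further up keeps the EOS token allowed in "s" newline mode. A hedged sketch of that idea — the exact rules the repo uses are not shown here:)

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")

# Block the EOS token by default so generation cannot end mid-story;
# in "s" newline mode </s> doubles as the newline and must stay allowed.
newline_mode = "n"
bad_words_ids = [] if newline_mode == "s" else [[tokenizer.eos_token_id]]

# Many causal LMs define no pad token; fall back to EOS.
pad_token_id = tokenizer.pad_token_id
if pad_token_id is None:
    pad_token_id = tokenizer.eos_token_id
```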
Gnome Ann 6b172306f6 move_model_to_devices no longer crashes if you don't have accelerate 2022-06-21 13:15:46 -04:00
Gnome Ann ff69e9fbfe Put layers_module_names, module_names and named_buffers in utils.py 2022-06-20 17:17:42 -04:00
Gnome Ann 1620ac4148 Lazy loader needs to cache named buffers of layers in the disk cache 2022-06-20 17:08:52 -04:00
Gnome Ann ab5ab79003 Set primary device to CPU if in CPU-only mode 2022-06-20 16:25:01 -04:00
Gnome Ann bd7d7b41a1 Don't enable accelerate if no layers are in disk cache or GPUs 2022-06-20 16:21:44 -04:00
Gnome Ann 90fd8b1845 Disk cache support in CPU-only mode 2022-06-20 16:06:09 -04:00
Gnome Ann af07d7a15f Disk cache support for computers with at least one GPU 2022-06-20 14:49:54 -04:00
Gnome Ann 47a58a36b8 Add disk cache slider 2022-06-19 22:53:30 -04:00
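(The three disk-cache commits above line up with accelerate's generic disk-offload facility; whether KoboldAI routes through it exactly this way is not shown in the log. A plausible sketch, with an illustrative model id and folder name:)

```python
from transformers import AutoModelForCausalLM

# device_map="auto" lets accelerate place layers on GPUs first, then in
# CPU RAM, and spill whatever remains into the offload folder on disk.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-2.7b",
    device_map="auto",
    offload_folder="disk_cache",  # folder name is illustrative
)
```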
Gnome Ann 4dd59e0a9d Correct the type hint for lazy_load_callback 2022-06-19 17:17:41 -04:00
Gnome Ann 21de36c4b0 Lazy loader now moves all non-layer weights to primary device 2022-06-19 16:44:23 -04:00
Gnome Ann 26c319519e Lazy loader now attempts to pin layers if accelerate is enabled 2022-06-19 16:35:23 -04:00
Gnome Ann 042cf3e560 Automatically support soft prompts for all transformers models 2022-06-19 13:11:58 -04:00
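(A generic way to support soft prompts across transformers models, as the commit above aims for, is to splice the learned vectors in front of the token embeddings and feed the result as `inputs_embeds`. A minimal sketch of that idea, not the repository's exact hook:)

```python
import torch

def prepend_soft_prompt(model, input_ids, soft_prompt):
    """Splice learned soft-prompt vectors in front of the token embeddings.

    soft_prompt: tensor of shape (prompt_len, hidden_size).
    """
    embeds = model.get_input_embeddings()(input_ids)  # (batch, seq, hidden)
    sp = soft_prompt.to(embeds.dtype).unsqueeze(0)
    sp = sp.expand(embeds.shape[0], -1, -1)           # broadcast over batch
    return torch.cat([sp, embeds], dim=1)             # pass as inputs_embeds
```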
Gnome Ann cc56718a7e Fix lazy loader putting too many layers on CPU 2022-06-19 00:29:35 -04:00
Gnome Ann 1380eb0bb0 Disable lazy loader when using GPT-2 2022-06-18 23:54:11 -04:00
Gnome Ann f9732eb143 Always enable breakmodel if accelerate is available 2022-06-18 23:46:09 -04:00
Gnome Ann 8b4efc5d0a Use `accelerate.dispatch_model()` instead of breakmodel if possible 2022-06-18 23:41:36 -04:00
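(`accelerate.dispatch_model()` is the library routine the commit names; used on its own it looks roughly like this, with illustrative memory budgets:)

```python
from accelerate import dispatch_model, infer_auto_device_map
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

# Split the layers across GPU 0 and CPU RAM within the given budgets.
device_map = infer_auto_device_map(
    model, max_memory={0: "6GiB", "cpu": "24GiB"}
)
model = dispatch_model(model, device_map=device_map)
```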
Gnome Ann f7ffdd7b6b Add more model querying utilities 2022-06-18 18:16:56 -04:00
Gnome Ann e143963161 Merge branch 'united' into accelerate 2022-06-18 13:47:38 -04:00
henk717 b209cf9868 NS mode as default 2022-06-18 19:46:16 +02:00
    Experimental change that makes NS the default. More and more models
    seem to require it as Megatron-based models gain traction, and it
    does not appear to break the original models (with the exception
    that a user can no longer use </s> in generated outputs; in the
    extremely rare case someone is affected by this, they can manually
    switch the mode by editing their settings file).

    If this breaks nothing, NS will remain the default, but N mode
    should remain a choice for those who need it. In case the change
    does get reverted, I have also added the bloom model type to the NS
    list, since its models require this.
Gnome Ann 0eedc541c8 Merge branch 'main' into united-merge 2022-06-18 13:39:23 -04:00
Gnome Ann 5e71f7fe97 Use slow tokenizer if fast tokenizer is not available 2022-06-17 21:08:37 -04:00
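(A fallback of the kind the commit above describes can be as simple as the sketch below; the model id is illustrative and the repo's actual error handling may differ:)

```python
from transformers import AutoTokenizer

model_id = "facebook/opt-350m"  # illustrative model id
try:
    tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
except Exception:
    # Not every model ships a fast (Rust) tokenizer; fall back to the
    # slow Python implementation.
    tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
```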
Gnome Ann f71bae254a Fix OPT tokenization problems 2022-06-17 13:29:42 -04:00
ebolam 2964175d8b Fix for flaskwebgui 2022-06-17 08:17:22 -04:00
Henk f112fc3493 Initial flaskwebgui support 2022-06-17 13:49:03 +02:00
Gnome Ann 8bdf17f598 Lazy loader can now use accelerate's `init_empty_weights()` 2022-06-16 18:56:16 -04:00
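(`init_empty_weights()` instantiates a model skeleton on the "meta" device, so no RAM is spent on weights that the lazy loader will stream in later. A minimal sketch with an illustrative model id:)

```python
from accelerate import init_empty_weights
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("EleutherAI/gpt-j-6B")
with init_empty_weights():
    # Parameters live on the "meta" device: shapes only, no memory.
    model = AutoModelForCausalLM.from_config(config)
# Real tensors are then loaded lazily and assigned into the skeleton.
```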
Gnome Ann 5253cdcb36 Lazy loader no longer requires map file except when loading to TPU 2022-06-16 18:45:11 -04:00
Gnome Ann 96d3d397ab Don't use fallback loading if we run out of memory during loading 2022-06-15 14:35:32 -04:00
Henk fb2b6f1026 Model Path Hardening 2022-06-15 13:29:10 +02:00