henk717
8098f4ec8f
Merge branch 'KoboldAI:main' into united
2022-06-23 17:20:48 +02:00
henk717
1d41966d88
Merge pull request #129 from VE-FORBRYDERNE/budget
Account for lnheader in budget calculation
2022-06-23 12:39:20 +02:00
vfbd
0eb9f8a879
Account for lnheader in budget calculation
2022-06-22 19:16:24 -04:00
henk717
3de22f2b27
Merge pull request #160 from VE-FORBRYDERNE/gc
Delete all torch tensors before loading model
2022-06-22 18:38:21 +02:00
vfbd
53034ee533
Delete all torch tensors before loading model
2022-06-22 12:07:36 -04:00
henk717
f127918114
Merge pull request #159 from VE-FORBRYDERNE/fairseq
Don't blacklist </s> token in "s" newline mode
2022-06-22 17:41:20 +02:00
vfbd
922394c68f
Don't blacklist </s> token in "s" newline mode
2022-06-22 11:23:03 -04:00
Henk
d4e18360f0
HF NeoX Support
2022-06-22 01:46:40 +02:00
henk717
5f9a116052
Merge pull request #128 from VE-FORBRYDERNE/neox
TPU support for HF GPT-NeoX model
2022-06-22 01:39:53 +02:00
Gnome Ann
8c594c6869
Correct the padding token for GPT-NeoX
2022-06-21 19:37:43 -04:00
Gnome Ann
a7f667c34c
Use NeoX badwords when loading from HF GPT-NeoX model
2022-06-21 19:33:25 -04:00
Gnome Ann
5e3c7c07ae
Merge branch 'main' into neox
2022-06-21 19:30:51 -04:00
henk717
f1d0a327f8
Merge branch 'KoboldAI:main' into united
2022-06-21 23:34:32 +02:00
Henk
75bc472a9f
Transformers bump to 4.20.1
Transformers made a breaking change to the OPT models that leaves them incompatible with all older versions. For people to be able to use every model on the menu they need 4.20.1, so this version is now forced in the dependencies, making the update easier.
2022-06-21 23:33:38 +02:00
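The pin described in the commit above would appear in the project's dependency files; a minimal sketch of such an entry (the exact requirements file name in the repo is an assumption):

```
transformers==4.20.1
```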
henk717
b5b8e5a30b
Merge branch 'KoboldAI:main' into united
2022-06-21 23:19:57 +02:00
henk717
2be1f5088f
Merge pull request #126 from VE-FORBRYDERNE/opt
Update OPT models and fix 20B model on TPU
2022-06-21 23:19:03 +02:00
Gnome Ann
33a2a318db
Fix 20B TPU model
2022-06-21 17:16:01 -04:00
Gnome Ann
a7e3ef71aa
Add final layer norm to OPT
2022-06-21 16:36:26 -04:00
henk717
37eb47d0d3
Merge pull request #157 from VE-FORBRYDERNE/sp-fix
Bug fixes and new soft prompt implementation
2022-06-21 22:20:36 +02:00
Gnome Ann
8593bf339b
Another typo fix
2022-06-21 15:36:25 -04:00
Gnome Ann
7e0ded6b47
Typo fix
2022-06-21 15:12:55 -04:00
Gnome Ann
91643be10a
Change soft prompt implementation to a more universal one
2022-06-21 15:03:43 -04:00
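A "more universal" soft prompt implementation, as in the commit above, generally means prepending the tuned prompt vectors to the token embeddings instead of patching each model class individually. A minimal illustration of the idea in plain Python (the function name and list-of-vectors representation are illustrative, not the actual implementation, which would operate on tensors):

```python
def apply_soft_prompt(soft_prompt, token_embeds):
    """Prepend tuned soft-prompt vectors to the token embeddings.

    soft_prompt and token_embeds are lists of embedding vectors;
    a real implementation would concatenate tensors along the
    sequence dimension instead.
    """
    return soft_prompt + token_embeds
```

Because the model then runs on the combined embedding sequence, any model that accepts precomputed input embeddings can be supported without per-architecture code.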
Gnome Ann
0ea4fa9c87
Automatically calculate badwords and pad_token_id
2022-06-21 14:35:52 -04:00
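Calculating badwords and pad_token_id automatically, as the commit above describes, plausibly involves scanning the tokenizer vocabulary for control-style tokens and falling back to the EOS id for padding. A hypothetical sketch (the real heuristic in the commit may differ):

```python
def calc_badwords_and_pad(vocab, eos_token_id):
    """Collect token ids whose surface form looks like a control
    token (contains '<'), and use the EOS id as the pad token,
    a common fallback for models that define no pad token."""
    badwords = [tid for tok, tid in vocab.items() if "<" in tok]
    pad_token_id = eos_token_id
    return badwords, pad_token_id
```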
Gnome Ann
ea7d278ff4
Fix 20B TPU model
2022-06-21 13:16:45 -04:00
Gnome Ann
6b172306f6
move_model_to_devices no longer crashes if you don't have accelerate
2022-06-21 13:15:46 -04:00
henk717
f2c5bb5cb7
Merge pull request #156 from VE-FORBRYDERNE/accelerate
Accelerate disk cache support
2022-06-21 00:31:50 +02:00
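The disk cache commits in this pull request amount to extending the layer-assignment logic so each transformer layer lands on a GPU, the CPU, or accelerate's disk offload. A simplified sketch of building such a device map (the function and module names are hypothetical; the actual code derives names from the loaded model):

```python
def build_device_map(n_layers, gpu_layers, cpu_layers):
    """Map transformer layers to GPU 0, CPU, or disk, in order.

    Layers beyond the GPU and CPU budgets are spilled to "disk",
    which accelerate's dispatch machinery offloads to a folder.
    """
    device_map = {}
    for i in range(n_layers):
        if i < gpu_layers:
            device_map[f"transformer.h.{i}"] = 0      # first GPU
        elif i < gpu_layers + cpu_layers:
            device_map[f"transformer.h.{i}"] = "cpu"
        else:
            device_map[f"transformer.h.{i}"] = "disk"
    return device_map
```

The disk cache slider from the commit below would then control how many layers fall into the final branch.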
Gnome Ann
ff69e9fbfe
Put layers_module_names, module_names and named_buffers in utils.py
2022-06-20 17:17:42 -04:00
Gnome Ann
1620ac4148
Lazy loader needs to cache named buffers of layers in the disk cache
2022-06-20 17:08:52 -04:00
Gnome Ann
ab5ab79003
Set primary device to CPU if in CPU-only mode
2022-06-20 16:25:01 -04:00
Gnome Ann
bd7d7b41a1
Don't enable accelerate if no layers are in disk cache or GPUs
2022-06-20 16:21:44 -04:00
Gnome Ann
90fd8b1845
Disk cache support in CPU-only mode
2022-06-20 16:06:09 -04:00
Gnome Ann
af07d7a15f
Disk cache support for computers with at least one GPU
2022-06-20 14:49:54 -04:00
Gnome Ann
47a58a36b8
Add disk cache slider
2022-06-19 22:53:30 -04:00
henk717
efed44ac8d
Merge pull request #155 from VE-FORBRYDERNE/accelerate
Initial support for Accelerate
2022-06-20 01:08:54 +02:00
Gnome Ann
4dd59e0a9d
Correct the type hint for lazy_load_callback
2022-06-19 17:17:41 -04:00
Gnome Ann
21de36c4b0
Lazy loader now moves all non-layer weights to primary device
2022-06-19 16:44:23 -04:00
Gnome Ann
26c319519e
Lazy loader now attempts to pin layers if accelerate is enabled
2022-06-19 16:35:23 -04:00
Gnome Ann
042cf3e560
Automatically support soft prompts for all transformers models
2022-06-19 13:11:58 -04:00
Gnome Ann
cc56718a7e
Fix lazy loader putting too many layers on CPU
2022-06-19 00:29:35 -04:00
Gnome Ann
1380eb0bb0
Disable lazy loader when using GPT-2
2022-06-18 23:54:11 -04:00
Gnome Ann
f9732eb143
Always enable breakmodel if accelerate is available
2022-06-18 23:46:09 -04:00
Gnome Ann
8b4efc5d0a
Use `accelerate.dispatch_model()` instead of breakmodel if possible
2022-06-18 23:41:36 -04:00
Gnome Ann
f7ffdd7b6b
Add more model querying utilities
2022-06-18 18:16:56 -04:00
Gnome Ann
e143963161
Merge branch 'united' into accelerate
2022-06-18 13:47:38 -04:00
henk717
b209cf9868
NS mode as default
Experimental change that makes NS the default. More and more models seem to require it as Megatron-based models gain traction, and it does not appear to break the original models (the exception being that users can no longer use </s> in generated outputs; in the extremely rare case someone is affected by this, they can switch the mode manually by editing their settings file).
If nothing breaks, NS will remain the default, but the N mode should remain a choice for those who need it. In case this does get reversed, I have also added the bloom model type to the NS list, since its models require this.
2022-06-18 19:46:16 +02:00
henk717
23aae24f8e
Merge pull request #154 from VE-FORBRYDERNE/united-merge
Merge main into united
2022-06-18 19:42:26 +02:00
Gnome Ann
0eedc541c8
Merge branch 'main' into united-merge
2022-06-18 13:39:23 -04:00
henk717
a10446f258
Merge pull request #123 from VE-FORBRYDERNE/tokenizer
Fix OPT tokenization problems
2022-06-18 11:38:14 +02:00
Gnome Ann
5e71f7fe97
Use slow tokenizer if fast tokenizer is not available
2022-06-17 21:08:37 -04:00
Gnome Ann
f71bae254a
Fix OPT tokenization problems
2022-06-17 13:29:42 -04:00