f857696224
OAI ConfigName Bugfix
2022-03-06 20:18:42 +01:00
3ddc9647eb
Basic GooseAI Support
2022-03-06 20:10:30 +01:00
daea4b8d15
Fix Breakmodel RAM Regression
2022-03-06 08:26:50 +01:00
105d3831b5
Lazy Load Float32 for CPU
2022-03-06 07:56:04 +01:00
373f7b9bd5
Don't convert tensors to float16 if using CPU-only mode
2022-03-05 14:30:26 -05:00
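A minimal sketch of the idea behind this fix, assuming a PyTorch backend (the function and flag names are illustrative, not KoboldAI's actual code):

    import torch

    def prepare_tensor(tensor: torch.Tensor, cpu_only: bool) -> torch.Tensor:
        # float16 arithmetic on CPU is slow and some ops are unsupported,
        # so keep float32 when no accelerator is involved.
        if cpu_only:
            return tensor.float()
        # Otherwise halve the memory footprint.
        return tensor.half()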
579e85820c
Resolve merge conflict
2022-03-05 14:13:56 -05:00
2e19ea1bb6
Auto detect if we're in a Colab TPU instance
2022-03-05 14:07:23 -05:00
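One common signal for a Colab TPU runtime is the COLAB_TPU_ADDR environment variable, which Colab sets on TPU instances; a sketch of such a check (the commit's actual detection may differ):

    import os

    def in_colab_tpu() -> bool:
        # Colab exports the TPU gRPC address in this variable on TPU runtimes.
        return bool(os.environ.get("COLAB_TPU_ADDR"))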
4a8d7f5e0b
Merge branch 'henk717:united' into united
2022-03-05 13:25:10 -05:00
0a258a6282
Support for loading HF models on TPU with --colab_tpu
2022-03-05 12:33:33 -05:00
86ac562b0c
Lazy loader should convert model tensors to float16 before moving them
2022-03-05 11:31:34 -05:00
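The rationale: converting while a tensor is still on the host halves the bytes transferred and avoids a transient float32 copy in device memory. A sketch of the ordering, assuming a CUDA device (illustrative, not the loader's actual code):

    import torch

    weight = torch.randn(4096, 4096)  # float32 checkpoint tensor on the host

    # Convert first: half as many bytes cross the bus, and the device
    # never holds a float32 copy of the weight.
    w = weight.half().to("cuda")

    # By contrast, weight.to("cuda").half() would briefly allocate the
    # full float32 tensor in device memory before converting.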
4dd119c38d
Redo no longer goes through the formatting function (which was altering the redone text)
2022-03-05 11:15:33 -05:00
353817b4da
Remove debug print statements
2022-03-05 10:35:06 -05:00
221f264fa7
Redo fix. The actions structure no longer errors out when next_id is requested while the actions list is empty.
2022-03-05 10:31:28 -05:00
a00dede610
Put the XGLM embedding patch behind a version check
2022-03-04 19:10:15 -05:00
5674516f0c
Merge branch 'united' into lazy-loader
2022-03-04 18:27:51 -05:00
5f92cbc231
Merge branch 'united' of https://github.com/ebolam/KoboldAI into united
2022-03-04 15:37:34 -05:00
321f45ccad
Fix debug so it never crashes (it previously could on some initialization steps)
2022-03-04 15:36:13 -05:00
ee883fc4da
Merge branch 'henk717:united' into united
2022-03-04 14:15:16 -05:00
26b9268391
Redo bug fix
2022-03-04 14:14:44 -05:00
eb247d69c3
Merge branch 'KoboldAI:main' into united
2022-03-04 18:24:56 +01:00
a1fedca2c8
Use lazy loading automatically if a config file exists for the model
2022-03-04 11:11:33 -05:00
ae143e896c
Fixed unnecessary spacing in chatmode
This makes it go from "john :" to "John:", as it's supposed to be. As simple as it is, it can easily throw a chatbot model for a loop.
2022-03-04 08:46:00 -06:00
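A sketch of the kind of normalization this describes, turning "john :" into "John:" before the text reaches the model (the helper and regex are hypothetical, not the commit's actual code):

    import re

    def normalize_speaker(line: str) -> str:
        # Drop the stray space before the colon and capitalize the name,
        # e.g. 'john :' -> 'John:'.
        return re.sub(r"^\s*(\w+)\s*:",
                      lambda m: m.group(1).capitalize() + ":",
                      line)

    print(normalize_speaker("john : hello"))  # John: hello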
f0629958b1
Merge branch 'united' into lazy-loader
2022-03-04 00:37:25 -05:00
58a2c18821
Add lazy torch loading support to transformers backend
2022-03-04 00:33:10 -05:00
e033b04f87
Restore United
2022-03-02 11:40:50 +01:00
f9ac23ba4e
Add Janeway and Shinen
2022-03-02 09:51:25 +01:00
3f73f84b69
Bug fix
2022-02-28 19:04:12 -05:00
6003b2369b
Debug and load-story fixes for the actions_metadata variable
2022-02-28 10:39:36 -05:00
47d102635e
Merge branch 'united' into united
2022-02-28 08:37:45 -05:00
7803fbb137
Fixed error in redo action when editing previous entries and/or editing right after a redo
2022-02-28 08:31:26 -05:00
13fe472264
Menu Polish
2022-02-28 02:47:15 +01:00
f628929401
Merge pull request #85 from VE-FORBRYDERNE/sp
Fix a bug with soft prompts when using transformers XGLM
2022-02-28 02:33:18 +01:00
4849a30d88
Merge pull request #84 from mrseeker/patch-3
Added KoboldAI/fairseq-dense-2.7B-Janeway
2022-02-28 02:33:07 +01:00
a466e13c00
Model List Support
2022-02-26 12:34:07 +01:00
a22d59e191
Fix a bug with soft prompts when using transformers XGLM
2022-02-25 12:35:23 -05:00
0a7376a711
Added KoboldAI/fairseq-dense-2.7B-Janeway
With pleasure I am introducing KoboldAI/fairseq-dense-2.7B-Janeway.
2022-02-24 09:00:56 +01:00
072ca87977
Load soft prompt at the end instead of inside loadsettings()
2022-02-23 21:15:08 -05:00
8120e4dfa2
Need to set vars.allowsp to True before calling loadsettings()
2022-02-23 21:09:31 -05:00
c45ba497c9
Load settings earlier to avoid TPU badwords issues
2022-02-23 20:39:11 -05:00
ac59e55d62
Smaller optimizations
2022-02-24 01:14:26 +01:00
8e9d9faa97
Merge pull request #82 from VE-FORBRYDERNE/tpu-config
Allow TPU models to specify settings/config in config.json
2022-02-24 00:53:40 +01:00
ad10ac8871
Allow TPU models to specify settings/config in config.json
2022-02-23 18:22:18 -05:00
7de3311000
Fix sentencepiece model saving
2022-02-23 22:04:41 +01:00
fd7ba9f70e
Also check for Config in models/
2022-02-22 19:22:08 +01:00
4ace11f5b8
Merge pull request #80 from VE-FORBRYDERNE/xglm-position-ids
Temporary fix for XGLM positional embedding issues
2022-02-21 00:47:20 +01:00
300db651de
Open models folder by default
2022-02-21 00:46:18 +01:00
da10e2dc1d
Don't crash if XGLMSinusoidalPositionalEmbedding doesn't exist
2022-02-20 17:41:00 -05:00
5dc4969173
Temporary fix for XGLM positional embedding issues
2022-02-20 14:17:24 -05:00
a63fa3b067
Prevent transformers XGLM from stopping generation on </s> token
2022-02-19 23:15:16 -05:00
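One way to get this behavior from transformers is to clear the model's EOS token id so generate() no longer treats </s> as a stop signal; a hedged sketch, not necessarily the commit's exact mechanism:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained("facebook/xglm-564M")
    tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-564M")

    # With no EOS token id configured, generation runs until
    # max_new_tokens instead of halting when the model emits </s>.
    model.config.eos_token_id = None

    inputs = tokenizer("Once upon a time", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=40)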
a47e93cee7
Separate Low Memory Mode
In 1.16 we had significantly faster loading speeds because we did not do as much memory conservation; it's time to give users the choice. If you want the original, faster behavior and have the memory, run KoboldAI as usual. Otherwise, run play-lowmem.bat or aiserver.py with --lowmem. For Colab this is still the default behavior, to avoid breaking models that would otherwise load fine.
2022-02-18 16:21:28 +01:00
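A minimal sketch of how such an opt-in flag might be wired up (--lowmem matches the commit; the argparse wiring and low_cpu_mem_usage as a stand-in for the actual conservation logic are assumptions):

    import argparse
    from transformers import AutoModelForCausalLM

    parser = argparse.ArgumentParser()
    # Off by default: the faster 1.16-style load path for users with RAM.
    parser.add_argument("--lowmem", action="store_true",
                        help="Enable memory-conserving model loading")
    args = parser.parse_args()

    load_kwargs = {"low_cpu_mem_usage": True} if args.lowmem else {}
    model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B",
                                                 **load_kwargs)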