472 Commits

Author SHA1 Message Date
ebolam
4a8d7f5e0b Merge branch 'henk717:united' into united 2022-03-05 13:25:10 -05:00
ebolam
4dd119c38d Redo no longer goes through formatting function (thereby getting changed) 2022-03-05 11:15:33 -05:00
ebolam
353817b4da Remove debug print statements 2022-03-05 10:35:06 -05:00
ebolam
221f264fa7 Redo fix. Fix for actions structure to not error out when asking for next_id when the actions list is empty. 2022-03-05 10:31:28 -05:00
Gnome Ann
a00dede610 Put the XGLM embedding patch behind a version check 2022-03-04 19:10:15 -05:00
ebolam
5f92cbc231 Merge branch 'united' of https://github.com/ebolam/KoboldAI into united 2022-03-04 15:37:34 -05:00
ebolam
321f45ccad Fix debug to never crash (would on some initialization steps) 2022-03-04 15:36:13 -05:00
ebolam
ee883fc4da Merge branch 'henk717:united' into united 2022-03-04 14:15:16 -05:00
ebolam
26b9268391 Redo bug fix 2022-03-04 14:14:44 -05:00
henk717
eb247d69c3 Merge branch 'KoboldAI:main' into united 2022-03-04 18:24:56 +01:00
MrReplikant
ae143e896c Fixed unnecessary spacing in chatmode
This makes it go from "john :" to "John:", as it's supposed to be. As simple as it is, it can easily throw a chatbot model for a loop.
2022-03-04 08:46:00 -06:00
henk717
e033b04f87 Restore United 2022-03-02 11:40:50 +01:00
henk717
f9ac23ba4e Add Janeway and Shinen 2022-03-02 09:51:25 +01:00
ebolam
3f73f84b69 bug fix 2022-02-28 19:04:12 -05:00
ebolam
6003b2369b Debug and load story fix for actions_metadata variable 2022-02-28 10:39:36 -05:00
ebolam
47d102635e Merge branch 'united' into united 2022-02-28 08:37:45 -05:00
ebolam
7803fbb137 Fixed error in redo action when editing previous entries and/or editing right after a redo 2022-02-28 08:31:26 -05:00
henk717
13fe472264 Menu Polish 2022-02-28 02:47:15 +01:00
henk717
f628929401 Merge pull request #85 from VE-FORBRYDERNE/sp
Fix a bug with soft prompts when using transformers XGLM
2022-02-28 02:33:18 +01:00
henk717
4849a30d88 Merge pull request #84 from mrseeker/patch-3
Added KoboldAI/fairseq-dense-2.7B-Janeway
2022-02-28 02:33:07 +01:00
henk717
a466e13c00 Model List Support 2022-02-26 12:34:07 +01:00
Gnome Ann
a22d59e191 Fix a bug with soft prompts when using transformers XGLM 2022-02-25 12:35:23 -05:00
Julius ter Pelkwijk
0a7376a711 Added KoboldAI/fairseq-dense-2.7B-Janeway
With pleasure I am introducing KoboldAI/fairseq-dense-2.7B-Janeway.
2022-02-24 09:00:56 +01:00
Gnome Ann
072ca87977 Load soft prompt at the end instead of inside loadsettings() 2022-02-23 21:15:08 -05:00
Gnome Ann
8120e4dfa2 Need to set vars.allowsp to True before calling loadsettings() 2022-02-23 21:09:31 -05:00
Gnome Ann
c45ba497c9 Load settings earlier to avoid TPU badwords issues 2022-02-23 20:39:11 -05:00
henk717
ac59e55d62 Smaller optimizations 2022-02-24 01:14:26 +01:00
henk717
8e9d9faa97 Merge pull request #82 from VE-FORBRYDERNE/tpu-config
Allow TPU models to specify settings/config in config.json
2022-02-24 00:53:40 +01:00
Gnome Ann
ad10ac8871 Allow TPU models to specify settings/config in config.json 2022-02-23 18:22:18 -05:00
henk717
7de3311000 Fix sentencepiece model saving 2022-02-23 22:04:41 +01:00
henk717
fd7ba9f70e Also check for Config in models/ 2022-02-22 19:22:08 +01:00
henk717
4ace11f5b8 Merge pull request #80 from VE-FORBRYDERNE/xglm-position-ids
Temporary fix for XGLM positional embedding issues
2022-02-21 00:47:20 +01:00
henk717
300db651de Open models folder by default 2022-02-21 00:46:18 +01:00
Gnome Ann
da10e2dc1d Don't crash if XGLMSinusoidalPositionalEmbedding doesn't exist 2022-02-20 17:41:00 -05:00
Gnome Ann
5dc4969173 Temporary fix for XGLM positional embedding issues 2022-02-20 14:17:24 -05:00
Gnome Ann
a63fa3b067 Prevent transformers XGLM from stopping generation on </s> token 2022-02-19 23:15:16 -05:00
henk717
a47e93cee7 Separate Low Memory Mode
In 1.16 we had significantly faster loading speeds because we did not do as much memory conservation; it's time to give users the choice. If you want the original, faster behavior and have the memory, run KoboldAI as usual. Otherwise run play-lowmem.bat or aiserver.py with --lowmem. For Colab this is still the default behavior, to avoid breaking models that would otherwise load fine.
2022-02-18 16:21:28 +01:00
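A minimal illustration of the choice described above, using the batch file and flag named in the commit message (launching aiserver.py with python is an assumption):

    play-lowmem.bat
    python aiserver.py --lowmem

Either command opts into the memory-conserving (slower-loading) mode; running KoboldAI as usual keeps the faster 1.16-style loading.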
henk717
8e03f1c612 Merge branch 'KoboldAI:main' into united 2022-02-18 14:21:34 +01:00
henk717
f06acb59be Add the Janeway model
New model released by Mr.Seeker
2022-02-18 14:18:41 +01:00
henk717
cba93e29d2 Update aiserver.py 2022-02-18 02:11:08 +01:00
henk717
76a6c124dd Quiet on Colab
Makes the Colab mode also automatically activate the Quiet mode to improve privacy. We should no longer need this in the Colab console thanks to the redo feature. Need something different for testing? Use --remote instead.
2022-02-18 02:07:40 +01:00
henk717
02246dfc4d Remote play improvements
Change the proposed --share to --unblock to make it more apparent what this feature does. The feature unblocks the port for external access, but does not add remote play support. For remote play support without a proxy service, I have added --host.
2022-02-18 01:08:12 +01:00
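For illustration, the two flags described above as they might be passed on the command line (flag names come from the commit message; the python invocation is an assumption):

    python aiserver.py --unblock
    python aiserver.py --host

--unblock only opens the port to external access, while --host enables remote play without a proxy service.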
Gnome Ann
ec54bc9d9b Fix typo in send_debug() 2022-02-12 20:11:35 -05:00
Gnome Ann
f682c1229a Fix fairseq newline handling issues 2022-02-12 13:23:59 -05:00
ebolam
633152ee84 Fixed Retry bug due to redo/pin code 2022-02-10 10:01:07 -05:00
ebolam
586b989582 Redo bug fix 2022-02-06 18:53:24 -05:00
ebolam
98609a8abc Merge branch 'united' of https://github.com/ebolam/KoboldAI into united 2022-02-06 13:48:34 -05:00
ebolam
80ae054cb5 Merge branch 'henk717:united' into united 2022-02-06 13:42:59 -05:00
ebolam
9e17ea9636 Fixed model downloading problem where models were downloaded multiple times 2022-02-06 13:42:46 -05:00
henk717
c38108d818 Merge pull request #73 from VE-FORBRYDERNE/xglm-breakmodel
Breakmodel support for the fairseq models
2022-02-06 18:05:59 +01:00