Commit Graph

493 Commits

Author SHA1 Message Date
ebolam b55e5a8e0b Retry Bug Fix 2022-03-12 10:32:27 -05:00
ebolam ae854bab3d Fix for retry causing issues for future redo actions 2022-03-11 11:40:55 -05:00
henk717 b02d5e8696 Allows missing model_config again 2022-03-10 19:59:10 +01:00
henk717 172a548fa1 Fallback to generic GPT2 Tokenizer 2022-03-10 19:52:15 +01:00
henk717 9dee9b5c6d Ignore incorrect problems 2022-03-09 12:03:37 +01:00
henk717 a28e553412 Remove unused gettokenids 2022-03-09 11:59:33 +01:00
henk717 7434c9221b Expand OAI Setting Compatibility 2022-03-07 08:56:47 +01:00
ebolam f6c95f18fa Fix for Redo (#94): corrected redo to skip blank steps (blank from "deleting" the chunk with the edit function); removed debug code 2022-03-06 23:18:14 +01:00
henk717 f857696224 OAI ConfigName Bugfix 2022-03-06 20:18:42 +01:00
henk717 3ddc9647eb Basic GooseAI Support 2022-03-06 20:10:30 +01:00
henk717 daea4b8d15 Fix Breakmodel RAM Regression 2022-03-06 08:26:50 +01:00
henk717 105d3831b5 Lazy Load Float32 for CPU 2022-03-06 07:56:04 +01:00
Gnome Ann 373f7b9bd5 Don't convert tensors to float16 if using CPU-only mode 2022-03-05 14:30:26 -05:00
Gnome Ann 579e85820c Resolve merge conflict 2022-03-05 14:13:56 -05:00
Gnome Ann 2e19ea1bb6 Auto detect if we're in a Colab TPU instance 2022-03-05 14:07:23 -05:00
ebolam 4a8d7f5e0b Merge branch 'henk717:united' into united 2022-03-05 13:25:10 -05:00
Gnome Ann 0a258a6282 Support for loading HF models on TPU with `--colab_tpu` 2022-03-05 12:33:33 -05:00
Gnome Ann 86ac562b0c Lazy loader should convert model tensors to float16 before moving them 2022-03-05 11:31:34 -05:00
ebolam 4dd119c38d Redo no longer goes through formatting function (thereby getting changed) 2022-03-05 11:15:33 -05:00
ebolam 353817b4da Remove debug print statements 2022-03-05 10:35:06 -05:00
ebolam 221f264fa7 Redo fix. Fix for actions structure to not error out when asking for next_id when the actions list is empty. 2022-03-05 10:31:28 -05:00
Gnome Ann a00dede610 Put the XGLM embedding patch behind a version check 2022-03-04 19:10:15 -05:00
Gnome Ann 5674516f0c Merge branch 'united' into lazy-loader 2022-03-04 18:27:51 -05:00
ebolam 5f92cbc231 Merge branch 'united' of https://github.com/ebolam/KoboldAI into united 2022-03-04 15:37:34 -05:00
ebolam 321f45ccad Fix debug to never crash (would on some initialization steps) 2022-03-04 15:36:13 -05:00
ebolam ee883fc4da Merge branch 'henk717:united' into united 2022-03-04 14:15:16 -05:00
ebolam 26b9268391 Redo bug fix 2022-03-04 14:14:44 -05:00
henk717 eb247d69c3 Merge branch 'KoboldAI:main' into united 2022-03-04 18:24:56 +01:00
Gnome Ann a1fedca2c8 Use lazy loading automatically if a config file exists for the model 2022-03-04 11:11:33 -05:00
MrReplikant ae143e896c Fixed unnecessary spacing in chatmode (goes from "john :" to "John:", as it's supposed to be; simple as it is, the extra space can easily throw a chatbot model for a loop) 2022-03-04 08:46:00 -06:00
Gnome Ann f0629958b1 Merge branch 'united' into lazy-loader 2022-03-04 00:37:25 -05:00
Gnome Ann 58a2c18821 Add lazy torch loading support to transformers backend 2022-03-04 00:33:10 -05:00
henk717 e033b04f87 Restore United 2022-03-02 11:40:50 +01:00
henk717 f9ac23ba4e Add Janeway and Shinen 2022-03-02 09:51:25 +01:00
ebolam 3f73f84b69 bug fix 2022-02-28 19:04:12 -05:00
ebolam 6003b2369b Debug and load story fix for actions_metadata variable 2022-02-28 10:39:36 -05:00
ebolam 47d102635e Merge branch 'united' into united 2022-02-28 08:37:45 -05:00
ebolam 7803fbb137 Fixed error in redo action when editing previous entries and/or editing right after a redo 2022-02-28 08:31:26 -05:00
henk717 13fe472264 Menu Polish 2022-02-28 02:47:15 +01:00
henk717 f628929401 Merge pull request #85 from VE-FORBRYDERNE/sp (Fix a bug with soft prompts when using transformers XGLM) 2022-02-28 02:33:18 +01:00
henk717 4849a30d88 Merge pull request #84 from mrseeker/patch-3 (Added KoboldAI/fairseq-dense-2.7B-Janeway) 2022-02-28 02:33:07 +01:00
henk717 a466e13c00 Model List Support 2022-02-26 12:34:07 +01:00
Gnome Ann a22d59e191 Fix a bug with soft prompts when using transformers XGLM 2022-02-25 12:35:23 -05:00
Julius ter Pelkwijk 0a7376a711 Added KoboldAI/fairseq-dense-2.7B-Janeway 2022-02-24 09:00:56 +01:00
Gnome Ann 072ca87977 Load soft prompt at the end instead of inside `loadsettings()` 2022-02-23 21:15:08 -05:00
Gnome Ann 8120e4dfa2 Need to set `vars.allowsp` to True before calling `loadsettings()` 2022-02-23 21:09:31 -05:00
Gnome Ann c45ba497c9 Load settings earlier to avoid TPU badwords issues 2022-02-23 20:39:11 -05:00
henk717 ac59e55d62 Smaller optimizations 2022-02-24 01:14:26 +01:00
henk717 8e9d9faa97 Merge pull request #82 from VE-FORBRYDERNE/tpu-config (Allow TPU models to specify settings/config in config.json) 2022-02-24 00:53:40 +01:00
Gnome Ann ad10ac8871 Allow TPU models to specify settings/config in config.json 2022-02-23 18:22:18 -05:00