Commit Graph

608 Commits

Author SHA1 Message Date
ebolam 8ae0a4a3e7 Online Services Working now (without a way to test as I don't have accounts) 2022-03-12 14:21:11 -05:00
ebolam b55e5a8e0b Retry Bug Fix 2022-03-12 10:32:27 -05:00
ebolam ae854bab3d Fix for retry causing issues for future redo actions 2022-03-11 11:40:55 -05:00
ebolam 772ae2eb80 Added model info to show model load progress in UI 2022-03-11 11:31:41 -05:00
henk717 b02d5e8696 Allows missing model_config again 2022-03-10 19:59:10 +01:00
henk717 172a548fa1 Fallback to generic GPT2 Tokenizer 2022-03-10 19:52:15 +01:00
henk717 9dee9b5c6d Ignore incorrect problems 2022-03-09 12:03:37 +01:00
henk717 a28e553412 Remove unused gettokenids 2022-03-09 11:59:33 +01:00
ebolam 0943926f6a Fix for lazy loading 2022-03-07 19:52:44 -05:00
ebolam bfc07073e3 layer count fix 2022-03-07 19:33:24 -05:00
ebolam d8ab58892d saved layer value fix 2022-03-07 19:21:55 -05:00
ebolam da53d7edb3 Custom Path Load fix 2022-03-07 18:54:11 -05:00
ebolam d1a64e25da Custom Model Load Fix 2022-03-07 18:44:37 -05:00
ebolam 70f1c2da9c Added stub for model name feedback 2022-03-07 14:20:25 -05:00
ebolam d0553779ab Bug Fix 2022-03-07 12:33:35 -05:00
ebolam c50fe77a7d Load Fix 2022-03-07 11:57:33 -05:00
ebolam 49fc854e55 Added saving of breakmodel values so that it defaults to it on next load 2022-03-07 11:49:34 -05:00
ebolam 2cf6b6e650 Merge branch 'henk717:united' into united 2022-03-07 11:31:14 -05:00
ebolam 123cd45b0e Breakmodel working now with the web UI 2022-03-07 11:27:23 -05:00
henk717 7434c9221b Expand OAI Setting Compatibility 2022-03-07 08:56:47 +01:00
ebolam 5e00f7daf0 Next evolution of web UI model selection. Custom Paths not working quite right. 2022-03-06 20:55:11 -05:00
ebolam 2ddf45141b Initial UI-based model loading. Includes all parameters except breakmodel chunks, the engine # for OAI, and the ngrok URL for Google Colab 2022-03-06 19:51:35 -05:00
ebolam f6c95f18fa Fix for Redo (#94)
* Corrected redo to skip blank steps (blank from "deleting" the chunk with the edit function)

* Removed debug code
2022-03-06 23:18:14 +01:00
henk717 f857696224 OAI ConfigName Bugfix 2022-03-06 20:18:42 +01:00
henk717 3ddc9647eb Basic GooseAI Support 2022-03-06 20:10:30 +01:00
henk717 daea4b8d15 Fix Breakmodel RAM Regression 2022-03-06 08:26:50 +01:00
henk717 105d3831b5 Lazy Load Float32 for CPU 2022-03-06 07:56:04 +01:00
Gnome Ann 373f7b9bd5 Don't convert tensors to float16 if using CPU-only mode 2022-03-05 14:30:26 -05:00
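Taken together, the two commits above amount to a simple dtype policy: keep float32 when running CPU-only, convert to float16 only before tensors move to a GPU. A minimal sketch of that policy (illustrative, not the project's actual lazy-loader code):

```python
import torch

def move_model_tensor(tensor: torch.Tensor, device: torch.device) -> torch.Tensor:
    # Convert to float16 only when headed for a GPU; CPU-only mode stays
    # in float32 since many ops lack half-precision CPU kernels.
    if device.type == "cuda" and tensor.is_floating_point():
        tensor = tensor.to(torch.float16)
    return tensor.to(device)
```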
Gnome Ann 579e85820c Resolve merge conflict 2022-03-05 14:13:56 -05:00
Gnome Ann 2e19ea1bb6 Auto detect if we're in a Colab TPU instance 2022-03-05 14:07:23 -05:00
ebolam 4a8d7f5e0b Merge branch 'henk717:united' into united 2022-03-05 13:25:10 -05:00
Gnome Ann 0a258a6282 Support for loading HF models on TPU with `--colab_tpu` 2022-03-05 12:33:33 -05:00
Gnome Ann 86ac562b0c Lazy loader should convert model tensors to float16 before moving them 2022-03-05 11:31:34 -05:00
ebolam 4dd119c38d Redo no longer goes through formatting function (thereby getting changed) 2022-03-05 11:15:33 -05:00
ebolam 353817b4da Remove debug print statements 2022-03-05 10:35:06 -05:00
ebolam 221f264fa7 Redo fix. Fix for actions structure to not error out when asking for next_id when the actions list is empty. 2022-03-05 10:31:28 -05:00
Gnome Ann a00dede610 Put the XGLM embedding patch behind a version check 2022-03-04 19:10:15 -05:00
Gnome Ann 5674516f0c Merge branch 'united' into lazy-loader 2022-03-04 18:27:51 -05:00
ebolam 5f92cbc231 Merge branch 'united' of https://github.com/ebolam/KoboldAI into united 2022-03-04 15:37:34 -05:00
ebolam 321f45ccad Fix debug to never crash (would on some initialization steps) 2022-03-04 15:36:13 -05:00
ebolam ee883fc4da Merge branch 'henk717:united' into united 2022-03-04 14:15:16 -05:00
ebolam 26b9268391 Redo bug fix 2022-03-04 14:14:44 -05:00
henk717 eb247d69c3 Merge branch 'KoboldAI:main' into united 2022-03-04 18:24:56 +01:00
Gnome Ann a1fedca2c8 Use lazy loading automatically if a config file exists for the model 2022-03-04 11:11:33 -05:00
MrReplikant ae143e896c Fixed unnecessary spacing in chatmode
This makes it go from "john :" to "John:", as it's supposed to be. As simple as it is, it can easily throw a chatbot model for a loop.
2022-03-04 08:46:00 -06:00
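A minimal sketch of the normalization this fix describes; the helper name and regex are illustrative, not the actual patch:

```python
import re

def normalize_speaker(line: str) -> str:
    # Hypothetical helper: collapse the stray space before the colon and
    # capitalize the speaker name, so "john :" becomes "John:".
    match = re.match(r"^\s*(\w+)\s*:\s*(.*)$", line)
    if match is None:
        return line
    name, rest = match.groups()
    return f"{name.capitalize()}: {rest}"

print(normalize_speaker("john : hello"))  # -> John: hello
```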
Gnome Ann f0629958b1 Merge branch 'united' into lazy-loader 2022-03-04 00:37:25 -05:00
Gnome Ann 58a2c18821 Add lazy torch loading support to transformers backend 2022-03-04 00:33:10 -05:00
henk717 e033b04f87 Restore United 2022-03-02 11:40:50 +01:00
henk717 f9ac23ba4e Add Janeway and Shinen 2022-03-02 09:51:25 +01:00
ebolam 3f73f84b69 bug fix 2022-02-28 19:04:12 -05:00
ebolam 6003b2369b Debug and load story fix for actions_metadata variable 2022-02-28 10:39:36 -05:00
ebolam 47d102635e Merge branch 'united' into united 2022-02-28 08:37:45 -05:00
ebolam 7803fbb137 Fixed error in redo action when editing previous entries and/or editing right after a redo 2022-02-28 08:31:26 -05:00
henk717 13fe472264 Menu Polish 2022-02-28 02:47:15 +01:00
henk717 f628929401 Merge pull request #85 from VE-FORBRYDERNE/sp
Fix a bug with soft prompts when using transformers XGLM
2022-02-28 02:33:18 +01:00
henk717 4849a30d88 Merge pull request #84 from mrseeker/patch-3
Added KoboldAI/fairseq-dense-2.7B-Janeway
2022-02-28 02:33:07 +01:00
henk717 a466e13c00 Model List Support 2022-02-26 12:34:07 +01:00
Gnome Ann a22d59e191 Fix a bug with soft prompts when using transformers XGLM 2022-02-25 12:35:23 -05:00
Julius ter Pelkwijk 0a7376a711 Added KoboldAI/fairseq-dense-2.7B-Janeway
With pleasure I am introducing KoboldAI/fairseq-dense-2.7B-Janeway.
2022-02-24 09:00:56 +01:00
Gnome Ann 072ca87977 Load soft prompt at the end instead of inside `loadsettings()` 2022-02-23 21:15:08 -05:00
Gnome Ann 8120e4dfa2 Need to set `vars.allowsp` to True before calling `loadsettings()` 2022-02-23 21:09:31 -05:00
Gnome Ann c45ba497c9 Load settings earlier to avoid TPU badwords issues 2022-02-23 20:39:11 -05:00
henk717 ac59e55d62 Smaller optimizations 2022-02-24 01:14:26 +01:00
henk717 8e9d9faa97 Merge pull request #82 from VE-FORBRYDERNE/tpu-config
Allow TPU models to specify settings/config in config.json
2022-02-24 00:53:40 +01:00
Gnome Ann ad10ac8871 Allow TPU models to specify settings/config in config.json 2022-02-23 18:22:18 -05:00
henk717 7de3311000 Fix sentencepiece model saving 2022-02-23 22:04:41 +01:00
henk717 fd7ba9f70e Also check for Config in models/ 2022-02-22 19:22:08 +01:00
henk717 4ace11f5b8 Merge pull request #80 from VE-FORBRYDERNE/xglm-position-ids
Temporary fix for XGLM positional embedding issues
2022-02-21 00:47:20 +01:00
henk717 300db651de Open models folder by default 2022-02-21 00:46:18 +01:00
Gnome Ann da10e2dc1d Don't crash if `XGLMSinusoidalPositionalEmbedding` doesn't exist 2022-02-20 17:41:00 -05:00
Gnome Ann 5dc4969173 Temporary fix for XGLM positional embedding issues 2022-02-20 14:17:24 -05:00
Gnome Ann a63fa3b067 Prevent transformers XGLM from stopping generation on `</s>` token 2022-02-19 23:15:16 -05:00
henk717 a47e93cee7 Separate Low Memory Mode
In 1.16 we had significantly faster loading speeds because we did not do as much memory conservation; it's time to give users the choice. If you want the original, faster behavior and have the memory, run KoboldAI as usual. Otherwise run play-lowmem.bat or aiserver.py with --lowmem. For Colab this is still the default behavior, to avoid breaking models that would otherwise load fine.
2022-02-18 16:21:28 +01:00
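A hedged sketch of how such a --lowmem switch could be wired up. The flag name comes from the commit; everything else here, including the mapping to transformers' low_cpu_mem_usage option, is an assumption:

```python
import argparse

from transformers import AutoModelForCausalLM

parser = argparse.ArgumentParser()
parser.add_argument("--lowmem", action="store_true",
                    help="Conserve RAM while loading at the cost of speed (Colab default)")
args = parser.parse_args()

# Assumption: --lowmem maps onto transformers' low_cpu_mem_usage option,
# which avoids keeping a second full copy of the weights in RAM.
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-neo-2.7B", low_cpu_mem_usage=args.lowmem
)
```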
henk717 8e03f1c612 Merge branch 'KoboldAI:main' into united 2022-02-18 14:21:34 +01:00
henk717 f06acb59be Add the Janeway model
New model released by Mr.Seeker
2022-02-18 14:18:41 +01:00
henk717 cba93e29d2 Update aiserver.py 2022-02-18 02:11:08 +01:00
henk717 76a6c124dd Quiet on Colab
Makes Colab mode also automatically activate Quiet mode to improve privacy; story text will no longer show in the Colab console, and thanks to the redo feature we should no longer need it there. Need something different for testing? Use --remote instead.
2022-02-18 02:07:40 +01:00
henk717 02246dfc4d Remote play improvements
Change the proposed --share to --unblock to make it more apparent what this feature does: the feature unblocks the port for external access, but does not add remote play support. For remote play support without a proxy service I have added --host.
2022-02-18 01:08:12 +01:00
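A minimal sketch of the binding logic these flags imply, assuming a Flask-SocketIO server; the real aiserver.py wiring differs:

```python
import argparse

from flask import Flask
from flask_socketio import SocketIO

parser = argparse.ArgumentParser()
parser.add_argument("--unblock", action="store_true",
                    help="Unblock the port for external access (no remote support by itself)")
parser.add_argument("--host", action="store_true",
                    help="Enable remote play without a proxy service")
args = parser.parse_args()

app = Flask(__name__)
socketio = SocketIO(app)

# Loopback-only unless the user explicitly opts in to external access.
bind_addr = "0.0.0.0" if (args.unblock or args.host) else "127.0.0.1"
socketio.run(app, host=bind_addr, port=5000)
```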
Gnome Ann ec54bc9d9b Fix typo in `send_debug()` 2022-02-12 20:11:35 -05:00
Gnome Ann f682c1229a Fix fairseq newline handling issues 2022-02-12 13:23:59 -05:00
ebolam 633152ee84 Fixed Retry bug due to redo/pin code 2022-02-10 10:01:07 -05:00
ebolam 586b989582 Redo bug fix 2022-02-06 18:53:24 -05:00
ebolam 98609a8abc Merge branch 'united' of https://github.com/ebolam/KoboldAI into united 2022-02-06 13:48:34 -05:00
ebolam 80ae054cb5 Merge branch 'henk717:united' into united 2022-02-06 13:42:59 -05:00
ebolam 9e17ea9636 Fixed model downloading problem where models were downloaded multiple times 2022-02-06 13:42:46 -05:00
henk717 c38108d818 Merge pull request #73 from VE-FORBRYDERNE/xglm-breakmodel
Breakmodel support for the fairseq models
2022-02-06 18:05:59 +01:00
ebolam 02c7ca3e84 Merge branch 'henk717:united' into united 2022-02-03 08:11:06 -05:00
ebolam 0684a221cd Changed pin icon for re-dos to be a circular arrow that is not clickable to make it clear it is a redo action and cannot be cleared. 2022-02-03 08:08:43 -05:00
henk717 3ee63b28c5 Defaults and Downloads
Default values for the new repetition penalty settings (better suggestions are very welcome, since broader community testing has not been done).

Updated the Readme with the link to the offline installer.
2022-02-03 13:13:26 +01:00
Gnome Ann 4904af6adc Fix a mistake in the previous commit 2022-02-02 23:04:59 -05:00
Gnome Ann 78f52063c7 Fix XGLM soft prompts 2022-02-02 22:45:16 -05:00
Ben Fox e2d2ebcae6 upstream merge 2022-02-02 15:04:59 -05:00
Gnome Ann d847d04605 Fix some typos in XGLM breakmodel 2022-02-01 16:00:46 -05:00
Gnome Ann 8e1169ea61 Enable `vars.bmsupported` when using XGLM 2022-02-01 15:31:59 -05:00
Gnome Ann e7f65cee09 XGLM breakmodel 2022-02-01 13:04:35 -05:00
henk717 c14e6fe5d2 Revert parallelism
Testing is done; it seems to cause issues with the order in which things happen in the interface.
2022-02-01 18:58:48 +01:00
henk717 d68a91ecd3 Save model values
Without saving these values they get lost after someone saves, so saving them is more important than the model being able to override them after the fact.
2022-02-01 18:37:52 +01:00
henk717 b8e08cdd63 Enable Tokenizer Parallelism
Has proven to be safe in my internal testing and does help with the interface lag at boot.

Enabling this so it can get wider testing.
2022-02-01 12:00:53 +01:00
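The toggle flipped in these two commits is most likely the Hugging Face tokenizers library's environment switch; the commits don't name it, so this is an assumption:

```python
import os

# Must be set before the tokenizer is first used; "true" enables the
# parallelism this commit turned on (and the revert above turned off).
os.environ["TOKENIZERS_PARALLELISM"] = "true"
```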
henk717 ecd7b328ec Further Polishing
Multiple smaller changes to get 1.17 in shape for its release.
2022-02-01 11:15:44 +01:00
henk717 36b6dcb641 Increase newlinemode compatibility
Ran into issues with other modes like chatmode and adventure, so the conversion was moved further down the pipeline: </s> is converted back to \n before additional formatting is processed.

Still has an issue with the HTML formatting not working, but at least the AI works now.
2022-01-31 19:39:32 +01:00
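A sketch of the ordering this commit describes, with assumed helper names; the </s>-to-\n conversion now runs before the remaining formatting passes:

```python
def apply_additional_formatting(text: str) -> str:
    return text.strip()  # stand-in for the chatmode/adventure passes

def format_output(text: str, newlinemode: str) -> str:
    if newlinemode == "s":
        # Fairseq/XGLM-style models emit </s> instead of \n; normalize it
        # first so the later formatting passes see ordinary newlines.
        text = text.replace("</s>", "\n")
    return apply_additional_formatting(text)
```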
henk717 90fd67fd16 Update aiserver.py 2022-01-31 19:06:02 +01:00
henk717 b69e3f86e1 Update aiserver.py
Removes a debug line
2022-01-31 18:57:47 +01:00
henk717 8466068267 Don't save newlinemode
On second thought, it is probably better not to save this. Advanced users can add it themselves, and that way newer versions of the model can override it if redownloaded.
2022-01-31 18:41:23 +01:00
henk717 729be62821 </s> new line mode
Needed for Fairseq and XGLM models that do not understand the regular \n.
2022-01-31 18:39:34 +01:00
henk717 03433810f1 KML improvements
Don't parse > since that has a different meaning for us; also whitelist a few more Markdown tags so lists work.
2022-01-30 20:07:47 +01:00
henk717 a484244392 Welcome Message API
Allows model creators to customize the welcome message using Markdown and Limited HTML

Existing United users need to run install_requirements.bat again; you can leave the existing dependencies intact.
2022-01-30 19:47:30 +01:00
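A hedged sketch of rendering a model-supplied welcome message as Markdown plus a whitelist of HTML tags. The markdown and bleach libraries and the tag list are illustrative stand-ins, not confirmed project dependencies:

```python
import bleach
import markdown

# Note ">" (blockquotes) is deliberately not parsed, since it has a
# different meaning in KoboldAI; list tags are whitelisted so lists work.
ALLOWED_TAGS = ["p", "em", "strong", "a", "ul", "ol", "li", "br"]

def render_welcome(message_md: str) -> str:
    html = markdown.markdown(message_md)
    return bleach.clean(html, tags=ALLOWED_TAGS, strip=True)
```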
henk717 ddfa21e6dd Breakmodel Fixes
Fixes multiple old references and one mistake in my last commit.
2022-01-30 17:40:43 +01:00
henk717 57344935f6 --model without breakmodel disables bmsupported
In the last commit it only gave a warning; now it will turn bmsupported off so that the GPU routine is used.
2022-01-30 17:16:35 +01:00
henk717 f0c0a990ea NoBreakmodel variable
Adds a nobreakmodel var that allows breakmodel to be turned off. This can be done through the command line or a model config (in case Neo is used by the model's config without it being a true Neo model that is compatible with breakmodel).

In addition, I removed the args.colab check for breakmodel support and instead made args.colab activate nobreakmodel. I have also added a new check so that breakmodel is not even attempted if you do not specify the layers but do launch a model from the command line.
2022-01-30 17:06:15 +01:00
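A simplified sketch of the gating described here; nobreakmodel, args.colab, and bmsupported come from the commit message, while the function shape and remaining names are assumptions:

```python
def compute_bmsupported(args, model_config: dict, model_compatible: bool) -> bool:
    nobreakmodel = (getattr(args, "nobreakmodel", False)
                    or getattr(args, "colab", False)
                    or model_config.get("nobreakmodel", False))
    bmsupported = model_compatible and not nobreakmodel
    # Launching a model from the command line without specifying layers
    # skips breakmodel entirely, so the plain GPU routine is used.
    if getattr(args, "model", None) and not getattr(args, "breakmodel_gpulayers", None):
        bmsupported = False
    return bmsupported
```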
henk717 5b5a479f29 Threading + Memory Sizes
Polish effort to suppress a warning and list more accurate VRAM requirements, as tested with the full 2048 max tokens.
2022-01-30 13:56:25 +01:00
henk717 fca7f8659f Badwords unification
TPUs no longer use hardcoded badwords but instead use the var.
2022-01-29 18:09:53 +01:00
henk717 f9f25c01e4 HTML escape the last commit
</s> didn't work; it needed to be HTML-escaped (thanks for the tip, VE!)
2022-01-28 19:21:05 +01:00
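The fix amounts to escaping the literal token before the page renders it; with the standard library for illustration:

```python
import html

# Raw "</s>" gets swallowed as an HTML closing tag; escaping it lets the
# literal token survive rendering.
print(html.escape("</s>"))  # -> &lt;/s&gt;
```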
henk717 be0e57185f Improved Model Support
Changed the model VRAM requirements to what you'd need to comfortably run the model rather than barely (like with the manual). Will probably revise this in a later commit.

More importantly, it now supports models that use </s>, which will be required to support XGLM and Fairseq models.
2022-01-28 18:03:30 +01:00
ebolam 1470b1666d Fixed single gen redo 2022-01-27 20:17:13 -05:00
ebolam 2278b7c103 Changed behavior of redo if there is only 1 option to just select it 2022-01-26 21:07:55 -05:00
ebolam 06bbe429d9 Bug fix for redo/pinning persisting over new game requests 2022-01-26 21:02:36 -05:00
ebolam b0f1bdf2fd Merge branch 'henk717:united' into united 2022-01-26 11:27:12 -05:00
henk717 987e78f980 More loading fixes
My last attempt at fixing this caused GPT2 to break. Since the other fix is an edge case, we assume that the GPT2 method should be used, and if that fails we try the other one to catch rare errors with bad model configs.
2022-01-25 06:39:23 +01:00
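A sketch of the try-the-common-case-first strategy this commit describes, assuming the tokenizer loader is the code in question (the commit doesn't say exactly which loader):

```python
from transformers import AutoTokenizer, GPT2Tokenizer

def load_tokenizer(model_path: str):
    # Assume the GPT2 method first (the common case); fall back to the
    # generic loader to catch rare errors with bad model configs.
    try:
        return GPT2Tokenizer.from_pretrained(model_path)
    except Exception:
        return AutoTokenizer.from_pretrained(model_path)
```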
Gnome Ann 3f18888eec Repetition penalty slope and range 2022-01-24 15:30:38 -05:00
ebolam bd0732fbd6 Fix for redo with options.
Added debug menu
2022-01-24 12:54:44 -05:00
ebolam 47ec22873d Bug fix for when the settings directory is a symlink. 2022-01-22 21:43:32 -05:00
ebolam f54f46b068 bugfix for metadata saving 2022-01-22 20:30:14 -05:00
henk717 0846d57368 0.17 polish 2022-01-23 01:05:09 +01:00
ebolam bdd358f40f Merge branch 'united' of https://github.com/henk717/KoboldAI into henk717-united 2022-01-22 17:57:33 -05:00
henk717 c9999b6388 Merge pull request #70 from VE-FORBRYDERNE/patch
Don't throw an error in `update_story_chunk` if you try to edit a nonexistent chunk
2022-01-22 23:24:34 +01:00
henk717 4e7440804c Merge pull request #69 from VE-FORBRYDERNE/lua
Lua compatibility enhancements
2022-01-22 23:23:47 +01:00
henk717 f79db7059a Fall back to old json load
Turns out model_config does not work on models that have no model_type defined. In case this happens we now fall back to the old .json loading method. This will not work in --colab mode if it's not already a local model, but since almost all modern models define a model type (and to my knowledge all models on Hugging Face do), that should not be an issue. If it is, we can always ask the model creator to update it, distribute the model differently, or load that model with --remote instead of --colab.
2022-01-22 23:21:19 +01:00
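A hedged sketch of that fallback, assuming AutoConfig is the model_config path the commit refers to:

```python
import json
import os

from transformers import AutoConfig

def load_model_config(model_path: str):
    try:
        return AutoConfig.from_pretrained(model_path)
    except (ValueError, OSError):
        # No model_type defined: fall back to reading the raw config.json
        # the old way (simplified; error handling is assumed).
        with open(os.path.join(model_path, "config.json")) as f:
            return json.load(f)
```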
ebolam 9df758c1f4 Added quiet option to suppress any story text from showing in the console (reduces logs when running in a Docker container) 2022-01-22 15:30:56 -05:00
ebolam 12e7b6d10b Added --share command line parameter so we can set host=0.0.0.0 on local instances without editing code.
Moved the save location of downloaded models to models/XXXXXX so we can more easily set this as a volume in Docker.
2022-01-22 14:47:28 -05:00
Gnome Ann bf2b02d366 Don't error in `update_story_chunk` if chunk index doesn't exist 2022-01-21 21:19:32 -05:00
ebolam 2010e7b9bc Added saveas option for saving without metadata information
Fixed redo on an empty story erroring
Fixed redo when you're at the current end of a chain causing an error
2022-01-21 19:02:56 -05:00
Gnome Ann fab0913270 Call `setgamesaved(False)` in `update_story_chunk` and `remove_story_chunk` 2022-01-21 16:39:51 -05:00
ebolam d31fb278ce Working redo and pin options 2022-01-21 15:30:37 -05:00
ebolam fcaacf636d Merge branch 'henk717:united' into united 2022-01-21 07:40:25 -05:00
Ben Fox 03d54364f4 Initial commit of the actions metadata variable population 2022-01-20 15:18:43 -05:00
Gnome Ann 72a7aac2c7 Sync memory properly after random game request 2022-01-20 15:14:55 -05:00
ebolam dffd00265b Added autosave feature. When action is submitted it will save if the save setting is on and if the filename is set. 2022-01-20 07:46:34 -05:00
henk717 9532b56cb8 Universal Model Settings
No longer depends on a local config file, enabling the configuration to work in --colab mode.
2022-01-20 10:11:11 +01:00
Gnome Ann c703729f0b Set eventlet threadpool size back to 1 2022-01-20 02:10:57 -05:00
Gnome Ann f0c39c004a Deleting world info entries should call `setgamesaved(False)` 2022-01-18 19:36:20 -05:00
henk717 4ca06ebcf3 Merge pull request #65 from VE-FORBRYDERNE/sp
Show author and SP length in soft prompt menu
2022-01-18 23:51:02 +01:00
henk717 1e0f9ada08 Add adventure 2.7B
It's on Hugging Face now, so let's add it to the menu!
2022-01-18 23:50:21 +01:00
Gnome Ann 3018322963 Detect and show properly when story is unsaved 2022-01-18 17:20:45 -05:00
Gnome Ann 1951ccd2ce Show author and SP length in soft prompt menu 2022-01-18 16:30:09 -05:00
Gnome Ann 4da1a2d247 Prevent tokenizer from taking extra time the first time it's used 2022-01-17 22:55:25 -05:00
Gnome Ann 703c092577 Fix settings callback, and `genout.shape[-1]` in `tpumtjgenerate()` 2022-01-17 14:52:29 -05:00
Gnome Ann 3ba0e3f9d9 Dynamic TPU backend should support dynamic warpers and abort button 2022-01-17 14:10:32 -05:00
Gnome Ann 6502af086f Use `vars._actions` in `tpumtjgenerate` and its callbacks 2022-01-17 13:24:11 -05:00
Gnome Ann 45bfde8d5d `generated_cols` needs to be set properly by TPU static backend 2022-01-17 13:19:57 -05:00
Gnome Ann 9594b2db1c Fix soft prompt length calculation in `calcsubmitbudget()`
In TPU instances, `vars.sp.shape[0]` is not always the actual number of
tokens in the soft prompt. We have to use `vars.sp_length` to get an
accurate token count.
2022-01-17 13:17:20 -05:00
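In other words (vars.sp, vars.sp_length, and calcsubmitbudget come from the commit message; the wrapper below is an assumed simplification):

```python
def soft_prompt_token_count(vars) -> int:
    # On TPU instances vars.sp.shape[0] can include padding, so the
    # recorded vars.sp_length is the accurate count for budget math.
    if vars.sp is None:
        return 0
    return vars.sp_length
```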