Commit Graph

1492 Commits

Author SHA1 Message Date
Henk
640bd64037 Revision Fixes (And Var Workaround) 2023-01-31 04:00:35 +01:00
Henk
b276c5ff15 EOS Hiding Workaround (For models that want EOS behavior) 2023-01-31 01:46:55 +01:00
Concedo
d9e4b74e37 Adding CORS support to allow cross origin requests. Added new dependency on the flask-cors package. 2023-01-30 12:57:58 +08:00
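The commit above adds cross-origin support via flask-cors. As a minimal sketch of what enabling it typically looks like in a Flask server (the app and route names here are illustrative, not necessarily KoboldAI's actual endpoints):

```python
# Minimal sketch of enabling CORS with flask-cors; route name is illustrative.
from flask import Flask
from flask_cors import CORS

app = Flask(__name__)
CORS(app)  # allow cross-origin requests on all routes

@app.route("/api/v1/generate", methods=["POST"])
def generate():
    return {"results": []}
```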
ebolam
5aca142034 Added blank image. Shown when an action is selected and there is no action image for that action 2023-01-19 15:55:10 -05:00
ebolam
33f7478bf4 Merge branch 'UI2' of https://github.com/ebolam/KoboldAI into UI2 2023-01-19 08:06:46 -05:00
ebolam
bf665838e0 Potential desync fix 2023-01-19 08:06:40 -05:00
SammCheese
2e93b12aff add an info reference for the model selection 2023-01-15 20:50:00 +01:00
Henk
ed62d104ee --cacheonly 2023-01-14 21:14:39 +01:00
Henk
469fb8a5fe Transformers 4.25.1
This is a breaking change that allows 4.25.1 to work, because upstream has also made breaking changes. If you do not make use of our automatic updater, please update the dependencies when updating to this build.
2023-01-13 19:11:21 +01:00
Henk
f1739dd184 Chatmode Regex 2023-01-13 13:04:54 +01:00
Henk
d6a941de61 Restore Chat Models Menu + Safetensors Workaround
This commit restores the chat models menu now that we finally have good chat models available again.

Unfortunately huggingface reports back pytorch_model.bin even if the model's weights file is actually model.safetensors. I don't have a good way to combat this at the moment, so for now we use a hack: if the model copy fails, it manually tries model.safetensors instead and hopes that it works.

This fixes Pygmalion for now; if other models run into new issues from this in the future, we will have to implement a cleaner method.
2023-01-11 22:24:12 +01:00
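The workaround above is a simple filename fallback. A minimal sketch of the pattern, assuming hypothetical helper and path names rather than the actual KoboldAI model-copy code:

```python
# Hedged sketch of the fallback described above; copy_model_weights and the
# directory arguments are hypothetical stand-ins for the real loading code.
import os
import shutil

def copy_model_weights(cache_dir: str, target_dir: str) -> str:
    """Try pytorch_model.bin first; fall back to model.safetensors if it fails."""
    for filename in ("pytorch_model.bin", "model.safetensors"):
        src = os.path.join(cache_dir, filename)
        try:
            shutil.copy(src, os.path.join(target_dir, filename))
            return filename
        except FileNotFoundError:
            continue  # reported name was wrong, try the safetensors name instead
    raise FileNotFoundError("neither pytorch_model.bin nor model.safetensors found")
```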
Henk
271e4ed06b Chat Mode Improvements
This commit decouples single line mode from chat mode; well-behaved models no longer need it since generation now stops at the You: prefix.

There are scenarios, however, where this potentially breaks chat mode completely or makes models more frustrating to use. Users who experience this can enable Single Line mode in the formatting menu to restore the old behavior.

I have also allowed token streaming again, since the issues with it have already been resolved.
2023-01-11 21:33:25 +01:00
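The stop-at-"You:" behavior described above amounts to truncating the generated text at the chat name prefix. A minimal sketch of that idea, with a hypothetical helper name and default chat name:

```python
# Hedged sketch of stopping chat-mode output at the user's name prefix;
# truncate_at_stop is a hypothetical helper, not the actual KoboldAI function.
def truncate_at_stop(generated: str, chat_name: str = "You") -> str:
    stop = f"\n{chat_name}:"
    idx = generated.find(stop)
    return generated[:idx] if idx != -1 else generated

print(truncate_at_stop("Bot: Hello there!\nYou: hi"))  # -> "Bot: Hello there!"
```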
ebolam
489c1ffd80 Correction to the UI2 interface to redirect to the API when --no_ui is present. 2023-01-10 16:06:21 -05:00
ebolam
5d8485273a Remove really old debug message for chat mode 2023-01-10 08:58:09 -05:00
ebolam
9b1138bafa Added an alternative rep pen calculation (log instead of linear application) as an option. 2023-01-10 08:45:55 -05:00
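To illustrate the distinction in that commit, here is a hedged sketch contrasting a linear (scaling) and a logarithmic (additive in log space) application of a repetition penalty. This is not the exact KoboldAI formula, only an illustration of the idea:

```python
# Hedged illustration of linear vs. logarithmic repetition-penalty application;
# NOT the actual KoboldAI implementation.
import numpy as np

def apply_rep_pen(logits, seen_token_ids, penalty=1.2, use_log=False):
    logits = np.array(logits, dtype=float)
    for tok in seen_token_ids:
        if use_log:
            # logarithmic variant: subtract a fixed amount in log space
            logits[tok] -= np.log(penalty)
        else:
            # linear variant: scale the logit by the penalty
            logits[tok] = logits[tok] / penalty if logits[tok] > 0 else logits[tok] * penalty
    return logits
```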
ebolam
3b551ec7ab Added custom model path for model selection. Shows the custom model input box for the user to enter a path and/or a Hugging Face model 2023-01-09 13:02:57 -05:00
henk717
86f1694290 Merge pull request #254 from one-some/united
Fix image retry
2023-01-09 01:34:13 +01:00
somebody
ac5c001fc3 Fix image retry 2023-01-08 18:27:23 -06:00
henk717
535b9bdfdf Merge pull request #251 from one-some/bias-improvements
Bias improvements
2023-01-08 17:45:41 +01:00
somebody
f6c4bfb390 New bias token control (and lack thereof)
Allows square bracket syntax for using token ids and curly bracket syntax
for strict phrasing; normal phrases now also bias alternative phrases with
space prefixes
2023-01-07 23:59:46 -06:00
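A rough sketch of how such bias entries could be classified by their syntax, assuming a tokenizer with an encode method; parse_bias_entry is a hypothetical name, not the actual function from this commit:

```python
# Hedged sketch of the bias syntax described above; parse_bias_entry is
# hypothetical and a tokenizer object is assumed to be provided.
import json

def parse_bias_entry(entry: str, tokenizer):
    if entry.startswith("[") and entry.endswith("]"):
        # square brackets: literal token ids, e.g. "[50256, 198]"
        return [json.loads(entry)]
    if entry.startswith("{") and entry.endswith("}"):
        # curly brackets: strict phrasing, tokenized exactly as written
        return [tokenizer.encode(entry[1:-1])]
    # plain phrase: also bias the space-prefixed alternative
    return [tokenizer.encode(entry), tokenizer.encode(" " + entry)]
```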
somebody
76c1398917 Small cleanup with imports
and unused threading code
2023-01-07 20:19:31 -06:00
somebody
d298915182 Disable load model button on colab/when prompted
Currently switching models makes the TPU cry and we do not want to upset
our dearest hardware friend.

In the future it'd probably be best to teach the TPU how to not be
afraid of switching models, provided we have some limitations that
prevent loading itty bitty or way too big models on the TPU.
2023-01-07 13:38:27 -06:00
henk717
08e72d0f4d Merge pull request #244 from pi6am/fix/load-non-started
Fix loading a non-started V1 story json in UI2
2023-01-06 16:29:52 +01:00
Llama
ae215b94d5 Fix loading a non-started V1 story json in UI2
There was an exception loading or importing a V1 story json into UI2 if
the story had zero actions. Due to incorrect indentation, the V1 load
process would attempt to use an uninitialized temp_story_class variable.

However, even after fixing this, it was impossible to continue playing
the saved story, because it was not possible to submit a blank action
when a story only had a prompt and zero actions. Modify the logic around
the first submission to handle a non-empty initial prompt. This allows
the player to begin playing the loaded game.
2023-01-05 23:42:24 -08:00
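For illustration of the indentation bug described above: if the initialization is nested under the loop over actions, a story with zero actions leaves the variable undefined. A hedged sketch with hypothetical names standing in for the actual V1 load code:

```python
# Hedged illustration only; names are hypothetical stand-ins.
def load_v1_story(story_json: dict) -> dict:
    actions = story_json.get("actions", [])
    # The fix: initialize unconditionally. With the old indentation this line
    # sat inside the loop below, so zero actions left temp_story_class undefined.
    temp_story_class = {"prompt": story_json["prompt"], "actions": []}
    for action in actions:
        temp_story_class["actions"].append(action)
    return temp_story_class

print(load_v1_story({"prompt": "Once upon a time", "actions": []}))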
somebody
ab0277549c Merge branch 'united' of https://github.com/henk717/KoboldAI into screenshot 2023-01-02 12:14:38 -06:00
henk717
001d4e6391 Merge pull request #238 from one-some/ui2-edit-fixes
New frontend bugfixes
2023-01-02 16:25:09 +01:00
Llama
48dd451a8c Add the ability to insert the story prompt into the art guide.
The art guide now optionally supports the sequence <|>.
If this exists in the art guide, <|> is replaced by the story summary.
Otherwise, the art guide is appended to the summary as before.
Update the default art guide to place the medium before the summary.
Also update the art guide tooltip to include the new default.
2023-01-01 21:12:25 -08:00
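The substitution described in that commit is a simple placeholder replacement. A minimal sketch, with build_art_prompt as a hypothetical name for the actual prompt-building code:

```python
# Hedged sketch of the <|> substitution described above.
def build_art_prompt(art_guide: str, summary: str) -> str:
    if "<|>" in art_guide:
        return art_guide.replace("<|>", summary)
    # no placeholder: append the art guide to the summary as before
    return f"{summary}, {art_guide}"

print(build_art_prompt("digital painting of <|>", "a knight in a dark forest"))
```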
somebody
e772860d18 Add endpoint to get action composition 2022-12-27 17:33:46 -06:00
somebody
941f86145e Track original text 2022-12-27 16:36:28 -06:00
somebody
901322b66b Make settings work 2022-12-27 15:22:57 -06:00
somebody
90bd7c8d71 Integrate actual images 2022-12-26 22:54:51 -06:00
ebolam
092c31b0eb Add message for new features when opening for the first time. 2022-12-21 19:55:33 -05:00
somebody
7ff6c409f7 Fix probabilities on UI1 2022-12-21 17:23:35 -06:00
ebolam
f65a5fdacb Fix for UI1 soft prompt loading 2022-12-21 15:21:24 -05:00
ebolam
1409682427 Fix for some UI1 syncing issues 2022-12-21 13:41:59 -05:00
ebolam
6bb7b4a1b1 Added warning message to UI1 and updated message 2022-12-21 12:58:36 -05:00
ebolam
bc9249adfc Added one time warning system (Currently used to warn users of new save format) 2022-12-21 12:22:26 -05:00
ebolam
7521091e7c Changed alt_multi_gen to experimental and UI2 only 2022-12-21 10:25:50 -05:00
ebolam
5315a94dad vars fix from merge 2022-12-20 21:25:14 -05:00
ebolam
1500d07c38 Merge commit 'refs/pull/352/head' of https://github.com/ebolam/KoboldAI into UI2 2022-12-20 21:24:06 -05:00
ebolam
4b53126d29 Moved functional models (stable diffusion and summarizer) to new directory 2022-12-20 21:17:01 -05:00
ebolam
5df401f56e Fix for Original UI model load status box not disappearing after loading
Fix for gpt2 model load error
Moved horde bridge to a thread using modified bridge code (will be broken until push is accepted into db0's git)
2022-12-20 20:35:21 -05:00
somebody
149ec6b49d Gracefully report back if horde bans anonymous generation 2022-12-19 20:27:31 -06:00
somebody
8d113bf0aa Handle no action id 2022-12-19 20:22:33 -06:00
somebody
9ce72e7071 Fix crash when manually generating images 2022-12-19 20:18:41 -06:00
ebolam
2c47982f3b Merge pull request #349 from one-some/ui2-tpu-fix-again
TPU Fix
2022-12-18 18:46:05 -05:00
somebody
338e8b4049 Fix TPU 2022-12-18 16:41:20 -06:00
somebody
bf82f257d1 Remove tpumtjgenerate
Dead code as far as I can tell. Now handled in tpu_raw_generate
2022-12-18 15:29:17 -06:00
ebolam
ba0ece7263 Changed story migration behavior to leave v1 stories in place and ignore stories that have a folder with the same name, rather than moving the file to the new structure. 2022-12-18 16:24:36 -05:00
ebolam
c46dd588be Fix for resetting models 2022-12-18 11:11:51 -05:00