Commit Graph

703 Commits

Author SHA1 Message Date
Gnome Ann 8742453f95 Add safeguards for token budget and text formatting
* Error messages are now shown when memory, author's note, etc. exceeds
  budget by itself
* Formatting options no longer break if there are empty chunks in the
  story (although there shouldn't be any in the first place)
* Number of generated tokens is now kept track of from Python
2021-12-26 18:29:54 -05:00
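A minimal sketch of the kind of check this commit describes (the names and error text are illustrative, not the actual aiserver.py code):

```
def check_token_budget(tokens, budget, label):
    # Raise a clear error as soon as a single component (memory,
    # author's note, ...) exceeds the entire token budget by itself.
    if len(tokens) > budget:
        raise ValueError(f"{label} is {len(tokens)} tokens, which exceeds the budget of {budget}")
```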
henk717 261e8b67dc Windows Color Workaround
A different approach that activates this on Windows; hopefully it is now Linux compatible.
2021-12-26 22:46:15 +01:00
henk717 1107a1386d Enable Terminal Colors
Enable Windows Terminal color support based on feedback from Jojorne. If this causes issues on Linux we can move it to play.bat instead.
2021-12-26 22:22:24 +01:00
Gnome Ann 32a0d7c453 More Lua API fixes
* Removed `vars.model_orig`
* `requirex()` in bridge.lua now maintains a separate module cache for each
  userscript instead of using the same cache for all userscripts
* `vars.lua_deleted` and `vars.lua_edited` are now erased right before running
  the input modifiers instead of right before each time the generation modifiers
  are run
2021-12-26 12:49:28 -05:00
henk717 b9729749ba Update aiserver.py 2021-12-26 02:01:57 +01:00
henk717 ddd9bded30 Store chatname
Also save the chatname in the settings for later re-use by the user.
2021-12-26 01:55:27 +01:00
henk717 d234f67a90 Chat Mode
The initial commit for Chat Mode. The nickname part of the UI is missing; other than that, it should be fully functional. To use Chat Mode effectively, first input a small dialogue (around 6 lines: 3 of your own inputs and 3 of the character's) formatted as `Name :`; it will then automate the actions needed to chat properly. While this mode is on, Single Line mode is forced on and Trim Incomplete Sentences is forced off.
2021-12-26 01:51:32 +01:00
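For illustration, a seed dialogue in the `Name :` format this commit describes might look like the following (the names are invented):

```
You : Hi, how was your day?
Mira : Pretty good! I spent most of it reading.
You : Anything interesting?
Mira : A mystery novel. I can't put it down.
You : What's it about?
Mira : A detective who only solves crimes in her dreams.
```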
henk717 14e5fcd355 AutoTokenizer 2021-12-25 00:48:12 +01:00
henk717 e1cd34268b AutoTokenizer
Future-proofing for new tokenizers; for now this is not needed since everything uses GPT-2, but when that changes we want to be prepared. Not all models have a proper tokenizer config, so if we can't find one we fall back to GPT-2.
2021-12-25 00:44:26 +01:00
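A sketch of the fallback this commit describes, assuming a Hugging Face-style model path (the actual loading code may differ):

```
from transformers import AutoTokenizer, GPT2Tokenizer

model_path = "models/my-custom-model"  # example path

try:
    # Use the model's own tokenizer config when it has one...
    tokenizer = AutoTokenizer.from_pretrained(model_path)
except Exception:
    # ...and fall back to GPT-2 when it doesn't.
    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
```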
henk717 00a0cea077 Update aiserver.py 2021-12-24 23:15:01 +01:00
henk717 1d3370c995 Community model
First batch; more will follow. We will also need to update the other VRAM displays with the changes that have happened, which will happen depending on how the 8-bit stuff goes.
2021-12-24 23:12:14 +01:00
henk717 6952938e88 Update aiserver.py
Crash fix
2021-12-24 19:57:08 +01:00
Gnome Ann b2def30d9d Update Lua modeltype and model API 2021-12-23 16:35:52 -05:00
Gnome Ann c305076cf3 Fix the behaviour of the `softprompt` setting 2021-12-23 16:14:09 -05:00
Gnome Ann 00f611b207 Save softprompt filename in settings 2021-12-23 13:02:11 -05:00
Gnome Ann 924c48a6d7 Merge branch 'united' into gui-and-scripting 2021-12-23 13:02:01 -05:00
henk717 cae0f279e2 Restore Lowmem
Accidentally got replaced in one of my test runs
2021-12-23 18:50:01 +01:00
henk717 25a6e489c1 Remove Replace from Huggingface
Accidentally ended up in the wrong section; for downloads we do not replace anything, only afterwards.
2021-12-23 17:27:09 +01:00
henk717 e7aa92cd86 Update aiserver.py 2021-12-23 17:12:42 +01:00
henk717 9d4113955f Replace NeoCustom
NeoCustom is now obsolete beyond the file selection and the CLI. So after the CLI we adapt the input to a generic model and then use the improved generic routine to handle it. This saves the duplicate effort of maintaining an almost identical routine now that models are handled by their type rather than their name.
2021-12-23 17:10:02 +01:00
henk717 be351e384d Path loading improvements
This fixes a few scenarios from my commit yesterday. Models that have a / in their name are now first loaded from the corrected directory if it exists, before falling back to the original name, to make sure the config is loaded from the correct location. Also includes cache dir fixes and an improved routine for path-loaded models that mimics the NeoCustom option, fixing models that have no model_type specified. Because GPT-2 doesn't work well with this option and should be used exclusively through GPT2Custom, and GPT-J models should have a model_type, we assume it's a Neo model when none is specified.
2021-12-23 14:40:35 +01:00
Gnome Ann 8452940597 Merge branch 'united' into gui-and-scripting 2021-12-23 00:18:11 -05:00
Gnome Ann 2a4d7448be Make `dynamic_processor_wrap` execute warper conditionally
The top-k warper doesn't work properly with an argument of 0, so there
is now the ability to not execute the warper if a condition is not met.
2021-12-22 23:46:25 -05:00
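A minimal sketch of executing a warper conditionally, using transformers' `TopKLogitsWarper` (the real `dynamic_processor_wrap` is more general):

```
from transformers import TopKLogitsWarper

def apply_top_k(input_ids, scores, top_k):
    # top-k filtering misbehaves with an argument of 0, so only execute
    # the warper when the condition is met; otherwise pass logits through.
    if top_k <= 0:
        return scores
    return TopKLogitsWarper(top_k=top_k)(input_ids, scores)
```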
Gnome Ann 7e06c25011 Display icons for active userscripts and softprompts
Also fixes the userscript menu so that the active userscripts preserve
the previously selected order as was originally intended.
2021-12-22 23:33:27 -05:00
henk717 a2d8347939 Replace model path differently
The path correction was applied too soon and broke online loading; the replace is now applied where it is relevant instead.
2021-12-23 03:05:53 +01:00
henk717 4ff1a6e940 Model Type support
Automatically detect or assume the model type so we do not have to hardcode all the different models people might use. This makes the behavior of --model almost identical to the NeoCustom behavior as far as the CLI is concerned, but only if the model_type is defined in the model's config file.
2021-12-23 02:50:06 +01:00
henk717 2d7a00525e Path fix
My last commit didn't adjust the file location properly; this is now fixed.
2021-12-23 01:47:05 +01:00
henk717 81120a0524 Compatibility Fixes
Rather than handling vars.custmodpath or vars.model in all the other parts of the code, I opted to just set vars.custmodpath to make the behavior more consistent now that it always loads from the same location.
2021-12-23 00:36:08 +01:00
Gnome Ann c549ea04a9 Always use all logit warpers
Now that the logit warper parameters can be changed mid-generation by
generation modifiers, the logit warpers have to be always on.
2021-12-22 17:29:07 -05:00
Gnome Ann 1e1b45d47a Add support for multiple library paths in bridge.lua 2021-12-22 14:24:31 -05:00
Gnome Ann fc04ff3a08 World info folders can now be collapsed by clicking on the folder icon 2021-12-22 13:12:35 -05:00
Gnome Ann d538782b1e Add individual configuration files for userscripts 2021-12-22 02:59:31 -05:00
Gnome Ann 380b54167a Make transformers warpers dynamically update their parameters
So that if you change, e.g., `top_p`, from a Lua generation modifier or
from the settings menu during generation, the rest of the generation
will use the new setting value instead of retaining the settings it had
when generation began.
2021-12-21 22:12:24 -05:00
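A sketch of the idea: rather than freezing the parameter when generation starts, the warper re-reads it on every call (illustrative class, not the actual implementation):

```
from transformers import TopPLogitsWarper

class DynamicTopPWarper:
    def __init__(self, settings):
        self.settings = settings  # shared, mutable settings object

    def __call__(self, input_ids, scores):
        # Re-read top_p on every step so a mid-generation change (from
        # the settings menu or a Lua generation modifier) takes effect.
        return TopPLogitsWarper(top_p=self.settings.top_p)(input_ids, scores)
```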
Gnome Ann caef3b7460 Disable `low_cpu_mem_usage` when using GPT-2
Attempting to use transformers 4.11.0's experimental `low_cpu_mem_usage`
feature with GPT-2 models usually results in the output repeating a
token over and over or otherwise containing an incoherent response.
2021-12-20 19:54:19 -05:00
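A sketch of applying the flag conditionally, in the spirit of the `maybe_low_cpu_mem_usage()` helper that appears later in this log (its exact signature and logic are assumed here):

```
from transformers import AutoModelForCausalLM

def maybe_low_cpu_mem_usage(model_type):
    # GPT-2 output becomes incoherent with low_cpu_mem_usage, so only
    # pass the flag for other architectures.
    return {} if model_type == "gpt2" else {"low_cpu_mem_usage": True}

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-neo-1.3B",  # example model
    **maybe_low_cpu_mem_usage("gpt_neo"),
)
```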
henk717 7b56940ed7
Merge pull request #47 from VE-FORBRYDERNE/scripting
Lua API fixes
2021-12-20 04:32:25 +01:00
Gnome Ann 341b153360 Lua API fixes
* `print()` and `warn()` now work correctly with `nil` arguments
* Typo: `gpt-neo-1.3M` has been corrected to `gpt-neo-1.3B`
* Regeneration is no longer triggered when writing to `keysecondary` of
  a non-selective key
* Handle `genamt` changes in generation modifier properly
* Writing to `kobold.settings.numseqs` from a generation modifier no
  longer affects the current generation
* Formatting options in `kobold.settings` have been fixed
* Added aliases for setting names
* Fix behaviour of editing story chunks from a generation modifier
* Warnings are now yellow instead of red
* kobold.logits is now the raw logits prior to being filtered, like
  the documentation says, rather than after being filtered
* Some erroneous comments and error messages have been corrected
* These parts of the API have now been implemented properly:
    * `compute_context()` methods
    * `kobold.authorsnote`
    * `kobold.restart_generation()`
2021-12-19 20:18:28 -05:00
Gnome Ann 6aba869fb7 Make sure uninitialized WI entries are given UIDs when loading saves 2021-12-18 18:00:06 -05:00
Gnome Ann 769333738d Fix behaviour of `kobold.outputs` with read-only and no prompt gen 2021-12-17 12:59:01 -05:00
henk717 6d9063fb8b No Prompt Gen
Allow people to enter a prompt without the AI generating anything. Combined with Always Add Prompt, this is a very useful feature that allows people to write world information first and then perform a specific action. This mimics the behavior previously seen in AI Dungeon forks, where it prompts for world information and then asks for an action, and can be particularly useful for people who want the prompt to always be part of the generation.
2021-12-16 12:47:44 +01:00
henk717 f3b4ecabca
Merge pull request #44 from VE-FORBRYDERNE/patch
Fix an error that occurs when all layers are on second GPU
2021-12-16 01:43:03 +01:00
henk717 e3d9c2d690 New download mechanism
Automatically converts Huggingface cache models to full models on (down)load.
WARNING: This wipes the old cache/ dir inside the KoboldAI folder; make a backup before you run these models if you are bandwidth constrained.
2021-12-16 01:40:04 +01:00
Gnome Ann 19d2356253 Fix an error that occurs when all layers are on second GPU 2021-12-15 19:03:49 -05:00
henk717 5e3e3f3578 Fix float16 models
Forcefully convert float16 models to work on the CPU
2021-12-16 00:31:51 +01:00
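A sketch of the conversion, assuming the model was loaded in float16 (many float16 ops are unsupported or broken on CPU, so the weights are upcast after loading):

```
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2", torch_dtype=torch.float16)
if not torch.cuda.is_available():
    # float16 ops are poorly supported on CPU, so upcast to float32
    model = model.float()
```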
Gnome Ann 9097aac4a8 Show full stack trace for generator errors to help in diagnosing errors 2021-12-15 02:03:08 -05:00
Gnome Ann 2687135e05 Fix a strange bug where max tokens was capped at 1024
This seems to be related to the model config files, because only certain
models have this problem, and replacing ALL configuration files of a
"bad" model with those of a "good" model of the same type would fix the
problem.

Shouldn't be required anymore.
2021-12-15 00:45:41 -05:00
Gnome Ann 1551c45ba4 Prevent dynamic scanning from generating too many tokens 2021-12-14 23:39:04 -05:00
Gnome Ann 629988ce13 Fix a problem with the Lua regeneration API
It was an egregious typo that caused tokens to be rearranged on
regeneration.
2021-12-14 23:04:03 -05:00
henk717 6670168a47 Update aiserver.py 2021-12-14 16:26:23 +01:00
Gnome Ann 6e6e0b2b4d Allow Lua to stop generation from input modifier 2021-12-13 19:32:01 -05:00
Gnome Ann e9ed8602b2 Add a "corescript" setting 2021-12-13 19:28:33 -05:00
Gnome Ann e5bb20cc8f Fix Lua regeneration system 2021-12-13 19:17:18 -05:00
Gnome Ann 462040ed6f Restore missing `loadsettings()` call 2021-12-13 18:39:33 -05:00
Gnome Ann 661cca63e8 Make sure stopping criteria still work with dynamic scan off 2021-12-13 18:10:51 -05:00
Gnome Ann 338d437ea3 Use eventlet instead of gevent-websocket 2021-12-13 17:19:04 -05:00
Gnome Ann 34c52a1a23 Remove escape characters from all error messages 2021-12-13 11:47:34 -05:00
Gnome Ann 11f9866dbe Enable more of the IO library in Lua sandbox
Also changes the Lua warning color to red.
2021-12-13 11:22:58 -05:00
Gnome Ann 28e86563b8 Change `self.scores` to `scores` in aiserver.py 2021-12-13 11:18:01 -05:00
Gnome Ann 82e149ee02 Catch Lua errors properly 2021-12-13 02:32:09 -05:00
Gnome Ann 5f06d20085 Format Lua printed messages and warnings 2021-12-13 01:59:53 -05:00
Gnome Ann d2f5544468 Add Userscripts menu into GUI 2021-12-13 01:03:26 -05:00
Gnome Ann 5d13339a52 Allow the retry button to call the Lua scripts properly 2021-12-12 20:48:10 -05:00
Gnome Ann 39bfb0862a Allow user input to be modified from Lua
Also adds some handlers in the Lua code for when the game is not started
yet
2021-12-12 20:44:03 -05:00
Gnome Ann fbf3e7615b Add API for generated tokens and output text 2021-12-12 19:27:20 -05:00
Gnome Ann ceabd2ef7b Add Lua API for editing logits during generation
TPU backend not supported yet.
2021-12-12 16:18:45 -05:00
Gnome Ann e2c3ac041b Complete the Lua generation halting API 2021-12-12 12:52:03 -05:00
Gnome Ann d76dd35791 Add Lua API for reading model information 2021-12-12 12:09:59 -05:00
Gnome Ann 00eb125ad0 Allow Lua API to toggle dynamic scan 2021-12-12 01:55:46 -05:00
Gnome Ann 5692a7dfe2 Add Lua API for reading the text the user submitted to the AI 2021-12-12 01:52:42 -05:00
Gnome Ann 03453c4e27 Change script directory tree
Userscripts have been moved from /scripts/userscripts to /userscripts.

Core scripts have been moved from /scripts/corescripts to /cores.
2021-12-11 23:46:30 -05:00
Gnome Ann 36209bfe69 Add Lua API for story chunks 2021-12-11 23:44:07 -05:00
Gnome Ann 8e6a62259e Fix the Lua tokenizer API 2021-12-11 21:24:34 -05:00
Gnome Ann 67974947b2 Fix numerous problems in the Lua world info API 2021-12-11 19:11:38 -05:00
Gnome Ann 3327f1b471 Fix Lua settings API 2021-12-11 17:01:41 -05:00
Gnome Ann f8aa578f41 Enable generation modifiers for transformers backend only 2021-12-11 16:28:25 -05:00
Gnome Ann e289a0d360 Connect bridge.lua to aiserver.py
Also enables the use of input modifiers and output modifiers, but not
generation modifiers.
2021-12-11 12:45:45 -05:00
Gnome Ann 35966b2007 Upload bridge.lua, default.lua and some Lua libs
base64
inspect
json.lua
Lua-hashings
Lua-nums
Moses
mt19937ar-lua
Penlight
Serpent
2021-12-10 19:45:57 -05:00
Gnome Ann 683bcb824f Merge branch 'united' into world-info 2021-12-05 13:06:32 -05:00
Gnome Ann 6d8517e224 Fix some minor coding errors 2021-12-05 11:39:59 -05:00
Gnome Ann 150ce033c9 TPU backend no longer needs to recompile after changing softprompt 2021-12-05 02:49:15 -05:00
Gnome Ann b99ac92a52 WI folders and WI drag-and-drop 2021-12-04 23:59:28 -05:00
henk717 44d8068bab Ngrok Support
Not recommended for home users due to DDoS risks, but might make Colab tunnels more reliable.
2021-11-29 18:11:14 +01:00
Gnome Ann 9f51c42dd4 Allow bad words filter to ban <|endoftext|> token
The official transformers bad words filter doesn't allow this by
default. Finetune's version does allow this by default, however.
2021-11-27 11:42:06 -05:00
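The stock filter strips the EOS token from `bad_words_ids`; one way to ban it anyway is to mask the logit directly, sketched below (not necessarily how Finetune's version does it):

```
def ban_eos_token(scores, eos_token_id):
    # scores: (batch, vocab_size) logits tensor. Setting the logit to
    # -inf makes <|endoftext|> impossible to sample.
    scores[:, eos_token_id] = -float("inf")
    return scores
```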
henk717 2bc93ba37a
Whitelist 6B in breakmodel
Now that we properly support it, allow the menu option to use breakmodel
2021-11-27 10:09:54 +01:00
henk717 b56ee07ffa
Fix for CPU mode
Recent optimizations caused the CPU version to load in an incompatible format; we now convert it back to the correct format after loading it efficiently first.
2021-11-27 05:34:29 +01:00
Gnome Ann e5e2fb088a Remember to actually import `GPTJModel` 2021-11-26 12:38:52 -05:00
Gnome Ann 871ed65570 Remove an unnecessary `**maybe_low_cpu_mem_usage()` 2021-11-26 11:42:04 -05:00
Gnome Ann a93a76eb01 Load model directly in fp16 if using GPU or breakmodel 2021-11-26 10:55:52 -05:00
Gnome Ann 32e1d4a7a8 Enable `low_cpu_mem_usage` 2021-11-25 18:09:25 -05:00
Gnome Ann 25c9be5d02 Breakmodel support for GPTJModel 2021-11-25 18:09:16 -05:00
Gnome Ann f8bcc3411b In breakmodel mode, move layers to GPU as soon as model loads
Rather than during the first generation.
2021-11-25 11:44:41 -05:00
Gnome Ann cbb6efb656 Move TFS warper code into aiserver.py 2021-11-24 13:36:54 -05:00
henk717 a2c82bbcc8 num_layers fixes
As requested by VE_FORBRYDERNE. (Possibly implemented in too many places; needs testing, but since the other one is already broken I am committing it first so I can test more easily.)
2021-11-24 03:44:11 +01:00
henk717 d7a2424d2d No Half on CPU
Should fix CPU execution
2021-11-23 17:14:01 +01:00
Gnome Ann be0881a8d0 Use model.config.n_layer if model.config.num_layers doesn't exist 2021-11-23 10:09:24 -05:00
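A sketch of the described fallback:

```
def get_num_layers(model):
    # GPT-Neo configs expose num_layers; GPT-2 and GPT-J configs expose
    # n_layer instead.
    if hasattr(model.config, "num_layers"):
        return model.config.num_layers
    return model.config.n_layer
```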
Gnome Ann 9b8bcb5516 Always convert soft prompt to float32 if using TPU backend
TPUs do not support float16. Attempting to use a float16 soft prompt
throws an error.
2021-11-21 18:22:10 -05:00
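A sketch of the conversion, assuming the soft prompt is held as a NumPy array before being handed to the TPU backend:

```
import numpy as np

def to_tpu_dtype(soft_prompt):
    # TPUs reject float16, so always upcast the soft prompt tensor.
    if soft_prompt.dtype == np.float16:
        soft_prompt = soft_prompt.astype(np.float32)
    return soft_prompt
```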
Gnome Ann e068aa9f26 Add soft prompt support to TPU backend 2021-11-21 18:08:04 -05:00
Gnome Ann df2768b745 Simplify the comment regex 2021-11-21 01:09:19 -05:00
Gnome Ann 7ab0d96b8a Change the comment regex again to use fixed-length lookbehind 2021-11-21 01:06:31 -05:00
Gnome Ann 624cfbd5a4 Use a smarter regex for comments
If the beginning of the comment is at the beginning of a line AND the
end of a comment is at the end of a line, an additional newline will now
be ignored so that the AI doesn't see a blank line where the comment
was.

For example, consider the following message:
```
Hello
<|This is
  a comment|>
World
```

The AI will now see this:
```
Hello
World
```

instead of this:
```
Hello

World
```
2021-11-21 00:42:57 -05:00
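A sketch of a regex with the behaviour described above (the actual pattern in aiserver.py differs; this one uses a lookahead rather than the fixed-length lookbehind mentioned in the previous commit):

```
import re

# Remove <|...|> comments; when a comment occupies whole lines, also
# swallow one adjacent newline so the AI doesn't see a blank line.
comregex = re.compile(r"\n<\|.*?\|>(?=\n)|<\|.*?\|>\n?", re.DOTALL)

text = "Hello\n<|This is\n  a comment|>\nWorld"
print(comregex.sub("", text))  # prints "Hello" and "World" on adjacent lines
```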
Gnome Ann a51f88aeb3 Also apply comment formatting to prompt in `refresh_story()` 2021-11-21 00:26:45 -05:00
Gnome Ann 1968be82bb Remove comments from prompt in WI processor and InferKit mode 2021-11-20 22:23:06 -05:00
Gnome Ann 8ce8e621ce Fix typo (one of the `comregex_ui` should be `comregex_ai`) 2021-11-20 22:19:12 -05:00
Gnome Ann c2ed31de28 Add syntax for comments <|...|> 2021-11-20 01:27:57 -05:00
Gnome Ann a65c4de840 Integrate TPU backend
This commit puts the TPU backend code directly in to the KoboldAI code
to make it easier to modify.
2021-11-19 18:06:57 -05:00
henk717 4a678deaa5
Merge branch 'KoboldAI:main' into united 2021-11-18 06:51:44 +01:00
henk717 b25c54cf91 Polishing and Optimizations
Multiple things have changed. For now, models default to half mode even on the official transformers to make sure they are as efficient on the GPU as finetune's. GPU selection is streamlined, and cache files are now stored inside the KoboldAI folder (for the most part). A command line parameter to force models to run at their full size still needs to be added for the few users who want a quality bump at the cost of RAM.
2021-11-18 00:06:57 +01:00
henk717 27ee45b9cc
Merge pull request #31 from VE-FORBRYDERNE/cpu
Fix gen_in device logic in generate()
2021-11-17 22:42:31 +01:00
Gnome Ann 2f0b673b28 Fix gen_in device logic in generate() 2021-11-17 16:37:37 -05:00
henk717 e71271933a
Merge pull request #29 from VE-FORBRYDERNE/hidden-size
Fix hidden size detection for GPTJForCausalLM
2021-11-17 22:30:24 +01:00
Gnome Ann a1bc10246c Support for multiple gens per action with dynamic scan 2021-11-17 16:17:59 -05:00
Gnome Ann 98a72e34a4 Replace slashes in model name with underscores 2021-11-17 15:36:36 -05:00
Gnome Ann ab1a65f13a Fix hidden size detection for GPTJForCausalLM 2021-11-15 11:56:02 -05:00
Gnome Ann 17d07b280a Correct `gpu_layers` to `gpu_blocks` 2021-11-14 21:08:49 -05:00
Gnome Ann 805cb0c8b9 Make sure device_config() still works with all layers on CPU 2021-11-14 18:46:00 -05:00
Gnome Ann 80aee07816 Use old GPU-only generation if all layers are on the same GPU
Apparently, this mode uses less RAM than breakmodel does.
2021-11-14 18:42:18 -05:00
Gnome Ann b0ab30cec4 Re-enable GPU-only generation option 2021-11-14 18:24:51 -05:00
henk717 3e38b462c6 Hidden Size fix for GPT2 Custom
Replaced the JS Hidden Size load with the newer function to fix these models
2021-11-14 16:40:04 +01:00
henk717 ecea169553 Improved Unix Support
Changes the line endings to the Unix format and sets KoboldAI to launch with Python 3 if executed directly.

(cherry picked from commit 5b0977ceb6807c0f80ce6717891ef5e23c8eeb77)
2021-11-13 21:54:32 -05:00
henk717 1596a238f7 Breakmodel automation
The only changes are a small addition to the breakmodel section, where GPU0 is automatically chosen if the CLI options are used without specifying breakmodel. Line endings have been changed to Linux formatting for compatibility reasons.
2021-11-14 03:13:52 +01:00
henk717 8a916116e3
Remove device=0 because of incompatibility
Device=0 breaks some of the PyTorch implementations; removed to restore hardware compatibility to 0.16 levels.
2021-11-14 02:33:27 +01:00
henk717 4bcffc614e
Allow directly running KoboldAI from CLI in Linux
It's made for Python 3, so we assume python3 is installed in its usual location. If it isn't, you can always run it yourself with whatever command you used prior to this change.
2021-11-14 01:57:43 +01:00
henk717 21ae45e9ab
Merge branch 'KoboldAI:main' into united 2021-11-11 17:05:39 +01:00
Gnome Ann 1fadcbe1e3 Send allowsp command on connect instead of on startup 2021-11-11 00:18:46 -05:00
Gnome Ann 2fe815e092 Don't broadcast emit calls inside do_connect()
This prevents the "thinking" animation from appearing on top of the
submit button under certain circumstances:

* When someone connects to the KoboldAI server while the model is
  generating (occurs after generation finishes)
* Occasionally, the browser may suddenly disconnect and reconnect from
  Flask-SocketIO during generation, which causes the same problem
2021-11-11 00:14:12 -05:00
Gnome Ann 11b0291bc4 Use model.transformer.embed_dim if model.transformer.hidden_size doesn't exist 2021-11-10 17:47:14 -05:00
Gnome Ann 752e19a2bb Fix vars.modeldim not always being set 2021-11-10 17:38:30 -05:00
Gnome Ann 2679df9664 Merge branch 'main' into united 2021-11-09 21:33:14 -05:00
henk717 c2371cf801
Merge pull request #23 from VE-FORBRYDERNE/scan-test
Dynamic world info scan
2021-11-10 03:31:42 +01:00
henk717 4af0d9dabd
Merge pull request #78 from VE-FORBRYDERNE/patch
Allow remote mode to load from client-side story files
2021-11-06 16:58:05 +01:00
Gnome Ann 02a56945de Version bump 2021-11-06 11:50:56 -04:00
henk717 bc0f9c8032 Allow remote mode to load from client-side story files
(cherry picked from commit a1345263df)
2021-11-06 11:48:20 -04:00
Gnome Ann 7c099fe93c Allow remote mode to load from client-side story files 2021-11-04 19:33:17 -04:00
Gnome Ann 81bd058caf Make sure calcsubmitbudget uses the correct reference to vars.actions 2021-11-03 18:57:02 -04:00
Gnome Ann a2d7735a51 Dynamic WI scanner should ignore triggers that are already in context 2021-11-03 18:55:53 -04:00
Gnome Ann ecfbbdb4a9 Merge branch 'united' into scan-test 2021-11-03 18:23:22 -04:00
Gnome Ann 0fa47b1249 Fix budget calculation for stories with at least one non-prompt chunk 2021-11-03 18:22:31 -04:00
Gnome Ann c11dab894e Put placeholder variables into calcsubmitbudget 2021-11-03 18:02:19 -04:00
Gnome Ann 9b18068999 Shallow copy story chunks when generating 2021-11-03 17:53:38 -04:00
Gnome Ann b8c3d8c12e Fix generator output having the wrong length 2021-11-03 16:10:12 -04:00
Gnome Ann 5b3ce4510f Make sure that soft_tokens is on the correct device 2021-11-03 16:07:50 -04:00
Gnome Ann 90fd5a538a Merge branch 'united' into scan-test 2021-11-03 12:42:18 -04:00
Gnome Ann fe2987d894 Fix missing break statement in device_config 2021-11-03 12:42:04 -04:00
Gnome Ann bd76ab333c Set numseqs to 1 if using dynamic world info scan 2021-11-03 12:28:17 -04:00
Gnome Ann 0a91ea27b3 Make the dynamic world info scan toggleable 2021-11-03 12:18:48 -04:00
Gnome Ann de3664e73c Add an assertion for the value of already_generated 2021-11-03 12:01:45 -04:00
Gnome Ann ec8ec55256 Dynamic world info scan 2021-11-03 11:54:48 -04:00
henk717 aa998ba5e9
Merge pull request #20 from VE-FORBRYDERNE/sp
Soft prompt support for PyTorch models
2021-10-30 00:35:44 +02:00
Gnome Ann 206c01008e Fix budget calculation when using soft prompt 2021-10-29 11:44:51 -04:00
henk717 c9c370aa17
Merge branch 'KoboldAI:main' into united 2021-10-28 23:29:29 +02:00
Gnome Ann bf4e7742ac Patch GPTJForCausalLM, if it exists, to support soft prompting 2021-10-28 17:18:28 -04:00
Gnome Ann 40b4631f6c Clamp input_ids in place
Apparently transformers maintains an internal reference to input_ids
(to use for repetition penalty) so we have to clamp the internal
version, too, because otherwise transformers will throw an out-of-bounds
error upon attempting to access token IDs that are not in the
vocabulary.
2021-10-28 16:52:39 -04:00
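A sketch of the in-place clamp (soft-prompt placeholder IDs sit above the real vocabulary, so both the local tensor and transformers' retained reference must see the clamped values):

```
import torch

def clamp_input_ids(input_ids, vocab_size):
    # clamp_ modifies the tensor in place, so the internal reference that
    # transformers keeps for repetition penalty sees the clamped IDs too.
    input_ids.clamp_(max=vocab_size - 1)
```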
Gnome Ann 24d5d63c9f Use the correct generation min and max when using soft prompt 2021-10-28 16:39:59 -04:00
Gnome Ann 511817132a Don't change the shape of transformer.wte 2021-10-28 15:39:59 -04:00
Gnome Ann a1ae11630a Make sure to cast vars.sp to the correct dtype 2021-10-28 13:22:07 -04:00
Gnome Ann 1556bd32a5 Use torch.where to inject the soft prompt instead of torch.cat 2021-10-28 13:20:14 -04:00
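A sketch of the `torch.where` approach, assuming soft-prompt positions are marked with placeholder token IDs at or above the real vocabulary size:

```
import torch

def inject_soft_prompt(inputs_embeds, input_ids, soft_embeds, vocab_size):
    # Rows whose token ID is a soft-prompt placeholder (>= vocab_size)
    # are swapped for the matching soft-prompt embedding; unlike with
    # torch.cat, the tensor shape never changes.
    soft_rows = soft_embeds[(input_ids - vocab_size).clamp(min=0)]
    mask = (input_ids >= vocab_size).unsqueeze(-1)
    return torch.where(mask, soft_rows, inputs_embeds)
```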
Gnome Ann 248e0bd24b Fix soft prompt loading code 2021-10-28 00:29:42 -04:00
Gnome Ann 4e3cc93020 Merge branch 'united' into sp 2021-10-23 11:45:03 -04:00
henk717 7b73d7cfdd Single Line Mode
Adds Single Line mode, optimized for things like chatbot testing and other cases where you want to have control over what happens after a paragraph.

This can also be used as a foundation for a chatbot optimized interface mode.
2021-10-23 17:30:48 +02:00
Gnome Ann 1f449a9dda Soft prompt support (6B Colabs not supported yet) 2021-10-22 14:18:10 -04:00
Gnome Ann 3501f03153 Create settings directory if it doesn't exist when using InferKit/OAI 2021-10-21 23:33:32 -04:00
henk717 fa0f8af1d6
Merge branch 'KoboldAI:main' into united 2021-10-15 08:23:06 +02:00
henk717 9513240dfb
Version bump
Since VE fixed important things in the editor, I want users to be able to see this more easily.
2021-10-15 08:22:32 +02:00
henk717 c854a62549 Clarified GPU Layers
breakmodel_layers and layers are confusing, so the new method is renamed breakmodel_gpulayers. The old one should no longer be used, but since it works in reverse we leave it in so scripts don't break.
2021-10-06 18:55:01 +02:00
henk717 bd063f7590
Merge pull request #19 from VE-FORBRYDERNE/multi-gpu
Multiple GPU support
2021-10-06 18:50:58 +02:00
henk717 82c7eaffb5
Merge branch 'KoboldAI:main' into united 2021-10-06 00:26:08 +02:00
henk717 8893916fef
Don't always submit prompt by default
Feedback from users is that it's better not to always submit the prompt; this is consistent with the randomly generated stories. You can always toggle it on if you need it for coherency. This change does not override existing user settings.
2021-10-06 00:25:05 +02:00
Gnome Ann aa59f8b4b2 Fix CPU layers not displaying correctly when using --layers 2021-10-05 11:29:47 -04:00
Gnome Ann 91352ea9f1 Change the command line flags for breakmodel 2021-10-05 11:22:09 -04:00
Gnome Ann a1e4405aa6 Automatically use breakmodel instead of GPU-only where supported
There's really no reason to use GPU-only mode if breakmodel is supported
because breakmodel can run in GPU-only mode too.
2021-10-05 10:36:51 -04:00
Gnome Ann fb90a7ed17 Change the help text for breakmodel to be more helpful 2021-10-05 10:31:28 -04:00
Gnome Ann 231621e7c2 Use AutoModelForCausalLM for custom models with a model_type 2021-10-05 09:45:12 -04:00
Gnome Ann a283d34b27 Multiple GPU support 2021-10-05 09:38:57 -04:00
Gnome Ann a42b580027 Merge branch 'united' into multi-gpu 2021-10-02 11:44:26 -04:00
henk717 dab58d8393
Merge branch 'KoboldAI:main' into united 2021-09-29 17:05:06 +02:00
Gnome Ann a179bb2820 Bump version number to 1.16.2 2021-09-28 21:50:33 -04:00
Gnome Ann e6cd28243e Scroll to the bottom of the gamescreen after retrying 2021-09-28 21:34:36 -04:00
Gnome Ann bb323152d7 Disable vars.recentedit again 2021-09-28 21:24:08 -04:00
Gnome Ann 2b89bcb16e Fix random story generator 2021-09-28 21:04:26 -04:00
Gnome Ann af93c96c0f Submit Action mode action in Story mode if action is empty 2021-09-28 19:50:00 -04:00
Gnome Ann 9ab1d182ac Guard against empty prompts 2021-09-28 19:48:43 -04:00
henk717 da55ed3b49
Merge branch 'KoboldAI:main' into united 2021-09-28 10:41:01 +02:00
Gnome Ann 03c1a3ebf9 Put vars.recentedit = True in deleterequest() for consistency 2021-09-28 01:10:20 -04:00
Gnome Ann 97e1760af5 Prevent retry from popping chunks after edit/delete 2021-09-28 01:07:11 -04:00
Gnome Ann 231290608d Do a better job of preventing editing of text when required 2021-09-28 00:48:37 -04:00
Gnome Ann 13b81c7523 Prevent the user from deleting the prompt 2021-09-27 22:21:14 -04:00
henk717 01b30b315f
Merge branch 'KoboldAI:main' into united 2021-09-28 02:31:20 +02:00
Gnome Ann e29e7b11ec Bump version number to 1.16.1 2021-09-27 18:12:15 -04:00
Gnome Ann a327eed2c3 Fix editor scrolling issues 2021-09-27 17:44:22 -04:00
henk717 01339f0b87
Merge branch 'KoboldAI:main' into united 2021-09-25 17:44:51 +02:00
Gnome Ann 5893e495b6 Change AutoModel to AutoModelForCausalLM
This fixes breakmodel mode for the official models from the model
selection menu.
2021-09-25 11:41:15 -04:00
henk717 c9290d02dc Update aiserver.py
Better way of checking for the model type
2021-09-25 16:50:24 +02:00
henk717 7d35f825c6 Huggingface GPT-J Support
Finetune's fork has unofficial support which we supported, but this is not compatible with models designed for the official version. In this update we let models decide which transformers backend to use, and fall back to Neo if they don't choose any. We also add the 6B to the menu and, for the time being, switch to the GitHub version of transformers rather than waiting for a release. (Hopefully we can switch back to the conda version before merging upstream.)
2021-09-25 16:26:17 +02:00
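A sketch of the fallback: honour the model_type declared in config.json via the Auto classes, and assume GPT-Neo when none is usable (the exception handling here is illustrative):

```
from transformers import AutoModelForCausalLM, GPTNeoForCausalLM

def load_model(model_path):
    try:
        # Let the model_type in config.json pick the backend (GPT-J, Neo, ...)
        return AutoModelForCausalLM.from_pretrained(model_path)
    except ValueError:
        # No usable model_type: fall back to the GPT-Neo backend
        return GPTNeoForCausalLM.from_pretrained(model_path)
```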
Gnome Ann 4d9eab3785 K80 test 2021-09-23 20:57:18 -04:00
henk717 6520cac75d
Support models that are formatted with CRLF
A new model was released that uses different formatting for its line endings, which causes too many newlines in the UI. This change fixes the issue so that when this happens the UI still displays the content as you would expect, removing the formatting burden from model developers.
2021-09-22 00:34:05 +02:00
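A sketch of normalising such output so the UI doesn't render doubled blank lines:

```
def normalize_newlines(text):
    # Convert CRLF (and stray CR) to LF before the UI renders the text.
    return text.replace("\r\n", "\n").replace("\r", "\n")
```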
henk717 30a7e945a1
Merge pull request #18 from VE-FORBRYDERNE/doc
Correct misindicated model VRAM requirements
2021-09-21 18:54:19 +02:00
henk717 dd1c3ab67e Allow models to set formatting defaults
Originally omitted when model settings were forced. Now that models can only define the defaults for KoboldAI, it's a good idea to give model authors control over what formatting they think works best for their models.
2021-09-21 15:46:54 +02:00
Gnome Ann bbf2bd4026 Correct misindicated model VRAM requirements 2021-09-20 18:49:17 -04:00
Gnome Ann 8df2ccae5b Update client-side story name when saving
If you save a story as a different name than it was loaded with, and
then try to download it as JSON/plaintext, the downloaded file's name
will now match the new story name.
2021-09-19 23:40:52 -04:00
Gnome Ann 99d2ce6887 Don't broadcast getanote and requestwiitem
This prevents duplicate submissions when multiple people are connected
to the same server and one person submits changes to memory, author's
note or world info, by pressing Submit (for author's note or memory) or
Accept (for world info).
2021-09-19 17:00:14 -04:00
Gnome Ann da03360e92 Fix filename/memory/AN not syncing when downloading in some cases 2021-09-19 14:46:30 -04:00
Gnome Ann b5883148a5 Download Story as JSON/Plaintext no longer requires server 2021-09-19 11:41:37 -04:00
henk717 b264823fed More polishing
Improved the default settings and made a better distinction between client and server. The Python parts have been renamed to the server, and the browser to the client, to conform to what you'd expect from a client and a server. The model name will also be shown now instead of NeoCustom.
2021-09-18 21:50:23 +02:00
henk717 1df051a420 Settings per Model
Models can no longer override client settings; instead, settings are now saved on a per-model basis, with the settings provided by the model being the defaults. Users can also specify the desired configuration name as a command line parameter to avoid conflicting file names (such as all Colabs having Colab.settings by default).
2021-09-18 21:18:58 +02:00
henk717 fbd07d82d7 Allow models to override some settings
Many models have that one setting that just works best, like a repetition penalty of 2 or 1.2, while being incompatible with existing settings. The same applies to Adventure mode being on or off. With this change, models are allowed to override user preferences, but only for the categories where we deem this relevant (we don't want them to mess with things like tokens, length, etc.). Users who do not want this behavior can turn it off by changing msoverride to false in client.settings.

Model creators can specify these settings in their config.json with the allowed settings being identical to their client.settings counterparts.
2021-09-18 18:08:50 +02:00
henk717 a651400870 Readme improvements, badwords replacement
A bit of a workaround for now, but the `[` badwords search routine has been replaced with the hardcoded list used by the Colabs. This is far more effective at filtering out artifacts when running models locally. We can get away with this because all known models use the same vocab.json; in the future we will probably want to load this from badwords.json if present, so model creators can bundle it with the model.
2021-09-18 02:16:17 +02:00
henk717 753177a87e Further Readme Progress
More model descriptions and the beginning of the downloadable model section. Lacks download links for now.
2021-09-17 17:59:17 +02:00
henk717 6668bada47 New documentation
Replaces the placeholder readme with a proper one. The menu is also updated and reorganized to encourage users to use custom models and to better reflect real-world VRAM requirements.
2021-09-02 14:04:25 +02:00
Gnome Ann 24d57a7ac3 Clip off ".json" from story name when downloading 2021-09-01 14:07:56 -04:00
Gnome Ann 7e1b1add11 Don't import breakmodel until it's actually needed
breakmodel imports torch which takes a long time to import.
We should delay the importing of torch as long as possible.
2021-09-01 14:04:37 -04:00
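A sketch of the deferred import (the function name is hypothetical; `breakmodel` is the repository's own module):

```
def setup_breakmodel():
    # Deferred import: breakmodel pulls in torch, which is slow to load,
    # so we only pay that cost on the code path that actually uses it.
    import breakmodel
    return breakmodel
```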
Gnome Ann 6bd6415749 Prevent remote-mode-forbidden actions server-side
Since some user interface buttons are disabled while in --remote mode,
they should also be disabled in aiserver.py so a malicious user can't
manually send those commands to the server.
2021-09-01 13:55:25 -04:00
Gnome Ann 8ae9304cda Clean up code for saving story as plaintext 2021-09-01 13:49:04 -04:00
Gnome Ann 543acf9ba4 Also allow downloading stories as plaintext 2021-09-01 13:46:37 -04:00
Gnome Ann fab51b64a3 Don't leave memory mode when downloading 2021-09-01 13:36:05 -04:00
Gnome Ann b5d9aaf785 Remember to actually import "Response" from flask 2021-09-01 13:31:05 -04:00
Gnome Ann 4e9b371564 Merge branch 'united' into story-manager 2021-09-01 13:25:28 -04:00
Gnome Ann 16184ceee8 Catch and display errors from "Save As" 2021-09-01 12:58:01 -04:00
henk717 4151fd1b6a Save story in plain text along the save
Not just saving in .json but also in plain text; this should help story writers get their stories out more easily, especially since they can technically add some markdown to their stories manually in the interface.
2021-09-01 17:41:18 +02:00
henk717 9b3e298089
Foundation for in browser downloading
This adds /download as a URL to immediately download the file; this will allow HTML changes that initiate a file download.
2021-09-01 15:58:56 +02:00
Gnome Ann c276220a35 Allow deleting and renaming stories in the browser 2021-08-31 18:22:30 -04:00
Gnome Ann 63a4048053 Remove typing.Literal (a Python 3.8+ feature) 2021-08-26 15:38:58 -04:00
Gnome Ann 75c68c2b78 Fix world info depth being ignored 2021-08-26 12:50:17 -04:00
Gnome Ann d7605a717b Merge branch 'united' into big-o
This resolves two merge conflicts that arose because this branch was
a few commits behind.
2021-08-26 01:37:40 -04:00
Gnome Ann 8fd8612cca Adventure mode colouring now controlled by a CSS class
So that we can just toggle the class instead of having aiserver.py send
back the entire story.
2021-08-26 01:06:57 -04:00
Gnome Ann 27c7baab92 Prevent some errors when the prompt is the only chunk 2021-08-25 23:58:12 -04:00
Gnome Ann b0d64985bb Fix Retry and Back buttons popping the wrong chunk 2021-08-25 19:56:57 -04:00
henk717 bbd5bd0cd7
Merge pull request #8 from VE-FORBRYDERNE/misc
General usability fixes
2021-08-26 01:56:42 +02:00
Gnome Ann 796f5ffd05 Make vars.actions a dictionary instead of a list 2021-08-25 19:28:26 -04:00
Gnome Ann 6dcd7888c8 Change "recieved" to "received" 2021-08-25 14:55:26 -04:00
Gnome Ann c3528e6221 Retry after Back no longer pops an extra story chunk 2021-08-25 14:54:51 -04:00
Gnome Ann cf677c60fc Stability fixes for back/retry re genseqs/useprompt
* Back and Retry buttons no longer pop a story chunk while in the
  "Select sequence to keep" menu
* No longer freezes if you retry with no story chunks beyond the initial
  prompt chunk
* When "Always Add Prompt" is on, allow Retry even if the prompt is the
  only chunk in the story
* Added error messages for Back and Retry buttons
2021-08-25 14:42:37 -04:00
henk717 d848d03d60
Merge pull request #7 from VE-FORBRYDERNE/wi-constant
Constant world info keys
2021-08-25 20:14:19 +02:00
henk717 3da0c3d24a Remote improvements
Some Colabs use KoboldAI as a subprocess. Rather than making that too complicated for Colab developers, it's better to just dump the Cloudflare link to a log in addition to showing the message on screen. That way, if KoboldAI itself gets filtered, you can easily cat the link or use the existing link-grabbing methods.
2021-08-25 13:57:38 +02:00
Gnome Ann b1c6aee8d3 Integrate inline chunk editor and Adventure mode with Javalar's branch 2021-08-24 19:02:52 -04:00
Gnome Ann 735fc9431b Still HTML-escape chunks if Adventure is off
(cherry picked from commit 3409d8c12e3fbb1e3232f2df82740b012e8f3604)
2021-08-24 18:46:34 -04:00
Gnome Ann 09030573e5 Broadcast updatechunk and removechunk 2021-08-24 18:40:12 -04:00
Gnome Ann 6d5845ff8d Merge https://github.com/KoboldAI/KoboldAI-Client/pull/45 into big-o 2021-08-24 17:27:50 -04:00
Gnome Ann 2a7c6244cb Constant world info keys 2021-08-24 13:45:20 -04:00
Gnome Ann 90e558cf3f Won't freeze anymore if you delete the prompt 2021-08-24 11:24:29 -04:00
henk717 f0962155b8
Merge pull request #5 from VE-FORBRYDERNE/editable-chunks
Scroll down on submit
2021-08-24 01:22:57 +02:00
Gnome Ann 13ce16b859 Scroll down on submit 2021-08-23 19:19:36 -04:00
henk717 c108e080bf Various Fixes
Various fixes, mostly to make the UI play a little nicer in the new edit mode. Also reverted and optimized some of the settings code.
2021-08-24 01:18:09 +02:00
Gnome Ann 3bf467e63c Added dedicated inline editing commands to aiserver.py
It's a lot faster now.
2021-08-23 18:52:45 -04:00
henk717 a151e1a33a Small fix for Authors Notes in multiplayer
Multiplayer support was causing all players to automatically submit author's notes. This is now fixed; only the person submitting the author's note counts.
2021-08-22 15:54:35 +02:00
henk717 09ec15c91b
Merge pull request #3 from VE-FORBRYDERNE/breakmodel
Low VRAM patch
2021-08-21 21:03:46 +02:00
Gnome Ann 3c9ce2c541 Use torch.no_grad() and more garbage collection 2021-08-21 12:15:31 -04:00
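A sketch of the pattern; `model` and `gen_in` are assumed to exist (`gen_in` borrows the name used elsewhere in this log):

```
import gc
import torch

with torch.no_grad():  # generation needs no gradients, saving VRAM
    genout = model.generate(gen_in, do_sample=True, max_length=512)

gc.collect()
torch.cuda.empty_cache()  # release cached blocks back to the driver
```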
Gnome Ann fae15b8a17 Fix typo in previous commit 2021-08-21 10:54:57 -04:00
Gnome Ann a8bbfab87a Actually use args.breakmodel_layers 2021-08-20 20:50:03 -04:00
Gnome Ann e00d9c4362 breakmodel fix for models without lm_head 2021-08-20 19:32:18 -04:00
Gnome Ann 8bfcf86a8b Fix for non-rotary models without "rotary" in config.json 2021-08-20 13:00:53 -04:00
henk717 68836728d4 Update World Info on Submit
Still VERY far from ideal for multiplayer; only one person can realistically edit it at a time, and whoever submits counts. It will need more major interface changes so things can be submitted one by one. But hey, it works, and it's good enough for a group of friends to play the game :D
2021-08-20 17:51:49 +02:00
Gnome Ann 56c9dc2c04 Fix "Expected all tensors to" on non-rotary models
Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking arugment for argument index in method wrapper_index_select)
2021-08-20 11:34:31 -04:00
Gnome Ann 5f82e5ba0d Also clear CUDA cache twice if using breakmodel 2021-08-20 11:17:34 -04:00
Gnome Ann f986c65a4e Manually strip and decode tokens if not using a pipeline 2021-08-20 11:15:32 -04:00
Gnome Ann 7717168676 Only allow --breakmodel if it's supported 2021-08-20 10:52:57 -04:00
Gnome Ann b1c13f832a Implement arrmansa's low VRAM patch 2021-08-20 10:25:03 -04:00
henk717 f12e3576a8 Multiple Browser Session Syncing
Multiplayer anyone? :D
2021-08-20 15:32:02 +02:00
henk717 dd77ac2f3a GPU detection bugfix 2021-08-20 12:30:52 +02:00
henk717 99c5ff240c Command Line Part 2
Automated Colab and GPU selection
2021-08-20 11:39:04 +02:00
henk717 ba20c3407c Command line support
Added command line options for model selection; this makes it usable inside Google Colab or on other unattended servers people might want to use or provide.
2021-08-20 10:49:35 +02:00
henk717 136dd71171 Added --remote Mode
First step towards native Colab support: built-in Cloudflare tunnels easily allow players to play KoboldAI on another device. This mode also removes buttons that would get you stuck if you have no local PC access.
2021-08-20 00:37:59 +02:00
henk717 72bfc417da top_k and tfs support by Frogging101
Adds top_k and tfs support, also fixes a SocketIO error.
2021-08-19 14:47:57 +02:00
henk717 33215a87b3 Added VE_FORBRYDERNE's Selective World Info
This update allows you to selectively choose when World Info is loaded for more control and RAM savings.
2021-08-19 13:48:33 +02:00
henk717 00414d26e2 Integrated VE_FORBRYDERNE's Adventure Mode + Cleanup
Adventure Mode allows you to play this like AID; perfect for Choose Your Own Adventure models.
2021-08-19 13:18:01 +02:00
henk717 efbe40f1f6 Random Story Generator
Add the Random Story Generator and more user-friendly defaults
2021-08-19 12:54:44 +02:00
Yves Dubois 81aba7cba8 Fix typo 2021-06-15 01:02:11 -04:00
Javalar 9559347f82
Update or remove targeted chunks in Game Screen (#2) 2021-06-15 00:59:08 -04:00
Ralf e9b62cd3ae escape the prompt too 2021-06-02 21:23:36 +02:00
KoboldAI Dev 1e95f7e1a5 Hotfix: HTML escaped story output. Shodan can no longer run JS popups in your browser. 2021-06-02 09:01:13 -04:00
Yves Dubois 4cb3df5e7e Performance increase for `refresh_story` on large stories 2021-05-29 21:36:24 -04:00
KoboldAI Dev 65ad0e01e3 Bugfix for InferKit submit failing when starting new story. 2021-05-29 20:43:30 -04:00
KoboldAI Dev bed1eba6eb Added option to generate multiple responses per action.
Added ability to import World Info files from AI Dungeon.
Added slider for setting World Info scan depth.
Added toggle to control whether prompt is submitted each action.
Added 'Read Only' mode with no AI to startup.
Fixed GPU/CPU choice prompt appearing when GPU isn't an option.
Added error handling to generator calls for CUDA OOM message
Added generator parameter to only return new text
2021-05-29 05:46:03 -04:00
KoboldAI Dev f9bbb174a6 Added OpenAI API support
Added in-browser Save/Load/New Story controls
(Force a full refresh in your browser!)
Fixed adding InferKit API key if client.settings already exists
Added cmd calls to bat files so they'll stay open on error
Wait animation now hidden on start state/restart
2021-05-22 05:28:40 -04:00
KoboldAI Dev 4996e0ff46 Bugfixes:
Improvements to pruning context from text returned from the AI
Colab errors should no longer throw JSON decode errors in client
Improved logic for World Info scanning
Fix for index error in addsentencespacing
2021-05-18 17:59:59 -04:00
KoboldAI Dev 3d070f057e Bugfixes:
Expanded bad_word flagging for square brackets to combat Author's Note leakage
World Info should now work properly if you have an Author's Note defined
Set generator to use cache to improve performance of custom Neo models
Added error handling for Colab disconnections
Now using tokenized & detokenized version of last action to parse out new content
Updated readme
2021-05-17 20:28:18 -04:00
ioncorimenia 0e855ef1d8 Catch some edge cases when importing 2021-05-17 16:00:32 +02:00
KoboldAI Dev 95cb94e979 Compatibility update for latest AIDCAT export format 2021-05-16 17:45:21 -04:00
KoboldAI Dev ce2e4e1f9e Switched aidg.club import from HTML scrape to API call
Added square bracket to bad_words_ids to help suppress AN tag from leaking into generator output
Added version number to CSS/JS ref to address browser loading outdated versions from cache
2021-05-16 14:53:19 -04:00
KoboldAI Dev 47f1f7a85b Corrected requests import location for aidg.club support 2021-05-16 05:37:38 -04:00
KoboldAI Dev b05a73a04f Added ability to import aidg.club scenarios
Changed menu bar to bootstrap navbar to allow for dropdown menus
2021-05-16 05:29:39 -04:00
KoboldAI Dev 2cef3bceaf Bugfix for save function not appending .json extension by default
Bugfix for New Story function not clearing World Info from previous story
Torch will not be initialized unless you select a local model, as there's no reason to invoke it for InferKit/Colab
Changed JSON file writes to use indentation for readability
2021-05-15 19:29:41 -04:00
KoboldAI Dev 429c9b13f5 Bug fixes for AIDCAT import issues.
Modified CSS to prevent Import dialog from expanding off the page.
Updated readme with Colab link.
2021-05-14 16:27:47 -04:00
KoboldAI Dev 5d53f1a676 It helps if you commit all the files in the bugfix 2021-05-14 02:39:36 -04:00
KoboldAI Dev c9b6f89d1d Hotfix for Google Colab generator call failing when called from a fresh prompt. 2021-05-13 23:30:54 -04:00
KoboldAI Dev 3c0638bc73 Added support for running model remotely on Google Colab 2021-05-13 18:58:52 -04:00
KoboldAI Dev 0b113a75b4 Hotfix for tokenizer modifying spaced ellipses and breaking new text recognition. 2021-05-13 09:35:11 -04:00
KoboldAI Dev c0736a8ec7 Added World Info
Added additional punctuation triggers for Add Sentence Spacing format
Added better screen reset logic when refreshing screen or restarting server
2021-05-13 01:26:42 -04:00
KoboldAI Dev fff77c5a88 Hotfix for top_p parameter in generator call 2021-05-11 14:16:34 -04:00
KoboldAI Dev 1cc069a779 Added ability to import AIDungeon games from AIDCAT 2021-05-11 00:27:34 -04:00
KoboldAI Dev b55266a7c8 Added Formatting options
Added Bootstrap toggle library for UI
Added injection points for input/output modification
2021-05-10 19:17:10 -04:00
KoboldAI Dev 0e0947d93a Bugfix: Add apikey check to loadsettings 2021-05-10 09:33:41 -04:00
KoboldAI Dev 739a8a5268 Bugfix: Check for keys in client.settings before attempting to access 2021-05-10 09:26:31 -04:00
KoboldAI Dev ba1ba0fc8a Reduced default max_length parameter to 512.
Added warning about VRAM usage to Max Tokens tooltip.
2021-05-07 19:04:51 -04:00
KoboldAI Dev d632976fbf Settings menu modularized.
Help text added to settings items.
Settings now saved to client file when changed.
Separated transformers settings and InferKit settings.
Reorganized model select list.
2021-05-07 14:32:10 -04:00
KoboldAI Dev a27d5beb36 Replaced easygui with tkinter to address file prompts appearing beneath game window
Removed easygui from requirements.txt
Save directory is no longer stored in save file for privacy
2021-05-05 11:18:24 -04:00
KoboldAI Dev 229b10cb91 Added support for Author's Note
Increased input textarea height
Removed generator options from save/load system
Set output length slider to use steps of 2
2021-05-05 03:04:06 -04:00
KoboldAI Dev 0ce77f4875 Fixed InferKit API requests sending a default top_p value instead of the user-selected value 2021-05-04 11:48:24 -04:00
KoboldAI Dev 6e54f654d6 Added settings menu to adjust generator parameters from game UI
Fixed text scrolling when content exceeded game screen height
2021-05-04 09:56:48 -04:00
KoboldAI Dev 1c9c219251 Added support for selecting custom trained models 2021-05-04 01:47:23 -04:00
KoboldAI Dev 734b0b54d4 Added VRAM requirements info to model list
Added ability to opt for CPU gen if you have GPU support
Added better error checking to model selection
2021-05-03 15:19:03 -04:00
KoboldAI Dev 1214062292 Fixed Max Length limits not being enforced for transformers & InferKit 2021-05-03 13:57:27 -04:00
KoboldAI Dev ace2b2db12 Fixing GPU support broke CPU support. Now testing for capabilities before creating pipeline 2021-05-03 00:24:16 -04:00
KoboldAI Dev 97ad42efe6 Added device selection to transformers pipeline request to utilize GPU for inference 2021-05-02 23:34:33 -04:00
KoboldAI 7476163494
Initial Upload 2021-05-02 18:46:45 -04:00