Commit Graph

394 Commits

Author SHA1 Message Date
henk717 00a0cea077 Update aiserver.py 2021-12-24 23:15:01 +01:00
henk717 1d3370c995 Community model
First batch, with more to come; we will also need to update the other VRAM displays with the changes that have happened. That will happen depending on how the 8-bit stuff goes.
2021-12-24 23:12:14 +01:00
henk717 6952938e88 Update aiserver.py
Crash fix
2021-12-24 19:57:08 +01:00
Gnome Ann b2def30d9d Update Lua modeltype and model API 2021-12-23 16:35:52 -05:00
Gnome Ann c305076cf3 Fix the behaviour of the `softprompt` setting 2021-12-23 16:14:09 -05:00
Gnome Ann 00f611b207 Save softprompt filename in settings 2021-12-23 13:02:11 -05:00
Gnome Ann 924c48a6d7 Merge branch 'united' into gui-and-scripting 2021-12-23 13:02:01 -05:00
henk717 cae0f279e2 Restore Lowmem
Accidentally got replaced in one of my test runs
2021-12-23 18:50:01 +01:00
henk717 25a6e489c1 Remove Replace from Huggingface
Accidentally ended up in the wrong section; for downloads we do not replace anything, only afterwards.
2021-12-23 17:27:09 +01:00
henk717 e7aa92cd86 Update aiserver.py 2021-12-23 17:12:42 +01:00
henk717 9d4113955f Replace NeoCustom
NeoCustom is now obsolete beyond the file selection and the CLI. So after the CLI we adapt the input to a generic model and then use the improved generic routine to handle it. This saves the duplicate effort of maintaining an almost identical routine now that models are handled by their type and not their name.
2021-12-23 17:10:02 +01:00
henk717 be351e384d Path loading improvements
This fixes a few scenarios from my commit yesterday: models that have a / in their name are now first loaded from the corrected directory, if it exists, before we fall back to the original name, to make sure the config loads from the correct location. It also includes cache dir fixes and an improved routine for path-loaded models that mimics the NeoCustom option, fixing models that have no model_type specified. Because GPT2 doesn't work well with this option and should be used exclusively with GPT2Custom, and GPT-J models should have a model_type, we assume it's a Neo model when none is specified.
2021-12-23 14:40:35 +01:00
Gnome Ann 8452940597 Merge branch 'united' into gui-and-scripting 2021-12-23 00:18:11 -05:00
Gnome Ann 2a4d7448be Make `dynamic_processor_wrap` execute warper conditionally
The top-k warper doesn't work properly with an argument of 0, so there
is now the ability to not execute the warper if a condition is not met.
2021-12-22 23:46:25 -05:00
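A minimal sketch of the conditional wrapping this commit describes, assuming a simplified stand-in for KoboldAI's vars object; the real `dynamic_processor_wrap` in aiserver.py may differ:
```python
from types import SimpleNamespace
from transformers import TopKLogitsWarper

settings = SimpleNamespace(top_k=0)  # stand-in for KoboldAI's vars object

def dynamic_processor_wrap(cls, field, var_name, cond=None):
    # Patch the warper class so every call re-reads its parameter from
    # `settings` and skips the warper entirely when `cond` fails.
    old_call = cls.__call__
    def new_call(self, input_ids, scores):
        value = getattr(settings, var_name)
        setattr(self, field, value)
        if cond is None or cond(value):
            scores = old_call(self, input_ids, scores)
        return scores
    cls.__call__ = new_call

# The top-k warper misbehaves with an argument of 0, so only run it when k > 0:
dynamic_processor_wrap(TopKLogitsWarper, "top_k", "top_k", cond=lambda k: k > 0)
```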
Gnome Ann 7e06c25011 Display icons for active userscripts and softprompts
Also fixes the userscript menu so that the active userscripts preserve
the previously selected order as was originally intended.
2021-12-22 23:33:27 -05:00
henk717 a2d8347939 Replace model path differently
The path correction was applied too soon and broke online loading; the replace is now applied where it is relevant instead.
2021-12-23 03:05:53 +01:00
henk717 4ff1a6e940 Model Type support
Automatically detect or assume the model type so we do not have to hardcode all the different models people might use. This makes the behavior of --model almost identical to the NeoCustom behavior as far as the CLI is concerned, but only if the model_type is defined in the model's config file.
2021-12-23 02:50:06 +01:00
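For illustration, one way such detection can work, reading model_type from the model's config; `detect_model_type` is a hypothetical helper, not the actual code:
```python
from transformers import AutoConfig

def detect_model_type(model_name_or_path):
    try:
        config = AutoConfig.from_pretrained(model_name_or_path)
        return config.model_type  # e.g. "gpt2", "gpt_neo", "gptj"
    except ValueError:
        # model_type missing from the config file; the commits below fall
        # back to assuming a Neo model in this case
        return "gpt_neo"
```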
henk717 2d7a00525e Path fix
My last commit didn't compensate for the file location properly; this is now fixed.
2021-12-23 01:47:05 +01:00
henk717 81120a0524 Compatibility Fixes
Rather than coding vars.custmodpath or vars.model into all the other parts of the code, I opted to just set vars.custmodpath instead, to make the behavior more consistent now that it always loads from the same location.
2021-12-23 00:36:08 +01:00
Gnome Ann c549ea04a9 Always use all logit warpers
Now that the logit warper parameters can be changed mid-generation by
generation modifiers, the logit warpers have to be always on.
2021-12-22 17:29:07 -05:00
Gnome Ann 1e1b45d47a Add support for multiple library paths in bridge.lua 2021-12-22 14:24:31 -05:00
Gnome Ann fc04ff3a08 World info folders can now be collapsed by clicking on the folder icon 2021-12-22 13:12:35 -05:00
Gnome Ann d538782b1e Add individual configuration files for userscripts 2021-12-22 02:59:31 -05:00
Gnome Ann 380b54167a Make transformers warpers dynamically update their parameters
So that if you change, e.g., `top_p`, from a Lua generation modifier or
from the settings menu during generation, the rest of the generation
will use the new setting value instead of retaining the settings it had
when generation began.
2021-12-21 22:12:24 -05:00
Gnome Ann caef3b7460 Disable `low_cpu_mem_usage` when using GPT-2
Attempting to use transformers 4.11.0's experimental `low_cpu_mem_usage`
feature with GPT-2 models usually results in the output repeating a
token over and over or otherwise containing an incoherent response.
2021-12-20 19:54:19 -05:00
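A sketch of the guard this implies, in the spirit of the `maybe_low_cpu_mem_usage()` helper that appears later in this log; the signature and version check are assumptions:
```python
from packaging import version
import transformers

def maybe_low_cpu_mem_usage(model_type="gpt_neo"):
    # Only pass the experimental flag when transformers supports it and the
    # model is not GPT-2 (which produces incoherent output with it).
    if version.parse(transformers.__version__) < version.parse("4.11.0"):
        return {}
    if model_type == "gpt2":
        return {}
    return {"low_cpu_mem_usage": True}

# usage: AutoModelForCausalLM.from_pretrained(path, **maybe_low_cpu_mem_usage())
```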
henk717 7b56940ed7
Merge pull request #47 from VE-FORBRYDERNE/scripting
Lua API fixes
2021-12-20 04:32:25 +01:00
Gnome Ann 341b153360 Lua API fixes
* `print()` and `warn()` now work correctly with `nil` arguments
* Typo: `gpt-neo-1.3M` has been corrected to `gpt-neo-1.3B`
* Regeneration is no longer triggered when writing to `keysecondary` of
  a non-selective key
* Handle `genamt` changes in generation modifier properly
* Writing to `kobold.settings.numseqs` from a generation modifier no
  longer affects the current generation
* Formatting options in `kobold.settings` have been fixed
* Added aliases for setting names
* Fix behaviour of editing story chunks from a generation modifier
* Warnings are now yellow instead of red
* `kobold.logits` is now the raw logits prior to being filtered, like
  the documentation says, rather than after being filtered
* Some erroneous comments and error messages have been corrected
* These parts of the API have now been implemented properly:
    * `compute_context()` methods
    * `kobold.authorsnote`
    * `kobold.restart_generation()`
2021-12-19 20:18:28 -05:00
Gnome Ann 6aba869fb7 Make sure uninitialized WI entries are given UIDs when loading saves 2021-12-18 18:00:06 -05:00
Gnome Ann 769333738d Fix behaviour of `kobold.outputs` with read-only and no prompt gen 2021-12-17 12:59:01 -05:00
henk717 6d9063fb8b No Prompt Gen
Allow people to enter a prompt without the AI generating anything. Combined with "always add prompt" this is a very useful feature that allows people to write world information first and then perform a specific action. This mimics the behavior previously seen in AI Dungeon forks, where the game prompts for world information and then asks for an action, and can be particularly useful for people who want the prompt to always be part of the generation.
2021-12-16 12:47:44 +01:00
henk717 f3b4ecabca
Merge pull request #44 from VE-FORBRYDERNE/patch
Fix an error that occurs when all layers are on second GPU
2021-12-16 01:43:03 +01:00
henk717 e3d9c2d690 New download mechanism
Automatically converts Huggingface cache models to full models on (down)load.
WARNING: This wipes the old cache/ dir inside the KoboldAI folder; make a backup before you run these models if you are bandwidth constrained.
2021-12-16 01:40:04 +01:00
Gnome Ann 19d2356253 Fix an error that occurs when all layers are on second GPU 2021-12-15 19:03:49 -05:00
henk717 5e3e3f3578 Fix float16 models
Forcefully convert float16 models to work on the CPU
2021-12-16 00:31:51 +01:00
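A minimal sketch of the conversion, assuming a float16 checkpoint loaded for CPU inference:
```python
import torch
from transformers import AutoModelForCausalLM

# Load efficiently in half precision, then force float32: CPU inference
# does not work reliably with float16 weights.
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B",
                                             torch_dtype=torch.float16)
model = model.float()
```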
Gnome Ann 9097aac4a8 Show full stack trace for generator errors to help in diagnosing errors 2021-12-15 02:03:08 -05:00
Gnome Ann 2687135e05 Fix a strange bug where max tokens was capped at 1024
This seems to be related to the model config files, because only certain
models have this problem, and replacing ALL configuration files of a
"bad" model with those of a "good" model of the same type would fix the
problem.

That workaround shouldn't be required anymore.
2021-12-15 00:45:41 -05:00
Gnome Ann 1551c45ba4 Prevent dynamic scanning from generating too many tokens 2021-12-14 23:39:04 -05:00
Gnome Ann 629988ce13 Fix a problem with the Lua regeneration API
It was an egregious typo that caused tokens to be rearranged on
regeneration.
2021-12-14 23:04:03 -05:00
henk717 6670168a47 Update aiserver.py 2021-12-14 16:26:23 +01:00
Gnome Ann 6e6e0b2b4d Allow Lua to stop generation from input modifier 2021-12-13 19:32:01 -05:00
Gnome Ann e9ed8602b2 Add a "corescript" setting 2021-12-13 19:28:33 -05:00
Gnome Ann e5bb20cc8f Fix Lua regeneration system 2021-12-13 19:17:18 -05:00
Gnome Ann 462040ed6f Restore missing `loadsettings()` call 2021-12-13 18:39:33 -05:00
Gnome Ann 661cca63e8 Make sure stopping criteria still work with dynamic scan off 2021-12-13 18:10:51 -05:00
Gnome Ann 338d437ea3 Use eventlet instead of gevent-websocket 2021-12-13 17:19:04 -05:00
Gnome Ann 34c52a1a23 Remove escape characters from all error messages 2021-12-13 11:47:34 -05:00
Gnome Ann 11f9866dbe Enable more of the IO library in Lua sandbox
Also changes the Lua warning color to red.
2021-12-13 11:22:58 -05:00
Gnome Ann 28e86563b8 Change `self.scores` to `scores` in aiserver.py 2021-12-13 11:18:01 -05:00
Gnome Ann 82e149ee02 Catch Lua errors properly 2021-12-13 02:32:09 -05:00
Gnome Ann 5f06d20085 Format Lua printed messages and warnings 2021-12-13 01:59:53 -05:00
Gnome Ann d2f5544468 Add Userscripts menu into GUI 2021-12-13 01:03:26 -05:00
Gnome Ann 5d13339a52 Allow the retry button to call the Lua scripts properly 2021-12-12 20:48:10 -05:00
Gnome Ann 39bfb0862a Allow user input to be modified from Lua
Also adds some handlers in the Lua code for when the game is not started
yet
2021-12-12 20:44:03 -05:00
Gnome Ann fbf3e7615b Add API for generated tokens and output text 2021-12-12 19:27:20 -05:00
Gnome Ann ceabd2ef7b Add Lua API for editing logits during generation
TPU backend not supported yet.
2021-12-12 16:18:45 -05:00
Gnome Ann e2c3ac041b Complete the Lua generation halting API 2021-12-12 12:52:03 -05:00
Gnome Ann d76dd35791 Add Lua API for reading model information 2021-12-12 12:09:59 -05:00
Gnome Ann 00eb125ad0 Allow Lua API to toggle dynamic scan 2021-12-12 01:55:46 -05:00
Gnome Ann 5692a7dfe2 Add Lua API for reading the text the user submitted to the AI 2021-12-12 01:52:42 -05:00
Gnome Ann 03453c4e27 Change script directory tree
Userscripts have been moved from /scripts/userscripts to /userscripts.

Core scripts have been moved from /scripts/corescripts to /cores.
2021-12-11 23:46:30 -05:00
Gnome Ann 36209bfe69 Add Lua API for story chunks 2021-12-11 23:44:07 -05:00
Gnome Ann 8e6a62259e Fix the Lua tokenizer API 2021-12-11 21:24:34 -05:00
Gnome Ann 67974947b2 Fix numerous problems in the Lua world info API 2021-12-11 19:11:38 -05:00
Gnome Ann 3327f1b471 Fix Lua settings API 2021-12-11 17:01:41 -05:00
Gnome Ann f8aa578f41 Enable generation modifiers for transformers backend only 2021-12-11 16:28:25 -05:00
Gnome Ann e289a0d360 Connect bridge.lua to aiserver.py
Also enables the use of input modifiers and output modifiers, but not
generation modifiers.
2021-12-11 12:45:45 -05:00
Gnome Ann 35966b2007 Upload bridge.lua, default.lua and some Lua libs
base64
inspect
json.lua
Lua-hashings
Lua-nums
Moses
mt19937ar-lua
Penlight
Serpent
2021-12-10 19:45:57 -05:00
Gnome Ann 683bcb824f Merge branch 'united' into world-info 2021-12-05 13:06:32 -05:00
Gnome Ann 6d8517e224 Fix some minor coding errors 2021-12-05 11:39:59 -05:00
Gnome Ann 150ce033c9 TPU backend no longer needs to recompile after changing softprompt 2021-12-05 02:49:15 -05:00
Gnome Ann b99ac92a52 WI folders and WI drag-and-drop 2021-12-04 23:59:28 -05:00
henk717 44d8068bab Ngrok Support
Not recommended for home users due to DDoS risks, but might make Colab tunnels more reliable.
2021-11-29 18:11:14 +01:00
Gnome Ann 9f51c42dd4 Allow bad words filter to ban <|endoftext|> token
The official transformers bad words filter doesn't allow this by
default. Finetune's version does allow this by default, however.
2021-11-27 11:42:06 -05:00
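Since the stock filter refuses the EOS token, a custom logits processor is one way to get the same effect as Finetune's version; this is an illustrative sketch, not the actual patch:
```python
import torch
from transformers import LogitsProcessor

class BanTokenLogitsProcessor(LogitsProcessor):
    """Unconditionally ban a single token ID, e.g. <|endoftext|>."""
    def __init__(self, token_id: int):
        self.token_id = token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor):
        scores[:, self.token_id] = -float("inf")  # token can never be sampled
        return scores

# e.g. BanTokenLogitsProcessor(tokenizer.eos_token_id)
```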
henk717 2bc93ba37a
Whitelist 6B in breakmodel
Now that we properly support it, allow the menu option to use breakmodel
2021-11-27 10:09:54 +01:00
henk717 b56ee07ffa
Fix for CPU mode
Recent optimizations caused the CPU version to load in an incompatible format; we now convert it back to the correct format after first loading it efficiently.
2021-11-27 05:34:29 +01:00
Gnome Ann e5e2fb088a Remember to actually import `GPTJModel` 2021-11-26 12:38:52 -05:00
Gnome Ann 871ed65570 Remove an unnecessary `**maybe_low_cpu_mem_usage()` 2021-11-26 11:42:04 -05:00
Gnome Ann a93a76eb01 Load model directly in fp16 if using GPU or breakmodel 2021-11-26 10:55:52 -05:00
Gnome Ann 32e1d4a7a8 Enable `low_cpu_mem_usage` 2021-11-25 18:09:25 -05:00
Gnome Ann 25c9be5d02 Breakmodel support for GPTJModel 2021-11-25 18:09:16 -05:00
Gnome Ann f8bcc3411b In breakmodel mode, move layers to GPU as soon as model loads
Rather than during the first generation.
2021-11-25 11:44:41 -05:00
Gnome Ann cbb6efb656 Move TFS warper code into aiserver.py 2021-11-24 13:36:54 -05:00
henk717 a2c82bbcc8 num_layers fixes
As requested by VE_FORBRYDERNE. (Possibly implemented it in too many places; needs testing, but since the other one is already broken I am committing it first so I can test more easily.)
2021-11-24 03:44:11 +01:00
henk717 d7a2424d2d No Half on CPU
Should fix CPU execution
2021-11-23 17:14:01 +01:00
Gnome Ann be0881a8d0 Use model.config.n_layer if model.config.num_layers doesn't exist 2021-11-23 10:09:24 -05:00
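A sketch of the fallback (here `model` stands for any loaded transformers model):
```python
# Prefer config.num_layers; GPT-2-style configs expose n_layer instead.
num_layers = getattr(model.config, "num_layers", None)
if num_layers is None:
    num_layers = model.config.n_layer
```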
Gnome Ann 9b8bcb5516 Always convert soft prompt to float32 if using TPU backend
TPUs do not support float16. Attempting to use a float16 soft prompt
throws an error.
2021-11-21 18:22:10 -05:00
Gnome Ann e068aa9f26 Add soft prompt support to TPU backend 2021-11-21 18:08:04 -05:00
Gnome Ann df2768b745 Simplify the comment regex 2021-11-21 01:09:19 -05:00
Gnome Ann 7ab0d96b8a Change the comment regex again to use fixed-length lookbehind 2021-11-21 01:06:31 -05:00
Gnome Ann 624cfbd5a4 Use a smarter regex for comments
If the beginning of the comment is at the beginning of a line AND the
end of a comment is at the end of a line, an additional newline will now
be ignored so that the AI doesn't see a blank line where the comment
was.

For example, consider the following message:
```
Hello
<|This is
  a comment|>
World
```

The AI will now see this:
```
Hello
World
```

instead of this:
```
Hello

World
```
2021-11-21 00:42:57 -05:00
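An illustrative regex with this behaviour; the real `comregex_ai`/`comregex_ui` patterns in aiserver.py may differ. The first alternative uses a fixed-length lookbehind, in keeping with the commit above:
```python
import re

# A comment that occupies whole lines is removed together with one adjacent
# newline, so the AI does not see a blank line where the comment was.
comment_pattern = re.compile(r"(?:^|(?<=\n))<\|.*?\|>\n|<\|.*?\|>", re.DOTALL)

text = "Hello\n<|This is\n  a comment|>\nWorld"
print(comment_pattern.sub("", text))  # prints "Hello" then "World", no blank line
```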
Gnome Ann a51f88aeb3 Also apply comment formatting to prompt in `refresh_story()` 2021-11-21 00:26:45 -05:00
Gnome Ann 1968be82bb Remove comments from prompt in WI processor and InferKit mode 2021-11-20 22:23:06 -05:00
Gnome Ann 8ce8e621ce Fix typo (one of the `comregex_ui` should be `comregex_ai`) 2021-11-20 22:19:12 -05:00
Gnome Ann c2ed31de28 Add syntax for comments <|...|> 2021-11-20 01:27:57 -05:00
Gnome Ann a65c4de840 Integrate TPU backend
This commit puts the TPU backend code directly in to the KoboldAI code
to make it easier to modify.
2021-11-19 18:06:57 -05:00
henk717 4a678deaa5
Merge branch 'KoboldAI:main' into united 2021-11-18 06:51:44 +01:00
henk717 b25c54cf91 Polishing and Optimizations
Multiple things have changed. For now, models default to half mode even on the official transformers to make sure it's as efficient on the GPU as finetune's. GPU selection is streamlined, and cache files are now stored inside the KoboldAI folder (for the most part). A new command line parameter to force the models to run at their full size still needs to be added for the few users who would want a quality bump at the cost of RAM.
2021-11-18 00:06:57 +01:00
henk717 27ee45b9cc
Merge pull request #31 from VE-FORBRYDERNE/cpu
Fix gen_in device logic in generate()
2021-11-17 22:42:31 +01:00
Gnome Ann 2f0b673b28 Fix gen_in device logic in generate() 2021-11-17 16:37:37 -05:00
henk717 e71271933a
Merge pull request #29 from VE-FORBRYDERNE/hidden-size
Fix hidden size detection for GPTJForCausalLM
2021-11-17 22:30:24 +01:00
Gnome Ann a1bc10246c Support for multiple gens per action with dynamic scan 2021-11-17 16:17:59 -05:00
Gnome Ann 98a72e34a4 Replace slashes in model name with underscores 2021-11-17 15:36:36 -05:00
Gnome Ann ab1a65f13a Fix hidden size detection for GPTJForCausalLM 2021-11-15 11:56:02 -05:00
Gnome Ann 17d07b280a Correct `gpu_layers` to `gpu_blocks` 2021-11-14 21:08:49 -05:00
Gnome Ann 805cb0c8b9 Make sure device_config() still works with all layers on CPU 2021-11-14 18:46:00 -05:00
Gnome Ann 80aee07816 Use old GPU-only generation if all layers are on the same GPU
Apparently, this mode uses less RAM than breakmodel does.
2021-11-14 18:42:18 -05:00
Gnome Ann b0ab30cec4 Re-enable GPU-only generation option 2021-11-14 18:24:51 -05:00
henk717 3e38b462c6 Hidden Size fix for GPT2 Custom
Replaced the JS Hidden Size load with the newer function to fix these models
2021-11-14 16:40:04 +01:00
henk717 ecea169553 Improved Unix Support
Changes the line-endings to the Unix format and sets KoboldAI to launch with Python3 if executed directly.

(cherry picked from commit 5b0977ceb6807c0f80ce6717891ef5e23c8eeb77)
2021-11-13 21:54:32 -05:00
henk717 1596a238f7 Breakmodel automation
The only changes are a small addition to the breakmodel section, where GPU0 is automatically chosen if the CLI options are used without specifying breakmodel, and line endings converted to Linux format for compatibility reasons.
2021-11-14 03:13:52 +01:00
henk717 8a916116e3
Remove device=0 because of incompatibility
Device=0 breaks some of the PyTorch implementations; removed to restore hardware compatibility to 0.16 levels.
2021-11-14 02:33:27 +01:00
henk717 4bcffc614e
Allow directly running KoboldAI from CLI in Linux
It's made for Python3, so we assume python3 is installed in its usual location. If it isn't, you can always run it yourself with whatever command you used prior to this change.
2021-11-14 01:57:43 +01:00
henk717 21ae45e9ab
Merge branch 'KoboldAI:main' into united 2021-11-11 17:05:39 +01:00
Gnome Ann 1fadcbe1e3 Send allowsp command on connect instead of on startup 2021-11-11 00:18:46 -05:00
Gnome Ann 2fe815e092 Don't broadcast emit calls inside do_connect()
This prevents the "thinking" animation from appearing on top of the
submit button under certain circumstances:

* When someone connects to the KoboldAI server while the model is
  generating (occurs after generation finishes)
* Occasionally, the browser may suddenly disconnect and reconnect from
  Flask-SocketIO during generation, which causes the same problem
2021-11-11 00:14:12 -05:00
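A sketch of the difference, with assumed event and payload names: inside a Flask-SocketIO handler, a plain `emit()` replies only to the requesting client, while `broadcast=True` sends to everyone:
```python
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

@socketio.on("connect")
def do_connect():
    # Without broadcast=True, emit() reaches only the client that just
    # connected, so other clients no longer see a stray "thinking" animation
    # when someone joins mid-generation.
    emit("from_server", {"cmd": "connected"})
```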
Gnome Ann 11b0291bc4 Use model.transformer.embed_dim if model.transformer.hidden_size doesn't exist 2021-11-10 17:47:14 -05:00
Gnome Ann 752e19a2bb Fix vars.modeldim not always being set 2021-11-10 17:38:30 -05:00
Gnome Ann 2679df9664 Merge branch 'main' into united 2021-11-09 21:33:14 -05:00
henk717 c2371cf801
Merge pull request #23 from VE-FORBRYDERNE/scan-test
Dynamic world info scan
2021-11-10 03:31:42 +01:00
henk717 4af0d9dabd
Merge pull request #78 from VE-FORBRYDERNE/patch
Allow remote mode to load from client-side story files
2021-11-06 16:58:05 +01:00
Gnome Ann 02a56945de Version bump 2021-11-06 11:50:56 -04:00
henk717 bc0f9c8032 Allow remote mode to load from client-side story files
(cherry picked from commit a1345263df)
2021-11-06 11:48:20 -04:00
Gnome Ann 7c099fe93c Allow remote mode to load from client-side story files 2021-11-04 19:33:17 -04:00
Gnome Ann 81bd058caf Make sure calcsubmitbudget uses the correct reference to vars.actions 2021-11-03 18:57:02 -04:00
Gnome Ann a2d7735a51 Dynamic WI scanner should ignore triggers that are already in context 2021-11-03 18:55:53 -04:00
Gnome Ann ecfbbdb4a9 Merge branch 'united' into scan-test 2021-11-03 18:23:22 -04:00
Gnome Ann 0fa47b1249 Fix budget calculation for stories with at least one non-prompt chunk 2021-11-03 18:22:31 -04:00
Gnome Ann c11dab894e Put placeholder variables into calcsubmitbudget 2021-11-03 18:02:19 -04:00
Gnome Ann 9b18068999 Shallow copy story chunks when generating 2021-11-03 17:53:38 -04:00
Gnome Ann b8c3d8c12e Fix generator output having the wrong length 2021-11-03 16:10:12 -04:00
Gnome Ann 5b3ce4510f Make sure that soft_tokens is on the correct device 2021-11-03 16:07:50 -04:00
Gnome Ann 90fd5a538a Merge branch 'united' into scan-test 2021-11-03 12:42:18 -04:00
Gnome Ann fe2987d894 Fix missing break statement in device_config 2021-11-03 12:42:04 -04:00
Gnome Ann bd76ab333c Set numseqs to 1 if using dynamic world info scan 2021-11-03 12:28:17 -04:00
Gnome Ann 0a91ea27b3 Make the dynamic world info scan toggleable 2021-11-03 12:18:48 -04:00
Gnome Ann de3664e73c Add an assertion for the value of already_generated 2021-11-03 12:01:45 -04:00
Gnome Ann ec8ec55256 Dynamic world info scan 2021-11-03 11:54:48 -04:00
henk717 aa998ba5e9
Merge pull request #20 from VE-FORBRYDERNE/sp
Soft prompt support for PyTorch models
2021-10-30 00:35:44 +02:00
Gnome Ann 206c01008e Fix budget calculation when using soft prompt 2021-10-29 11:44:51 -04:00
henk717 c9c370aa17
Merge branch 'KoboldAI:main' into united 2021-10-28 23:29:29 +02:00
Gnome Ann bf4e7742ac Patch GPTJForCausalLM, if it exists, to support soft prompting 2021-10-28 17:18:28 -04:00
Gnome Ann 40b4631f6c Clamp input_ids in place
Apparently transformers maintains an internal reference to input_ids
(to use for repetition penalty) so we have to clamp the internal
version, too, because otherwise transformers will throw an out-of-bounds
error upon attempting to access token IDs that are not in the
vocabulary.
2021-10-28 16:52:39 -04:00
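A toy sketch of the in-place fix, assuming soft-prompt placeholder IDs sit at or above the real vocabulary size:
```python
import torch

vocab_size = 50257                                  # assumed vocabulary size
input_ids = torch.tensor([[50257, 50258, 15496]])   # first two IDs out of range
# clamp_() modifies this same tensor that transformers keeps an internal
# reference to, so the repetition penalty lookup never sees an
# out-of-vocabulary token ID.
input_ids.clamp_(max=vocab_size - 1)
```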
Gnome Ann 24d5d63c9f Use the correct generation min and max when using soft prompt 2021-10-28 16:39:59 -04:00
Gnome Ann 511817132a Don't change the shape of transformer.wte 2021-10-28 15:39:59 -04:00
Gnome Ann a1ae11630a Make sure to cast vars.sp to the correct dtype 2021-10-28 13:22:07 -04:00
Gnome Ann 1556bd32a5 Use torch.where to inject the soft prompt instead of torch.cat 2021-10-28 13:20:14 -04:00
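A toy sketch of the swap, assuming out-of-range token IDs mark soft-prompt positions; all names and shapes are illustrative:
```python
import torch

vocab_size, dim = 8, 4
soft_prompt = torch.randn(3, dim)                 # 3 soft-prompt rows
input_ids = torch.tensor([[8, 9, 10, 2, 5]])      # IDs >= vocab_size mark soft slots
embeds = torch.randn(1, 5, dim)                   # embeddings of the clamped IDs
is_soft = (input_ids >= vocab_size).unsqueeze(-1)             # (1, 5, 1) mask
sp_rows = soft_prompt[(input_ids - vocab_size).clamp(min=0)]  # (1, 5, dim)
embeds = torch.where(is_soft, sp_rows, embeds)    # swap rows instead of torch.cat
```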
Gnome Ann 248e0bd24b Fix soft prompt loading code 2021-10-28 00:29:42 -04:00
Gnome Ann 4e3cc93020 Merge branch 'united' into sp 2021-10-23 11:45:03 -04:00
henk717 7b73d7cfdd Single Line Mode
Adds Single Line mode, optimized for things like chatbot testing and other cases where you want to have control over what happens after a paragraph.

This can also be used as a foundation for a chatbot optimized interface mode.
2021-10-23 17:30:48 +02:00
Gnome Ann 1f449a9dda Soft prompt support (6B Colabs not supported yet) 2021-10-22 14:18:10 -04:00