344 Commits

Author SHA1 Message Date
henk717
00a0cea077 Update aiserver.py 2021-12-24 23:15:01 +01:00
henk717
1d3370c995 Community model
First batch; more will follow. We will also need to update the other VRAM displays with the changes that have happened, which will depend on how the 8-bit work goes.
2021-12-24 23:12:14 +01:00
henk717
6952938e88 Update aiserver.py
Crash fix
2021-12-24 19:57:08 +01:00
Gnome Ann
b2def30d9d Update Lua modeltype and model API 2021-12-23 16:35:52 -05:00
Gnome Ann
c305076cf3 Fix the behaviour of the softprompt setting 2021-12-23 16:14:09 -05:00
Gnome Ann
00f611b207 Save softprompt filename in settings 2021-12-23 13:02:11 -05:00
Gnome Ann
924c48a6d7 Merge branch 'united' into gui-and-scripting 2021-12-23 13:02:01 -05:00
henk717
cae0f279e2 Restore Lowmem
Accidentally got replaced in one of my test runs
2021-12-23 18:50:01 +01:00
henk717
25a6e489c1 Remove Replace from Huggingface
Accidentally ended up in the wrong section; for downloads we do not replace anything, only afterwards.
2021-12-23 17:27:09 +01:00
henk717
e7aa92cd86 Update aiserver.py 2021-12-23 17:12:42 +01:00
henk717
9d4113955f Replace NeoCustom
NeoCustom is now obsolete beyond the file selection and the CLI, so after the CLI we adapt the input to a generic model and then use the improved generic routine to handle it. This saves the duplicate effort of maintaining an almost identical routine now that models are handled by their type rather than their name.
2021-12-23 17:10:02 +01:00
henk717
be351e384d Path loading improvements
This fixes a few scenarios from my commit yesterday: models that have a / in their name are now first loaded from the corrected directory, if it exists, before we fall back to the original name, to make sure the config is loaded from the correct location. Also includes cache-dir fixes and an improved routine for path-loaded models that mimics the NeoCustom option, fixing models that have no model_type specified. Because GPT-2 doesn't work well with this option and should be used exclusively with GPT2Custom, and GPT-J models should have a model_type, we assume it's a Neo model when none is specified.
2021-12-23 14:40:35 +01:00
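The fallback described above can be sketched roughly as follows; the function name and the `models` directory layout are assumptions for illustration, not the actual aiserver.py code:

```python
import os

MODELS_DIR = "models"  # hypothetical local models folder

def resolve_model_path(model_name: str) -> str:
    """Prefer the corrected local directory (slashes replaced) when it
    exists; otherwise fall back to the original name so the config is
    still loaded from the right location."""
    corrected = os.path.join(MODELS_DIR, model_name.replace("/", "_"))
    return corrected if os.path.isdir(corrected) else model_name
```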
Gnome Ann
8452940597 Merge branch 'united' into gui-and-scripting 2021-12-23 00:18:11 -05:00
Gnome Ann
2a4d7448be Make dynamic_processor_wrap execute warper conditionally
The top-k warper doesn't work properly with an argument of 0, so the
wrapper can now skip executing the warper when a condition is not met.
2021-12-22 23:46:25 -05:00
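A minimal sketch of this conditional-execution idea, using plain Python lists instead of tensors; all names here are hypothetical stand-ins, not the actual `dynamic_processor_wrap` implementation:

```python
def wrap_warper(warper_fn, condition_fn):
    """Return a warper that applies warper_fn only when condition_fn()
    is truthy; otherwise the scores pass through unchanged."""
    def wrapped(scores):
        return warper_fn(scores) if condition_fn() else scores
    return wrapped

settings = {"top_k": 0}  # hypothetical live settings object

def top_k_filter(scores):
    # Keep only the k highest scores; mask the rest to -inf.
    k = settings["top_k"]
    cutoff = sorted(scores, reverse=True)[k - 1]
    return [s if s >= cutoff else float("-inf") for s in scores]

# Skip the top-k warper entirely while top_k == 0:
safe_top_k = wrap_warper(top_k_filter, lambda: settings["top_k"] > 0)
```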
Gnome Ann
7e06c25011 Display icons for active userscripts and softprompts
Also fixes the userscript menu so that the active userscripts preserve
the previously selected order as was originally intended.
2021-12-22 23:33:27 -05:00
henk717
a2d8347939 Replace model path differently
The path correction was applied too soon and broke online loading; the replace is now applied where it is relevant instead.
2021-12-23 03:05:53 +01:00
henk717
4ff1a6e940 Model Type support
Automatically detect or assume the model type so we do not have to hardcode all the different models people might use. This makes the behavior of --model almost identical to the NeoCustom behavior as far as the CLI is concerned, but only if the model_type is defined in the model's config file.
2021-12-23 02:50:06 +01:00
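The detect-or-assume logic can be sketched as a small helper; the function name and default value are assumptions for illustration (Hugging Face model configs do carry a `model_type` field in `config.json`):

```python
import json
import os

def detect_model_type(model_dir: str, default: str = "gpt_neo") -> str:
    """Read model_type from the model's config.json if present,
    otherwise assume the default."""
    config_path = os.path.join(model_dir, "config.json")
    if os.path.isfile(config_path):
        with open(config_path) as f:
            return json.load(f).get("model_type", default)
    return default
```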
henk717
2d7a00525e Path fix
My last commit didn't compensate for the file location properly; this is now fixed.
2021-12-23 01:47:05 +01:00
henk717
81120a0524 Compatibility Fixes
Rather than coding vars.custmodpath-or-vars.model checks in all the other parts of the code, I opted to just set vars.custmodpath to make the behavior more consistent, now that it always loads from the same location.
2021-12-23 00:36:08 +01:00
Gnome Ann
c549ea04a9 Always use all logit warpers
Now that the logit warper parameters can be changed mid-generation by
generation modifiers, the logit warpers have to be always on.
2021-12-22 17:29:07 -05:00
Gnome Ann
1e1b45d47a Add support for multiple library paths in bridge.lua 2021-12-22 14:24:31 -05:00
Gnome Ann
fc04ff3a08 World info folders can now be collapsed by clicking on the folder icon 2021-12-22 13:12:35 -05:00
Gnome Ann
d538782b1e Add individual configuration files for userscripts 2021-12-22 02:59:31 -05:00
Gnome Ann
380b54167a Make transformers warpers dynamically update their parameters
So that if you change, e.g., `top_p`, from a Lua generation modifier or
from the settings menu during generation, the rest of the generation
will use the new setting value instead of retaining the settings it had
when generation began.
2021-12-21 22:12:24 -05:00
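The difference between capturing a parameter at generation start and reading it live can be sketched like this; the `Settings` class and factory names are hypothetical, not the actual transformers warper code:

```python
class Settings:
    """Hypothetical mutable settings object shared with the UI/Lua side."""
    def __init__(self):
        self.top_p = 0.9

settings = Settings()

def make_static_warper():
    # Captures top_p once: later mid-generation changes are ignored.
    p = settings.top_p
    return lambda scores: (p, scores)

def make_dynamic_warper():
    # Reads top_p on every call, so mid-generation changes take effect.
    return lambda scores: (settings.top_p, scores)
```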
Gnome Ann
caef3b7460 Disable low_cpu_mem_usage when using GPT-2
Attempting to use transformers 4.11.0's experimental `low_cpu_mem_usage`
feature with GPT-2 models usually results in the output repeating a
token over and over or otherwise containing an incoherent response.
2021-12-20 19:54:19 -05:00
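Gating the flag by model family might look like the sketch below; `low_cpu_mem_usage` is a real `from_pretrained` argument in transformers, but this helper and the call site in the comment are assumptions for illustration:

```python
def loader_kwargs(model_type: str) -> dict:
    """Only request low_cpu_mem_usage for model families where the
    experimental feature produces coherent output (not GPT-2)."""
    kwargs = {}
    if model_type != "gpt2":
        kwargs["low_cpu_mem_usage"] = True
    return kwargs

# e.g. AutoModelForCausalLM.from_pretrained(path, **loader_kwargs(model_type))
```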
henk717
7b56940ed7
Merge pull request #47 from VE-FORBRYDERNE/scripting
Lua API fixes
2021-12-20 04:32:25 +01:00
Gnome Ann
341b153360 Lua API fixes
* `print()` and `warn()` now work correctly with `nil` arguments
* Typo: `gpt-neo-1.3M` has been corrected to `gpt-neo-1.3B`
* Regeneration is no longer triggered when writing to `keysecondary` of
  a non-selective key
* Handle `genamt` changes in generation modifier properly
* Writing to `kobold.settings.numseqs` from a generation modifier no
  longer affects
* Formatting options in `kobold.settings` have been fixed
* Added aliases for setting names
* Fix behaviour of editing story chunks from a generation modifier
* Warnings are now yellow instead of red
* kobold.logits is now the raw logits prior to being filtered, like
  the documentation says, rather than after being filtered
* Some erroneous comments and error messages have been corrected
* These parts of the API have now been implemented properly:
    * `compute_context()` methods
    * `kobold.authorsnote`
    * `kobold.restart_generation()`
2021-12-19 20:18:28 -05:00
Gnome Ann
6aba869fb7 Make sure uninitialized WI entries are given UIDs when loading saves 2021-12-18 18:00:06 -05:00
Gnome Ann
769333738d Fix behaviour of kobold.outputs with read-only and no prompt gen 2021-12-17 12:59:01 -05:00
henk717
6d9063fb8b No Prompt Gen
Allow people to enter a prompt without the AI generating anything. Combined with "always add prompt", this is a very useful feature that allows people to write world information first and then perform a specific action. This mimics the behavior previously seen in AI Dungeon forks, where it prompts for world information and then asks for an action, and can be particularly useful for people who want the prompt to always be part of the generation.
2021-12-16 12:47:44 +01:00
henk717
f3b4ecabca
Merge pull request #44 from VE-FORBRYDERNE/patch
Fix an error that occurs when all layers are on second GPU
2021-12-16 01:43:03 +01:00
henk717
e3d9c2d690 New download mechanism
Automatically converts Huggingface cache models to full models on (down)load.
WARNING: This wipes the old cache/ dir inside the KoboldAI folder; make a backup before you run these models if you are bandwidth constrained.
2021-12-16 01:40:04 +01:00
Gnome Ann
19d2356253 Fix an error that occurs when all layers are on second GPU 2021-12-15 19:03:49 -05:00
henk717
5e3e3f3578 Fix float16 models
Forcefully convert float16 models to work on the CPU
2021-12-16 00:31:51 +01:00
Gnome Ann
9097aac4a8 Show full stack trace for generator errors to help in diagnosing errors 2021-12-15 02:03:08 -05:00
Gnome Ann
2687135e05 Fix a strange bug where max tokens was capped at 1024
This seems to be related to the model config files, because only certain
models have this problem, and replacing ALL configuration files of a
"bad" model with those of a "good" model of the same type would fix the
problem.

Shouldn't be required anymore.
2021-12-15 00:45:41 -05:00
Gnome Ann
1551c45ba4 Prevent dynamic scanning from generating too many tokens 2021-12-14 23:39:04 -05:00
Gnome Ann
629988ce13 Fix a problem with the Lua regeneration API
It was an egregious typo that caused tokens to be rearranged on
regeneration.
2021-12-14 23:04:03 -05:00
henk717
6670168a47 Update aiserver.py 2021-12-14 16:26:23 +01:00
Gnome Ann
6e6e0b2b4d Allow Lua to stop generation from input modifier 2021-12-13 19:32:01 -05:00
Gnome Ann
e9ed8602b2 Add a "corescript" setting 2021-12-13 19:28:33 -05:00
Gnome Ann
e5bb20cc8f Fix Lua regeneration system 2021-12-13 19:17:18 -05:00
Gnome Ann
462040ed6f Restore missing loadsettings() call 2021-12-13 18:39:33 -05:00
Gnome Ann
661cca63e8 Make sure stopping criteria still work with dynamic scan off 2021-12-13 18:10:51 -05:00
Gnome Ann
338d437ea3 Use eventlet instead of gevent-websocket 2021-12-13 17:19:04 -05:00
Gnome Ann
34c52a1a23 Remove escape characters from all error messages 2021-12-13 11:47:34 -05:00
Gnome Ann
11f9866dbe Enable more of the IO library in Lua sandbox
Also changes the Lua warning color to red.
2021-12-13 11:22:58 -05:00
Gnome Ann
28e86563b8 Change self.scores to scores in aiserver.py 2021-12-13 11:18:01 -05:00
Gnome Ann
82e149ee02 Catch Lua errors properly 2021-12-13 02:32:09 -05:00
Gnome Ann
5f06d20085 Format Lua printed messages and warnings 2021-12-13 01:59:53 -05:00