Commit Graph

532 Commits

Author SHA1 Message Date
henk717 e7aa92cd86 Update aiserver.py 2021-12-23 17:12:42 +01:00
henk717 9d4113955f Replace NeoCustom
NeoCustom is now obsolete beyond the file selection and the CLI, so after the CLI stage we adapt the input to a generic model and then use the improved generic routine to handle it. This saves the duplicate effort of maintaining an almost identical routine now that models are handled by their type rather than their name.
2021-12-23 17:10:02 +01:00
henk717 be351e384d Path loading improvements
This fixes a few scenarios from my commit yesterday: models that have a / in their name are now first loaded from the corrected directory, if it exists, before we fall back to the original name, to make sure the config is loaded from the correct location. Also includes cache dir fixes and an improved routine for path-loaded models that mimics the NeoCustom option, fixing models that have no model_type specified. Because GPT-2 doesn't work well with this option and should be used exclusively with GPT2Custom, and GPT-J models should have a model_type, we assume it's a Neo model when none is specified.
2021-12-23 14:40:35 +01:00
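Not the repository's actual code, but a minimal sketch of the fallback logic described above; the helper name, the `models` directory, and the "/" replacement character are assumptions.

```python
import json
import os

def resolve_model(model, models_dir="models"):
    # Hypothetical helper: a model name containing "/" (e.g.
    # "EleutherAI/gpt-neo-2.7B") may already exist under a corrected local
    # directory. Prefer that directory so the config is read from the right
    # place, otherwise fall back to the original name so online loading
    # still works.
    corrected = os.path.join(models_dir, model.replace("/", "_"))
    path = corrected if os.path.isdir(corrected) else model

    # If the local config declares no model_type, assume a Neo model:
    # GPT-2 should go through GPT2Custom, and GPT-J models are expected to
    # declare their type.
    model_type = None
    config_file = os.path.join(path, "config.json")
    if os.path.isfile(config_file):
        with open(config_file) as f:
            model_type = json.load(f).get("model_type")
    return path, model_type or "gpt_neo"
```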
henk717 a2d8347939 Replace model path differently
The path correction was applied too soon and broke online loading; the replace is now applied where it is relevant instead.
2021-12-23 03:05:53 +01:00
henk717 4ff1a6e940 Model Type support
Automatically detect or assume the model type so we do not have to hardcode all the different models people might use. This makes the behavior of --model almost identical to the NeoCustom behavior as far as the CLI is concerned, but only if the model_type is defined in the model's config file.
2021-12-23 02:50:06 +01:00
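As a rough illustration of the detection side using transformers' standard config API (not necessarily how the repository does it; the function name is hypothetical):

```python
from transformers import AutoConfig, AutoModelForCausalLM

def load_generic(model_name):
    # Read model_type from the model's config instead of matching on
    # hardcoded model names, then hand everything to one generic loading
    # routine. This only works when model_type is present in the config.
    model_type = AutoConfig.from_pretrained(model_name, cache_dir="cache").model_type
    print(f"Detected model_type: {model_type}")
    return AutoModelForCausalLM.from_pretrained(model_name, cache_dir="cache")
```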
henk717 2d7a00525e Path fix
My last commit didn't compensate for the file location properly; this is now fixed.
2021-12-23 01:47:05 +01:00
henk717 81120a0524 Compatibility Fixes
Rather than coding vars.custmodpath or vars.model handling into all the other parts of the code, I opted to just set vars.custmodpath to make the behavior more consistent now that it always loads from the same location.
2021-12-23 00:36:08 +01:00
henk717 f93d489971 Update install_requirements.bat 2021-12-22 23:57:21 +01:00
henk717 9f86ca5be5 Force Temp Location
Conda breaks when the username has spaces and it tries to use the temp directory; added a workaround that forces our own directory to be used as temp for Kobold.
2021-12-22 21:56:57 +01:00
henk717 41d7c2acfe
Merge pull request #48 from VE-FORBRYDERNE/patch
Disable `low_cpu_mem_usage` when using GPT-2
2021-12-21 02:45:44 +01:00
Gnome Ann caef3b7460 Disable `low_cpu_mem_usage` when using GPT-2
Attempting to use transformers 4.11.0's experimental `low_cpu_mem_usage`
feature with GPT-2 models usually results in the output repeating a
token over and over or otherwise containing an incoherent response.
2021-12-20 19:54:19 -05:00
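A hedged sketch of that kind of guard (the exact condition and loader used in the repository may differ):

```python
from transformers import AutoModelForCausalLM

def load_model(model_name, model_type):
    # low_cpu_mem_usage (experimental as of transformers 4.11.0) tends to
    # make GPT-2 models emit repeated or incoherent tokens, so only enable
    # it for other architectures.
    lowmem = {} if model_type == "gpt2" else {"low_cpu_mem_usage": True}
    return AutoModelForCausalLM.from_pretrained(model_name, **lowmem)
```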
henk717 7b56940ed7
Merge pull request #47 from VE-FORBRYDERNE/scripting
Lua API fixes
2021-12-20 04:32:25 +01:00
henk717 8ee250e422
Merge pull request #46 from VE-FORBRYDERNE/wi-patch
Fix a bug where WI entries sometimes can't be deleted if the current story was loaded from a save
2021-12-20 04:31:32 +01:00
Gnome Ann 7dd319491d Fix `compute_context()` method of world info folders 2021-12-19 20:46:37 -05:00
Gnome Ann 341b153360 Lua API fixes
* `print()` and `warn()` now work correctly with `nil` arguments
* Typo: `gpt-neo-1.3M` has been corrected to `gpt-neo-1.3B`
* Regeneration is no longer triggered when writing to `keysecondary` of
  a non-selective key
* Handle `genamt` changes in generation modifier properly
* Writing to `kobold.settings.numseqs` from a generation modifier no
  longer affects
* Formatting options in `kobold.settings` have been fixed
* Added aliases for setting names
* Fix behaviour of editing story chunks from a generation modifier
* Warnings are now yellow instead of red
* `kobold.logits` is now the raw logits prior to being filtered, like
  the documentation says, rather than after being filtered
* Some erroneous comments and error messages have been corrected
* These parts of the API have now been implemented properly:
    * `compute_context()` methods
    * `kobold.authorsnote`
    * `kobold.restart_generation()`
2021-12-19 20:18:28 -05:00
Gnome Ann 6aba869fb7 Make sure uninitialized WI entries are given UIDs when loading saves 2021-12-18 18:00:06 -05:00
henk717 4bb5e59d82
Merge pull request #45 from VE-FORBRYDERNE/scripting
Fix behaviour of `kobold.outputs` with read-only and no prompt gen
2021-12-17 19:45:31 +01:00
Gnome Ann 12718dbe24 Try long-polling first, then try websocket
This makes it so that SocketIO uses long polling to set up the
connection before switching to websocket, instead of immediately using
websocket.

This seems to resolve issues where the browser sometimes can't connect
to the websocket server until the window has been open for a minute.
2021-12-17 13:18:47 -05:00
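The change is about how the client establishes its SocketIO connection; the same transport ordering can be illustrated with the python-socketio client (not the repository's actual browser-side code, and the URL is illustrative):

```python
import socketio

sio = socketio.Client()

# Start with long polling to establish the connection and let the client
# upgrade to websocket afterwards, instead of attempting websocket first.
sio.connect("http://localhost:5000", transports=["polling", "websocket"])
```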
Gnome Ann 769333738d Fix behaviour of `kobold.outputs` with read-only and no prompt gen 2021-12-17 12:59:01 -05:00
henk717 d15b43e20e Not always a list of strings
kobold.outputs must be a 1D list of strings, but sometimes it's still blank. In those cases, rather than throwing an error and crashing the scripting, it's better if it does nothing.
2021-12-16 12:54:10 +01:00
henk717 6d9063fb8b No Prompt Gen
Allow people to enter a prompt without the AI generating anything. Combined with the always-add-prompt option this is a very useful feature that allows people to write world information first and then perform a specific action. This mimics the behavior previously seen in AI Dungeon forks, where it prompts for world information and then asks for an action, and can be particularly useful for people who want the prompt to always be part of the generation.
2021-12-16 12:47:44 +01:00
henk717 f3b4ecabca
Merge pull request #44 from VE-FORBRYDERNE/patch
Fix an error that occurs when all layers are on second GPU
2021-12-16 01:43:03 +01:00
henk717 e3d9c2d690 New download mechanism
Automatically converts Huggingface cache models to full models on (down)load.
WARNING: This does wipe the old cache/ dir inside the KoboldAI folder, so make a backup before you run these models if you are bandwidth constrained.
2021-12-16 01:40:04 +01:00
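Roughly, converting a Huggingface cache-style download into a plain model directory can be done by loading through the cache and re-saving; a sketch with illustrative paths and a hypothetical function name:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

def materialize(model_name, target_dir):
    # Download (or reuse) the model through the Huggingface cache, then
    # write it back out as a normal full model directory on disk.
    model = AutoModelForCausalLM.from_pretrained(model_name, cache_dir="cache")
    tokenizer = AutoTokenizer.from_pretrained(model_name, cache_dir="cache")
    model.save_pretrained(target_dir)
    tokenizer.save_pretrained(target_dir)
    # The commit warns that the old cache/ dir inside the KoboldAI folder is
    # wiped afterwards, e.g. shutil.rmtree("cache", ignore_errors=True).
```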
Gnome Ann 19d2356253 Fix an error that occurs when all layers are on second GPU 2021-12-15 19:03:49 -05:00
henk717 5e3e3f3578 Fix float16 models
Forcefully convert float16 models to work on the CPU
2021-12-16 00:31:51 +01:00
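A minimal sketch of forcing float16 weights back to float32 for CPU use (the model name and the GPU check are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")

# Half-precision weights generally can't be used for inference on the CPU,
# so cast them back to float32 when no GPU is in use.
if not torch.cuda.is_available():
    model = model.float()
```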
henk717 46b0473229
Merge pull request #43 from VE-FORBRYDERNE/dynamic-scan-patch
Dynamic scan patch
2021-12-15 09:45:07 +01:00
Gnome Ann 9097aac4a8 Show full stack trace for generator errors to help in diagnosing errors 2021-12-15 02:03:08 -05:00
Gnome Ann 2687135e05 Fix a strange bug where max tokens was capped at 1024
This seems to be related to the model config files, because only certain
models have this problem, and replacing ALL configuration files of a
"bad" model with those of a "good" model of the same type would fix the
problem.

That workaround shouldn't be required anymore.
2021-12-15 00:45:41 -05:00
Gnome Ann 1551c45ba4 Prevent dynamic scanning from generating too many tokens 2021-12-14 23:39:04 -05:00
Gnome Ann 629988ce13 Fix a problem with the Lua regeneration API
It was an egregious typo that caused tokens to be rearranged on
regeneration.
2021-12-14 23:04:03 -05:00
henk717 56679d775f Update update-kobold.bat
Improved reliability
2021-12-14 18:53:52 +01:00
henk717 6670168a47 Update aiserver.py 2021-12-14 16:26:23 +01:00
henk717 cb98462b02 Replace update with switch
My idea to check out the branch in use failed, making the updater obsolete; the switcher has been rebranded as the updater.
2021-12-14 16:00:35 +01:00
henk717 c701fdce1d Update update-kobold.bat 2021-12-14 15:55:44 +01:00
henk717 2d1561aa55 Version Switcher
Allows people to easily switch between different versions of KoboldAI: the stable one, United, or their own. Compatible with my earlier update script.
2021-12-14 15:24:56 +01:00
henk717 b824ce0b33 Update Script
Updates KoboldAI to the latest official version. If you want to use United, you first need to manually switch to that git branch, otherwise it gets overwritten with the old one.
2021-12-14 14:26:04 +01:00
henk717 c5dec67f13 Fix Netbase on Colab
Apparently Colab does not properly have netbase, which we now use for proper websocket support; the installer now forces it to be installed correctly so we don't crash on launch.
2021-12-14 04:02:33 +01:00
henk717 0f06cee272 Don't upload developer userscripts
We may want to bundle these at some point, but in that case an exception should be made, like we do for the sample story. The same applies to softprompts.
2021-12-14 02:55:16 +01:00
henk717 c5ade0333a Userscript support on GDrive
Make the Colabs create and map a userscripts folder.
2021-12-14 02:51:39 +01:00
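A rough sketch of creating and mapping a userscripts folder onto Google Drive from a Colab notebook; the paths and the symlink approach are assumptions, not necessarily what the notebooks do:

```python
import os
from google.colab import drive

drive.mount("/content/drive")

# Keep userscripts on Google Drive so they persist between Colab sessions,
# and map that folder into the KoboldAI directory.
gdrive_scripts = "/content/drive/MyDrive/KoboldAI/userscripts"
kobold_scripts = "/content/KoboldAI/userscripts"
os.makedirs(gdrive_scripts, exist_ok=True)
if not os.path.exists(kobold_scripts):
    os.symlink(gdrive_scripts, kobold_scripts)
```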
henk717 18ddd77337 Apply VE's changes to colab 2021-12-14 02:13:43 +01:00
henk717 a0ccbda6b1
Merge pull request #42 from VE-FORBRYDERNE/scripting
Lua scripting
2021-12-14 02:12:09 +01:00
Gnome Ann 6e6e0b2b4d Allow Lua to stop generation from input modifier 2021-12-13 19:32:01 -05:00
Gnome Ann e9ed8602b2 Add a "corescript" setting 2021-12-13 19:28:33 -05:00
Gnome Ann e5bb20cc8f Fix Lua regeneration system 2021-12-13 19:17:18 -05:00
Gnome Ann 462040ed6f Restore missing `loadsettings()` call 2021-12-13 18:39:33 -05:00
Gnome Ann 661cca63e8 Make sure stopping criteria still work with dynamic scan off 2021-12-13 18:10:51 -05:00
Gnome Ann 338d437ea3 Use eventlet instead of gevent-websocket 2021-12-13 17:19:04 -05:00
Gnome Ann fb6762bc1a Add "AVAILABLE" and "ACTIVE" headings to userscript menu 2021-12-13 12:45:52 -05:00
Gnome Ann ed9c2a4d52 Fix a bug that occurs when userscript doesn't have all 3 modifiers 2021-12-13 11:50:10 -05:00
Gnome Ann 34c52a1a23 Remove escape characters from all error messages 2021-12-13 11:47:34 -05:00