Models can no longer override client settings; instead, settings are now saved on a per-model basis, with the settings provided by the model serving as the defaults. Users can also specify the desired configuration name as a command line parameter to avoid conflicting file names (such as all Colabs defaulting to Colab.settings).
Many models have that one setting that just works best, like a repetition penalty of 2 or 1.2, while being incompatible with existing settings; the same applies to Adventure mode being on or off. With this change, models are allowed to override user preferences, but only for the categories where we deem this relevant (we don't want them to mess with things like tokens, length, etc.). Users who do not want this behavior can turn it off by setting msoverride to false in client.settings.
Model creators can specify these settings in their config.json; the allowed settings are identical to their client.settings counterparts.
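A minimal sketch of how this per-model flow could look; the function and key names (load_model_settings, OVERRIDABLE_KEYS) are illustrative, not the actual KoboldAI code:

```python
import json
import os

# Settings categories a model's config.json may suggest; generation limits
# such as output length and max tokens stay under user control.
OVERRIDABLE_KEYS = {"rep_pen", "temp", "adventure"}

def load_model_settings(model_name, model_config, client_settings, config_name=None):
    """Return the settings for this model, stored in <name>.settings."""
    # A --configname style parameter avoids clashes such as every Colab
    # writing to Colab.settings.
    path = (config_name or model_name) + ".settings"
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    # First run for this model: start from the client settings and let the
    # model's suggested values act as defaults, unless msoverride is off.
    settings = dict(client_settings)
    if client_settings.get("msoverride", True):
        settings.update({k: v for k, v in model_config.items() if k in OVERRIDABLE_KEYS})
    with open(path, "w") as f:
        json.dump(settings, f, indent=3)
    return settings
```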
A bit of a workaround for now, but the square-bracket ([) badwords search routine has been replaced with the hardcoded list used by the Colabs. This is far more effective at filtering out artifacts when running models locally. We can get away with this because all known models use the same vocab.json; in the future we will probably want to load this from a badwords.json if present, so model creators can bundle it with the model.
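Roughly, the change amounts to handing a fixed bad_words_ids list straight to the generator instead of scanning vocab.json at load time; a sketch, where the ids are placeholders rather than the actual list shipped with the Colabs:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")

# All known models share the same vocab.json, so one list works everywhere.
# A future badwords.json bundled with a model could be loaded into this instead.
BAD_WORDS_IDS = [[58], [60], [685]]  # placeholder ids for tokens containing "["

ids = tokenizer("You look around the dungeon.", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=40, do_sample=True, bad_words_ids=BAD_WORDS_IDS)
print(tokenizer.decode(out[0][ids.shape[-1]:]))
```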
Replaces the placeholder readme with a proper one. The menu is also updated and reorganized to encourage users to use custom models and to better reflect real-world VRAM requirements.
Since some user interface buttons are disabled while in --remote mode,
they should also be disabled in aiserver.py so a malicious user can't
manually send those commands to the server.
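A minimal sketch of that server-side guard, assuming a Flask-SocketIO message handler like the one in aiserver.py; the command names and handler shape are illustrative:

```python
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)
remote_mode = True  # would be set from the --remote command line flag

# Commands whose buttons are hidden in --remote mode must also be refused
# server-side, otherwise a client could still send them by hand.
REMOTE_BLOCKED_CMDS = {"loadfromfile", "savetofile", "import", "importwi"}

@socketio.on("message")
def get_message(msg):
    if remote_mode and msg.get("cmd") in REMOTE_BLOCKED_CMDS:
        emit("from_server", {"cmd": "errmsg", "data": "Command not available in remote mode."})
        return
    # ... handle the remaining commands as usual ...
```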
Stories are now saved not just as .json but also as plain text, which should help story writers get their stories out more easily, especially since they can technically add some Markdown to their stories manually in the interface.
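A sketch of the dual save, assuming a story object with a prompt string and a list of action strings (structure and indent are assumptions):

```python
import json

def save_story(path, story):
    # The usual .json save, kept as the canonical format.
    with open(path + ".json", "w") as f:
        json.dump(story, f, indent=3)
    # Also write a flat .txt so writers can pull their text (including any
    # Markdown typed into the interface) straight into other tools.
    with open(path + ".txt", "w", encoding="utf-8") as f:
        f.write(story["prompt"] + "".join(story["actions"]))
```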
* Back and Retry buttons no longer pop a story chunk while in the
"Select sequence to keep" menu
* No longer freezes if you retry with no story chunks beyond the initial
prompt chunk
* When "Always Add Prompt" is on, allow Retry even if the prompt is the
only chunk in the story
* Added error messages for Back and Retry buttons
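A sketch of the Retry gating described in this list; names and structure are illustrative rather than the actual implementation:

```python
def can_retry(actions, always_add_prompt, in_select_menu):
    """Decide whether Retry should act, mirroring the rules above."""
    if in_select_menu:
        # Don't pop story chunks while "Select sequence to keep" is open.
        return False
    if len(actions) == 0:
        # Only the prompt chunk exists; allow Retry only when the prompt
        # will be resubmitted anyway ("Always Add Prompt" on).
        return always_add_prompt
    return True
```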
Some Colabs use KoboldAI as a subprocess; rather than making that too complicated for Colab developers, it's better to just dump the Cloudflare link to a log in addition to showing the message on screen. That way, if KoboldAI's console output gets filtered, you can easily cat the link or use the existing link-grabbing methods.
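A sketch of that logging step; the cloudflare.log filename is an assumption, not necessarily the file KoboldAI writes:

```python
def announce_tunnel(url):
    message = "KoboldAI is available at: " + url
    print(message, flush=True)  # still shown on screen as before
    # Also drop the link in a file so a wrapping Colab can simply `cat` it
    # even when KoboldAI's console output is filtered.
    with open("cloudflare.log", "w") as f:
        f.write(url + "\n")
```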
Multiplayer support was causing all players to automatically submit Author's Notes. This is now fixed: only the person submitting the Author's Note counts.
Still VERY far from ideal for multiplayer: only one person can realistically edit it at a time, and whoever submits counts. It will need more major interface changes so things can be submitted one by one. But hey, it works, and it's good enough for a group of friends to play the game :D
Fixed the error "Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_index_select)"
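This error typically means the input tensors were left on the CPU while the model (or part of it) sits on the GPU; a sketch of the usual fix, with an illustrative model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")
model.to("cuda:0" if torch.cuda.is_available() else "cpu")

ids = tokenizer("You enter the cave.", return_tensors="pt").input_ids
# Without this line, generate() raises the device mismatch error above.
ids = ids.to(model.device)
out = model.generate(ids, max_new_tokens=20)
```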
First step towards native Colab support: built-in Cloudflare tunnels easily allow players to play KoboldAI from another device. This mode also removes buttons that would get you stuck if you have no local PC access.
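One way to open such a tunnel is the cloudflared quick-tunnel CLI; a sketch of that approach (KoboldAI may wire it up differently, e.g. through flask-cloudflared):

```python
import re
import subprocess

def start_tunnel(port=5000):
    # Requires the cloudflared binary on PATH; quick tunnels need no account.
    proc = subprocess.Popen(
        ["cloudflared", "tunnel", "--url", f"http://localhost:{port}"],
        stderr=subprocess.PIPE, text=True)
    # cloudflared prints the generated *.trycloudflare.com URL in its log output.
    for line in proc.stderr:
        match = re.search(r"https://[\w.-]+\.trycloudflare\.com", line)
        if match:
            return match.group(0), proc
```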
Added ability to import World Info files from AI Dungeon.
Added slider for setting World Info scan depth.
Added toggle to control whether the prompt is submitted with each action.
Added a 'Read Only' (no AI) mode to the startup options.
Fixed GPU/CPU choice prompt appearing when GPU isn't an option.
Added error handling to generator calls for the CUDA OOM message (see the sketch below)
Added generator parameter to only return new text
Added in-browser Save/Load/New Story controls
(Force a full refresh in your browser!)
Fixed adding InferKit API key if client.settings already exists
Added cmd calls to bat files so they'll stay open on error
Wait animation now hidden on start state/restart
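A sketch of the kind of OOM handling mentioned for generator calls above; the wrapper shape and the message wording are assumptions:

```python
import torch

def safe_generate(model, input_ids, **kwargs):
    """Run generate() and turn a CUDA OOM into a readable error instead of a stack trace."""
    try:
        return model.generate(input_ids, **kwargs)
    except RuntimeError as e:
        if "out of memory" in str(e).lower():
            torch.cuda.empty_cache()
            raise RuntimeError(
                "The GPU ran out of memory; lower the output length and try again."
            ) from e
        raise
```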
Improvements to pruning context from text returned from the AI
Colab errors should no longer throw JSON decode errors in client
Improved logic for World Info scanning
Fix for index error in addsentencespacing
Expanded bad_word flagging for square brackets to combat Author's Note leakage
World Info should now work properly if you have an Author's Note defined
Set generator to use cache to improve performance of custom Neo models
Added error handling for Colab disconnections
Now using tokenized & detokenized version of the last action to parse out new content (see the sketch below)
Updated readme
Added square bracket to bad_words_ids to help suppress AN tag from leaking into generator output
Added version number to CSS/JS refs to address browsers loading outdated versions from cache
Bugfix for New Story function not clearing World Info from previous story
Torch will not be initialized unless you select a local model, as there's no reason to invoke it for InferKit/Colab
Changed JSON file writes to use indentation for readability
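A sketch of the tokenize/detokenize round-trip mentioned a few entries up for parsing out new content; the gpt2 tokenizer and function name are illustrative:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

def extract_new_content(last_action, generated_text):
    # Round-trip the last action through the tokenizer so the prefix we strip
    # matches the generator's own detokenized spelling (whitespace, unicode).
    canonical_last = tokenizer.decode(tokenizer.encode(last_action))
    idx = generated_text.rfind(canonical_last)
    if idx != -1:
        return generated_text[idx + len(canonical_last):]
    return generated_text
```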
Help text added to settings items.
Settings now saved to client file when changed.
Separated transformers settings and InferKit settings.
Reorganized model select list.