I don't know why Firefox is so weird but we need to do this or else
some chunks don't update properly when you edit multiple chunks at the
same time.
One example of when this happened: take a story with at least one chunk
other than the prompt, select the entire story except for the first few
characters of the prompt, delete the selection, and then defocus the
story to save your changes. The last chunk in the story will not
register as having been deleted, which you can verify by refreshing the
page.
Another example: if your story has a chunk with no trailing newlines
followed by a chunk with exactly two leading newlines, delete the first
newline from the latter chunk and then defocus to save your edits. The
newline will be there again when you refresh the page.
An unneeded scrollbar used to grow as you expanded towards the top of the screen; it is now hidden until you actually need it, which makes the UI look a bit more polished.
This is so that if the animation is triggered multiple times you don't
have to wait for all of the animations to play one after the other
before you can finally scroll manually.
For stories that are long enough for the scroll bar to appear on the
screen, Firefox on desktop would originally only let you start editing
if you clicked on the actual text, i.e. you couldn't click on the blank
part of a line. This behaviour is now fixed.
Originally, no events were triggered when you clicked on parts of the
story text area that didn't contain any text, e.g. on a blank line or on
the blank part of a line to the right of the actual text.
A new model was released that uses a different formatting for its newlines, which causes too many newlines in the UI. This change fixes the issue so that when this happens the UI still displays the content as you would expect, removing the formatting burden from model developers.
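A minimal sketch of the kind of cleanup involved, assuming the text is normalized before it reaches the UI; the function name and exact rules are illustrative, not the actual KoboldAI code:

```python
import re

def normalize_newlines(text: str) -> str:
    # Illustrative only: unify the model's line endings and cap runs of
    # blank lines so the UI doesn't render extra empty lines.
    text = text.replace("\r\n", "\n").replace("\r", "\n")  # unify line endings
    text = re.sub(r"\n{3,}", "\n\n", text)                 # cap runs of blank lines
    return text

# A model output with extra newlines between paragraphs:
print(normalize_newlines("First paragraph.\n\n\n\nSecond paragraph."))
```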
These formatting settings were originally omitted when model settings were forced. Now that models can only define the defaults for KoboldAI, it's a good idea to give model authors control over what formatting they think works best for their models.
If you save a story under a different name than it was loaded with, and
then download it as JSON/plaintext, the downloaded file's name will now
match the new story name.
This prevents duplicate submissions when multiple people are connected
to the same server and one person submits changes to memory, author's
note or world info, by pressing Submit (for author's note or memory) or
Accept (for world info).
Improved the default settings and made a better distinction between client and server. The Python parts have been renamed to the server and the browser to the client, to conform to what you'd expect from a client and a server. The model name will also be shown now instead of NeoCustom.
Models can no longer override client settings; instead, settings are now saved on a per-model basis, with the settings provided by the model being the defaults. Users can also specify the desired configuration name as a command line parameter to avoid conflicting file names (such as all Colabs having Colab.settings by default).
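A rough sketch of how per-model settings with model-provided defaults can work; the file naming scheme and the command line flag shown here are assumptions for illustration, not the exact KoboldAI implementation:

```python
import json
import os

def load_model_settings(model_defaults: dict, config_name: str) -> dict:
    # Sketch only: start from the defaults the model ships with, then let
    # any values the user previously saved for this model win.
    path = f"{config_name}.settings"
    settings = dict(model_defaults)
    if os.path.isfile(path):
        with open(path) as f:
            settings.update(json.load(f))
    return settings

def save_model_settings(settings: dict, config_name: str) -> None:
    with open(f"{config_name}.settings", "w") as f:
        json.dump(settings, f, indent=2)

# Hypothetical command line usage to avoid every Colab sharing Colab.settings:
#   python aiserver.py --configname my_colab_model
```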
Many models have that one setting that just works best, like a repetition penalty of 2 or 1.2, while being incompatible with existing settings. The same applies to Adventure mode being on or off. With this change, models are allowed to override user preferences, but only for the categories where we deem this relevant (we don't want them to mess with things like tokens, length, etc.). Users who do not want this behavior can turn it off by setting msoverride to false in client.settings.
Model creators can specify these settings in their config.json with the allowed settings being identical to their client.settings counterparts.
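A minimal sketch of the override logic described above, assuming a whitelist of overridable keys and the msoverride toggle from client.settings; the key names other than msoverride are illustrative:

```python
# Keys a model's config.json is allowed to override; generation limits such
# as token counts and output length are deliberately left out. The key names
# below (other than msoverride) are illustrative, not the real whitelist.
OVERRIDABLE_KEYS = {"rep_pen", "temp", "top_p", "adventure"}

def apply_model_overrides(user_settings: dict, model_config: dict) -> dict:
    # Respect the user's msoverride switch: when it is false, the model's
    # preferences are ignored entirely.
    if not user_settings.get("msoverride", True):
        return dict(user_settings)
    merged = dict(user_settings)
    for key, value in model_config.items():
        if key in OVERRIDABLE_KEYS:
            merged[key] = value
    return merged
```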
Bit of a workaround for now, but the [ badwords search routine has been replaced with the hardcoded list used by the Colabs. This is far more effective at filtering out artifacts when running models locally. We can get away with this because all known models use the same vocab.json; in the future we will probably want to load this from badwords.json if present, so model creators can bundle it with the model.
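A sketch of the possible future fallback hinted at in the last sentence, assuming a badwords.json bundled next to the model; the file layout and the token IDs shown are placeholders, not the real list:

```python
import json
import os

# Placeholder for the hardcoded bad-words token IDs used by the Colabs;
# the real list targets tokens that produce [ ... ] artifacts.
HARDCODED_BADWORDS = [[62], [60]]  # illustrative token IDs only

def load_badwords(model_dir: str) -> list:
    # Prefer a badwords.json bundled with the model, otherwise fall back to
    # the hardcoded list. This works today because all known models share
    # the same vocab.json.
    path = os.path.join(model_dir, "badwords.json")
    if os.path.isfile(path):
        with open(path) as f:
            return json.load(f)
    return HARDCODED_BADWORDS
```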