This matches the behaviour of the original transformers downloader, to handle
the rare case of someone downloading a model with aria2, cancelling before it
finishes, and then attempting to resume the download with the normal
transformers downloader.
Truncating the file prevents slow allocation on Windows, and force_download=True has proven a more reliable default. Since models are converted to local formats, this does not impact local users, and because aria2's -c (continue) flag is used, the cost of verifying that the model is correct is both desirable and minimal.
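A minimal sketch (not the project's actual downloader code) of the ideas above: preallocating the target file by truncation so Windows does not spend time zero-filling it, resuming a partial download with aria2's -c flag, and falling back to transformers with force_download=True. The function names, URL, and model name are illustrative.

```python
import subprocess

def preallocate(path: str, size: int) -> None:
    # Extend the file to its final size without writing zeros; this avoids
    # the slow allocation a plain sequential write would cause on Windows.
    with open(path, "ab") as f:
        f.truncate(size)

def download_with_resume(url: str, dest_dir: str, filename: str) -> None:
    # aria2c's -c resumes a partially downloaded file instead of restarting,
    # so re-running after a cancelled download only fetches the missing tail.
    subprocess.run(
        ["aria2c", "-c", "-d", dest_dir, "-o", filename, url],
        check=True,
    )

# If aria2 was interrupted and the user falls back to the normal transformers
# downloader, force_download=True re-fetches the files rather than trusting a
# possibly truncated cache entry (model name here is only an example):
# from transformers import AutoModelForCausalLM
# model = AutoModelForCausalLM.from_pretrained("gpt2", force_download=True)
```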
* Error messages are now shown when memory, author's note, etc. exceeds the
budget by itself
* Formatting options no longer break if there are empty chunks in the
story (although there shouldn't be any in the first place)
* The number of generated tokens is now tracked from Python
Adds Single Line mode, optimized for chatbot testing and other cases where you want control over what happens after a paragraph.
This can also serve as a foundation for a chatbot-optimized interface mode.
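A minimal sketch of how a Single Line mode can be enforced during generation, assuming a transformers-style stopping criterion that halts at the first newline token. The class name and token lookup are illustrative, not the project's actual implementation.

```python
from transformers import StoppingCriteria, StoppingCriteriaList

class StopOnNewline(StoppingCriteria):
    def __init__(self, tokenizer):
        # Collect every token whose decoded text contains a newline.
        # Scanning the whole vocabulary is O(vocab) but only runs once.
        self.newline_ids = {
            tid for tid in range(len(tokenizer))
            if "\n" in tokenizer.decode([tid])
        }

    def __call__(self, input_ids, scores, **kwargs):
        # Halt as soon as the most recent token would start a new line.
        return int(input_ids[0, -1]) in self.newline_ids

# Usage (illustrative):
# output = model.generate(
#     inputs,
#     stopping_criteria=StoppingCriteriaList([StopOnNewline(tokenizer)]),
# )
```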
Added in-browser Save/Load/New Story controls
(Force a full refresh in your browser!)
Fixed adding InferKit API key if client.settings already exists
Added cmd calls to bat files so they'll stay open on error
Wait animation now hidden on start state/restart
Improved pruning of context from text returned by the AI
Colab errors should no longer throw JSON decode errors in the client
Improved logic for World Info scanning
Fix for index error in addsentencespacing
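For context, a hedged sketch of the kind of guard that fixes this class of bug; the real addsentencespacing lives in the project's code and may differ. The key point is that the previous chunk's last character is only inspected when the chunk is non-empty, which is where an index error would otherwise occur.

```python
def addsentencespacing(new_text: str, previous_text: str) -> str:
    # Guard: an empty previous chunk has no last character to inspect.
    if not previous_text or not new_text:
        return new_text
    lastchar = previous_text[-1]
    # If the story ends in sentence punctuation and the new text does not
    # already start with whitespace, insert a separating space.
    if lastchar in (".", "!", "?") and not new_text[0].isspace():
        return " " + new_text
    return new_text
```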
Help text added to settings items.
Settings now saved to client file when changed.
Separated transformers settings and InferKit settings.
Reorganized model select list.