Default values for the new repetition penalty settings (better suggestions are very much welcome, since broader community testing has not been done yet).
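For reference, a minimal sketch of what such defaults could look like; the setting names and values below are illustrative assumptions, not the exact ones from this commit:

```python
# Hypothetical defaults for the repetition penalty settings.
# rep_pen: multiplier applied to the logits of recently generated tokens.
# rep_pen_range: how many of the most recent tokens the penalty considers.
# rep_pen_slope: how quickly the penalty fades for older tokens in that range.
DEFAULT_REP_PEN = 1.1
DEFAULT_REP_PEN_RANGE = 1024
DEFAULT_REP_PEN_SLOPE = 0.7
```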
Updated the Readme with the link to the offline installer.
Ran into issues with other modes like chatmode and adventure, so the </s> handling was moved further down the pipeline, converting </s> back to \n before the additional formatting is processed.
Still has an issue with the HTML formatting not working, but at least the AI works now.
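A minimal sketch of the conversion step, assuming a simple string replacement (the function name is illustrative, not the actual pipeline code):

```python
def convert_eos_to_newlines(raw_output: str) -> str:
    # Convert the model's </s> separators back to newlines *before* the
    # additional formatting passes (chatmode, adventure, HTML) run, so
    # those stages see the line layout they expect.
    return raw_output.replace("</s>", "\n")

print(convert_eos_to_newlines("You open the door.</s>A cold wind blows in."))
```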
On second thought, it is probably better not to save this. Advanced users can add this themselves, and that way newer versions of the model can override it if redownloaded.
Allows model creators to customize the welcome message using Markdown and limited HTML.
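For illustration, a sketch of how the app could pick such a message up from a model's config; the welcome key and the fallback behavior here are assumptions, not the exact implementation:

```python
import json
import os

def load_welcome_message(model_dir: str, default: str) -> str:
    # If the model ships a config with a "welcome" entry (hypothetical key),
    # use it; the value may contain Markdown and a limited set of HTML tags.
    config_path = os.path.join(model_dir, "config.json")
    if os.path.exists(config_path):
        with open(config_path) as f:
            return json.load(f).get("welcome", default)
    return default
```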
Existing United users need to run install_requirements.bat again; you can leave the existing dependencies intact.
Adds a Nobreakmodel var that allows Breakmodel to be turned off. This can be done through the command line or a model config (in case Neo is used by the model's config without it being a true Neo model that is compatible with breakmodel).
In addition, I removed the args.colab check for breakmodel support and instead made args.colab activate nobreakmodel. I have also added a new check so that breakmodel is not even attempted if you launch a model from the command line without specifying the layers.
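A rough sketch of the resulting decision logic; the argument and config key names follow the description above but are paraphrased, not the actual code:

```python
def should_use_breakmodel(args, model_config: dict) -> bool:
    # --colab now simply activates nobreakmodel instead of being a
    # special case in the breakmodel support check.
    nobreakmodel = getattr(args, "nobreakmodel", False) or args.colab
    # A model config can also disable breakmodel, e.g. when it declares
    # the Neo model type without being truly breakmodel-compatible.
    if model_config.get("nobreakmodel"):
        nobreakmodel = True
    # Never attempt breakmodel for a model launched from the command
    # line without an explicit layer split.
    if args.model and args.breakmodel_layers is None:
        return False
    return not nobreakmodel
```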
Changed the model VRAM requirements to what you'd need to comfortably run the model rather than barely (like with the manual). Will probably revise this in a later commit.
More importantly, it now supports models that use </s>, which will be required to support XGLM and Fairseq models.
My last attempt at fixing this caused GPT2 to break. Since the other fix covers an edge case, we assume that the GPT2 method should be used first, and if that fails we try the other one to catch rare errors with bad model configs.
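In rough pseudocode, the loading order now looks like this (the two loader functions are placeholders for the respective code paths, not real helpers from the repo):

```python
def load_via_gpt2_method(path):
    ...  # the loading code that GPT2-style models need (placeholder)

def load_via_alternate_method(path):
    ...  # the edge-case fix from the earlier attempt (placeholder)

def load_model_settings(path):
    # Try the GPT2 method first, since the other fix only covers an edge
    # case; falling back on failure also catches rare bad model configs.
    try:
        return load_via_gpt2_method(path)
    except Exception:
        return load_via_alternate_method(path)
```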
Turns out model_config does not work on models that have no model_type defined. In case this happens, we now fall back to the old .json loading method. This will not work in --colab mode if it is not already a local model, but since almost all modern models define a model type (and to my knowledge all models on Hugging Face do), that should not be an issue. If it is, we can always ask the model creator to either update it, distribute the model differently, or load that model with --remote instead of --colab.
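A hedged sketch of the fallback, assuming the Hugging Face AutoConfig call raises when model_type is missing and that the old path reads a local config.json directly:

```python
import json
import os
from transformers import AutoConfig

def get_model_config(model_path: str):
    try:
        # Normal path: works whenever the model defines a model_type.
        return AutoConfig.from_pretrained(model_path)
    except ValueError:
        # No model_type defined: fall back to reading config.json directly.
        # This needs the model to already be local, which is why it does
        # not work in --colab mode for models that are not yet downloaded.
        with open(os.path.join(model_path, "config.json")) as f:
            return json.load(f)
```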