To avoid breaking changes, let's pin the exact transformers version we code against. The pin will be picked up automatically by all the automatic updaters.
Adding psutil from conda to avoid the need for a compiler. finetuneanon's fork should no longer be used; if people really want to use it, they are on their own.
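Roughly, the two changes above look like this in a conda environment file (a minimal sketch; the file name and surrounding entries are assumptions, and the pinned version comes from the entries below):

```yaml
# environments/huggingface.yml -- illustrative excerpt, not the exact file
dependencies:
  - transformers=4.20.1   # exact pin the code is written against
  - psutil                # prebuilt conda package, so no compiler is needed
```

In requirements.txt the same pin would read transformers==4.20.1, so pip-based updaters pick it up as well.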
Transformers issued an important change for the OPT models that breaks their compatibility with all older versions. For people to be able to use every model on the menu they need 4.20.1, so this version is now forced in the dependencies to make the update easier.
Transformers 4.20 requires accelerate to be installed for some of the features we use in KoboldAI, so accelerate is now a required dependency for updated users.
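In the dependency lists that would amount to one extra entry next to the pin (again a sketch, not the exact file):

```yaml
dependencies:
  - transformers=4.20.1
  - accelerate            # needed by the transformers 4.20 features we use
```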
Makes transformers 4.20 mandatory in the dependency lists, not because the old versions are no longer supported, but because it contains fixes that benefit our users, and a hard requirement makes it easier for them to update. If you stick to an older version, the OPT and XGLM workarounds we have in place will remain functional, but you miss out on the enhancements newer transformers versions bring.
Huggingface's conda channel lags further behind than conda-forge, so we will no longer offer it in the installer. The more that is loaded from conda-forge, the better. The same transformers package will still be installed, but a newer build from conda-forge is now guaranteed.
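In the environment files this comes down to channel priority; a sketch of the idea (the exact channel list is an assumption):

```yaml
channels:
  - conda-forge    # preferred: carries newer builds
  # - huggingface  # lags behind, no longer offered by the installer
dependencies:
  - transformers=4.20.1   # same package, now resolved from conda-forge
```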
Allows model creators to customize the welcome message using Markdown and limited HTML.
Existing United users need to run install_requirements.bat again; you can leave the existing dependencies intact.
Updates dependencies. play.sh didn't work properly, so it is removed for now; manually running aiserver.py is superior on Linux until I can get conda to initialize inside the script.
Separate file so people can easily go back to the legacy implementation based on finetuneanon's fork (recommended until Huggingface's compatibility is improved). You can install and use both.
Allows anyone to easily create a ROCm-compatible conda environment. It is currently set to the newer transformers; edit the github link if you want the finetune fork instead.
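A minimal sketch of what such an environment file could look like, assuming the transformers build comes from a github link in the pip section (everything besides the two repository URLs is an assumption):

```yaml
# environments/rocm.yml -- illustrative excerpt, not the exact file
name: koboldai-rocm
channels:
  - conda-forge
dependencies:
  - pip
  - pip:
    # newer upstream transformers by default; swap the link for
    # finetuneanon's fork to get the legacy implementation
    - git+https://github.com/huggingface/transformers
    # - git+https://github.com/finetuneanon/transformers
```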
finetuneanon's fork offers unofficial support (notably for the 6B model), which we adopted, but it is not compatible with models designed for the official version. In this update we let models decide which transformers backend to use, falling back to Neo if they don't choose one. We also add the 6B model to the menu and, for the time being, switch to the github version of transformers to get ahead of the release wait. (Hopefully we can switch back to the conda version before merging upstream.)
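Purely as a hypothetical illustration of the selection logic (the key name below is invented, not KoboldAI's actual config format):

```yaml
# hypothetical model metadata -- invented key name, for illustration only
model_backend: transformers     # model built for the official library
# model_backend: finetuneanon   # model built for finetuneanon's fork
# if the model specifies nothing, KoboldAI falls back to the Neo backend
```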