They released another version of transformers that still doesn't have the OPT patch, so I decided it would be safer to just mark all 4.19 transformers versions as needing the OPT patch.
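For illustration, a minimal sketch of the kind of version gate this implies; the helper name is hypothetical, not something from the repo:
```
import transformers

# Hedged sketch: treat every 4.19.x release as needing the OPT patch,
# since no 4.19 version shipped with the upstream fix.
def needs_opt_patch() -> bool:
    return transformers.__version__.startswith("4.19.")
```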
As part of the restructuring, essential code that handled the --path parameter correctly was removed. This has now been restored. Selectfolder was also updated to use its NeoCustom counterpart instead of specifying a model, so the underlying code that corrects model names is hit again.
This adds support for loading settings from the defaults folder. Settings are loaded in the following order, with each later source overriding the earlier ones where needed (a minimal sketch follows the list):
1. The model config file.
2. The defaults folder.
3. The user's defined settings file.
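A minimal sketch of that layered loading, assuming JSON settings files; the file names and folder layout here are illustrative assumptions, not the real paths:
```
import json
import os

def load_settings(model_name: str) -> dict:
    # Hypothetical paths, listed lowest priority first.
    sources = [
        os.path.join("models", model_name, "config.json"),   # 1. model config
        os.path.join("defaults", model_name + ".settings"),  # 2. defaults folder
        os.path.join("settings", model_name + ".settings"),  # 3. user settings
    ]
    settings = {}
    for path in sources:
        if os.path.exists(path):
            with open(path) as f:
                # Later sources overwrite keys from earlier ones.
                settings.update(json.load(f))
    return settings
```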
With this support we can begin to ship better defaults for models we do not manage. Our community tuners have been most helpful at adding good defaults to their configuration files, but for other models, such as the base models, this gives us the flexibility to define better settings for each model without overriding a user's desired settings if they already exist.
Almost everyone prefers 2048 max tokens because of the superior coherency. It should only be set lower due to RAM limits, but the menu already shows the optimal RAM for 2048. Negatively affected users can turn it down themselves; for everyone else, especially on rented machines or Colab, 2048 is a better default.
Generated using:
```
import transformers

# Load the slow (non-Rust) tokenizer; use_fast=False is the documented kwarg.
tokenizer = transformers.AutoTokenizer.from_pretrained("facebook/opt-350m", use_fast=False)
# Collect the ID of every token whose text contains angle or square brackets.
badwordsids_opt = [[v] for k, v in tokenizer.get_vocab().items() if any(c in k for c in "<>[]")]
```
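The resulting badwordsids_opt is a list of single-element ID lists, which is the shape the bad_words_ids generation argument in transformers expects, so every token containing angle or square brackets gets banned from generation.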