Safer defaults and more flexibility

There have been many reports from newer users experiencing AI breakdown because not all models properly handle a 2048 max token limit. 1024 is the only value that all models support, and it was the original value KoboldAI used. This commit reverts the decision to increase the default to 2048; existing configurations are not affected. Users who wish to increase the max tokens can still do so themselves. Most models handle up to 1900 well (the GPT-2 models are excluded), and for many you can go all the way. (It is currently not yet known why some finetunes cause a decrease in max token support.)
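The idea behind the revert can be sketched as follows. This is a hypothetical helper, not code from the repository: default to the universally safe 1024, but let a user who knows their model opt in to a larger value.

```python
def effective_max_length(user_value=None, safe_default=1024):
    """Return the user's configured max tokens if set, else the safe default.

    1024 is used as the default because it is the only value known to work
    across all supported models; larger values are an explicit opt-in.
    """
    return user_value if user_value is not None else safe_default

print(effective_max_length())      # safe default of 1024
print(effective_max_length(2048))  # user explicitly raised the limit
```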

In addition, this commit addresses a request for more consistent slider behavior, allowing all sliders to be adjusted in 0.01 increments instead of some being capped at 0.05.
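The finer granularity can be illustrated with a small sketch. The `temp_slider` dict mirrors the `gensettingstf` entries in the diff below; `snap_to_step` is a hypothetical helper (not from the codebase) showing how a value is clamped to the slider's range and rounded to its step.

```python
def snap_to_step(value, minimum, maximum, step):
    """Clamp a slider value to [minimum, maximum] and round to the nearest step."""
    value = max(minimum, min(maximum, value))
    steps = round((value - minimum) / step)
    # Final round() guards against floating-point noise like 0.6400000000000001.
    return round(minimum + steps * step, 10)

# Mirrors the temperature slider entry, now with the finer 0.01 step.
temp_slider = {"id": "settemp", "min": 0.1, "max": 2.0, "step": 0.01, "default": 0.5}

print(snap_to_step(0.637, temp_slider["min"], temp_slider["max"], temp_slider["step"]))  # 0.64
```

With the old 0.05 step, the same input would have snapped to 0.65; the 0.01 step preserves finer-grained user intent.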
This commit is contained in:
Henk 2022-07-03 15:07:54 +02:00
parent e2f7fed99f
commit d041ec0921
2 changed files with 5 additions and 5 deletions


@@ -215,7 +215,7 @@ class vars:
model_type = "" # Model Type (Automatically taken from the model config)
noai = False # Runs the script without starting up the transformers pipeline
aibusy = False # Stops submissions while the AI is working
-max_length = 2048 # Maximum number of tokens to submit per action
+max_length = 1024 # Maximum number of tokens to submit per action
ikmax = 3000 # Maximum number of characters to submit to InferKit
genamt = 80 # Amount of text for each action to generate
ikgen = 200 # Number of characters for InferKit to generate


@@ -17,7 +17,7 @@ gensettingstf = [
"id": "settemp",
"min": 0.1,
"max": 2.0,
-"step": 0.05,
+"step": 0.01,
"default": 0.5,
"tooltip": "Randomness of sampling. High values can increase creativity but may make text less sensible. Lower values will make text more predictable but can become repetitious."
},
@@ -28,7 +28,7 @@ gensettingstf = [
"id": "settopp",
"min": 0.0,
"max": 1.0,
-"step": 0.05,
+"step": 0.01,
"default": 0.9,
"tooltip": "Used to discard unlikely text in the sampling process. Lower values will make text more predictable but can become repetitious. (Put this value on 1 to disable its effect)"
},
@@ -50,7 +50,7 @@ gensettingstf = [
"id": "settfs",
"min": 0.0,
"max": 1.0,
-"step": 0.05,
+"step": 0.01,
"default": 1.0,
"tooltip": "Alternative sampling method; it is recommended to disable top_p and top_k (set top_p to 1 and top_k to 0) if using this. 0.95 is thought to be a good value. (Put this value on 1 to disable its effect)"
},
@@ -61,7 +61,7 @@ gensettingstf = [
"id": "settypical",
"min": 0.0,
"max": 1.0,
-"step": 0.05,
+"step": 0.01,
"default": 1.0,
"tooltip": "Alternative sampling method described in the paper \"Typical Decoding for Natural Language Generation\" (10.48550/ARXIV.2202.00666). The paper suggests 0.2 as a good value for this setting. Set this setting to 1 to disable its effect."
},