Change max tokens to 4096
It works smoothly on the TPU Colab, so let's allow it. People should not turn this all the way up unless they have the hardware for it, but we want to permit it for those who do.
parent a151e1a33a
commit 2fd544cad7
@@ -70,7 +70,7 @@ gensettingstf = [{
 	"label": "Max Tokens",
 	"id": "settknmax",
 	"min": 512,
-	"max": 2048,
+	"max": 4096,
 	"step": 8,
 	"default": 1024,
 	"tooltip": "Max number of tokens of context to submit to the AI for sampling. Make sure this is higher than Amount to Generate. Higher values increase VRAM/RAM usage."
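The slider definition above only constrains the UI; a back end can enforce the same bounds when a value arrives. A minimal sketch of that validation, assuming the slider's `min`/`max`/`step` values from this commit (the function name and defaults are illustrative, not KoboldAI's actual code):

```python
def clamp_max_tokens(requested: int, lo: int = 512, hi: int = 4096, step: int = 8) -> int:
    """Clamp a requested context size to the slider's bounds and snap it to the step."""
    clamped = max(lo, min(hi, requested))
    # Snap down to the nearest multiple of the slider step.
    return clamped - (clamped % step)

print(clamp_max_tokens(5000))  # above the new ceiling -> 4096
print(clamp_max_tokens(1030))  # snapped down to the nearest step of 8 -> 1024
```

Raising `hi` from 2048 to 4096 is the whole change here; `default` stays at 1024, so only users who deliberately move the slider pay the extra VRAM/RAM cost.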