* Implement a priority for prompt injections in CC
Adds a numeric priority for injected prompts: 0 is the default and places the prompt at the top, and higher numbers place it further down. If two messages share the same priority, ordering falls back to role, as before.
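A minimal sketch of that tiebreak, with a hypothetical message shape and role ranking (a later commit in this set flips the direction to sort higher first):

```js
// Hypothetical shape: each injection carries a numeric priority (0 by default)
// and a role. Lower priority sorts to the top; ties fall back to role rank.
const ROLE_RANK = { system: 0, user: 1, assistant: 2 }; // assumed ranking

function sortInjections(injections) {
    return [...injections].sort((a, b) =>
        (a.priority ?? 0) - (b.priority ?? 0) ||
        ROLE_RANK[a.role] - ROLE_RANK[b.role]);
}
```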
* Update data-i18n for new setting field
* Rename priority to order; sort higher first, lower last
* Hide order when position is relative, adjust hint text
* Fix type error
* Fix capitalization
* Cut UI texts
* Reposition text labels
---------
Co-authored-by: Cohee <18619528+Cohee1207@users.noreply.github.com>
* [wip] Pollinations for text
* Implement generate API request
* Determine Pollinations model tool support via the models list
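A hedged sketch of that detection; the endpoint path and the entry fields (`name`, `tools`) are assumptions about the Pollinations models list, not a verified API shape:

```js
// Fetch the model list and keep only entries that advertise tool support.
// The `tools` flag and `name` field are assumed, not confirmed.
async function getPollinationsToolModels() {
    const response = await fetch('https://text.pollinations.ai/models');
    const models = await response.json();
    return models.filter(model => model.tools).map(model => model.name);
}
```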
* Add Pollinations option to /model command
* Add Pollinations support to caption
* Update link to Pollinations site
* Fix type errors in openai.js
* Fix API connection test to use AbortController for request cancellation
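The general AbortController pattern for a cancellable connection test; the URL and the 10-second timeout here are placeholders:

```js
// Abort the status request instead of letting it hang forever.
async function testConnection(url) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), 10_000);
    try {
        const response = await fetch(url, { signal: controller.signal });
        return response.ok;
    } catch (error) {
        if (error.name === 'AbortError') return false; // cancelled or timed out
        throw error;
    } finally {
        clearTimeout(timer);
    }
}
```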
* Remove hard-coded list of Pollinations vision models
* Remove openai-audio from captioning models
It turns out the doc is already alphabetized, just with dead providers moved to the top, so I didn't have to re-alphabetize the whole list and manually remove the dead ones.
* Add min_keep, a llama.cpp-exclusive setting for constraining the effect of truncation samplers
* Enable nsigma for llama.cpp, add the llama.cpp alias top_n_sigma, and add nsigma to the llama.cpp sampler order block
* Allow a negative value of nsigma, as this represents 'disabled' in llama.cpp (while 0 is deterministic)
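A sketch of how these two settings might look in a llama.cpp server /completion payload; the endpoint and surrounding fields are placeholders, not SillyTavern's actual request builder:

```js
async function complete(prompt) {
    const payload = {
        prompt,
        n_predict: 128,
        min_keep: 1,     // keep at least one candidate after truncation samplers
        top_n_sigma: -1, // negative disables nsigma; 0 would be deterministic
    };
    const response = await fetch('http://127.0.0.1:8080/completion', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(payload),
    });
    return response.json();
}
```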
* Remove tfs and top_a as these are not supported by llama.cpp (tfs was removed, and top_a was never supported)
* Correct the identification string for typical_p in the llama.cpp sampler order block
* Add penalties to the llama.cpp sampler order block
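A hedged example of the resulting sampler order array; the exact identifier strings differ between llama.cpp versions, so treat every name below as an assumption to verify against the server in use:

```js
// Assumed llama.cpp sampler identifiers, in application order.
const samplerOrder = [
    'penalties',   // penalties, added above
    'top_k',
    'typ_p',       // corrected identifier for typical_p (assumed)
    'top_p',
    'min_p',
    'top_n_sigma', // nsigma
    'temperature',
];
```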