Commit Graph

727 Commits

SHA1 Message Date
2c7f2e2014 Pollinations: fix headers, add samplers 2025-05-26 23:18:51 +03:00
d0bc58acf2 Fix CC rename spazzing out on hashtags 2025-05-24 00:20:09 +03:00
58832d1a75 MistralAI: add devstral models 2025-05-23 20:16:56 +03:00
62c2c88a79 + captioning and multimodal 2025-05-22 21:17:34 +03:00
edf307aa9c claude 4 2025-05-22 21:14:13 +03:00
ade45b6cd1 Allow prompt post-processing for all sources. Add 'single user msg' processing (#4009)
* Allow prompt post-processing for all sources. Add 'single user msg' PPP type (see the sketch after this entry)

* Fix copilot comments

* Fix typo in element id

* Remove redundant conditions

* Lint fix

* Add link to PPP docs
2025-05-22 20:36:22 +03:00
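
The 'single user msg' prompt post-processing type in the entry above merges the whole chat into one user message. A minimal sketch of that idea, assuming a standard role/content message shape; the function name is hypothetical and not the project's actual implementation:

```js
// Hypothetical sketch: collapse a chat-completion message array into one user message.
// The role/content field names follow the common chat format; the exact
// post-processing hook in the project may differ.
function toSingleUserMessage(messages) {
    const merged = messages
        .map(msg => `${msg.role}: ${msg.content}`)
        .join('\n\n');
    return [{ role: 'user', content: merged }];
}

// Example: [{ role: 'system', content: 'Be terse.' }, { role: 'user', content: 'Hi' }]
// becomes a single user message containing both lines.
```
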
157315cd68 Add Vertex AI express mode support (#3977)
* Add Vertex AI express mode support
Split Google AI Studio and Vertex AI

* Add support for Vertex AI: update default models and related settings, add Vertex AI options to the frontend HTML, and adjust request-processing logic in the backend API.

* Log API name in the console

* Merge sysprompt toggles back

* Use Gemma tokenizers for Vertex and LearnLM

* AI Studio parity updates

* Add link to express mode doc. Also technically it's not a form

* Split title

* Use array includes

* Add support for Google Vertex AI in image captioning feature

* Specify caption API name, add to compression list

---------

Co-authored-by: Cohee <18619528+Cohee1207@users.noreply.github.com>
2025-05-22 20:10:53 +03:00
864a733663 "Bind preset to connection" toggle (#3999)
* Implement THE TOGGLE

* Don't force reconnect on preset switch if toggle off

* Don't clear custom models list either
2025-05-17 20:40:58 +03:00
f6ab33d835 Reverse CC prompt manager's injection order of "Order" to match World Info (#4004)
* Reverse CC injection "Order" to match World Info

* Set CC injection order default to 100

* Update non-PM injects order + add hint

* Update default order value on inject

---------

Co-authored-by: Cohee <18619528+Cohee1207@users.noreply.github.com>
2025-05-16 23:53:37 +03:00
6e35afa6ec Fix extension prompts injects 2025-05-13 10:04:43 +03:00
8100a542e2 Implement a priority for prompt injections in CC (#3978)
* Implement a priority for prompt injections in CC

Adds a numeric order for injected prompts: 0 is the default and places the prompt at the top, while higher numbers place it further down. If two messages have the same priority, their order is determined by role, as before (see the sketch after this entry).

* Update data-i18n for new setting field

* Rename priority to order, sort higher first/lower last

* Hide order when position is relative, adjust hint text

* Fix type error

* Fix capitalization

* Cut UI texts

* Reposition text labels

---------

Co-authored-by: Cohee <18619528+Cohee1207@users.noreply.github.com>
2025-05-12 23:59:54 +03:00
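
The ordering described in this entry (0 at the top, higher numbers further down, ties broken by role) could be sketched as the comparator below. Note that later commits in this log rename the field and flip the sort direction, and the role ranking here is an assumption for illustration, not the repository's actual code:

```js
// Illustrative comparator for the described ordering: lower "order" values are
// injected first (0 = default, top); ties fall back to a fixed role precedence.
// The role ranking is an assumption made for this example.
const ROLE_RANK = { system: 0, user: 1, assistant: 2 };

function sortInjections(injections) {
    return [...injections].sort((a, b) =>
        (a.order - b.order) || (ROLE_RANK[a.role] - ROLE_RANK[b.role]),
    );
}
```
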
420d568cd3 Pollinations - Text (#3985)
* [wip] Pollinations for text

* Implement generate API request

* Determine Pollinations model tools via models list

* Add Pollinations option to /model command

* Add Pollinations support to caption

* Update link to pollinations site

* Fix type errors in openai.js

* Fix API connection test to use AbortController for request cancellation (see the sketch after this entry)

* Remove hard coded list of pollinations vision models

* Remove openai-audio from captioning models
2025-05-11 20:14:11 +03:00
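
The AbortController change referenced in the entry above follows the standard fetch cancellation pattern. A hedged sketch with a placeholder endpoint and timeout, not the project's actual test code:

```js
// Standard AbortController pattern for cancelling a fetch-based connection test.
// The endpoint and timeout values below are placeholders for illustration.
async function testConnection(url, timeoutMs = 5000) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    try {
        const response = await fetch(url, { signal: controller.signal });
        return response.ok;
    } catch (error) {
        if (error.name === 'AbortError') return false; // request was cancelled
        throw error;
    } finally {
        clearTimeout(timer);
    }
}
```
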
09f2b2f731 Handle unknown chat completion sources gracefully by logging an error and returning an empty string 2025-05-11 11:09:15 +03:00
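
The behavior described in the commit above (log an error and return an empty string for an unknown source) could look something like this sketch; the function name and the listed sources are examples, not the actual code in openai.js:

```js
// Illustrative fallback for an unknown chat completion source: log an error
// and return an empty string. The source identifiers are examples only.
function getChatCompletionPrompt(source, messages) {
    switch (source) {
        case 'openai':
        case 'mistralai':
        case 'pollinations':
            return messages.map(msg => `${msg.role}: ${msg.content}`).join('\n');
        default:
            console.error(`Unknown chat completion source: ${source}`);
            return '';
    }
}
```
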
c6a64d8526 xAI: fix model not saving to presets 2025-05-09 00:24:36 +03:00
fa8ea7c60d mistral-medium-2505 2025-05-07 20:09:56 +03:00
7eb23a2fcc Work on tl 2025-04-29 17:23:18 +07:00
11908f7363 Work on tl 2025-04-28 18:45:16 +07:00
3e0697b7c7 Lintfix 2025-04-27 15:16:46 +03:00
acc05e633d gemini-exp to max_1mil context 2025-04-26 20:38:35 -05:00
c6a047651b Add 'learn' to visionSupportedModels (see the sketch after this entry)
Also remove dead gemini-exp models
2025-04-26 14:23:10 -05:00
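
visionSupportedModels appears to be a plain array checked per model id; a minimal sketch assuming substring matching via array methods. The entries shown are pulled from nearby commit messages, not the real list:

```js
// Example check against a vision-capable model list; the entries are illustrative.
const visionSupportedModels = ['learn', 'gemini-2.5-flash', 'gpt-4.1'];

function isVisionSupported(modelId) {
    return visionSupportedModels.some(name => modelId.includes(name));
}

// isVisionSupported('gemini-2.5-flash-preview') === true
```
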
28d42e5200 Prune Google models 2025-04-26 11:39:44 -05:00
3fd12b28dc Merge branch 'staging' into vision-cleanup 2025-04-25 01:49:40 +09:00
903839c9c5 Use array syntax for excluding non-vision OpenAI models
Co-authored-by: Wolfsblvt <wolfsblvt@gmail.com>
2025-04-25 01:40:13 +09:00
5241b22a73 Add reasoning effort control for CC OpenRouter
Closes #3890
2025-04-23 21:38:31 +03:00
3e8f9e2680 Fix for eslint 2025-04-24 00:02:43 +09:00
bdf4241d18 Default to "Auto" reasoning effort 2025-04-23 14:54:34 +00:00
44c5ce9a30 Exclude o1-mini from vision supported models 2025-04-23 23:45:58 +09:00
65aec223a3 Vision models clean-up 2025-04-23 23:45:58 +09:00
5c8b8f4b98 Refactor getReasoningEffort 2025-04-23 00:44:14 +03:00
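
getReasoningEffort is only named in the log; a hedged guess at its shape, given that nearby entries say "Auto" is the default reasoning effort. The accepted values and the omission behavior are assumptions, not the refactored function itself:

```js
// Hedged sketch of a reasoning-effort mapper: 'auto' (the default per the log)
// omits the parameter; recognized values pass through unchanged.
function getReasoningEffort(setting) {
    if (!setting || setting === 'auto') {
        return undefined; // let the backend apply its own default
    }
    return ['low', 'medium', 'high'].includes(setting) ? setting : undefined;
}
```
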
bee3cee740 Go team dropdown 2025-04-23 00:38:28 +03:00
a95056db40 Thinking Budget 2.5: Electric Googaloo 2025-04-21 21:10:40 +03:00
dd3d3226eb update per cohee recommendations 2025-04-20 14:20:13 -07:00
c63ef20919 change language when context size exceeded 2025-04-20 13:58:11 -07:00
53dd3aed4e Cleaning up and checking for vision support 2025-04-17 16:48:27 -04:00
c89c1beffd Added support for Gemini 2.5 Flash Preview 04/17 from Google AI Studio 2025-04-17 16:18:34 -04:00
7b2f1f7c7a Add o3 and o4-mini 2025-04-16 23:12:40 +03:00
722b0698e9 Fix reasoning content bleeding into multi-swipes 2025-04-16 21:35:35 +03:00
c3717ff06a Merge pull request #3852 from subzero5544/xAI-grok-reverse-proxy-testing
Adding reverse proxy support to xAI chat completion
2025-04-16 21:14:38 +03:00
5510e6da31 Enable multi-swipe for xAI 2025-04-14 22:36:56 +03:00
36e3627705 gpt-4.1 2025-04-14 20:54:18 +03:00
78bda9954d Increase maximum injection depth and WI order (#3800) 2025-04-13 21:31:57 +03:00
22f1aee70b Add web search fee notice for OpenRouter
Closes #3833
2025-04-13 14:15:49 +03:00
91fc50b82d Merge branch 'staging' into gork-ai 2025-04-11 21:15:54 +03:00
1f27a39f29 Refactor mistral max context 2025-04-11 21:09:06 +03:00
70d65f2d05 Remove tools from grok-vision requests 2025-04-11 20:41:20 +03:00
6adce75933 Remove penalties from 3-mini requests 2025-04-11 20:02:42 +03:00
1d2122b867 Correct editing mistake in "Set correct Mistral AI token context limits." 2025-04-11 18:01:42 +03:00
2040c43371 Revert "Powers of 2 for token context limits. No -1 offset."
This reverts commit 2d77fb3e30.
2025-04-11 17:58:39 +03:00
2d77fb3e30 Powers of 2 for token context limits. No -1 offset. 2025-04-11 17:40:53 +03:00
0c4b0cfb03 Set correct Mistral AI token context limits. 2025-04-11 17:20:39 +03:00
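
For context, the difference the last three entries debate (exact powers of two versus a -1 offset for token context limits) amounts to the following; the 32k figure is an example, not a claim about any particular Mistral model:

```js
// The two conventions for an example "32k" context window; the size is illustrative.
const powerOfTwo = 2 ** 15;      // 32768 tokens: exact power of 2, no offset
const withOffset = 2 ** 15 - 1;  // 32767 tokens: the "-1 offset" variant
console.log(powerOfTwo, withOffset); // 32768 32767
```
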