6e35afa6ec
Fix extension prompt injections
2025-05-13 10:04:43 +03:00
8100a542e2
Implement a priority for prompt injections in CC (#3978)
...
* Implement a priority for prompt injections in CC
Adds a numeric order for injected prompts: 0 is the default and is placed at the top, and higher numbers are placed further down. If two messages have the same order, their placement is determined by role, as before.
* Update data-i18n for new setting field
* Rename priority to order, sort higher first/lower last
* Hide order when position is relative, adjust hint text
* Fix type error
* Fix capitalization
* Cut UI texts
* Reposition text labels
---------
Co-authored-by: Cohee <18619528+Cohee1207@users.noreply.github.com>
2025-05-12 23:59:54 +03:00
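The ordering described in #3978 can be sketched as follows. This is a minimal illustration, not SillyTavern's actual implementation: the function name and the role precedence used for tie-breaking are assumptions.

```javascript
// Hypothetical role precedence for tie-breaking (an assumption, not the real table).
const ROLE_ORDER = { system: 0, user: 1, assistant: 2 };

// Sort injected prompts: lower `order` values are placed first (0, the default,
// goes to the top); equal orders fall back to role-based ordering, as before.
function sortInjects(injects) {
    return [...injects].sort((a, b) =>
        (a.order - b.order) || (ROLE_ORDER[a.role] - ROLE_ORDER[b.role])
    );
}
```

A comparator like this keeps the previous role-based behavior intact for prompts that share the same numeric order.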
420d568cd3
Pollinations - Text (#3985)
...
* [wip] Pollinations for text
* Implement generate API request
* Determine Pollinations model tools via models list
* Add Pollinations option to /model command
* Add Pollinations support to caption
* Update link to pollinations site
* Fix type errors in openai.js
* Fix API connection test to use AbortController for request cancellation
Remove hard-coded list of Pollinations vision models
* Remove openai-audio from captioning models
2025-05-11 20:14:11 +03:00
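The AbortController-based cancellation mentioned in the connection-test fix above can be sketched like this. The function name, timeout value, and injectable `fetchImpl` parameter are illustrative assumptions, not the actual code.

```javascript
// Hypothetical connection test: abort the request if it exceeds a timeout,
// instead of leaving it hanging. `fetchImpl` is injectable for testing.
async function testConnection(url, { fetchImpl = fetch, timeoutMs = 5000 } = {}) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), timeoutMs);
    try {
        const response = await fetchImpl(url, { signal: controller.signal });
        return response.ok;
    } catch (error) {
        if (error.name === 'AbortError') return false; // request was cancelled
        throw error;
    } finally {
        clearTimeout(timer);
    }
}
```

Passing the same `signal` to `fetch` lets a single `controller.abort()` call cancel the in-flight request cleanly.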
09f2b2f731
Handle unknown chat completion sources gracefully by logging an error and returning an empty string
2025-05-11 11:09:15 +03:00
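The graceful handling described in the commit above can be sketched as a lookup with a fallback. The function and parameter names here are hypothetical, not the repository's actual identifiers.

```javascript
// Hypothetical dispatcher: instead of throwing on an unrecognized chat
// completion source, log an error and return an empty string.
function getPromptForSource(source, handlers) {
    const handler = handlers[source];
    if (!handler) {
        console.error(`Unknown chat completion source: ${source}`);
        return '';
    }
    return handler();
}
```

Returning an empty string keeps downstream string concatenation working even when a new or misconfigured source slips through.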
c6a64d8526
xAI: fix model not saving to presets
2025-05-09 00:24:36 +03:00
fa8ea7c60d
mistral-medium-2505
2025-05-07 20:09:56 +03:00
7eb23a2fcc
Work on tl
2025-04-29 17:23:18 +07:00
11908f7363
Work on tl
2025-04-28 18:45:16 +07:00
3e0697b7c7
Lintfix
2025-04-27 15:16:46 +03:00
acc05e633d
gemini-exp to max_1mil context
2025-04-26 20:38:35 -05:00
c6a047651b
Add 'learn' to visionSupportedModels
...
Also remove dead gemini-exp models
2025-04-26 14:23:10 -05:00
28d42e5200
Prune Google models
2025-04-26 11:39:44 -05:00
3fd12b28dc
Merge branch 'staging' into vision-cleanup
2025-04-25 01:49:40 +09:00
903839c9c5
Use array syntax for excluding non-vision OpenAI models
...
Co-authored-by: Wolfsblvt <wolfsblvt@gmail.com>
2025-04-25 01:40:13 +09:00
5241b22a73
Add reasoning effort control for CC OpenRouter
...
Closes #3890
2025-04-23 21:38:31 +03:00
3e8f9e2680
Fix for eslint
2025-04-24 00:02:43 +09:00
bdf4241d18
Default to "Auto" reasoning effort
2025-04-23 14:54:34 +00:00
44c5ce9a30
Exclude o1-mini from vision supported models
2025-04-23 23:45:58 +09:00
65aec223a3
Vision models clean-up
2025-04-23 23:45:58 +09:00
5c8b8f4b98
Refactor getReasoningEffort
2025-04-23 00:44:14 +03:00
bee3cee740
Go team dropdown
2025-04-23 00:38:28 +03:00
a95056db40
Thinking Budget 2.5: Electric Googaloo
2025-04-21 21:10:40 +03:00
dd3d3226eb
Update per Cohee's recommendations
2025-04-20 14:20:13 -07:00
c63ef20919
Change language when context size is exceeded
2025-04-20 13:58:11 -07:00
53dd3aed4e
Cleaning up and checking for vision support
2025-04-17 16:48:27 -04:00
c89c1beffd
Added support for Gemini 2.5 Flash Preview 04/17 from Google AI Studio
2025-04-17 16:18:34 -04:00
7b2f1f7c7a
Add o3 and o4-mini
2025-04-16 23:12:40 +03:00
722b0698e9
Fix reasoning content bleeding into multi-swipes
2025-04-16 21:35:35 +03:00
c3717ff06a
Merge pull request #3852 from subzero5544/xAI-grok-reverse-proxy-testing
...
Add reverse proxy support to xAI chat completion
2025-04-16 21:14:38 +03:00
5510e6da31
Enable multi-swipe for xAI
2025-04-14 22:36:56 +03:00
36e3627705
gpt-4.1
2025-04-14 20:54:18 +03:00
78bda9954d
Increase maximum injection depth and WI order (#3800)
2025-04-13 21:31:57 +03:00
22f1aee70b
Add web search fee notice for OpenRouter
...
Closes #3833
2025-04-13 14:15:49 +03:00
91fc50b82d
Merge branch 'staging' into gork-ai
2025-04-11 21:15:54 +03:00
1f27a39f29
Refactor mistral max context
2025-04-11 21:09:06 +03:00
70d65f2d05
Remove tools from grok-vision requests
2025-04-11 20:41:20 +03:00
6adce75933
Remove penalties from 3-mini requests
2025-04-11 20:02:42 +03:00
1d2122b867
Correct editing mistake in "Set correct Mistral AI token context limits."
2025-04-11 18:01:42 +03:00
2040c43371
Revert "Powers of 2 for token context limits. No -1 offset."
...
This reverts commit 2d77fb3e30.
2025-04-11 17:58:39 +03:00
2d77fb3e30
Powers of 2 for token context limits. No -1 offset.
2025-04-11 17:40:53 +03:00
0c4b0cfb03
Set correct Mistral AI token context limits.
2025-04-11 17:20:39 +03:00
43caa0c6d4
Enable image inlining for visual models when connected to Mistral AI La Plateforme.
2025-04-11 11:09:10 +03:00
1c52099ed6
Add xAI as chat completion source
2025-04-10 22:59:10 +03:00
5929d5c0e4
Groq: sync supported models
2025-04-06 23:08:31 +03:00
18cc17142f
Add support for the Gemini 2.5 Pro Preview API
2025-04-06 17:10:24 +10:00
a2e3519218
Fix eslint
2025-03-27 20:54:06 +02:00
1639289b18
Merge pull request #3763 from qvink/empty_message_injection
...
Fix for generation interceptors messing with WI timed effects
2025-03-27 20:53:39 +02:00
de091daa40
Clean-up comments
2025-03-27 20:46:27 +02:00
2a31f6af2d
Remove Block Entropy references
...
Block Entropy shut down their service at the end of 2024.
2025-03-28 00:47:30 +09:00
dac5f6910c
Add comments
2025-03-27 09:40:42 -06:00