2040c43371
Revert "Powers of 2 for token context limits. No -1 offset."
This reverts commit 2d77fb3e30.
2025-04-11 17:58:39 +03:00
2d77fb3e30
Powers of 2 for token context limits. No -1 offset.
2025-04-11 17:40:53 +03:00
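A minimal sketch of what this commit describes, using hypothetical model names: context limits stored as exact powers of 2, with no legacy (2^n − 1) offset.

```javascript
// Hypothetical sketch: token context limits as exact powers of 2,
// with no -1 offset (e.g. 32768 rather than 32767).
const CONTEXT_LIMITS = {
    'example-small-model': 2 ** 15, // 32768 tokens
    'example-large-model': 2 ** 17, // 131072 tokens
};

// Look up a model's limit, falling back to a default for unknown models.
function getContextLimit(model, fallback = 2 ** 12) {
    return CONTEXT_LIMITS[model] ?? fallback; // default 4096
}
```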
0c4b0cfb03
Set correct Mistral AI token context limits.
2025-04-11 17:20:39 +03:00
43caa0c6d4
Enable image inlining for visual models when connected to Mistral AI La Plateforme.
2025-04-11 11:09:10 +03:00
5929d5c0e4
Groq: sync supported models
2025-04-06 23:08:31 +03:00
18cc17142f
Add support for the Gemini 2.5 Pro preview API
2025-04-06 17:10:24 +10:00
a2e3519218
Fix eslint
2025-03-27 20:54:06 +02:00
1639289b18
Merge pull request #3763 from qvink/empty_message_injection
Fix for generation interceptors messing with WI timed effects
2025-03-27 20:53:39 +02:00
de091daa40
Clean-up comments
2025-03-27 20:46:27 +02:00
2a31f6af2d
Remove Block Entropy references
Block Entropy shut down their service at the end of 2024.
2025-03-28 00:47:30 +09:00
dac5f6910c
adding comments
2025-03-27 09:40:42 -06:00
1dcd837eb1
lint
2025-03-27 09:38:14 -06:00
f1a053c3b8
using symbols instead
2025-03-27 09:31:52 -06:00
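The "using symbols instead" commit likely refers to JavaScript `Symbol` keys; a hypothetical sketch of that pattern, assuming an illustrative `CUSTOM_MARK` name not taken from the codebase:

```javascript
// Hypothetical sketch: using a Symbol instead of a string key to mark
// an object. Symbol-keyed properties can't collide with ordinary
// properties and are skipped by Object.keys() and JSON.stringify().
const CUSTOM_MARK = Symbol('custom');

const message = { mes: 'Hello' };
message[CUSTOM_MARK] = true;
```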
251d242a0d
custom flag in message.extra, also apply to chat completion
2025-03-26 13:10:50 -06:00
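A hypothetical sketch of the pattern named in the commit message, flagging a chat message through its `extra` metadata object (the `markCustom` helper is illustrative, not from the codebase):

```javascript
// Hypothetical sketch: set a custom flag in message.extra, creating
// the extra object if the message doesn't have one yet.
function markCustom(message) {
    message.extra = message.extra ?? {};
    message.extra.custom = true;
    return message;
}
```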
264d77414a
Gemini 2.5 Pro
2025-03-25 21:21:23 +02:00
0b937237c3
Refactor getStreamingReply to use nullish coalescing for show_thoughts
2025-03-22 18:27:49 +02:00
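A minimal sketch of the refactor's pattern, with an illustrative function name: nullish coalescing (`??`) falls back to the default only for `null`/`undefined`, so an explicit `false` override is respected (unlike `||`).

```javascript
// Hypothetical sketch: resolve a show_thoughts flag with ??,
// preserving an explicit `false` override.
function resolveShowThoughts(overrideShowThoughts, defaultShowThoughts = true) {
    return overrideShowThoughts ?? defaultShowThoughts;
}
```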
7df6a78f33
Better overrideShowThoughts value
2025-03-21 22:19:13 +03:00
17e0058763
Changed options type
2025-03-21 22:16:57 +03:00
7619396053
Better naming
2025-03-21 22:09:41 +03:00
ec474f5571
Added stream support to "custom-request"
2025-03-21 20:44:09 +03:00
46d5f79fd9
OpenRouter: Allow applying prompt post-processing
Fixes #3689
2025-03-18 21:33:11 +02:00
b6c1c9a40d
MistralAI: Add new models
2025-03-18 19:53:02 +02:00
49949f2f8e
Merge pull request #3705 from Yokayo/staging
Update ru-ru translation
2025-03-18 00:01:15 +02:00
e7c9960a45
Fix
2025-03-18 02:15:13 +07:00
8dc66bd21b
Better WI type readability, fixed clone and type operators.
2025-03-17 00:13:39 +03:00
0e41db615e
New exports
2025-03-16 23:44:02 +03:00
fff1dd59c3
minor 4.5 detection tweak
2025-03-15 19:27:42 -05:00
0c7d5c76e2
gemini-2.0-flash-exp-image-generation
2025-03-15 14:58:06 +02:00
f607c3bc0d
Gemma 3 (#3686)
* Gemma 3
* Adjust safetySettings
* Disable sysprompt
* Add isGemma check to tool processing logic
* Disable a google search tool for gemma
2025-03-14 21:41:28 +02:00
0017358f8b
Gemini inline images (#3681)
* Gemini images for non-streaming
* Parse images on stream
* Add toggle for image request
* Add extraction params to extractImageFromData
* Add explicit break and return
* Add more JSdoc to processImageAttachment
* Add file name prefix
* Add object argument for saveReply
* Add defaults to saveReply params
* Use type for saveReply result
* Change type check in saveReply backward compat
2025-03-14 20:15:04 +02:00
f362f94c2d
Decrease connection timeout, set 'valid' status on 'invalid URL', don't wait if not needed
Fixes #3683
2025-03-14 10:43:00 +02:00
e60796548b
Skip status check of invalid custom endpoint URLs
Fixes #3683
2025-03-14 01:40:38 +02:00
a77f4045f8
Added command-a-03-2025 and command-a tokenizer
2025-03-13 21:16:08 +03:00
070de9df2d
(CC) Move continue nudge at the end of completion (#3611)
* Move continue nudge at the end of completion
Closes #3607
* Move continue message together with nudge
2025-03-09 18:17:02 +02:00
98f92f6270
Fix syntax of model name check
2025-03-08 21:50:39 +02:00
5d275998ed
Merge branch 'staging' into patch-7
2025-03-08 21:46:38 +02:00
c3b5382882
Re-enable logit bias and stop strings for 4.5
2025-03-08 12:57:11 -06:00
ff5835278b
Add Jamba 1.6 models
Closes #3633
2025-03-08 15:16:49 +02:00
1a52314812
MistralAI: Add custom stop strings
Closes #3627
2025-03-07 11:29:14 +00:00
e9cf606c70
Add backend-provided websearch connectors for OpenRouter and Gemini
2025-03-06 22:23:35 +02:00
782d866fcf
Added Aya Vision support
2025-03-05 13:39:16 +09:00
7d568dd4e0
Generic generate methods (#3566)
* Improved sendOpenAIRequest/getTextGenGenerationData methods: they can now use a custom API instead of the active one
* Added missing model param
* Removed unnecessary variable
* active_oai_settings -> settings
* settings -> textgenerationwebui_settings
* Better presetToSettings names, simpler settings name in getTextGenGenerationData
* Removed unused jailbreak_system
* Reverted most core changes, new custom-request.js file
* Forced stream to false, removed duplicate method, exported settingsToUpdate
* Rewrite typedefs to define props one by one
* Added extractData param for simplicity
* Fixed typehints
* Fixed typehints (again)
---------
Co-authored-by: Cohee <18619528+Cohee1207@users.noreply.github.com>
2025-03-03 10:30:20 +02:00
ea643cce6e
Add gpt-4.5-preview
2025-02-27 22:08:29 +01:00
5955327f7b
Add old preset name to OAI_PRESET_CHANGED_BEFORE
2025-02-25 17:32:36 +01:00
7064ce81c7
Exported getChatCompletionModel with optional parameter
2025-02-25 15:45:35 +03:00
b8ebed0f4c
Claude 3.7 think mode
2025-02-24 23:43:13 +02:00
82b74628c6
Sonnet 3.7
2025-02-24 21:06:12 +02:00
10a72b8c80
Merge branch 'staging' of github.com-qvink:SillyTavern/SillyTavern into get_chat_completion_presets_from_preset_manager
2025-02-23 21:24:13 -07:00
7eff895e88
Allowing the presetManager to return presets for chat completion
2025-02-23 21:23:43 -07:00
e7d38d95d0
Add max context size for llama-guard-3-8b model
2025-02-22 14:37:53 +02:00