7619396053
Better naming
2025-03-21 22:09:41 +03:00
ec474f5571
Added stream support to "custom-request"
2025-03-21 20:44:09 +03:00
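A minimal sketch of how the streamed "custom-request" path might be consumed; the sendCustomRequest name, option shape, and chunk format below are assumptions for illustration, not the module's actual API.

```js
// Hypothetical usage of the custom-request helper with streaming enabled.
// stream: true is assumed to make the helper resolve to an async iterable
// of partial text chunks.
async function readStreamedCustomRequest(sendCustomRequest, prompt) {
    const stream = await sendCustomRequest({ prompt, stream: true });

    let text = '';
    for await (const chunk of stream) {
        text += chunk.text ?? '';
    }
    return text;
}
```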
46d5f79fd9
OpenRouter: Allow applying prompt post-processing
...
Fixes #3689
2025-03-18 21:33:11 +02:00
b6c1c9a40d
MistralAI: Add new models
2025-03-18 19:53:02 +02:00
49949f2f8e
Merge pull request #3705 from Yokayo/staging
...
Update ru-ru translation
2025-03-18 00:01:15 +02:00
e7c9960a45
Fix
2025-03-18 02:15:13 +07:00
8dc66bd21b
Better WI type readability; fixed clone and type operators.
2025-03-17 00:13:39 +03:00
0e41db615e
New exports
2025-03-16 23:44:02 +03:00
fff1dd59c3
minor 4.5 detection tweak
2025-03-15 19:27:42 -05:00
0c7d5c76e2
gemini-2.0-flash-exp-image-generation
2025-03-15 14:58:06 +02:00
f607c3bc0d
Gemma 3 (#3686)
...
* Gemma 3
* Adjust safetySettings
* Disable sysprompt
* Add isGemma check to tool processing logic
* Disable the Google search tool for Gemma
2025-03-14 21:41:28 +02:00
0017358f8b
Gemini inline images (#3681)
...
* Gemini images for non-streaming
* Parse images on stream
* Add toggle for image request
* Add extraction params to extractImageFromData
* Add explicit break and return
* Add more JSdoc to processImageAttachment
* Add file name prefix
* Add object argument for saveReply
* Add defaults to saveReply params
* Use type for saveReply result
* Change type check in saveReply backward compat
2025-03-14 20:15:04 +02:00
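The bullets above describe converting saveReply to a single options object with defaults and a typed result, plus a backward-compatibility type check at old call sites. A minimal sketch of that pattern, with parameter names assumed for illustration rather than taken from the actual signature:

```js
/**
 * @typedef {object} SaveReplyParams
 * @property {string} type - Reply type (e.g. 'normal' or a swipe)
 * @property {string} getMessage - Message text to save
 * @property {string} [title=''] - Optional title
 * @property {string[]} [swipes=[]] - Optional extra swipes
 */

/**
 * Saves an incoming reply to the chat (illustrative skeleton only).
 * @param {SaveReplyParams} params
 * @returns {Promise<{ type: string, getMessage: string }>}
 */
async function saveReply({ type, getMessage, title = '', swipes = [] }) {
    // ...chat-saving logic elided...
    return { type, getMessage };
}

// Backward compatibility: accept either the new object argument or the old
// positional (type, getMessage) style, deciding by a type check on the first argument.
function saveReplyCompat(typeOrParams, getMessage) {
    return typeof typeOrParams === 'object' && typeOrParams !== null
        ? saveReply(typeOrParams)
        : saveReply({ type: typeOrParams, getMessage });
}
```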
f362f94c2d
Decrease connection timeout, set 'valid' status on 'invalid URL', don't wait if not needed
...
Fixes #3683
2025-03-14 10:43:00 +02:00
e60796548b
Skip status check of invalid custom endpoint URLs
...
Fixes #3683
2025-03-14 01:40:38 +02:00
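The two commits above describe skipping the status check when a custom endpoint URL cannot be parsed, reporting it as 'valid' so the UI does not wait, and bounding the real check with a shorter connection timeout. A minimal sketch of that logic; the function name, probe path, and timeout value are assumptions:

```js
// Returns an online-status string for a custom endpoint without hanging on bad input.
async function checkCustomEndpointStatus(apiUrl) {
    let parsedUrl;
    try {
        parsedUrl = new URL(apiUrl);
    } catch {
        // Invalid URL: skip the network round-trip entirely and report 'valid'
        // so the UI does not sit in a pending state (per the commit above).
        return 'valid';
    }

    try {
        const response = await fetch(new URL('/v1/models', parsedUrl), {
            signal: AbortSignal.timeout(5000), // decreased connection timeout (assumed value)
        });
        return response.ok ? 'valid' : 'no_connection';
    } catch {
        return 'no_connection';
    }
}
```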
a77f4045f8
Added command-a-03-2025 and command-a tokenizer
2025-03-13 21:16:08 +03:00
070de9df2d
(CC) Move continue nudge to the end of completion (#3611)
...
* Move continue nudge to the end of completion
Closes #3607
* Move continue message together with nudge
2025-03-09 18:17:02 +02:00
98f92f6270
Fix syntax of model name check
2025-03-08 21:50:39 +02:00
5d275998ed
Merge branch 'staging' into patch-7
2025-03-08 21:46:38 +02:00
c3b5382882
Re-enable logit bias and stop strings for 4.5
2025-03-08 12:57:11 -06:00
ff5835278b
Add Jamba 1.6 models
...
Closes #3633
2025-03-08 15:16:49 +02:00
1a52314812
MistralAI: Add custom stop strings
...
Closes #3627
2025-03-07 11:29:14 +00:00
e9cf606c70
Add backend-provided websearch connectors for OpenRouter and Gemini
2025-03-06 22:23:35 +02:00
782d866fcf
Added Aya Vision support
2025-03-05 13:39:16 +09:00
7d568dd4e0
Generic generate methods (#3566)
...
* sendOpenAIRequest/getTextGenGenerationData methods are improved; they can now use a custom API instead of the active one
* Added missing model param
* Removed unnecessary variable
* active_oai_settings -> settings
* settings -> textgenerationwebui_settings
* Better presetToSettings names, simpler settings name in getTextGenGenerationData
* Removed unused jailbreak_system
* Reverted most core changes, new custom-request.js file
* Forced stream to false, removed duplicate method, exported settingsToUpdate
* Rewrite typedefs to define props one by one
* Added extractData param for simplicity
* Fixed typehints
* Fixed typehints (again)
---------
Co-authored-by: Cohee <18619528+Cohee1207@users.noreply.github.com>
2025-03-03 10:30:20 +02:00
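A sketch of the generic-generation idea from PR #3566: the request builders accept an explicit API target instead of always reading the active connection, and the new custom-request wrapper forces streaming off. Everything here other than the forced stream flag is an illustrative assumption, not the project's actual code:

```js
// Illustrative wrapper in the spirit of the new custom-request.js module.
// It builds a one-off chat completion request against a caller-supplied API
// without touching the user's active connection settings.
export async function sendCustomChatCompletionRequest({ messages, model, apiUrl, apiKey }) {
    const payload = {
        messages,
        model,
        stream: false, // forced off for the generic path, per the PR notes
    };

    const response = await fetch(apiUrl, {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Authorization': `Bearer ${apiKey}`,
        },
        body: JSON.stringify(payload),
    });

    if (!response.ok) {
        throw new Error(`Custom request failed: ${response.status}`);
    }

    const data = await response.json();
    // extractData-style convenience: return just the message text.
    return data.choices?.[0]?.message?.content ?? '';
}
```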
ea643cce6e
Add gpt-4.5-preview
2025-02-27 22:08:29 +01:00
5955327f7b
Add old preset name to OAI_PRESET_CHANGED_BEFORE
2025-02-25 17:32:36 +01:00
7064ce81c7
Exported getChatCompletionModel with optional parameter
2025-02-25 15:45:35 +03:00
b8ebed0f4c
Claude 3.7 think mode
2025-02-24 23:43:13 +02:00
82b74628c6
Sonnet 3.7
2025-02-24 21:06:12 +02:00
10a72b8c80
Merge branch 'staging' of github.com-qvink:SillyTavern/SillyTavern into get_chat_completion_presets_from_preset_manager
2025-02-23 21:24:13 -07:00
7eff895e88
Allow the presetManager to return presets for chat completion
2025-02-23 21:23:43 -07:00
e7d38d95d0
Add max context size for llama-guard-3-8b model
2025-02-22 14:37:53 +02:00
15769a7643
Add context sizes for new groq models
2025-02-22 14:36:32 +02:00
5c79c8e162
[chore] Reformat new code
2025-02-22 12:47:19 +02:00
13f76c974e
reasoning or reasoning_content
2025-02-22 16:09:42 +08:00
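Different OpenAI-compatible backends expose the model's reasoning under either a `reasoning` or a `reasoning_content` field, so the parser can fall back from one to the other. A tiny sketch (the delta shape is an assumption):

```js
// Pull reasoning text out of a streaming delta, whichever field the backend uses.
function getReasoningFromDelta(delta) {
    return delta?.reasoning ?? delta?.reasoning_content ?? '';
}
```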
6e5db5c41a
Perplexity: Add new models
2025-02-21 23:03:49 +02:00
5de2f8ea2d
Add o1 to vision-supported models
2025-02-13 02:40:34 +01:00
34db46d84b
Merge branch 'staging' into hidden-reasoning-tracking
2025-02-12 20:00:52 +02:00
d1018a824c
Merge branch 'staging' into hidden-reasoning-tracking
2025-02-11 23:45:13 +02:00
d5bdf1cb90
Add settings.json-backed KV string storage
...
Fixes #3461 , #3443
2025-02-11 20:17:48 +02:00
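A minimal sketch of a settings.json-backed key/value string store as described above; the file path, storage key, and function names are assumptions for illustration:

```js
import fs from 'node:fs';

// Hypothetical helpers: persist simple string values inside the user's settings.json.
const SETTINGS_PATH = './data/default-user/settings.json';

function readSettings() {
    return JSON.parse(fs.readFileSync(SETTINGS_PATH, 'utf8'));
}

export function getKvString(key, defaultValue = '') {
    const settings = readSettings();
    return settings.kvStorage?.[key] ?? defaultValue;
}

export function setKvString(key, value) {
    const settings = readSettings();
    settings.kvStorage = settings.kvStorage ?? {};
    settings.kvStorage[key] = String(value);
    fs.writeFileSync(SETTINGS_PATH, JSON.stringify(settings, null, 4));
}
```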
c3dd3e246e
DeepSeek: Add tool calling for -chat model
2025-02-11 00:04:40 +02:00
703e876f4a
Fix and shorten isHiddenReasoningModel
2025-02-10 06:15:47 +01:00
c886de5deb
Move isHiddenReasoningModel
2025-02-08 18:17:38 +02:00
c8e05a34d6
Add Gemini Pro to hidden thinking models
2025-02-08 01:48:14 +01:00
d94ac48b65
Add thinking time for hidden reasoning models
...
- Streamline reasoning UI update functionality
- Add helper function to identify hidden reasoning models
- Fix/update reasoning time calculation to actually utilize start gen time
- Fix reasoning UI update on swipe
- Add CSS class for hidden reasoning blocks (so users can hide them)
2025-02-08 00:45:33 +01:00
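A sketch of the helper and timing logic described in the bullets above; the model list and parameter names are illustrative assumptions, not the project's actual values:

```js
// Models that reason internally but never emit the reasoning text.
const HIDDEN_REASONING_MODEL_PREFIXES = ['o1', 'o3', 'gemini-2.0-flash-thinking'];

// Helper to identify hidden reasoning models (assumed prefix match).
export function isHiddenReasoningModel(modelId) {
    return HIDDEN_REASONING_MODEL_PREFIXES.some(prefix => modelId?.startsWith(prefix));
}

// Thinking time is measured from generation start to the first visible token,
// so the UI can still show "thought for N seconds" without a reasoning block.
export function getThinkingSeconds(generationStartedAt, firstTokenAt) {
    return Math.max(0, Math.round((firstTokenAt - generationStartedAt) / 1000));
}
```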
a2aef5ea4a
Use array.includes
2025-02-07 22:04:21 +02:00
cc401b2c9d
Welp, seems like o1 main still has no streaming
2025-02-07 20:12:10 +01:00
95a31cdd98
Remove logit bias from o1 and o3
...
- They do not be supporting it anymore
2025-02-07 19:51:21 +01:00
d1ec9eb8ab
Enabled streaming for o1 and o3
...
- They do be supporting it now
2025-02-07 19:50:01 +01:00
b074f9fa89
feat: update Gemini models
...
- Add new Gemini models (2025/02/05)
2025-02-06 04:50:54 +08:00