Commit Graph

3863 Commits

Author SHA1 Message Date
Cohee
eaeafde0e4 Use Readability to extract text from HTML 2024-02-29 16:37:52 +02:00
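A minimal sketch of the approach this commit names: Mozilla's Readability library pulls the main article text out of an HTML document. JSDOM is used here to supply a DOM in Node; the exact wiring in the project is an assumption.

```js
import { Readability } from '@mozilla/readability';
import { JSDOM } from 'jsdom';

// Parse the HTML into a DOM, then let Readability strip navigation,
// ads, and boilerplate, returning only the readable article text.
function extractTextFromHtml(html, url) {
    const dom = new JSDOM(html, { url });
    const article = new Readability(dom.window.document).parse();
    return article ? article.textContent : '';
}
```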
Cohee
a2ac659056 Add step to aphro multiswipe control 2024-02-29 15:02:16 +02:00
Cohee
3d84ae026d Fix formatting 2024-02-29 11:50:41 +02:00
Cohee
8981346360 Merge pull request #1861 from berbant/staging
Deleting the current chat when creating a new one
2024-02-29 11:47:05 +02:00
Cohee
e8985c259c Merge branch 'EugeoSynthesisThirtyTwo/release' into staging 2024-02-29 11:34:38 +02:00
Cohee
184fd1622f Limit to ooba only. Exclude from presets 2024-02-29 11:33:47 +02:00
Cohee
d8956d3e17 Merge branch 'release' into staging 2024-02-29 11:24:40 +02:00
gabriel dhimoila
76669ff8bb add max_tokens_second 2024-02-29 00:55:25 +01:00
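max_tokens_second is a text-generation-webui (ooba) generation parameter that caps streaming speed, which matches the "Limit to ooba only" commit above. A hedged sketch of passing it through in a request payload; the payload shape is illustrative, not the project's exact request body.

```js
// Illustrative generation payload; 0 disables the cap in ooba.
const payload = {
    prompt: 'Once upon a time',
    max_new_tokens: 200,
    max_tokens_second: 4, // stream at most ~4 tokens per second
};
```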
berbant
a85a2bbab1 Merge branch 'SillyTavern:staging' into staging 2024-02-28 22:46:43 +04:00
Cohee
d024d7c700 Allow max value for per-entry depth 2024-02-27 23:34:07 +02:00
Cohee
6f5dbc2a52 Merge pull request #1866 from SillyTavern/staging
Staging
2024-02-27 21:11:16 +02:00
Cohee
0fcb176408 Bump package version 2024-02-27 21:10:19 +02:00
Cohee
31f39e30c2 Merge pull request #1864 from Fyphen1223/release
Update Japanese translation
2024-02-27 20:31:26 +02:00
Cohee
5a236fbccb Merge pull request #1865 from deciare/edit-message-macros
Parse macros when updating message
2024-02-27 20:25:17 +02:00
Deciare
2a4b8ac438 Update displayed prompt bias when auto-saving edits.
When Auto-save Message Edits is enabled, the prompt bias string displayed beneath the textarea wasn't being updated.
2024-02-27 05:17:38 -05:00
Deciare
7885f19e86 Perform macro substitution while updating message.
This addresses 3 issues:
1. Prompt bias string was not removed from the text of the edited message.
2. Macro substitution was not performed in the prompt bias string.
3. Macro substitution was not performed in the edited message text.
2024-02-27 05:17:31 -05:00
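A sketch of the three fixes listed above. The helper names (extractMessageBias, substituteParams) are assumptions for illustration, not the project's actual API.

```js
// Hypothetical edit handler covering the three issues in order.
function applyMessageEdit(message, editedText) {
    // 1. Remove the prompt bias string from the edited text.
    const { text, bias } = extractMessageBias(editedText); // assumed helper
    // 2. Substitute macros ({{user}}, {{char}}, ...) in the bias string.
    message.extra.bias = substituteParams(bias); // assumed helper
    // 3. Substitute macros in the edited message text itself.
    message.mes = substituteParams(text);
}
```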
Fyphen
1898192d37 Update readme-ja_jp.md 2024-02-27 18:00:56 +09:00
Cohee
29c4334c46 #1859 Set keep_alive for ollama 2024-02-26 21:09:21 +02:00
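The Ollama /api/generate endpoint accepts a keep_alive field controlling how long the model stays loaded after a request (a duration string like '5m', or -1 to keep it resident indefinitely). The value the commit actually sets is an assumption here.

```js
// Minimal Ollama request with keep_alive; model name is illustrative.
const response = await fetch('http://127.0.0.1:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
        model: 'mistral',
        prompt: 'Hello',
        stream: false,
        keep_alive: '5m', // keep the model loaded for 5 minutes after the call
    }),
});
console.log((await response.json()).response);
```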
Cohee
73886c9fff Merge pull request #1863 from kingbased/mistral
mistral-large
2024-02-26 20:10:51 +02:00
based
149a65cf62 migrate model name in old presets to new naming scheme 2024-02-27 02:23:07 +10:00
based
617ae7d02c ??? 2024-02-27 01:42:22 +10:00
based
c58d0b2b94 subvers 2024-02-27 01:12:17 +10:00
based
e86fd08d0f update mistral models 2024-02-27 01:02:02 +10:00
Cohee
f962ad5c02 Add OpenRouter as a text completion source 2024-02-25 22:47:07 +02:00
berbant
3c620effaf Update script.js 2024-02-25 21:19:28 +04:00
berbant
670f08fad2 Update group-chats.js
After deleting a group chat, the oldest chat became active. I've fixed it so that the most recent chat becomes active instead.
2024-02-25 21:11:56 +04:00
Cohee
9e5505a7d4 Autocomplete for WI automation IDs 2024-02-25 03:54:40 +02:00
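A hedged sketch of the feature: collect the automation IDs already used across World Info entries and offer them as suggestions. jQuery UI's autocomplete is assumed (SillyTavern ships jQuery); the selector and entry shape are illustrative.

```js
// Example data standing in for loaded World Info entries.
const entries = [{ automationId: 'on_greeting' }, { automationId: '' }];

// Deduplicate non-empty IDs and serve them as autocomplete suggestions.
const knownIds = [...new Set(entries.map(e => e.automationId).filter(Boolean))];
$('#automation_id').autocomplete({
    minLength: 0,
    source: (request, response) =>
        response(knownIds.filter(id =>
            id.toLowerCase().includes(request.term.toLowerCase()))),
});
```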
Cohee
fc289126fa Add event type for text completion generation request settings ready 2024-02-24 21:45:33 +02:00
Cohee
d5bf9fc28c Non-streaming logprobs for Aphrodite 2024-02-24 20:53:23 +02:00
Cohee
d140b8d5be Parse non-streaming tabby logprobs 2024-02-24 20:10:53 +02:00
Cohee
3cedf64f66 Add autocomplete for WI inclusion groups 2024-02-24 19:04:44 +02:00
Cohee
0e357c191b Align label margins 2024-02-24 18:23:58 +02:00
Cohee
3441667336 #1853 Add WI/Script link by entry automation id 2024-02-24 17:22:51 +02:00
Cohee
7b8ac8f4c4 Properly use vector insert setting 2024-02-24 15:57:26 +02:00
Cohee
16833fc238 Merge pull request #1855 from deciare/llamacpp-sampler-order
Sampler order for llama.cpp server backend
2024-02-24 15:45:44 +02:00
Cohee
8848818d67 Fix dynatemp neutralization 2024-02-24 15:32:12 +02:00
Cohee
299bd9d563 Merge branch 'staging' into llamacpp-sampler-order 2024-02-24 15:10:58 +02:00
Cohee
13aebc623a Merge pull request #1854 from deciare/llamacpp-probs
Request and display token probabilities from llama.cpp backend
2024-02-24 15:06:28 +02:00
Cohee
eaadfea639 Extend debounce duration of logprobs renderer 2024-02-24 15:03:57 +02:00
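A generic debounce helper illustrating the change: the logprobs view re-renders only after input has been quiet for `delay` milliseconds. The 250 ms figure and the render function name are illustrative, not the commit's actual values.

```js
// Defer fn until `delay` ms have passed without another call.
function debounce(fn, delay) {
    let timer = null;
    return (...args) => {
        clearTimeout(timer);
        timer = setTimeout(() => fn(...args), delay);
    };
}

const renderLogprobs = () => { /* re-draw token probabilities (stub) */ };
const renderLogprobsDebounced = debounce(renderLogprobs, 250);
```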
Cohee
9287ff18de Fix for non-streaming 2024-02-24 14:50:06 +02:00
Cohee
dab9bbb514 Merge pull request #1844 from infermaticAI/InfermaticAI
Add InfermaticAI as a text completion source
2024-02-24 14:28:09 +02:00
Deciare
445cbda02f If token probability is a logarithm it'll be < 0
No need to read settings to find out if llama.cpp backend is in use...
2024-02-24 00:13:33 -05:00
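The check the commit title describes: a log-probability is always at most 0 (the log of p in (0, 1]), so a negative value can be converted with Math.exp without consulting which backend produced it.

```js
// Negative values must be logprobs; non-negative values are already
// plain probabilities (as llama.cpp returns them).
function asProbability(value) {
    return value < 0 ? Math.exp(value) : value;
}

asProbability(-1.2); // ≈ 0.301 (was a logprob)
asProbability(0.85); // 0.85 (already a probability)
```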
Deciare
9eba076ae4 Sampler order for llama.cpp server backend 2024-02-23 23:01:04 -05:00
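The llama.cpp server exposes a `samplers` array naming the order in which samplers are applied. The list below is one possible order, and whether this commit uses exactly this field and ordering is an assumption.

```js
// Illustrative /completion request body with an explicit sampler order.
const body = {
    prompt: 'Hello',
    n_predict: 32,
    samplers: ['top_k', 'tfs_z', 'typical_p', 'top_p', 'min_p', 'temperature'],
};
```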
Deciare
936fbac6c5 Merge remote-tracking branch 'origin/staging' into llamacpp-probs 2024-02-23 17:45:54 -05:00
Cohee
737a0bd3ae Fix purge extras and mistral vectors 2024-02-23 22:37:00 +02:00
Cohee
9b34ac1bde Merge pull request #1852 from berbant/staging
Display TranslateProvider link
2024-02-23 21:43:59 +02:00
Cohee
cb536a7611 Save a list of safe to export secret keys 2024-02-23 21:41:54 +02:00
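A sketch of the idea behind this commit: only secret keys on an explicit allowlist are ever exposed for export, so everything else stays server-side. The key names here are invented examples, not the project's actual list.

```js
// Hypothetical allowlist of secrets that are safe to export.
const EXPORTABLE_KEYS = ['libretranslate_url'];

// Return only the secrets whose keys appear on the allowlist.
function filterExportableSecrets(secrets) {
    return Object.fromEntries(
        Object.entries(secrets).filter(([key]) => EXPORTABLE_KEYS.includes(key)),
    );
}
```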
Cohee
82c5042bad Prevent extra loop iterations on buffer init 2024-02-23 21:23:44 +02:00
Cohee
4baefeba68 Extend per-entry scan depth limit, add warnings on overflow 2024-02-23 21:18:40 +02:00
Deciare
344b9eedbc Request token probabilities from llama.cpp backend
llama.cpp server token probabilities are given as values ranging from 0 to 1 instead of as logarithms.
2024-02-23 14:01:46 -05:00
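A sketch of requesting token probabilities from a llama.cpp server: n_probs asks for the top-N alternatives per token. The response field names follow the /completion endpoint as of early 2024 and should be treated as an assumption for other server versions.

```js
const res = await fetch('http://127.0.0.1:8080/completion', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt: 'Hello', n_predict: 16, n_probs: 5 }),
});
const { completion_probabilities } = await res.json();
// Each candidate's `prob` is already in [0, 1]; take Math.log() of it
// if a logprob is needed, per the commit message above.
console.log(completion_probabilities);
```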