Cohee
b9392893dc
[FEATURE_REQUEST] Option to disable instruct formatting for example dialogue insertion #1881
2024-03-03 19:12:20 +02:00
Cohee
314c52fa5f
Merge pull request #1885 from parsedone/patch-1
...
Fixes [BUG] STscript /fuzzy returning wrong answer
2024-03-03 16:07:12 +02:00
Cohee
975206fd06
Clean-up /fuzzy command doc comments
2024-03-03 16:04:48 +02:00
Cohee
39c588f30e
Showdown: parse single underscores as italics
2024-03-03 15:26:29 +02:00
RossAscends
77791ae3e9
revamp creator note & spoiler hide/show
2024-03-03 18:55:16 +09:00
parsedone
88f42132c2
Update slash-commands.js [BUG] STscript /fuzzy returning wrong answer
...
Implements a fix for bug #1883 "[BUG] STscript /fuzzy returning wrong answer".
Fixes the params so that /fuzzy detects when a "candidate" item is found (using fuzzy matching) in the text passed without an argument name.
Also adds an optional "threshold" argument that changes the value used by Fuse to allow stricter or looser matching.
Also updated the parser.addCommand('fuzzy', fuzzyCallback
2024-03-03 03:43:44 +01:00
Cohee
b490978142
Refactor vector models code
2024-03-02 23:16:18 +02:00
Kristan Schlikow
adfb9c5097
Implement TogetherAI as vectorization provider
2024-03-01 23:52:49 +01:00
Cohee
95c49029f7
Add aphrodite model selector
2024-03-01 23:02:43 +02:00
Cohee
d1ca855d23
Debounce token counting in popup plugin
2024-03-01 21:42:49 +02:00
Mae Thomson
63cd8b98dd
Fix broken avatar thumbnail upon deleting last member of a group
2024-03-01 11:06:23 -05:00
Cohee
b716dfbc0d
Merge pull request #1874 from deciare/underline-text-format
...
Support underlined text formatting
2024-03-01 17:49:14 +02:00
Deciare
d554edc023
Support underlined text formatting.
...
- Enable the `underline` option for Showdown.
- Implement option for underlined text colour.
- Update stylesheet.
2024-03-01 00:35:27 -05:00
Cohee
eaeafde0e4
Use Readability to extract text from HTML
2024-02-29 16:37:52 +02:00
Cohee
3d84ae026d
Fix formatting
2024-02-29 11:50:41 +02:00
Cohee
8981346360
Merge pull request #1861 from berbant/staging
...
Deleting the current chat when creating a new one
2024-02-29 11:47:05 +02:00
Cohee
e8985c259c
Merge branch 'EugeoSynthesisThirtyTwo/release' into staging
2024-02-29 11:34:38 +02:00
Cohee
184fd1622f
Limit to ooba only. Exclude from presets
2024-02-29 11:33:47 +02:00
gabriel dhimoila
76669ff8bb
add max_tokens_second
2024-02-29 00:55:25 +01:00
berbant
a85a2bbab1
Merge branch 'SillyTavern:staging' into staging
2024-02-28 22:46:43 +04:00
Cohee
d024d7c700
Allow max value for per-entry depth
2024-02-27 23:34:07 +02:00
based
149a65cf62
migrate model name in old presets to new naming scheme
2024-02-27 02:23:07 +10:00
Cohee
f962ad5c02
Add OpenRouter as a text completion source
2024-02-25 22:47:07 +02:00
berbant
670f08fad2
Update group-chats.js
...
After deleting a group chat, the oldest chat became active. I've fixed it so that the most recent chat becomes active instead.
2024-02-25 21:11:56 +04:00
Cohee
9e5505a7d4
Autocomplete for WI automation IDs
2024-02-25 03:54:40 +02:00
Cohee
fc289126fa
Add event type for text completion generation request settings ready
2024-02-24 21:45:33 +02:00
Cohee
d5bf9fc28c
Non-streaming logprobs for Aphrodite
2024-02-24 20:53:23 +02:00
Cohee
d140b8d5be
Parse non-streaming tabby logprobs
2024-02-24 20:10:53 +02:00
Cohee
3cedf64f66
Add autocomplete for WI inclusion groups
2024-02-24 19:04:44 +02:00
Cohee
3441667336
#1853 Add WI/Script link by entry automation id
2024-02-24 17:22:51 +02:00
Cohee
7b8ac8f4c4
Properly use vector insert setting
2024-02-24 15:57:26 +02:00
Cohee
8848818d67
Fix dynatemp neutralization
2024-02-24 15:32:12 +02:00
Cohee
299bd9d563
Merge branch 'staging' into llamacpp-sampler-order
2024-02-24 15:10:58 +02:00
Cohee
13aebc623a
Merge pull request #1854 from deciare/llamacpp-probs
...
Request and display token probabilities from llama.cpp backend
2024-02-24 15:06:28 +02:00
Cohee
eaadfea639
Extend debounce duration of logprobs renderer
2024-02-24 15:03:57 +02:00
Cohee
9287ff18de
Fix for non-streaming
2024-02-24 14:50:06 +02:00
Cohee
dab9bbb514
Merge pull request #1844 from infermaticAI/InfermaticAI
...
Add InfermaticAI as a text completion source
2024-02-24 14:28:09 +02:00
Deciare
445cbda02f
If token probability is a logarithm it'll be < 0
...
No need to read settings to find out if llama.cpp backend is in use...
2024-02-24 00:13:33 -05:00
Deciare
9eba076ae4
Sampler order for llama.cpp server backend
2024-02-23 23:01:04 -05:00
Deciare
936fbac6c5
Merge remote-tracking branch 'origin/staging' into llamacpp-probs
2024-02-23 17:45:54 -05:00
Cohee
737a0bd3ae
Fix purge extras and mistral vectors
2024-02-23 22:37:00 +02:00
Cohee
9b34ac1bde
Merge pull request #1852 from berbant/staging
...
Display TranslateProvider link
2024-02-23 21:43:59 +02:00
Cohee
cb536a7611
Save a list of safe to export secret keys
2024-02-23 21:41:54 +02:00
Cohee
82c5042bad
Prevent extra loop iterations on buffer init
2024-02-23 21:23:44 +02:00
Cohee
4baefeba68
Extend per-entry scan depth limit, add warnings on overflow
2024-02-23 21:18:40 +02:00
Deciare
344b9eedbc
Request token probabilities from llama.cpp backend
...
llama.cpp server token probabilities are given as values ranging from
0 to 1 instead of as logarithms.
2024-02-23 14:01:46 -05:00
berbant
eb89337f51
Update index.js
2024-02-22 23:49:47 +04:00
Cohee
c9f0d61f19
#1851 Substitute macros in new example chat
2024-02-22 18:45:50 +02:00
NWilson
f569424f3e
Merge branch 'staging' into InfermaticAI
2024-02-22 08:32:10 -06:00
Cohee
ece3b2a7c1
Fix Chat Completions status check on settings loading if another API is selected
2024-02-22 04:36:06 +02:00