1694 Commits

Author SHA1 Message Date
LenAnderson
0519629b70 fix autocomplete help text 2023-12-14 18:54:42 +00:00
LenAnderson
dbf28fce47 cleanup autocomplete help text 2023-12-14 18:52:23 +00:00
LenAnderson
5e3584d5ed add slash command to create QR preset 2023-12-14 18:51:55 +00:00
LenAnderson
90ec6b9159 add slash commands for context menus 2023-12-14 18:25:19 +00:00
LenAnderson
3e44e4240c handle escapes for pipes and curly brackets 2023-12-14 18:09:33 +00:00
LenAnderson
5e61ff8d05 fix help string 2023-12-14 16:11:03 +00:00
LenAnderson
372ef2172e add slash commands to CRUD QRs 2023-12-14 16:00:38 +00:00
valadaptive
22e048b5af Rename generate_altscale endpoint 2023-12-13 18:53:46 -05:00
valadaptive
92bd766bcb Rename chat completions endpoints
OpenAI calls this the "Chat Completions API", in contrast to their
previous "Text Completions API", so that's what I'm naming it; both
because other services besides OpenAI implement it, and to avoid
confusion with the existing /api/openai route used for OpenAI extras.
2023-12-13 18:52:08 -05:00
Cohee
0cd92f13b4 Merge branch 'staging' into separate-kobold-endpoints 2023-12-14 01:33:36 +02:00
Cohee
cebd6e9e0f Add API token IDs from KoboldCpp 2023-12-14 01:28:18 +02:00
valadaptive
274605a07c Rename Kobold-related endpoints 2023-12-12 16:42:12 -05:00
valadaptive
5b3c96df50 Rename /textgenerationwebui endpoint
I'd like to migrate over to using "textgen" to mean text-generation APIs
in general, so I've renamed the /textgenerationwebui/* endpoints to
/backends/text-completions/*.
2023-12-12 16:40:14 -05:00
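
A minimal sketch of what a route rename like this could look like in an Express server; the router name, file layout, and the /status subroute are illustrative assumptions, not the actual SillyTavern code:

```js
const express = require('express');

// Hypothetical router for text-completion backends; the old mount point
// was /textgenerationwebui/*, the new one is /backends/text-completions/*.
const textCompletionsRouter = express.Router();

textCompletionsRouter.post('/status', (request, response) => {
    // ... query the configured backend and report whether it is reachable ...
    response.json({ ok: true });
});

const app = express();
app.use(express.json());
app.use('/backends/text-completions', textCompletionsRouter);
```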
Cohee
83f2c1a8ed #1524 Add FPS limiter to streamed rendering 2023-12-12 22:11:23 +02:00
Cohee
9160de7714 Run macros on impersonation prompt 2023-12-12 19:24:32 +02:00
Cohee
9176f46caf Add /preset command 2023-12-12 19:14:17 +02:00
Cohee
a9a05b17b9
Merge pull request #1517 from LenAnderson/firstIncludedMessageId
Add macro for first included message in context
2023-12-12 01:24:57 +02:00
Cohee
07fecacce2 Add to macro help 2023-12-12 01:24:21 +02:00
Cohee
f1ed60953a
Merge pull request #1516 from LenAnderson/slash-command-for-getTokenCount
Add /tokens slash command to call getTokenCount
2023-12-12 01:19:24 +02:00
Cohee
299749a4e7 Add prerequisites for websearch extension 2023-12-12 01:08:47 +02:00
LenAnderson
69f90a0b30 add /tokens slash command to call getTokenCount 2023-12-11 22:51:07 +00:00
Cohee
1b11ddc26a Add vector storage to WI scanning 2023-12-11 22:47:26 +02:00
Cohee
e713021737
Merge pull request #1511 from valadaptive/more-kobold-cleanups
More Kobold cleanups
2023-12-11 20:59:49 +02:00
Cohee
27782b2f83 Fix united version comparison 2023-12-11 20:44:29 +02:00
Cohee
7482a75bbd
Merge pull request #1493 from valadaptive/generate-cleanups
Clean up Generate(), part 1
2023-12-11 20:21:32 +02:00
Cohee
d38a4dc6c1 Fix abort group generation 2023-12-11 20:03:31 +02:00
Cohee
0302686a96 Return from Generate if calling circuit breaker 2023-12-11 19:07:33 +02:00
Cohee
e96fb0c1b5 Fix group wrapper not resolving to a valid text 2023-12-11 19:00:42 +02:00
Cohee
0fcf8fd491 Typing indicator fixed 2023-12-11 18:23:00 +02:00
Cohee
17105568f4 Reduce hard-coded animation durations 2023-12-11 16:23:21 +02:00
Cohee
e7c2975ab0 Fix advanced definitions overlap with past chats. Close CFG with Escape 2023-12-11 15:39:58 +02:00
Cohee
c6bd3ef255 Fix /sys continue in groups 2023-12-11 15:08:20 +02:00
valadaptive
42d4ffe5e8 Remove Kobold "canUse(...)" functions
Replace them all with a versionCompare helper function which we can call
directly with the minimum version constants.
2023-12-10 20:39:21 -05:00
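
A minimal sketch of a versionCompare helper of this kind; the exact name, signature, and version constant are assumptions:

```js
// Hypothetical minimum-version constant for a feature check.
const MIN_UNITED_VERSION = '1.2.4';

/**
 * Returns true if `version` is greater than or equal to `minimum`,
 * comparing dotted numeric versions segment by segment.
 */
function versionCompare(version, minimum) {
    const a = String(version).split('.').map(Number);
    const b = String(minimum).split('.').map(Number);
    for (let i = 0; i < Math.max(a.length, b.length); i++) {
        const x = a[i] || 0;
        const y = b[i] || 0;
        if (x !== y) return x > y;
    }
    return true;
}

// Instead of a dedicated canUseFoo() helper per feature:
// if (versionCompare(koboldVersion, MIN_UNITED_VERSION)) { ... }
```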
valadaptive
d33cb0d8d1 Clarify getstatus API
Instead of "version" and "koboldVersion", have "koboldUnitedVersion" and
"koboldCppVersion", the latter of which is null if we're not connected
to KoboldCpp.
2023-12-10 20:34:11 -05:00
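
Illustrative shape of the clarified getstatus response; only the two field names come from the commit message, the values are made up:

```js
// Connected to KoboldCpp:
const statusWithKoboldCpp = {
    koboldUnitedVersion: '1.2.4',
    koboldCppVersion: '1.52',
};

// Connected to a plain KoboldAI United instance:
const statusWithoutKoboldCpp = {
    koboldUnitedVersion: '1.2.4',
    koboldCppVersion: null,
};
```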
Cohee
7450112e9b Unbust user continue in group chats 2023-12-11 00:02:49 +02:00
Cohee
05b08f1ce2 Don't await delay promise 2023-12-10 21:51:16 +02:00
Cohee
2e50efc35c Limit waiting for TTS to init to 1 second on chat change 2023-12-10 21:50:52 +02:00
Cohee
420d186823 Add reduced motion toggle 2023-12-10 20:02:25 +02:00
valadaptive
33f969f097 Have Generate() return a promise
Generate(), being async, now returns a promise-within-a-promise.
If called with `let p = await Generate(...)`, it'll wait for generation
to *start*. If you then `await p`, you'll wait for generation to
*finish*. This makes it much easier to tell exactly when generation's
done. generateGroupWrapper has been similarly modified.
2023-12-10 12:30:10 -05:00
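
In usage terms, the commit message describes a pattern like this; Generate() lives in script.js, and the argument here is a placeholder:

```js
async function sendAndWait() {
    // Resolves once generation has *started* and yields an inner promise.
    const finished = await Generate('normal');

    // Resolves once generation has *finished*.
    await finished;
}
```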
Cohee
13e016f3e5
Merge pull request #1508 from LenAnderson/tts-skip-codeblocks-option
add TTS option to skip codeblock narration
2023-12-10 19:28:16 +02:00
valadaptive
03884b29ad Always call resolve in Generate()
This lets us get rid of the janky hack in group-chats to tell when a
message is done generating.
2023-12-10 12:26:30 -05:00
valadaptive
f5d2e50f5e Remove isGenerationAborted
Just check the AbortSignal.
2023-12-10 12:24:18 -05:00
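
A minimal sketch of the AbortSignal-based check that replaces the removed flag; the controller and function names are assumptions:

```js
// One controller per in-flight generation; aborting it cancels the run.
let abortController = new AbortController();

function stopGeneration() {
    abortController.abort();
}

function onStreamChunk(chunk) {
    // Instead of consulting a separate isGenerationAborted flag,
    // just check the signal on the controller.
    if (abortController.signal.aborted) {
        return;
    }
    // ... render the chunk ...
}
```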
Cohee
dbd52a7994
Merge pull request #1482 from valadaptive/sse-stream
Refactor server-sent events parsing
2023-12-10 18:32:19 +02:00
LenAnderson
bf88829b03 add option to skip codeblock narration 2023-12-10 16:32:10 +00:00
Cohee
af89cfa870 Code clean-up 2023-12-10 16:48:25 +02:00
Cohee
5054de247b Merge branch 'staging' into qr-editor-tab-support 2023-12-10 16:36:28 +02:00
Cohee
9acef0fae6 Horde doesn't support API tokenizers 2023-12-10 16:21:06 +02:00
Cohee
f54bf99006 Fix token IDs not displaying in "API_CURRENT" mode for TextGen 2023-12-10 16:09:00 +02:00
Cohee
6957d9e7cf Fix display names of Best match tokenizers 2023-12-10 16:03:25 +02:00
Cohee
6e5eea5dba Unbreak previously selected API tokenizer in dropdown 2023-12-10 15:56:38 +02:00