Cohee
68bb616be3
Merge branch 'staging' into cleanup-sampler-order
2023-12-14 18:32:30 +02:00
Cohee
009fb99d95
Merge pull request #1521 from valadaptive/separate-altscale-endpoints
Move generate_altscale into its own module
2023-12-14 17:24:29 +02:00
Cohee
38a34bf1d5
Fix silly argument naming
2023-12-14 17:14:05 +02:00
Cohee
00687a9379
Merge branch 'staging' into separate-altscale-endpoints
2023-12-14 17:12:19 +02:00
Cohee
b74bf272fb
Merge pull request #1520 from valadaptive/separate-openai-endpoints
Separate chat completions API (OpenAI) endpoints
2023-12-14 17:08:23 +02:00
Cohee
b524870544
Fix AI21 icon styles
2023-12-14 16:56:39 +02:00
Cohee
40e15f5762
Fix conditional access to Palm response body
2023-12-14 16:18:10 +02:00
Cohee
2a5340232d
Move prompt converters to a separate module. Camelcase local variables and add missing JSDocs.
2023-12-14 16:00:17 +02:00
Cohee
348253fd98
Fix import path
2023-12-14 15:36:44 +02:00
Cohee
907dc610ab
Merge branch 'staging' into separate-openai-endpoints
2023-12-14 15:17:03 +02:00
Cohee
bc0c064948
Merge pull request #1529 from bdashore3/staging
2023-12-14 13:57:38 +02:00
valadaptive
0f25d51a53
Send Kobold sampler order as an array
2023-12-14 02:03:36 -05:00
kingbri
3d8160cf25
Server: Update CORS proxy body limit
The body-parser middleware only accepted 50mb of data; bump this
value to 200mb.
Signed-off-by: kingbri <bdashore3@proton.me>
2023-12-13 21:39:07 -05:00
Cohee
875760eadf
Merge pull request #1519 from valadaptive/separate-kobold-endpoints
Move Kobold endpoints into their own module
2023-12-14 02:15:41 +02:00
valadaptive
b55ea8df04
Move alt Scale generation to its own module
2023-12-13 18:54:12 -05:00
valadaptive
22e048b5af
Rename generate_altscale endpoint
2023-12-13 18:53:46 -05:00
valadaptive
dba66e756a
Move chat completions API endpoints to module
2023-12-13 18:53:22 -05:00
valadaptive
92bd766bcb
Rename chat completions endpoints
OpenAI calls this the "Chat Completions API", in contrast to their
previous "Text Completions API", so that's what I'm naming it; both
because other services besides OpenAI implement it, and to avoid
confusion with the existing /api/openai route used for OpenAI extras.
2023-12-13 18:52:08 -05:00
Cohee
796659f68c
Add proper fetch import
2023-12-14 01:39:34 +02:00
Cohee
c8bc9cf24c
Fix route name
2023-12-14 01:37:51 +02:00
Cohee
0cd92f13b4
Merge branch 'staging' into separate-kobold-endpoints
2023-12-14 01:33:36 +02:00
Cohee
cebd6e9e0f
Add API token ids from KoboldCpp
2023-12-14 01:28:18 +02:00
Cohee
b957e3b875
Merge pull request #1518 from valadaptive/separate-ooba-endpoints
Move Ooba/textgenerationwebui endpoints into their own module
2023-12-14 01:27:05 +02:00
Cohee
0d0dd5e170
Revert old comment
2023-12-13 02:50:50 +02:00
Cohee
52de5869fe
Rename file, add missing fetch
2023-12-13 02:22:35 +02:00
Cohee
51d50f97cc
Merge pull request #1525 from valadaptive/cache-stopping-strings
Cache stopping strings rather than skipping them during streaming
2023-12-13 01:06:44 +02:00
valadaptive
2c159ff93f
Move Kobold API endpoints to their own module
2023-12-12 16:42:12 -05:00
valadaptive
274605a07c
Rename Kobold-related endpoints
2023-12-12 16:42:12 -05:00
valadaptive
35c2f8bf66
Move text completions API endpoints to own module
2023-12-12 16:41:16 -05:00
valadaptive
5b3c96df50
Rename /textgenerationwebui endpoint
I'd like to migrate over to using "textgen" to mean text-generation APIs
in general, so I've renamed the /textgenerationwebui/* endpoints to
/backends/text-completions/*.
2023-12-12 16:40:14 -05:00
valadaptive
7732865e4c
Another explanatory comment
2023-12-12 16:36:47 -05:00
valadaptive
87cbe361fc
Cache stopping strings rather than skipping them
2023-12-12 16:32:54 -05:00
Cohee
3d7706e6b3
#1524 Skip stop strings clean-up during streaming
2023-12-12 23:09:39 +02:00
Cohee
83f2c1a8ed
#1524 Add FPS limiter to streamed rendering
2023-12-12 22:11:23 +02:00
Cohee
9160de7714
Run macros on impersonation prompt
2023-12-12 19:24:32 +02:00
Cohee
9176f46caf
Add /preset command
2023-12-12 19:14:17 +02:00
Cohee
2ca9015a5f
Add filters to serpapi/visit
2023-12-12 03:56:36 +02:00
Cohee
a9a05b17b9
Merge pull request #1517 from LenAnderson/firstIncludedMessageId
Add macro for first included message in context
2023-12-12 01:24:57 +02:00
Cohee
07fecacce2
Add to macro help
2023-12-12 01:24:21 +02:00
Cohee
f1ed60953a
Merge pull request #1516 from LenAnderson/slash-command-for-getTokenCount
Add /tokens slash command to call getTokenCount
2023-12-12 01:19:24 +02:00
Cohee
299749a4e7
Add prerequisites for websearch extension
2023-12-12 01:08:47 +02:00
LenAnderson
2bdd3672d4
add macro for first included message in context
2023-12-11 23:06:21 +00:00
LenAnderson
69f90a0b30
add /tokens slash command to call getTokenCount
2023-12-11 22:51:07 +00:00
Cohee
1b11ddc26a
Add vector storage to WI scanning
2023-12-11 22:47:26 +02:00
Cohee
afe3e824b1
Unblock left swipe on swipeId overflow.
2023-12-11 21:16:09 +02:00
Cohee
e713021737
Merge pull request #1511 from valadaptive/more-kobold-cleanups
More Kobold cleanups
2023-12-11 20:59:49 +02:00
Cohee
05ab147209
Fix swipes getting stuck when no Horde models selected
2023-12-11 20:46:34 +02:00
Cohee
27782b2f83
Fix united version comparison
2023-12-11 20:44:29 +02:00
valadaptive
ce8cc59e4d
Remove fetchJSON
2023-12-11 13:32:38 -05:00
Cohee
7482a75bbd
Merge pull request #1493 from valadaptive/generate-cleanups
Clean up Generate(), part 1
2023-12-11 20:21:32 +02:00