c69724e1da
Fix GUI Kobold
2024-01-02 10:28:34 +02:00
52637ccd39
Merge pull request #1619 from LenAnderson/worldinfo_updated-event
...
Add event when world info is updated
2024-01-01 18:35:23 +02:00
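A minimal sketch of how an extension might consume the new event, assuming SillyTavern's usual eventSource/event_types pattern; the exact event key (WORLDINFO_UPDATED, taken from the branch name), the import path, and the payload shape are assumptions, not confirmed API.

    // Hypothetical listener; event key and payload are assumptions.
    import { eventSource, event_types } from '../../../script.js';

    eventSource.on(event_types.WORLDINFO_UPDATED, (name, data) => {
        // e.g. refresh an extension's cached copy of the edited lorebook
        console.log(`World info "${name}" was updated`, data);
    });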
f53d937782
Fix Mistral undefined name
2024-01-01 18:31:17 +02:00
9106696f2f
Render prompt manager when switching APIs
2024-01-01 17:06:10 +02:00
908bf7a61d
Merge branch 'staging' into generate-cleanups-3
2024-01-01 16:49:35 +02:00
8cd75cf03d
Add event when world info is updated
2024-01-01 14:34:09 +00:00
30732ada32
Lint fix
2024-01-01 16:08:24 +02:00
ee70593a7e
Add world info to generate_before_combine_prompts event data
2023-12-28 17:03:36 +01:00
8dd4543e93
Remove macro from user messages when using bias
2023-12-28 11:19:56 +02:00
77b02a8d4b
Extract data.error check
2023-12-26 12:41:35 -05:00
0f8a16325b
Extract dryRun early return from finishGenerating
...
This means we only have to handle it in one place rather than two.
2023-12-25 03:48:49 -05:00
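A loose sketch of the refactoring described above, with simplified, hypothetical names (buildPrompt, a reduced finishGenerating signature): the dryRun case is handled once, before finishGenerating() is ever reached, instead of in two separate places.

    // Illustrative shape only; not the real script.js code.
    async function generate({ dryRun = false } = {}) {
        const prompt = await buildPrompt();   // hypothetical helper
        if (dryRun) {
            return prompt;                    // early return: dryRun handled in one place
        }
        return finishGenerating(prompt);      // no dryRun special-casing needed here
    }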
3c0207f6cb
Move "continue on send" logic out of Generate()
2023-12-25 03:48:49 -05:00
7899549754
Make "send message from chat box" into a function
...
Right now all it does is return early if a message is already being
generated, but I'll extend it with more logic that I want to move out
of Generate().
2023-12-25 03:48:49 -05:00
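Roughly what the extracted function looks like under that description; the names below (sendTextareaMessage, is_send_press) are assumptions rather than the exact identifiers in script.js.

    // Sketch of the extracted wrapper around Generate().
    function sendTextareaMessage() {
        if (is_send_press) {
            return;        // a message is already being generated, so ignore the send
        }
        is_send_press = true;
        Generate();        // later commits move more pre-send logic into this function
    }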
1029ad90a2
Extract "not in a chat" check into guard clause
...
This lets us remove a layer of indentation, and reveal the error
handling logic that was previously hidden below a really long block of
code.
2023-12-25 03:48:49 -05:00
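The general shape of the guard clause, with assumed variable and helper names (selected_group, this_chid, unblockGeneration) and a hypothetical message text:

    // Before (simplified): the whole generation body was nested inside the check,
    // with the error handling sitting far below it in the else branch.
    //
    // After: the error path is handled up front and the main body loses one indent level.
    if (!selected_group && this_chid === undefined) {
        toastr.warning('No chat selected');   // hypothetical message text
        unblockGeneration();                  // assumed cleanup helper
        return;
    }
    // ...generation logic continues at the top indentation level...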
4fc2f15448
Reformat Generate() group logic
...
The first two conditions in the group if/else blocks are the same, so we
can combine them.
2023-12-25 03:48:49 -05:00
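In the abstract, the change merges two branches of the group if/else that began with the same check; the names below are placeholders, not the real code:

    // Placeholder illustration: two branches that tested the same group condition
    // collapse into one, so the condition is evaluated (and read) only once.
    if (isGroup && shouldRegenerate) {
        handleGroupRegeneration();   // combined body of the two former branches
    } else {
        handleNormalFlow();
    }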
0d3505c44b
Remove OAI_BEFORE_CHATCOMPLETION
...
Not used in any internal code or extensions I can find.
2023-12-25 03:48:49 -05:00
f53e051cbf
Lift precondition check out of processCommands
...
Instead of passing type and dryRun into processCommands, do the check in
Generate, the only function that calls it. This makes the logic clearer.
2023-12-25 03:48:49 -05:00
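A sketch of what lifting the precondition looks like; the specific type values and the reduced processCommands signature are assumptions for illustration only.

    // Generate() decides whether slash-command processing applies, so
    // processCommands() no longer needs the type/dryRun parameters.
    const interactive = !dryRun && !['regenerate', 'swipe', 'quiet'].includes(type);
    if (interactive) {
        const handled = await processCommands(message);
        if (handled) {
            return;   // a slash command consumed the input, stop here
        }
    }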
a9e074dae1
Don't recreate first message if generation was run at least once
2023-12-24 02:47:00 +02:00
db3bf42d63
Fix Firefox number arrows not updating the slider
2023-12-23 16:09:03 +02:00
09fd772a20
#1579 Add ooba character YAML import
2023-12-21 21:46:09 +02:00
4621834c87
Short formatting path for empty messages
2023-12-21 20:50:30 +02:00
a85a6cf606
Allow displaying unreferenced macro in message texts
2023-12-21 20:49:03 +02:00
39e0b0f5cb
Remove custom Handlebars helpers for extensions.
2023-12-21 20:33:50 +02:00
8fb26284e2
Clean up Generate(), part 2 ( #1578 )
...
* Move StreamingProcessor constructor to the top
Typical code style is to declare the constructor at the top of the class
definition.
* Remove removePrefix
cleanupMessage does this already.
* Make message_already_generated local
We can pass it into StreamingProcessor so it doesn't have to be a global
variable.
* Consolidate setting isStopped and abort signal
Various places were doing some combination of setting isStopped, calling
abort on the streaming processor's abort controller, and calling
onStopStreaming. Let's consolidate all that functionality into
onStopStreaming/onErrorStreaming.
* More cleanly separate streaming/nonstreaming paths
* Replace promise with async function w/ handlers
By using onSuccess and onError as promise handlers, we can use normal
control flow and don't need to remember to use try/catch blocks or call
onSuccess every time.
* Remove runGenerate
Placing the rest of the code in a separate function doesn't really do
anything for its structure.
* Move StreamingProcessor() into streaming code path
* Fix return from circuit breaker
* Fix non-streaming chat completion request
* Fix Horde generation and quiet unblocking
---------
Co-authored-by: Cohee <18619528+Cohee1207@users.noreply.github.com>
2023-12-21 20:20:28 +02:00
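For the "Replace promise with async function w/ handlers" item in the list above, the before/after shape is roughly the following; the function names are illustrative, not the real ones from script.js.

    // Before: success/error handling hung off a promise chain.
    //   runGenerate().then(onSuccess).catch(onError);
    //
    // After: an async function with ordinary control flow drives the same handlers.
    async function generateAndHandle(params) {
        try {
            const data = await sendGenerationRequest(params);   // hypothetical helper
            return await onSuccess(data);
        } catch (err) {
            return onError(err);
        }
    }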
b3dfe16706
#1575 Fix clean-up of WI depth injections
2023-12-21 16:33:21 +02:00
ee75adbd2d
Update persona name if it is bound to the user name input
2023-12-21 14:56:32 +02:00
cf8d7e7d35
Merge branch 'staging' into custom
2023-12-20 18:37:47 +02:00
ebec26154c
Welcome message fixed
2023-12-20 18:37:34 +02:00
5734dbd17c
Add custom endpoint type
2023-12-20 18:29:03 +02:00
041b9d4b01
Add style sanitizer to message renderer
2023-12-20 17:03:37 +02:00
b0a4341571
Merge pull request #1574 from artisticMink/feature/before-combine-event
...
Allow extensions to alter the context order.
2023-12-20 15:46:34 +02:00
f30f75b310
Add GENERATE_BEFORE_COMBINE_PROMPTS event
...
Allows for context to be ordered by extensions
2023-12-19 19:11:36 +01:00
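A minimal sketch of an extension hooking the new event, assuming the usual eventSource/event_types pattern; the payload is only described here as the prompt pieces before combination, since its exact shape isn't given in the commit.

    // Hypothetical listener; inspect (or reorder) the context before it is combined.
    import { eventSource, event_types } from '../../../script.js';

    eventSource.on(event_types.GENERATE_BEFORE_COMBINE_PROMPTS, (data) => {
        console.log('Prompt pieces before combination:', data);
    });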
67dd52c21b
#1309 Ollama text completion backend
2023-12-19 16:38:11 +02:00
edd737e8bd
#371 Add llama.cpp inference server support
2023-12-18 22:38:28 +02:00
b0d9f14534
Re-add Together as a text completion source
2023-12-17 23:38:03 +02:00
180061337e
Merge branch 'staging' into anachronous/release
2023-12-17 21:35:49 +02:00
fb25a90532
Add GENERATION_STARTED event
2023-12-17 17:45:23 +00:00
1e88c8922a
Merge branch 'staging' into release
2023-12-17 10:38:04 +01:00
9f2d32524c
Add TogetherAI as a chat completion source (basic)
2023-12-16 14:39:30 +01:00
583f786d74
Finish Mistral frontend integration + API key status check
2023-12-16 07:15:57 +10:00
ef17702f6a
Merge branch 'staging' into bg-load-improvements
2023-12-15 17:02:10 +02:00
6c16b94f9d
Merge pull request #1540 from valadaptive/refactor-device-check
...
Refactor mobile device check
2023-12-15 17:01:32 +02:00
0ee19d2ede
Set background client-side
2023-12-15 05:45:21 -05:00
7897206cf8
Add a pre-loading screen cover
...
This matches the loader color and exists to prevent a flash of unstyled
content when the page first loads and JS has not yet run.
2023-12-15 05:34:33 -05:00
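The mechanism, expressed as a small script-side sketch (the real change is likely plain markup plus CSS): a cover element painted in the loader colour sits in the initial HTML and is removed once the scripts have run, so unstyled content never flashes.

    // Hypothetical element id; the cover would live in index.html ahead of the styled UI.
    document.addEventListener('DOMContentLoaded', () => {
        document.getElementById('preloader-cover')?.remove();
    });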
fbdfa05f81
Replace usage of getDeviceInfo with isMobile
...
We were using getDeviceInfo to check whether we were on a desktop or a
mobile device. This can be done more simply with isMobile, which means
we can stop exporting getDeviceInfo.
2023-12-14 18:37:54 -05:00
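The shape of the replacement, with the old call reconstructed from the commit description; the exact fields of getDeviceInfo's return value and the call site are assumptions.

    // Before:
    //   const deviceInfo = getDeviceInfo();
    //   if (deviceInfo && deviceInfo.device.type === 'desktop') { enableDesktopOnlyFeature(); }
    //
    // After: one exported predicate answers the only question callers actually had.
    if (!isMobile()) {
        enableDesktopOnlyFeature();   // hypothetical call site
    }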
769cc0a78f
Rename settings API endpoints
2023-12-14 16:47:03 -05:00
cde9903fcb
Fix Bison models
2023-12-14 22:18:34 +02:00
ca87f29771
Add streaming for Google models
2023-12-14 21:03:41 +10:00
be396991de
Finish implementing UI changes for Google models
2023-12-14 11:53:26 +10:00
69e24c9686
Change PaLM naming in UI
2023-12-14 11:14:41 +10:00