valadaptive
3c0207f6cb
Move "continue on send" logic out of Generate()
2023-12-25 03:48:49 -05:00
valadaptive
7899549754
Make "send message from chat box" into a function
...
Right now, all it does is return early if a message is already being
generated, but I'll extend it with more logic that I want to move out
of Generate().
2023-12-25 03:48:49 -05:00
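The early return described above can be sketched as follows. This is an illustration of the pattern only: `sendTextareaMessage` and the `is_send_press` flag are modeled on the commit description, not confirmed upstream signatures.

```javascript
// Assumed names: sendTextareaMessage and is_send_press are stand-ins
// for the real function and global flag described in the commit.
let is_send_press = false; // true while a generation is in flight

function sendTextareaMessage(generateFn) {
    // Bail out early if a message is already being generated.
    if (is_send_press) return false;
    is_send_press = true;
    try {
        generateFn();
    } finally {
        is_send_press = false;
    }
    return true;
}
```

Keeping this check in one small function means later logic moved out of Generate() has a single obvious home.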
valadaptive
1029ad90a2
Extract "not in a chat" check into guard clause
...
This lets us remove a layer of indentation and reveals the
error-handling logic that was previously hidden below a really long
block of code.
2023-12-25 03:48:49 -05:00
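The guard-clause extraction above follows a standard shape; a minimal sketch, with `generate`, the state object, and the error callback all illustrative rather than the project's actual API:

```javascript
// Guard clause: handle the failure case first and return early,
// so the main logic below needs one less level of indentation.
function generate(state, onError) {
    if (!state.inChat) {
        onError('You are not in a chat.');
        return null;
    }
    // ...the long main generation logic follows, un-indented...
    return `generated for ${state.chatId}`;
}
```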
valadaptive
4fc2f15448
Reformat Generate() group logic
...
The first two conditions in the group if/else blocks are the same, so we
can combine them.
2023-12-25 03:48:49 -05:00
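Hoisting a condition shared by both branches, as described above, looks roughly like this; the names (`activate`, `isGroup`, `ready`) are hypothetical stand-ins for the real group logic:

```javascript
// Before: both the group and non-group branches began with the same
// `if (!ready) return [];` check. After hoisting, it appears once.
function activate(isGroup, ready) {
    if (!ready) return [];
    return isGroup ? ['member1', 'member2'] : ['solo'];
}
```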
valadaptive
0d3505c44b
Remove OAI_BEFORE_CHATCOMPLETION
...
Not used in any internal code or extensions I can find.
2023-12-25 03:48:49 -05:00
valadaptive
f53e051cbf
Lift precondition check out of processCommands
...
Instead of passing type and dryRun into processCommands, do the check in
Generate, the only function that calls it. This makes the logic clearer.
2023-12-25 03:48:49 -05:00
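Lifting a precondition into the sole caller, as this commit describes, can be sketched like so. The flag semantics here are modeled after the description; the exact upstream `type`/`dryRun` values are assumptions:

```javascript
// The helper now does one job: run slash commands found in the message.
function processCommands(message) {
    return message.startsWith('/') ? { handled: true } : { handled: false };
}

function generate(type, dryRun, message) {
    // The check that used to live inside processCommands is done here,
    // in the single call site, keeping the helper's contract simple.
    const interruptable = !dryRun && (type === undefined || type === 'normal');
    if (interruptable && processCommands(message).handled) {
        return 'interrupted';
    }
    return 'generated';
}
```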
Cohee
a9e074dae1
Don't recreate first message if generation was run at least once
2023-12-24 02:47:00 +02:00
Cohee
db3bf42d63
Fix Firefox number arrows not updating the slider
2023-12-23 16:09:03 +02:00
Cohee
09fd772a20
#1579 Add ooba character yaml import
2023-12-21 21:46:09 +02:00
Cohee
4621834c87
Short formatting path for empty messages
2023-12-21 20:50:30 +02:00
Cohee
a85a6cf606
Allow displaying unreferenced macro in message texts
2023-12-21 20:49:03 +02:00
Cohee
39e0b0f5cb
Remove custom Handlebars helpers for extensions.
2023-12-21 20:33:50 +02:00
valadaptive
8fb26284e2
Clean up Generate(), part 2 ( #1578 )
...
* Move StreamingProcessor constructor to the top
Typical code style is to declare the constructor at the top of the class
definition.
* Remove removePrefix
cleanupMessage does this already.
* Make message_already_generated local
We can pass it into StreamingProcessor so it doesn't have to be a global
variable.
* Consolidate setting isStopped and abort signal
Various places were doing some combination of setting isStopped, calling
abort on the streaming processor's abort controller, and calling
onStopStreaming. Let's consolidate all that functionality into
onStopStreaming/onErrorStreaming.
* More cleanly separate streaming/nonstreaming paths
* Replace promise with async function w/ handlers
By using onSuccess and onError as promise handlers, we can use normal
control flow and don't need to remember to use try/catch blocks or call
onSuccess every time.
* Remove runGenerate
Placing the rest of the code in a separate function doesn't really do
anything for its structure.
* Move StreamingProcessor() into streaming code path
* Fix return from circuit breaker
* Fix non-streaming chat completion request
* Fix Horde generation and quiet unblocking
---------
Co-authored-by: Cohee <18619528+Cohee1207@users.noreply.github.com>
2023-12-21 20:20:28 +02:00
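The "replace promise with async function w/ handlers" step in the PR above can be sketched as follows. Only the `onSuccess`/`onError` handler names come from the PR text; `generateRaw` and `runGeneration` are illustrative stand-ins:

```javascript
// Before (roughly): new Promise((resolve, reject) => { ... }) where
// onSuccess had to be remembered on every code path. After: normal
// async/await control flow, with the handlers applied once at the end.
async function generateRaw(shouldFail) {
    if (shouldFail) throw new Error('backend error');
    return 'some text';
}

async function runGeneration(shouldFail) {
    const onSuccess = (text) => ({ ok: true, text });
    const onError = (err) => ({ ok: false, error: err.message });
    try {
        const data = await generateRaw(shouldFail);
        return onSuccess(data);
    } catch (err) {
        return onError(err);
    }
}
```

With try/catch wrapping the awaited call, no path can forget to invoke a handler.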
Cohee
b3dfe16706
#1575 Fix clean-up of WI depth injections
2023-12-21 16:33:21 +02:00
Cohee
ee75adbd2d
Update persona name if it is bound by user name input
2023-12-21 14:56:32 +02:00
Cohee
cf8d7e7d35
Merge branch 'staging' into custom
2023-12-20 18:37:47 +02:00
Cohee
ebec26154c
Welcome message fixed
2023-12-20 18:37:34 +02:00
Cohee
5734dbd17c
Add custom endpoint type
2023-12-20 18:29:03 +02:00
Cohee
041b9d4b01
Add style sanitizer to message renderer
2023-12-20 17:03:37 +02:00
Cohee
b0a4341571
Merge pull request #1574 from artisticMink/feature/before-combine-event
...
Allow extensions to alter the context order.
2023-12-20 15:46:34 +02:00
maver
f30f75b310
Add GENERATE_BEFORE_COMBINE_PROMPTS event
...
Allows context to be ordered by extensions
2023-12-19 19:11:36 +01:00
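An extension hooking the event above might look like this. Only the event name `GENERATE_BEFORE_COMBINE_PROMPTS` comes from the commit; the tiny emitter and the context shape are assumptions for the sake of a runnable sketch:

```javascript
// Minimal stand-in event emitter; the real project has its own eventSource.
const GENERATE_BEFORE_COMBINE_PROMPTS = 'generate_before_combine_prompts';
const listeners = {};
const eventSource = {
    on: (ev, fn) => (listeners[ev] = listeners[ev] || []).push(fn),
    emit: (ev, data) => (listeners[ev] || []).forEach((fn) => fn(data)),
};

// A hypothetical extension moves world info ahead of the description.
eventSource.on(GENERATE_BEFORE_COMBINE_PROMPTS, (ctx) => {
    ctx.order.sort((a, b) => (a === 'worldInfo' ? -1 : b === 'worldInfo' ? 1 : 0));
});

function combinePrompts(parts, order) {
    const ctx = { order };
    eventSource.emit(GENERATE_BEFORE_COMBINE_PROMPTS, ctx);
    return ctx.order.map((key) => parts[key]).join('\n');
}
```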
Cohee
67dd52c21b
#1309 Ollama text completion backend
2023-12-19 16:38:11 +02:00
Cohee
edd737e8bd
#371 Add llama.cpp inference server support
2023-12-18 22:38:28 +02:00
Cohee
b0d9f14534
Re-add Together as a text completion source
2023-12-17 23:38:03 +02:00
Cohee
180061337e
Merge branch 'staging' into anachronous/release
2023-12-17 21:35:49 +02:00
LenAnderson
fb25a90532
Add GENERATION_STARTED event
2023-12-17 17:45:23 +00:00
anachronos
1e88c8922a
Merge branch 'staging' into release
2023-12-17 10:38:04 +01:00
Fayiron
9f2d32524c
Add TogetherAI as a chat completion source (basic)
2023-12-16 14:39:30 +01:00
based
583f786d74
Finish Mistral frontend integration + API key status check
2023-12-16 07:15:57 +10:00
Cohee
ef17702f6a
Merge branch 'staging' into bg-load-improvements
2023-12-15 17:02:10 +02:00
Cohee
6c16b94f9d
Merge pull request #1540 from valadaptive/refactor-device-check
...
Refactor mobile device check
2023-12-15 17:01:32 +02:00
valadaptive
0ee19d2ede
Set background client-side
2023-12-15 05:45:21 -05:00
valadaptive
7897206cf8
Add a pre-loading screen cover
...
This matches the loader color and exists to prevent a flash of unstyled
content when the page first loads and JS has not yet run.
2023-12-15 05:34:33 -05:00
valadaptive
fbdfa05f81
Replace usage of getDeviceInfo with isMobile
...
We were using getDeviceInfo to check whether we were on a desktop or a
mobile device. This can be done more simply with isMobile, which means
we can stop exporting getDeviceInfo.
2023-12-14 18:37:54 -05:00
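The simplification described above, replacing a general `getDeviceInfo` with a single boolean check, could look like this. The user-agent markers are illustrative, not the project's actual detection logic:

```javascript
// Instead of exporting getDeviceInfo() and inspecting it at every call
// site, export one boolean predicate for the only question callers ask.
function isMobile(userAgent) {
    const mobileMarkers = ['Android', 'iPhone', 'iPad', 'Mobile'];
    return mobileMarkers.some((marker) => userAgent.includes(marker));
}
```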
valadaptive
769cc0a78f
Rename settings API endpoints
2023-12-14 16:47:03 -05:00
Cohee
cde9903fcb
Fix Bison models
2023-12-14 22:18:34 +02:00
based
ca87f29771
Added streaming for Google models
2023-12-14 21:03:41 +10:00
based
be396991de
Finish implementing UI changes for Google models
2023-12-14 11:53:26 +10:00
based
69e24c9686
Change PaLM naming in UI
2023-12-14 11:14:41 +10:00
Cohee
0cd92f13b4
Merge branch 'staging' into separate-kobold-endpoints
2023-12-14 01:33:36 +02:00
Cohee
b957e3b875
Merge pull request #1518 from valadaptive/separate-ooba-endpoints
...
Move Ooba/textgenerationwebui endpoints into their own module
2023-12-14 01:27:05 +02:00
valadaptive
274605a07c
Rename Kobold-related endpoints
2023-12-12 16:42:12 -05:00
valadaptive
5b3c96df50
Rename /textgenerationwebui endpoint
...
I'd like to migrate over to using "textgen" to mean text-generation APIs
in general, so I've renamed the /textgenerationwebui/* endpoints to
/backends/text-completions/*.
2023-12-12 16:40:14 -05:00
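A rename like the one above can be paired with a path-mapping shim so old clients keep working; the shim itself is an assumption, not something the commit says exists, and the function name is hypothetical:

```javascript
// Forward old /textgenerationwebui/* paths to /backends/text-completions/*.
function migrateTextgenPath(path) {
    const oldPrefix = '/textgenerationwebui/';
    const newPrefix = '/backends/text-completions/';
    return path.startsWith(oldPrefix)
        ? newPrefix + path.slice(oldPrefix.length)
        : path;
}
```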
valadaptive
7732865e4c
Another explanatory comment
2023-12-12 16:36:47 -05:00
valadaptive
87cbe361fc
Cache stopping strings rather than skipping them
2023-12-12 16:32:54 -05:00
Cohee
3d7706e6b3
#1524 Skip stop strings clean-up during streaming
2023-12-12 23:09:39 +02:00
Cohee
83f2c1a8ed
#1524 Add FPS limiter to streamed rendering
2023-12-12 22:11:23 +02:00
Cohee
9176f46caf
Add /preset command
2023-12-12 19:14:17 +02:00
Cohee
a9a05b17b9
Merge pull request #1517 from LenAnderson/firstIncludedMessageId
...
Add macro for first included message in context
2023-12-12 01:24:57 +02:00
Cohee
299749a4e7
Add prerequisites for websearch extension
2023-12-12 01:08:47 +02:00