Commit Graph

817 Commits

Author SHA1 Message Date
Cohee fc289126fa Add event type for text completion generation request settings ready 2024-02-24 21:45:33 +02:00
Cohee d5bf9fc28c Non-streaming logprobs for Aphrodite 2024-02-24 20:53:23 +02:00
Cohee d140b8d5be Parse non-streaming tabby logprobs 2024-02-24 20:10:53 +02:00
Cohee 3441667336 #1853 Add WI/Script link by entry automation id 2024-02-24 17:22:51 +02:00
Cohee 13aebc623a
Merge pull request #1854 from deciare/llamacpp-probs
Request and display token probabilities from llama.cpp backend
2024-02-24 15:06:28 +02:00
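The merged PR above requests and displays token probabilities from the llama.cpp backend. As a rough illustration, a client can ask the stock llama.cpp HTTP server for per-token candidates via the `n_probs` field of its `/completion` endpoint; the sketch below shows that general shape and is an assumption, not SillyTavern's actual request code.

```js
// Sketch only: asking a llama.cpp server for top-token probabilities along with
// the completion. Field names follow the stock llama.cpp /completion API; the
// payload SillyTavern actually sends may differ.
async function completeWithLogprobs(serverUrl, prompt) {
    const response = await fetch(`${serverUrl}/completion`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({
            prompt,
            n_predict: 64, // number of tokens to generate
            n_probs: 5,    // request the top 5 candidates per generated token
        }),
    });
    const data = await response.json();
    // completion_probabilities: one entry per generated token, each with its candidate list
    return { text: data.content, probabilities: data.completion_probabilities };
}
```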
Cohee eaadfea639 Extend debounce duration of logprobs renderer 2024-02-24 15:03:57 +02:00
Cohee 9287ff18de Fix for non-streaming 2024-02-24 14:50:06 +02:00
NWilson f569424f3e Merge branch 'staging' into InfermaticAI 2024-02-22 08:32:10 -06:00
Cohee 711fd0517f Merge branch 'staging' into pygimport 2024-02-21 11:26:47 +02:00
Cohee 8e66a14e37 Add hints to doc strings about additional command prompts 2024-02-20 02:29:14 +02:00
Cohee 79ba026486
Merge pull request #1840 from Wolfsblvt/slash-commands-menu-actions-allow-custom-prompts
Extend impersonate/continue/regenerate with possible custom prompts (via slash commands and popup menu)
2024-02-20 02:26:41 +02:00
Cohee 061b7c6922 Don't try to execute script commands if the message doesn't start with slash 2024-02-20 02:09:01 +02:00
Wolfsblvt a5ee46cb2a Only respect slash command, ignore text field 2024-02-19 22:36:32 +01:00
Wolfsblvt 550d8483cc Extend impersonate/continue/regenerate with possible custom prompts
- Use custom prompt provided via slash command arguments (similar to /sysgen and others)
- Use the text written in the textbox if the popout menu actions are clicked
2024-02-19 22:23:58 +01:00
NWilson 030806bf1e Merge remote-tracking branch 'origin/staging' into InfermaticAI 2024-02-19 10:14:06 -06:00
Cohee 3c2113a6e7 Add ability to preserve file names when loading from assets downloader 2024-02-19 00:17:23 +02:00
Cohee e4a48cd28f Add pyg hint to import UI 2024-02-17 03:54:13 +02:00
NWilson 8075e4cd1e Changes 2024-02-16 09:07:06 -06:00
NWilson b5887960b6 Merge branch 'release' into InfermaticAI 2024-02-16 08:53:04 -06:00
Cohee 9d713825c2 #1827 Consolidate {{group}} macro behavior 2024-02-12 16:23:01 +02:00
Cohee 72256110a7 Unbreak current chat rename 2024-02-12 02:55:16 +02:00
Cohee 354c52d997 #1814 Fix regex placement attribution 2024-02-11 16:52:14 +02:00
Cohee 03ad72b6c7
Merge pull request #1802 from Technologicat/modelname
{{model}} substitution to get name of current LLM
2024-02-08 11:36:02 +02:00
Juha Jeronen a49d0f1050 use getGeneratingModel 2024-02-08 11:13:54 +02:00
Cohee 04372848c8 Fix for undefined chats 2024-02-07 23:58:05 +02:00
Juha Jeronen 2dcb490e43 add {{model}} substitution macro to get name of current LLM
This is useful in the character card for an AI assistant, see #1774.

Tested with the Textgen backend, but should work with others too.

Horde will show only "Connected", and Novel will show the tier,
but Kobold and Textgen will show the model name.

If not connected, any backend will show "no_connection".
2024-02-07 23:29:32 +02:00
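The commit above introduces a `{{model}}` macro resolved via `getGeneratingModel()` (see the follow-up commit using that helper). A minimal sketch of what such a substitution could look like; only the helper name comes from the commits, the surrounding plumbing is illustrative.

```js
// Illustrative sketch of a lazily-evaluated {{model}} macro.
// getGeneratingModel() is the helper named in the commits; per the commit notes it
// yields values like "Connected" (Horde), the tier (NovelAI), the model name
// (Kobold/Textgen), or "no_connection" when disconnected.
function substituteModelMacro(text, getGeneratingModel) {
    return text.replace(/{{model}}/gi, () => getGeneratingModel());
}

// Example: "You are running on {{model}}." -> "You are running on mistral-7b-instruct."
```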
Juha Jeronen c1a5b50aae improve description for consistency 2024-02-07 23:12:21 +02:00
Juha Jeronen 5183fb40a2 refactor to improve proposed implementation of /getchatname 2024-02-07 23:09:51 +02:00
Juha Jeronen 5d1f3b13ea add /getchatname slash command to get name of current chat file
Example:

/getchatname | /echo {{pipe}}
2024-02-07 22:51:41 +02:00
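The command above returns the current chat file name so it can be piped to other commands, as in the example. The following registration sketch is hypothetical: `registerSlashCommand`, `getCurrentChatId`, and the module paths are assumptions, not necessarily the code from this commit.

```js
// Hypothetical sketch: registering a slash command whose return value becomes
// {{pipe}} for the next command in the chain. Helper names and paths are assumptions.
import { registerSlashCommand } from '../slash-commands.js'; // assumed module path
import { getCurrentChatId } from '../../script.js';          // assumed module path

registerSlashCommand(
    'getchatname',
    () => getCurrentChatId() ?? '',   // empty string when no chat file is open
    [],                               // no aliases
    '– returns the name of the current chat file',
);
```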
Cohee 8ecab19966
Merge pull request #1798 from oobabooga/staging
Add API key field for text-generation-webui
2024-02-07 19:24:42 +02:00
Juha Jeronen f0cffb3dd9 fix /delchat for characters with lots of chat files 2024-02-07 15:20:37 +02:00
oobabooga 21fb143718 Add API key 2024-02-06 20:00:16 -08:00
Cohee 2815990589 Force personas sort before returning to caller 2024-02-05 10:58:35 +02:00
Cohee f12aeeed90 Firefox copium for persona images 2024-02-05 02:18:44 +02:00
Cohee 7ac6ed267f #1782 OpenAI multiswipe 2024-02-04 03:36:37 +02:00
Cohee 37d94a4331 #1775 Fix personas name sorting 2024-02-03 01:52:57 +02:00
Cohee 303fb09388 Show persona file name on hover 2024-02-02 04:09:12 +02:00
Cohee a746077a1e Sort personas by name. 2024-02-02 04:07:51 +02:00
Cohee 33c452df3e Fix navigation if new persona is on the first page 2024-01-31 11:23:57 +02:00
Cohee fa73c523f0 Persist current page on persona actions 2024-01-31 11:01:50 +02:00
Cohee f8032ac649 Default to 5 personas per page 2024-01-31 03:34:51 +02:00
Cohee c01217ac76 Clean-up styles and handlers 2024-01-30 20:16:48 +02:00
Cohee 4542c66664 #1761 Persona management overhaul 2024-01-30 19:12:56 +02:00
Cohee da7b435b7c
Merge pull request #1751 from kingbased/proxypreset
Reverse proxy presets
2024-01-29 22:09:33 +02:00
Cohee 5f1e290bda Disallow multiple {{original}} macro substitutions 2024-01-29 00:58:29 +02:00
Cohee a9464daffa Merge branch 'staging' into macro-separation 2024-01-29 00:51:06 +02:00
Cohee ef9cdf64cf Fix swipe buttons display when using /comment after last AI message 2024-01-29 00:37:51 +02:00
Cohee 8037e31c53 Fix {{original}} 2024-01-28 17:31:19 +02:00
valadaptive 44fb746783 Remove dead if statement 2024-01-27 15:25:44 -05:00
valadaptive 29f509179c Remove getMessageId
As far as I can tell, we don't add/remove anything from chat in between
the calculation of newMessageId and subsequent calls to getMessageId.
We can just use newMessageId everywhere.
2024-01-27 13:50:54 -05:00
valadaptive e475081116 Fix off-by-one in addOneMessage 2024-01-27 13:48:08 -05:00
valadaptive 7f955a59b9 Remove count_view_mes 2024-01-27 13:24:08 -05:00
valadaptive 4bd7364a8e Change macro substitution order 2024-01-27 13:22:22 -05:00
valadaptive 71f47588cd Pass macro variables in to evaluateMacros
This doesn't cover *all* the variables yet, just the ones that were
previously passed in as arguments. I'll expand this later to separate
the macro parsing from the execution of the functions themselves.
2024-01-27 13:20:44 -05:00
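The commit above passes macro inputs into `evaluateMacros` explicitly instead of reading globals. A simplified sketch of that shape; the environment field names are illustrative.

```js
// Simplified sketch: macro values travel in an explicit environment object
// rather than module-level globals. Field names are illustrative.
function evaluateMacros(content, env) {
    return content
        .replace(/{{user}}/gi, () => env.user)
        .replace(/{{char}}/gi, () => env.char);
}

// The caller supplies what it previously passed as loose arguments:
const out = evaluateMacros('Hi {{user}}, I am {{char}}.', { user: 'Alice', char: 'Seraphina' });
console.log(out); // "Hi Alice, I am Seraphina."
```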
based aa976d0de2 implemented proxy preset manager 2024-01-27 06:21:00 +10:00
Cohee 1647e5ae49
Merge pull request #1734 from khanonnie/alternative-tokens
Implement Token Probabilities UI panel using logprobs
2024-01-26 03:39:25 +02:00
khanon 60044c18a4 Implement Token Probabilities UI using logprobs 2024-01-25 18:34:46 -06:00
lucy 1ef437f5f1
[feat] GENERATION_ENDED event
Uses hideStopButton() to trigger the event: all other paths are conditional and would require emitting the event from multiple functions, whereas hideStopButton() is already called at the end of every generation.

unblockGeneration() was another candidate, but it is not executed consistently enough to be viable.
2024-01-26 00:53:27 +01:00
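Per the note above, the event is emitted from `hideStopButton()` because every generation path already ends there. A hedged sketch of that hook follows; `eventSource`/`event_types` are SillyTavern's event bus, while the selector and payload are assumptions about the surrounding code.

```js
// Sketch: emit GENERATION_ENDED from the one function every generation path reaches.
// eventSource, event_types, $ and chat come from the surrounding app context;
// the exact call site and payload may differ from the actual commit.
function hideStopButton() {
    $('#mes_stop').css({ display: 'none' });
    eventSource.emit(event_types.GENERATION_ENDED, chat.length);
}
```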
Cohee 6012ee5f89 #1740 Open most recent chat when deleting current chat file 2024-01-25 18:55:38 +02:00
Cohee 4abe87f103 #1742 Add /chat-manager command 2024-01-25 18:19:04 +02:00
Cohee 1ae5a8bd66 #1630 Fix display mode regex depth calc 2024-01-25 02:53:39 +02:00
Cohee 3f3529ef89 #1630 Add min/max depth for prompt/display regex scripts. 2024-01-24 22:48:58 +02:00
Cohee 4985afd816 Extend external media checks 2024-01-24 19:14:40 +02:00
Cohee 9f81ea3c1f Allow double quotes inside of <tags> 2024-01-24 18:05:11 +02:00
Cohee 4823bcf4ff Add option to forbid external images 2024-01-24 15:47:54 +02:00
NWilson f29f934c6b Progress 2024-01-24 06:59:27 -06:00
Cohee 9b42be2334 Reset message editor before switching active chat 2024-01-23 11:00:33 +02:00
Cohee 958cf6a373 Don't append name2 in non-instruct mode if continuing on first message 2024-01-21 23:20:29 +02:00
Cohee 3cd935c0d2 Fix possible prompt overflow on message examples push-out 2024-01-21 23:13:01 +02:00
Cohee 3cb9413541 #1718 Fix message search opening wrong chats 2024-01-20 20:13:41 +02:00
Cohee 4f55824d7f QR auto-execute on group member draft 2024-01-18 18:08:38 +02:00
Cohee b8445eb2cd Add slash commands for instruct and context 2024-01-18 17:24:07 +02:00
Cohee f966c398ef Increase preset command timeouts 2024-01-18 16:36:26 +02:00
maver e4d5eac6cf Add world info to generate_before_combine_prompts event data 2024-01-15 17:45:50 +01:00
Cohee ed77f4763a #1696 Don't cancel generation on first Escape press if editing a message 2024-01-15 03:45:31 +02:00
Cohee 6086cedf2b Use XHR to load HTML templates 2024-01-12 22:00:08 +02:00
Cohee 4fe13fab8e Customizable /gen instruct name 2024-01-12 19:16:42 +02:00
Cohee 4e5f01d785
Merge pull request #1668 from valadaptive/macro-cleanups-1
Move substituteParams into its own module
2024-01-12 11:57:21 +02:00
valadaptive 05003ccf78 Remove silly debug logging 2024-01-12 04:38:40 -05:00
valadaptive 89a999cfd4 Move macro substitution to new module
substituteParams has become a thin wrapper around the new evaluateMacros
function, and will become more of a compatibility shim as refactorings
and rewrites are done.
2024-01-10 22:22:30 -05:00
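As the commit above describes, `substituteParams` becomes a thin compatibility wrapper over `evaluateMacros`. A simplified sketch of the delegation shape; the real function takes more arguments, and the module path is an assumption.

```js
// Simplified sketch of a compatibility shim: build the macro environment,
// then delegate to the new module. The real signature has more parameters.
import { evaluateMacros } from './macros.js'; // assumed module path

export function substituteParams(content, userName, charName) {
    if (!content) {
        return '';
    }
    return evaluateMacros(content, { user: userName, char: charName });
}
```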
Cohee 3f6f32edad Add {{mesExamplesRaw}} macro for story string 2024-01-10 14:11:02 +02:00
Cohee adf82f2ba8 #1663 Add last prompt line to quiet prompts 2024-01-09 01:14:23 +02:00
Cohee f7b1b490c7 Larger alternate greetings window 2024-01-06 19:59:48 +02:00
Cohee 5f93c30a96 #1627 Bypass status check and custom model for textgen type 2024-01-05 19:15:07 +02:00
Cohee c69724e1da Fix GUI Kobold 2024-01-02 10:28:34 +02:00
Cohee 52637ccd39
Merge pull request #1619 from LenAnderson/worldinfo_updated-event
Add event when world info is updated
2024-01-01 18:35:23 +02:00
Cohee f53d937782 Fix mistral undefined name 2024-01-01 18:31:17 +02:00
Cohee 9106696f2f Render prompt manager when switching APIs 2024-01-01 17:06:10 +02:00
Cohee 908bf7a61d Merge branch 'staging' into generate-cleanups-3 2024-01-01 16:49:35 +02:00
LenAnderson 8cd75cf03d add event when world info is updated 2024-01-01 14:34:09 +00:00
Cohee 30732ada32 Lint fix 2024-01-01 16:08:24 +02:00
maver ee70593a7e Add world info to generate_before_combine_prompts event data 2023-12-28 17:03:36 +01:00
Cohee 8dd4543e93 Remove macro from user messages when using bias 2023-12-28 11:19:56 +02:00
valadaptive 77b02a8d4b Extract data.error check 2023-12-26 12:41:35 -05:00
valadaptive 0f8a16325b Extract dryRun early return from finishGenerating
This means we only have to handle it in one place rather than two.
2023-12-25 03:48:49 -05:00
valadaptive 3c0207f6cb Move "continue on send" logic out of Generate() 2023-12-25 03:48:49 -05:00
valadaptive 7899549754 Make "send message from chat box" into a function
Right now all it does is return early if a message is already being generated,
but I'll extend it with more logic that I want to move out of Generate().
2023-12-25 03:48:49 -05:00
valadaptive 1029ad90a2 Extract "not in a chat" check into guard clause
This lets us remove a layer of indentation, and reveal the error
handling logic that was previously hidden below a really long block of
code.
2023-12-25 03:48:49 -05:00
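The series of commits above restructures Generate(); the guard-clause change is the easiest to illustrate. A generic sketch of the pattern, with illustrative names rather than the actual Generate() internals.

```js
// Generic guard-clause sketch (names are illustrative, not the real Generate() code):
// bail out early so the rest of the function stays un-indented and the error
// handling sits next to the check instead of far below a long block of code.
async function generateSketch(characterId, isGroupChat) {
    if (characterId === undefined && !isGroupChat) {
        console.warn('No character or group selected');
        return; // previously this error path was hidden after a very long if-block
    }

    // ...happy path continues here without an extra level of indentation...
}
```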
valadaptive 4fc2f15448 Reformat Generate() group logic
The first two conditions in the group if/else blocks are the same, so we
can combine them.
2023-12-25 03:48:49 -05:00
valadaptive 0d3505c44b Remove OAI_BEFORE_CHATCOMPLETION
Not used in any internal code or extensions I can find.
2023-12-25 03:48:49 -05:00
valadaptive f53e051cbf Lift precondition check out of processCommands
Instead of passing type and dryRun into processCommands, do the check in
Generate, the only function that calls it. This makes the logic clearer.
2023-12-25 03:48:49 -05:00
Cohee a9e074dae1 Don't recreate first message if generation was run at least once 2023-12-24 02:47:00 +02:00
Cohee db3bf42d63 Fix Firefox number arrows not updating the slider 2023-12-23 16:09:03 +02:00
Cohee 09fd772a20 #1579 Add ooba character yaml import 2023-12-21 21:46:09 +02:00
Cohee 4621834c87 Short formatting path for empty messages 2023-12-21 20:50:30 +02:00
Cohee a85a6cf606 Allow displaying unreferenced macro in message texts 2023-12-21 20:49:03 +02:00
Cohee 39e0b0f5cb Remove custom Handlebars helpers for extensions. 2023-12-21 20:33:50 +02:00
valadaptive 8fb26284e2
Clean up Generate(), part 2 (#1578)
* Move StreamingProcessor constructor to the top

Typical code style is to declare the constructor at the top of the class
definition.

* Remove removePrefix

cleanupMessage does this already.

* Make message_already_generated local

We can pass it into StreamingProcessor so it doesn't have to be a global
variable.

* Consolidate setting isStopped and abort signal

Various places were doing some combination of setting isStopped, calling
abort on the streaming processor's abort controller, and calling
onStopStreaming. Let's consolidate all that functionality into
onStopStreaming/onErrorStreaming.

* More cleanly separate streaming/nonstreaming paths

* Replace promise with async function w/ handlers

By using onSuccess and onError as promise handlers, we can use normal
control flow and don't need to remember to use try/catch blocks or call
onSuccess every time.

* Remove runGenerate

Placing the rest of the code in a separate function doesn't really do
anything for its structure.

* Move StreamingProcessor() into streaming code path

* Fix return from circuit breaker

* Fix non-streaming chat completion request

* Fix Horde generation and quiet unblocking

---------

Co-authored-by: Cohee <18619528+Cohee1207@users.noreply.github.com>
2023-12-21 20:20:28 +02:00
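One of the bullet points above consolidates setting the stop flag, aborting the controller, and notifying callers into onStopStreaming/onErrorStreaming. A rough sketch of that consolidation; the class and method names are taken from the PR text, the bodies are assumptions.

```js
// Rough sketch of the consolidation described in the PR: every caller funnels
// through one stop routine instead of flipping flags and aborting separately.
class StreamingProcessorSketch {
    constructor() {
        this.isStopped = false;
        this.abortController = new AbortController();
    }

    onStopStreaming() {
        if (this.isStopped) return;
        this.isStopped = true;
        this.abortController.abort(); // callers no longer call abort() themselves
    }

    onErrorStreaming(error) {
        console.error('Streaming failed:', error);
        this.onStopStreaming(); // the error path reuses the same stop logic
    }
}
```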
Cohee b3dfe16706 #1575 Fix clean-up WI depth injections 2023-12-21 16:33:21 +02:00
Cohee ee75adbd2d Update persona name if it is bound by user name input 2023-12-21 14:56:32 +02:00
Cohee cf8d7e7d35 Merge branch 'staging' into custom 2023-12-20 18:37:47 +02:00
Cohee ebec26154c Welcome message fixed 2023-12-20 18:37:34 +02:00
Cohee 5734dbd17c Add custom endpoint type 2023-12-20 18:29:03 +02:00
Cohee 041b9d4b01 Add style sanitizer to message renderer 2023-12-20 17:03:37 +02:00
Cohee b0a4341571
Merge pull request #1574 from artisticMink/feature/before-combine-event
Allow extensions to alter the context order.
2023-12-20 15:46:34 +02:00
maver f30f75b310 Add GENERATE_BEFORE_COMBINE_PROMPTS event
Allows for context to be ordered by extensions
2023-12-19 19:11:36 +01:00
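The commit above adds a GENERATE_BEFORE_COMBINE_PROMPTS event so extensions can reorder the context before it is combined. A hedged sketch of a listener; the payload shape (`data.parts` with `identifier` fields) is an assumption, not the documented API.

```js
// Hedged sketch of an extension reordering prompt parts before they are combined.
// eventSource/event_types come from the app context; the payload shape is an assumption.
eventSource.on(event_types.GENERATE_BEFORE_COMBINE_PROMPTS, (data) => {
    if (!Array.isArray(data?.parts)) return;
    const worldInfo = data.parts.filter((p) => p.identifier === 'worldInfo');
    const rest = data.parts.filter((p) => p.identifier !== 'worldInfo');
    data.parts = [...worldInfo, ...rest]; // move world info ahead of everything else
});
```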
Cohee 67dd52c21b #1309 Ollama text completion backend 2023-12-19 16:38:11 +02:00
Cohee edd737e8bd #371 Add llama.cpp inference server support 2023-12-18 22:38:28 +02:00
Cohee b0d9f14534 Re-add Together as a text completion source 2023-12-17 23:38:03 +02:00
Cohee 180061337e Merge branch 'staging' into anachronous/release 2023-12-17 21:35:49 +02:00
LenAnderson fb25a90532 add GENERATION_STARTED event 2023-12-17 17:45:23 +00:00
anachronos 1e88c8922a Merge branch 'staging' into release 2023-12-17 10:38:04 +01:00
Fayiron 9f2d32524c Add TogetherAI as a chat completion source, basic 2023-12-16 14:39:30 +01:00
based 583f786d74 finish mistral frontend integration + apikey status check 2023-12-16 07:15:57 +10:00
Cohee ef17702f6a Merge branch 'staging' into bg-load-improvements 2023-12-15 17:02:10 +02:00
Cohee 6c16b94f9d
Merge pull request #1540 from valadaptive/refactor-device-check
Refactor mobile device check
2023-12-15 17:01:32 +02:00
valadaptive 0ee19d2ede Set background client-side 2023-12-15 05:45:21 -05:00
valadaptive 7897206cf8 Add a pre-loading screen cover
This matches the loader color and exists to prevent a flash of unstyled
content when the page first loads and JS has not yet run.
2023-12-15 05:34:33 -05:00
valadaptive fbdfa05f81 Replace usage of getDeviceInfo with isMobile
We were using getDeviceInfo to check whether we were on a desktop or a
mobile device. This can be done more simply with isMobile, which means
we can stop exporting getDeviceInfo.
2023-12-14 18:37:54 -05:00
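The commit above replaces `getDeviceInfo` checks with a simpler `isMobile` boolean. An illustrative sketch follows; the detection method shown is an assumption, not the project's implementation.

```js
// Illustrative sketch: a boolean helper replaces the device-info object at call sites.
// The user-agent test here is an assumption, not the project's actual detection logic.
export function isMobile() {
    return /Android|iPhone|iPad|iPod|Mobile/i.test(navigator.userAgent);
}

// Call sites shrink from inspecting getDeviceInfo().device to a single check:
// if (isMobile()) { /* compact layout */ }
```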
valadaptive 769cc0a78f Rename settings API endpoints 2023-12-14 16:47:03 -05:00
Cohee cde9903fcb Fix Bison models 2023-12-14 22:18:34 +02:00
based ca87f29771 added streaming for google models 2023-12-14 21:03:41 +10:00
based be396991de finish implementing ui changes for google models 2023-12-14 11:53:26 +10:00
based 69e24c9686 change palm naming in UI 2023-12-14 11:14:41 +10:00
Cohee 0cd92f13b4 Merge branch 'staging' into separate-kobold-endpoints 2023-12-14 01:33:36 +02:00
Cohee b957e3b875
Merge pull request #1518 from valadaptive/separate-ooba-endpoints
Move Ooba/textgenerationwebui endpoints into their own module
2023-12-14 01:27:05 +02:00
valadaptive 274605a07c Rename Kobold-related endpoints 2023-12-12 16:42:12 -05:00
valadaptive 5b3c96df50 Rename /textgenerationwebui endpoint
I'd like to migrate over to using "textgen" to mean text-generation APIs
in general, so I've renamed the /textgenerationwebui/* endpoints to
/backends/text-completions/*.
2023-12-12 16:40:14 -05:00
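The rename above moves the old `/textgenerationwebui/*` routes under `/backends/text-completions/*`. A sketch of how such a remount might look with the Express server the project uses; the router contents and variable names are illustrative.

```js
// Sketch of remounting a router under the renamed path (Express; handler body is illustrative).
const express = require('express');

const app = express();
const textCompletionsRouter = express.Router();

textCompletionsRouter.post('/status', (request, response) => {
    // ...probe the text-completion backend and report the loaded model...
    response.json({ result: 'ok' });
});

// Previously mounted at '/textgenerationwebui'; now under the API-agnostic name:
app.use('/backends/text-completions', textCompletionsRouter);
```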
valadaptive 7732865e4c Another explanatory comment 2023-12-12 16:36:47 -05:00
valadaptive 87cbe361fc Cache stopping strings rather than skipping them 2023-12-12 16:32:54 -05:00
Cohee 3d7706e6b3 #1524 Skip stop strings clean-up during streaming 2023-12-12 23:09:39 +02:00
Cohee 83f2c1a8ed #1524 Add FPS limiter to streamed rendering 2023-12-12 22:11:23 +02:00
Cohee 9176f46caf Add /preset command 2023-12-12 19:14:17 +02:00
Cohee a9a05b17b9
Merge pull request #1517 from LenAnderson/firstIncludedMessageId
Add macro for first included message in context
2023-12-12 01:24:57 +02:00
Cohee 299749a4e7 Add prerequisites for websearch extension 2023-12-12 01:08:47 +02:00
LenAnderson 2bdd3672d4 add macro for first included message in context 2023-12-11 23:06:21 +00:00
Cohee 1b11ddc26a Add vector storage to WI scanning 2023-12-11 22:47:26 +02:00
Cohee afe3e824b1 Unblock left swipe on swipeId overflow. 2023-12-11 21:16:09 +02:00
Cohee e713021737
Merge pull request #1511 from valadaptive/more-kobold-cleanups
More Kobold cleanups
2023-12-11 20:59:49 +02:00
Cohee 05ab147209 Fix swipes getting stuck when no Horde models selected 2023-12-11 20:46:34 +02:00