Commit Graph

846 Commits

Author SHA1 Message Date
Cohee 354c52d997 #1814 Fix regex placement attribution 2024-02-11 16:52:14 +02:00
Cohee 03ad72b6c7
Merge pull request #1802 from Technologicat/modelname
{{model}} substitution to get name of current LLM
2024-02-08 11:36:02 +02:00
Juha Jeronen a49d0f1050 use getGeneratingModel 2024-02-08 11:13:54 +02:00
Cohee 04372848c8 Fix for undefined chats 2024-02-07 23:58:05 +02:00
Juha Jeronen 2dcb490e43 add {{model}} substitution macro to get name of current LLM
This is useful in the character card for an AI assistant, see #1774.

Tested with the Textgen backend, but should work with others too.

Horde will show only "Connected", and Novel will show the tier,
but Kobold and Textgen will show the model name.

If not connected, the macro will show "no_connection" regardless of backend.
2024-02-07 23:29:32 +02:00
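
A self-contained sketch of what this macro does, with a stubbed getGeneratingModel() standing in for the real backend query (the connection state and model name below are invented):

    // Stub: the real getGeneratingModel() asks the active backend for its model name.
    function getGeneratingModel() {
        const connected = true;          // pretend a Textgen backend is connected
        const modelName = 'mistral-7b';  // hypothetical model reported by that backend
        return connected ? modelName : 'no_connection';
    }

    // Replace every {{model}} occurrence in a template string.
    function substituteModelMacro(text) {
        return text.replace(/{{model}}/gi, getGeneratingModel());
    }

    console.log(substituteModelMacro('You are an assistant running on {{model}}.'));
    // -> "You are an assistant running on mistral-7b."
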
Juha Jeronen c1a5b50aae improve description for consistency 2024-02-07 23:12:21 +02:00
Juha Jeronen 5183fb40a2 refactor to improve proposed implementation of /getchatname 2024-02-07 23:09:51 +02:00
Juha Jeronen 5d1f3b13ea add /getchatname slash command to get name of current chat file
Example:

/getchatname | /echo {{pipe}}
2024-02-07 22:51:41 +02:00
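
The piped example above can be modeled with a toy command registry; everything below (the registry, the returned chat name) is a stand-in, not SillyTavern's actual slash-command API:

    // Each handler may return a string, which becomes {{pipe}} for the next command.
    const commands = {
        getchatname: () => 'Seraphina - 2024-02-07',       // hypothetical current chat file name
        echo: (arg) => { console.log(arg); return arg; },
    };

    function runPipeline(line) {
        let pipe = '';
        for (const step of line.split('|')) {
            const [, name, arg] = step.trim().match(/^\/(\w+)\s*(.*)$/);
            pipe = commands[name](arg.replace('{{pipe}}', pipe));
        }
        return pipe;
    }

    runPipeline('/getchatname | /echo {{pipe}}'); // prints the chat name via /echo
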
Cohee 8ecab19966
Merge pull request #1798 from oobabooga/staging
Add API key field for text-generation-webui
2024-02-07 19:24:42 +02:00
Juha Jeronen f0cffb3dd9 fix /delchat for characters with lots of chat files 2024-02-07 15:20:37 +02:00
oobabooga 21fb143718 Add API key 2024-02-06 20:00:16 -08:00
Cohee 2815990589 Force personas sort before returning to caller 2024-02-05 10:58:35 +02:00
Cohee f12aeeed90 Firefox copium for persona images 2024-02-05 02:18:44 +02:00
Cohee 7ac6ed267f #1782 OpenAI multiswipe 2024-02-04 03:36:37 +02:00
Cohee 37d94a4331 #1775 Fix personas name sorting 2024-02-03 01:52:57 +02:00
Cohee 303fb09388 Show persona file name on hover 2024-02-02 04:09:12 +02:00
Cohee a746077a1e Sort personas by name. 2024-02-02 04:07:51 +02:00
Cohee 33c452df3e Fix navigation if new persona is on the first page 2024-01-31 11:23:57 +02:00
Cohee fa73c523f0 Persist current page on persona actions 2024-01-31 11:01:50 +02:00
Cohee f8032ac649 Default to 5 personas per page 2024-01-31 03:34:51 +02:00
Cohee c01217ac76 Clean-up styles and handlers 2024-01-30 20:16:48 +02:00
Cohee 4542c66664 #1761 Persona management overhaul 2024-01-30 19:12:56 +02:00
Cohee da7b435b7c
Merge pull request #1751 from kingbased/proxypreset
Reverse proxy presets
2024-01-29 22:09:33 +02:00
Cohee 5f1e290bda Disallow multiple {{original}} macro substitutions 2024-01-29 00:58:29 +02:00
Cohee a9464daffa Merge branch 'staging' into macro-separation 2024-01-29 00:51:06 +02:00
Cohee ef9cdf64cf Fix swipe buttons display when using /comment after last AI message 2024-01-29 00:37:51 +02:00
Cohee 8037e31c53 Fix {{original}} 2024-01-28 17:31:19 +02:00
valadaptive 44fb746783 Remove dead if statement 2024-01-27 15:25:44 -05:00
valadaptive 29f509179c Remove getMessageId
As far as I can tell, we don't add/remove anything from chat in between
the calculation of newMessageId and subsequent calls to getMessageId.
We can just use newMessageId everywhere.
2024-01-27 13:50:54 -05:00
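
In sketch form, the redundancy being removed (hypothetical chat array, not the real code):

    const chat = [{ mes: 'Hi' }, { mes: 'Hello!' }];

    // Before: an id helper that recomputed the value on demand...
    const getMessageId = () => chat.length;

    // ...even though nothing is added to or removed from `chat` between computing
    // newMessageId and the later getMessageId() calls, so the two can never differ.
    const newMessageId = chat.length;

    console.log(newMessageId === getMessageId()); // always true, so newMessageId can be reused directly
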
valadaptive e475081116 Fix off-by-one in addOneMessage 2024-01-27 13:48:08 -05:00
valadaptive 7f955a59b9 Remove count_view_mes 2024-01-27 13:24:08 -05:00
valadaptive 4bd7364a8e Change macro substitution order 2024-01-27 13:22:22 -05:00
valadaptive 71f47588cd Pass macro variables in to evaluateMacros
This doesn't cover *all* the variables yet, just the ones that were
previously passed in as arguments. I'll expand this later to separate
the macro parsing from the execution of the functions themselves.
2024-01-27 13:20:44 -05:00
based aa976d0de2 implemented proxy preset manager 2024-01-27 06:21:00 +10:00
Cohee 1647e5ae49
Merge pull request #1734 from khanonnie/alternative-tokens
Implement Token Probabilities UI panel using logprobs
2024-01-26 03:39:25 +02:00
khanon 60044c18a4 Implement Token Probabilities UI using logprobs 2024-01-25 18:34:46 -06:00
lucy 1ef437f5f1
[feat] GENERATION_ENDED event
The event is triggered from hideStopButton(), because all other paths are conditional and would require emitting the event from multiple functions, whereas hideStopButton() is already called at the end of every generation.

unblockGeneration() was another candidate, but it is not executed consistently enough to be viable.
2024-01-26 00:53:27 +01:00
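
A toy event bus showing the placement described in the commit above; this is not the real eventSource API, just an illustration of why the single shared exit point is convenient:

    const listeners = { GENERATION_ENDED: [] };
    const emit = (event, ...args) => listeners[event].forEach((fn) => fn(...args));

    function hideStopButton() {
        // ...hide the stop button in the UI...
        // Every generation path ends here, so this is the one place the event needs to be emitted.
        emit('GENERATION_ENDED', Date.now());
    }

    listeners.GENERATION_ENDED.push((ts) => console.log('generation ended at', ts));
    hideStopButton();
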
Cohee 6012ee5f89 #1740 Open most recent chat when deleting current chat file 2024-01-25 18:55:38 +02:00
Cohee 4abe87f103 #1742 Add /chat-manager command 2024-01-25 18:19:04 +02:00
Cohee 1ae5a8bd66 #1630 Fix display mode regex depth calc 2024-01-25 02:53:39 +02:00
Cohee 3f3529ef89 #1630 Add min/max depth for prompt/display regex scripts. 2024-01-24 22:48:58 +02:00
Cohee 4985afd816 Extend external media checks 2024-01-24 19:14:40 +02:00
Cohee 9f81ea3c1f Allow double quotes inside of <tags> 2024-01-24 18:05:11 +02:00
Cohee 4823bcf4ff Add option to forbid external images 2024-01-24 15:47:54 +02:00
NWilson f29f934c6b Progress 2024-01-24 06:59:27 -06:00
Cohee 9b42be2334 Reset message editor before switching active chat 2024-01-23 11:00:33 +02:00
Cohee 958cf6a373 Don't append name2 in non-instruct mode if continuing on first message 2024-01-21 23:20:29 +02:00
Cohee 3cd935c0d2 Fix possible prompt overflow on message examples push-out 2024-01-21 23:13:01 +02:00
Cohee 3cb9413541 #1718 Fix message search opening wrong chats 2024-01-20 20:13:41 +02:00
Cohee 4f55824d7f QR auto-execute on group member draft 2024-01-18 18:08:38 +02:00
Cohee b8445eb2cd Add slash commands for instruct and context 2024-01-18 17:24:07 +02:00
Cohee f966c398ef Increase preset command timeouts 2024-01-18 16:36:26 +02:00
maver e4d5eac6cf Add world info to generate_before_combine_prompts event data 2024-01-15 17:45:50 +01:00
Cohee ed77f4763a #1696 Don't cancel generation on first Escape press if editing a message 2024-01-15 03:45:31 +02:00
Cohee 6086cedf2b Use XHR to load HTML templates 2024-01-12 22:00:08 +02:00
Cohee 4fe13fab8e Customizable /gen instruct name 2024-01-12 19:16:42 +02:00
Cohee 4e5f01d785
Merge pull request #1668 from valadaptive/macro-cleanups-1
Move substituteParams into its own module
2024-01-12 11:57:21 +02:00
valadaptive 05003ccf78 Remove silly debug logging 2024-01-12 04:38:40 -05:00
valadaptive 89a999cfd4 Move macro substitution to new module
substituteParams has become a thin wrapper around the new evaluateMacros
function, and will become more of a compatibility shim as refactorings
and rewrites are done.
2024-01-10 22:22:30 -05:00
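
A rough sketch of the split described above: evaluateMacros owns the substitution, while substituteParams survives as a thin compatibility shim (the argument names and macro set are illustrative):

    function evaluateMacros(content, env) {
        // Replace {{key}} with env[key]; unknown macros are left untouched.
        return content.replace(/{{(\w+)}}/g, (match, key) => (key in env ? String(env[key]) : match));
    }

    function substituteParams(content, name1, name2) {
        // Old callers keep their positional arguments; the shim just builds the env.
        return evaluateMacros(content, { user: name1, char: name2 });
    }

    console.log(substituteParams('{{char}} waves at {{user}}. {{unknown}} stays as-is.', 'Alice', 'Seraphina'));
    // -> "Seraphina waves at Alice. {{unknown}} stays as-is."
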
Cohee 3f6f32edad Add {{mesExamplesRaw}} macro for story string 2024-01-10 14:11:02 +02:00
Cohee adf82f2ba8 #1663 Add last prompt line to quiet prompts 2024-01-09 01:14:23 +02:00
Cohee f7b1b490c7 Larger alternate greetings window 2024-01-06 19:59:48 +02:00
Cohee 5f93c30a96 #1627 Bypass status check and custom model for textgen type 2024-01-05 19:15:07 +02:00
Cohee c69724e1da Fix GUI Kobold 2024-01-02 10:28:34 +02:00
Cohee 52637ccd39
Merge pull request #1619 from LenAnderson/worldinfo_updated-event
Add event when world info is updated
2024-01-01 18:35:23 +02:00
Cohee f53d937782 Fix mistral undefined name 2024-01-01 18:31:17 +02:00
Cohee 9106696f2f Render prompt manager when switching APIs 2024-01-01 17:06:10 +02:00
Cohee 908bf7a61d Merge branch 'staging' into generate-cleanups-3 2024-01-01 16:49:35 +02:00
LenAnderson 8cd75cf03d add event when world info is updated 2024-01-01 14:34:09 +00:00
Cohee 30732ada32 Lint fix 2024-01-01 16:08:24 +02:00
maver ee70593a7e Add world info to generate_before_combine_prompts event data 2023-12-28 17:03:36 +01:00
Cohee 8dd4543e93 Remove macro from user messages when using bias 2023-12-28 11:19:56 +02:00
valadaptive 77b02a8d4b Extract data.error check 2023-12-26 12:41:35 -05:00
valadaptive 0f8a16325b Extract dryRun early return from finishGenerating
This means we only have to handle it in one place rather than two.
2023-12-25 03:48:49 -05:00
valadaptive 3c0207f6cb Move "continue on send" logic out of Generate() 2023-12-25 03:48:49 -05:00
valadaptive 7899549754 Make "send message from chat box" into a function
Right now all it does is handle returning if there's already a message
being generated, but I'll extend it with more logic that I want to move
out of Generate().
2023-12-25 03:48:49 -05:00
valadaptive 1029ad90a2 Extract "not in a chat" check into guard clause
This lets us remove a layer of indentation, and reveal the error
handling logic that was previously hidden below a really long block of
code.
2023-12-25 03:48:49 -05:00
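
The guard-clause shape being described, as a stand-alone sketch rather than the real Generate():

    function generateSketch(chatId) {
        // Before, this condition wrapped the entire function body in an if-block;
        // as a guard clause it exits early and keeps the happy path un-indented.
        if (chatId === undefined) {
            console.warn('Not in a chat; aborting generation.');
            return Promise.reject(new Error('No chat selected'));
        }

        return Promise.resolve('...generated text...');
    }

    generateSketch(undefined).catch((err) => console.log(err.message));
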
valadaptive 4fc2f15448 Reformat Generate() group logic
The first two conditions in the group if/else blocks are the same, so we
can combine them.
2023-12-25 03:48:49 -05:00
valadaptive 0d3505c44b Remove OAI_BEFORE_CHATCOMPLETION
Not used in any internal code or extensions I can find.
2023-12-25 03:48:49 -05:00
valadaptive f53e051cbf Lift precondition check out of processCommands
Instead of passing type and dryRun into processCommands, do the check in
Generate, the only function that calls it. This makes the logic clearer.
2023-12-25 03:48:49 -05:00
Cohee a9e074dae1 Don't recreate first message if generation was run at least once 2023-12-24 02:47:00 +02:00
Cohee db3bf42d63 Fix Firefox number arrows not updating the slider 2023-12-23 16:09:03 +02:00
Cohee 09fd772a20 #1579 Add ooba character yaml import 2023-12-21 21:46:09 +02:00
Cohee 4621834c87 Short formatting path for empty messages 2023-12-21 20:50:30 +02:00
Cohee a85a6cf606 Allow displaying unreferenced macro in message texts 2023-12-21 20:49:03 +02:00
Cohee 39e0b0f5cb Remove custom Handlebars helpers for extensions. 2023-12-21 20:33:50 +02:00
valadaptive 8fb26284e2
Clean up Generate(), part 2 (#1578)
* Move StreamingProcessor constructor to the top

Typical code style is to declare the constructor at the top of the class
definition.

* Remove removePrefix

cleanupMessage does this already.

* Make message_already_generated local

We can pass it into StreamingProcessor so it doesn't have to be a global
variable.

* Consolidate setting isStopped and abort signal

Various places were doing some combination of setting isStopped, calling
abort on the streaming processor's abort controller, and calling
onStopStreaming. Let's consolidate all that functionality into
onStopStreaming/onErrorStreaming.

* More cleanly separate streaming/nonstreaming paths

* Replace promise with async function w/ handlers

By using onSuccess and onError as promise handlers, we can use normal
control flow and don't need to remember to use try/catch blocks or call
onSuccess every time.

* Remove runGenerate

Placing the rest of the code in a separate function doesn't really do
anything for its structure.

* Move StreamingProcessor() into streaming code path

* Fix return from circuit breaker

* Fix non-streaming chat completion request

* Fix Horde generation and quiet unblocking

---------

Co-authored-by: Cohee <18619528+Cohee1207@users.noreply.github.com>
2023-12-21 20:20:28 +02:00
Cohee b3dfe16706 #1575 Fix clean-up of WI depth injections 2023-12-21 16:33:21 +02:00
Cohee ee75adbd2d Update persona name if it is bound by user name input 2023-12-21 14:56:32 +02:00
Cohee cf8d7e7d35 Merge branch 'staging' into custom 2023-12-20 18:37:47 +02:00
Cohee ebec26154c Welcome message fixed 2023-12-20 18:37:34 +02:00
Cohee 5734dbd17c Add custom endpoint type 2023-12-20 18:29:03 +02:00
Cohee 041b9d4b01 Add style sanitizer to message renderer 2023-12-20 17:03:37 +02:00
Cohee b0a4341571
Merge pull request #1574 from artisticMink/feature/before-combine-event
Allow extensions to alter the context order.
2023-12-20 15:46:34 +02:00
maver f30f75b310 Add GENERATE_BEFORE_COMBINE_PROMPTS event
Allows for context to be ordered by extensions
2023-12-19 19:11:36 +01:00
Cohee 67dd52c21b #1309 Ollama text completion backend 2023-12-19 16:38:11 +02:00
Cohee edd737e8bd #371 Add llama.cpp inference server support 2023-12-18 22:38:28 +02:00
Cohee b0d9f14534 Re-add Together as a text completion source 2023-12-17 23:38:03 +02:00
Cohee 180061337e Merge branch 'staging' into anachronous/release 2023-12-17 21:35:49 +02:00
LenAnderson fb25a90532 add GENERATION_STARTED event 2023-12-17 17:45:23 +00:00
anachronos 1e88c8922a
Merge branch 'staging' into release 2023-12-17 10:38:04 +01:00
Fayiron 9f2d32524c Add TogetherAI as a chat completion source, basic 2023-12-16 14:39:30 +01:00
based 583f786d74 finish mistral frontend integration + apikey status check 2023-12-16 07:15:57 +10:00
Cohee ef17702f6a Merge branch 'staging' into bg-load-improvements 2023-12-15 17:02:10 +02:00
Cohee 6c16b94f9d
Merge pull request #1540 from valadaptive/refactor-device-check
Refactor mobile device check
2023-12-15 17:01:32 +02:00
valadaptive 0ee19d2ede Set background client-side 2023-12-15 05:45:21 -05:00
valadaptive 7897206cf8 Add a pre-loading screen cover
This matches the loader color and exists to prevent a flash of unstyled
content when the page first loads and JS has not yet run.
2023-12-15 05:34:33 -05:00
valadaptive fbdfa05f81 Replace usage of getDeviceInfo with isMobile
We were using getDeviceInfo to check whether we were on a desktop or a
mobile device. This can be done more simply with isMobile, which means
we can stop exporting getDeviceInfo.
2023-12-14 18:37:54 -05:00
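
A browser-side sketch of the simplification; both helpers below are stand-ins for the real ones:

    function getDeviceInfo() {
        const mobile = /Mobi|Android/i.test(navigator.userAgent);
        return { device: { type: mobile ? 'mobile' : 'desktop' } };
    }

    const isMobile = () => /Mobi|Android/i.test(navigator.userAgent);

    // Before: callers dug into a structured result just to answer one question.
    if (getDeviceInfo().device.type === 'mobile') { /* mobile-only layout tweaks */ }

    // After: a single focused predicate, so getDeviceInfo no longer needs to be exported.
    if (isMobile()) { /* mobile-only layout tweaks */ }
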
valadaptive 769cc0a78f Rename settings API endpoints 2023-12-14 16:47:03 -05:00
Cohee cde9903fcb Fix Bison models 2023-12-14 22:18:34 +02:00
based ca87f29771 added streaming for google models 2023-12-14 21:03:41 +10:00
based be396991de finish implementing ui changes for google models 2023-12-14 11:53:26 +10:00
based 69e24c9686 change palm naming in UI 2023-12-14 11:14:41 +10:00
Cohee 0cd92f13b4 Merge branch 'staging' into separate-kobold-endpoints 2023-12-14 01:33:36 +02:00
Cohee b957e3b875
Merge pull request #1518 from valadaptive/separate-ooba-endpoints
Move Ooba/textgenerationwebui endpoints into their own module
2023-12-14 01:27:05 +02:00
valadaptive 274605a07c Rename Kobold-related endpoints 2023-12-12 16:42:12 -05:00
valadaptive 5b3c96df50 Rename /textgenerationwebui endpoint
I'd like to migrate over to using "textgen" to mean text-generation APIs
in general, so I've renamed the /textgenerationwebui/* endpoints to
/backends/text-completions/*.
2023-12-12 16:40:14 -05:00
valadaptive 7732865e4c Another explanatory comment 2023-12-12 16:36:47 -05:00
valadaptive 87cbe361fc Cache stopping strings rather than skipping them 2023-12-12 16:32:54 -05:00
Cohee 3d7706e6b3 #1524 Skip stop strings clean-up during streaming 2023-12-12 23:09:39 +02:00
Cohee 83f2c1a8ed #1524 Add FPS limiter to streamed rendering 2023-12-12 22:11:23 +02:00
Cohee 9176f46caf Add /preset command 2023-12-12 19:14:17 +02:00
Cohee a9a05b17b9
Merge pull request #1517 from LenAnderson/firstIncludedMessageId
Add macro for first included message in context
2023-12-12 01:24:57 +02:00
Cohee 299749a4e7 Add prerequisites for websearch extension 2023-12-12 01:08:47 +02:00
LenAnderson 2bdd3672d4 add macro for first included message in context 2023-12-11 23:06:21 +00:00
Cohee 1b11ddc26a Add vector storage to WI scanning 2023-12-11 22:47:26 +02:00
Cohee afe3e824b1 Unblock left swipe on swipeId overflow. 2023-12-11 21:16:09 +02:00
Cohee e713021737
Merge pull request #1511 from valadaptive/more-kobold-cleanups
More Kobold cleanups
2023-12-11 20:59:49 +02:00
Cohee 05ab147209 Fix swipes getting stuck when no Horde models selected 2023-12-11 20:46:34 +02:00
Cohee 7482a75bbd
Merge pull request #1493 from valadaptive/generate-cleanups
Clean up Generate(), part 1
2023-12-11 20:21:32 +02:00
Cohee 0302686a96 Return from Generate if calling circuit breaker 2023-12-11 19:07:33 +02:00
Cohee c48e447c42 Add rows and button text to import window 2023-12-11 16:23:47 +02:00
valadaptive d33cb0d8d1 Clarify getstatus API
Instead of "version" and "koboldVersion", have "koboldUnitedVersion" and
"koboldCppVersion", the latter of which is null if we're not connected
to KoboldCpp.
2023-12-10 20:34:11 -05:00
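
Hypothetical response shapes for the renamed fields (version numbers invented), showing why the nullable koboldCppVersion is clearer than the old generic names:

    const statusFromKoboldCpp = {
        koboldUnitedVersion: '1.2.5',   // hypothetical
        koboldCppVersion: '1.52',       // set when the backend is KoboldCpp
    };

    const statusFromKoboldUnited = {
        koboldUnitedVersion: '1.2.5',
        koboldCppVersion: null,         // null when not connected to KoboldCpp
    };

    const isKoboldCpp = (status) => status.koboldCppVersion !== null;
    console.log(isKoboldCpp(statusFromKoboldCpp), isKoboldCpp(statusFromKoboldUnited)); // true false
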
valadaptive 1fbf4394c8 Separate Kobold Horde status function 2023-12-10 20:16:07 -05:00
valadaptive 3ab1962b84 Improve circuit breaker
We now track the loop counter as a parameter of Generate that we
decrement with every recursive call, rather than a global variable,
and it *should* now work with quiet prompt generation.
2023-12-10 18:46:28 -05:00
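
A sketch of the parameterized counter described above; the limit, retry condition, and names are all illustrative:

    const MAX_GENERATION_LOOPS = 5; // hypothetical limit

    // The counter travels as a parameter and is decremented on each recursive call, so
    // concurrent generations (e.g. quiet prompts) no longer fight over one global variable.
    async function generateSketch(prompt, loopsRemaining = MAX_GENERATION_LOOPS) {
        if (loopsRemaining <= 0) {
            throw new Error('Circuit breaker: too many retries');
        }

        const text = Math.random() < 0.5 ? '' : 'some output'; // stand-in for a backend call
        if (text === '') {
            return generateSketch(prompt, loopsRemaining - 1);  // retry with one fewer loop allowed
        }
        return text;
    }

    generateSketch('Hello').then(console.log, (err) => console.log(err.message));
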
valadaptive 3d7c901464 Remove looping backoff behavior 2023-12-10 18:35:46 -05:00
valadaptive 315d981804 Reject generation on circuit breaker error 2023-12-10 18:13:34 -05:00
valadaptive ae9445e500 Reject on data.error 2023-12-10 13:56:31 -05:00
valadaptive 5fd466b53f Fix generateQuietPrompt 2023-12-10 13:54:39 -05:00
Cohee 420d186823 Add reduced motion toggle 2023-12-10 20:02:25 +02:00
valadaptive 33f969f097 Have Generate() return a promise
Generate(), being async, now returns a promise-within-a-promise.
If called with `let p = await Generate(...)`, it'll wait for generation
to *start*. If you then `await p`, you'll wait for generation to
*finish*. This makes it much easier to tell exactly when generation's
done. generateGroupWrapper has been similarly modified.
2023-12-10 12:30:10 -05:00
valadaptive 03884b29ad Always call resolve in Generate()
This lets us get rid of the janky hack in group-chats to tell when a
message is done generating.
2023-12-10 12:26:30 -05:00
Cohee dbd52a7994
Merge pull request #1482 from valadaptive/sse-stream
Refactor server-sent events parsing
2023-12-10 18:32:19 +02:00
Cohee d5140142fb Merge branch 'staging' into tokenizers-cleanup 2023-12-10 15:51:15 +02:00
Cohee e0d0e1dd66
Merge pull request #1502 from valadaptive/status-cleanup
Clean up getStatus code
2023-12-10 15:49:37 +02:00
Cohee 6be1c6ff10
Merge pull request #1504 from valadaptive/store-compiled-templates
Cache compiled Handlebars templates
2023-12-10 15:32:52 +02:00
Cohee 5f1683f43a More input padding and stricter sanitization 2023-12-10 15:07:39 +02:00
valadaptive c48bc8a76e Cache compiled Handlebars templates
Since we already have a template cache, it makes sense to store the
templates in it *after* compiling them, to avoid the overhead of
re-compiling them every time we call renderTemplate.

I've also changed the cache from an object to a Map--it's more
semantically correct, and avoids weird edge cases like a template named
"hasOwnProperty" or some other function that exists as an object
property.
2023-12-09 21:29:36 -05:00
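
A minimal sketch of the cache described above, using the real Handlebars.compile but an invented template name and source:

    const Handlebars = require('handlebars');

    // A Map sidesteps prototype pitfalls (e.g. a template named "hasOwnProperty")
    // and stores the *compiled* template rather than the raw source.
    const templateCache = new Map();

    function renderTemplateSketch(name, source, context) {
        if (!templateCache.has(name)) {
            templateCache.set(name, Handlebars.compile(source)); // compile once
        }
        return templateCache.get(name)(context);                 // reuse on every later render
    }

    console.log(renderTemplateSketch('greeting', 'Hello, {{name}}!', { name: 'world' }));
    // -> "Hello, world!"
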
valadaptive 499d158c11 Remove last usage of getAPIServerUrl
Now that we're not using this in the tokenizers code, we can remove it.
2023-12-09 20:55:34 -05:00
valadaptive babb127aee Move NovelAI status functions over to the rest
Have all the get(...)Status and event handler registrations in the same
areas, rather than having the NovelAI ones far away. I want to
eventually move all the API-specific stuff into separate modules, but
this will make things cleaner for the time being.
2023-12-09 18:41:51 -05:00
valadaptive 0ea0399ed1 Separate getStatus into Kobold/textgen versions
This adds a bit of duplicate code for the time being, but ultimately
makes the code less confusing because we only need to include the bits
that are relevant to the specific API in each function. We can also
remove API parameters that are useless depending on the endpoint.
2023-12-09 18:39:19 -05:00
Cohee 04c83eae71 Use null coalescing operator 2023-12-09 16:07:55 +02:00
artisticMink 4692450975 Enable getPastCharacterChats to work with specific character ids 2023-12-09 14:36:15 +01:00
artisticMink ba3966e148 Only refresh character list after all deletions have been processed. 2023-12-09 14:31:18 +01:00
valadaptive 3cfc32c16d Refactor error handling
Remove the StreamingProcessor.hook method and use a try-catch block to
await the generator promise and set the generator, handling errors with
onError if it fails.
2023-12-08 18:40:17 -05:00
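
An illustrative shape of the change; the class and method names below are stand-ins, not the real StreamingProcessor:

    class StreamingProcessorSketch {
        onError(err) { console.log('generation failed:', err.message); }
        onFinish(text) { console.log('generation finished:', text); }
    }

    async function runStreaming(processor, generatorPromise) {
        try {
            // Await the generator promise and set the generator directly...
            processor.generator = await generatorPromise;
            processor.onFinish('...streamed text...');
        } catch (err) {
            // ...routing any failure through onError instead of a separate hook() wiring step.
            processor.onError(err);
        }
    }

    runStreaming(new StreamingProcessorSketch(), Promise.reject(new Error('backend unreachable')));
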
Cohee b0e7b73a32 Fix streaming processor error handler hooks 2023-12-08 02:01:08 +02:00
Cohee 990f958f4f #1484 Consolidate chat name template 2023-12-07 16:59:53 +02:00
Cohee 2417ae9d87 #1484 Display version on close chat 2023-12-07 16:57:47 +02:00
Cohee 9b7a0f3d35 Hide loader before displaying blocking error message 2023-12-07 12:29:12 +02:00
Cohee bd1f09c644 Add loader for chat renaming 2023-12-07 12:27:18 +02:00
Cohee 698890ae0f Fix /delchat slash command 2023-12-07 12:20:33 +02:00
valadaptive 6efe95f4f1 Rename chat API endpoints 2023-12-06 19:58:24 -05:00
Cohee 55d7bd6a87 Return last evaluation of random 2023-12-07 02:45:35 +02:00
Cohee f575e0d61d Add {{currentSwipeId}} / {{lastSwipeId}} macros 2023-12-07 02:35:24 +02:00
Cohee b58f14d1d2 Fix bulk menu not working 2023-12-06 00:55:42 +02:00
Cohee 7f703704c7 Display loader on loading past chats 2023-12-06 00:42:41 +02:00
valadaptive b689b8bd30 Rename character API endpoints
Precursor to moving the character API into its own module
2023-12-04 17:35:06 -05:00
Cohee ddd16c1469
Merge pull request #1452 from valadaptive/assets-router
Use Express router for assets + "files" endpoints
2023-12-04 21:29:52 +02:00
Cohee 1ac494d468 Don't attempt to send files on dry runs. 2023-12-04 21:28:36 +02:00
Cohee 3ad7d5d520 Negotiate formatting with VS Code autoformat 2023-12-04 20:59:11 +02:00
valadaptive 5f1bed1e70 Enable object-curly-spacing lint 2023-12-04 12:32:41 -05:00
valadaptive 3c59b5b7a5 Fix holdover textgenerationwebui_settings 2023-12-03 17:36:25 -05:00
valadaptive 9c33ddbafc Make textgen settings type checks more concise 2023-12-03 14:56:01 -05:00
valadaptive 047c897ead Remove is[API] functions
Just use an equality comparison. It's a bit longer, but only because
"textgenerationwebui_settings" is a long identifier.
2023-12-03 14:56:01 -05:00
valadaptive ba54e3dea0 Replace is_[api] params with api_type param
These were 5 mutually-exclusive booleans, which can be replaced with one
param that takes on 5 values, one for each API type.
2023-12-03 14:56:01 -05:00
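
A loose before/after for these two cleanups; the flag names and values below are illustrative rather than the project's actual identifiers:

    // Before: five mutually exclusive booleans, plus is[API]() helper functions to read them.
    const before = { is_ooba: true, is_mancer: false, is_aphrodite: false, is_tabby: false, is_koboldcpp: false };

    // After: one discriminating value, checked with a plain equality comparison.
    const after = { api_type: 'ooba' };

    if (after.api_type === 'ooba') {
        console.log('text-generation-webui code path');
    }
    console.log('collapsed', Object.keys(before).length, 'flags into', Object.keys(after));
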
Cohee 939e938ba2 Disallow multiswipe for quiet gens 2023-12-03 20:56:25 +02:00
Cohee 1786b0d340 #1403 Add Aphrodite multi-swipe 2023-12-03 20:40:09 +02:00
Cohee 676cc7731e #1436 Add unlock to response length 2023-12-03 18:30:21 +02:00
Cohee 91811f63b5 lint: Fix JSdocs 2023-12-03 14:23:20 +02:00
Cohee 4cb9cd128f Rename bookmarks to checkpoints 2023-12-03 03:11:14 +02:00
Cohee 2c949b672a Fix bulk edit and message context action styles 2023-12-03 02:17:02 +02:00
Cohee c9ab85d8c9 Add /forcesave command 2023-12-03 00:53:45 +02:00
LenAnderson 1eb32b247e add close stop for cancel button 2023-12-02 21:45:08 +00:00
LenAnderson c10e298777 fix old popup closing next popup 2023-12-02 21:22:58 +00:00
Cohee ff46a249d8 Add {{maxPrompt}} macro 2023-12-02 22:47:43 +02:00
Cohee 6e09e45651 Fix /trigger and /continue auto-execution 2023-12-02 22:34:46 +02:00
Cohee 64a3564892 lint: Comma dangle 2023-12-02 22:06:57 +02:00
Cohee 08fedf3a96 lint: Use 4 space indent 2023-12-02 21:56:16 +02:00
Cohee c63cd87cc0 lint: Require semicolons 2023-12-02 21:11:06 +02:00
Cohee 9faa1e34b0 Merge branch 'staging' into singlequote 2023-12-02 20:43:41 +02:00
Cohee a28c23d295 Wait for generation unlock before running continue or trigger 2023-12-02 20:12:36 +02:00
valadaptive a37f874e38 Require single quotes 2023-12-02 13:04:51 -05:00
valadaptive 518bb58d5a Enable no-unused-vars lint
This is the big one. Probably needs thorough review to make sure I
didn't accidentally remove any setInterval or fetch calls.
2023-12-02 12:11:19 -05:00
valadaptive 39bbef376f Enable no-undef lint
I'm not sure where run_edit is supposed to go or if any logic is
missing. I just made my best guess.
2023-12-02 12:11:19 -05:00
valadaptive 66f704bdda Refactor prompt itemization to not redeclare vars 2023-12-02 12:11:19 -05:00
valadaptive 45ad0683d9 Remove characterName silliness
If mes.name is name1, we set it to name1. Otherwise, we set it to
mes.name. It's always mes.name.
2023-12-02 12:10:31 -05:00
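
The redundant branch described above, reduced to a sketch:

    const name1 = 'Alice';          // the user persona name
    const mes = { name: 'Alice' };  // a chat message

    // Before: both branches of the conditional produce the same value.
    const characterNameBefore = mes.name === name1 ? name1 : mes.name;

    // After: just use mes.name.
    const characterNameAfter = mes.name;

    console.log(characterNameBefore === characterNameAfter); // always true
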
valadaptive b023312117 Enable no-useless-escape lint 2023-12-02 10:32:26 -05:00
valadaptive 0a27275772 Enable no-extra-semi lint 2023-12-02 10:32:26 -05:00
valadaptive 27e63a7a77 Enable no-case-declarations lint 2023-12-02 10:32:26 -05:00
Cohee 6b348f6128 Fix trailing stopping strings removal 2023-12-01 18:55:11 +02:00