Commit Graph

56 Commits

Author SHA1 Message Date
valadaptive 5569a63595 Remove legacy_streaming setting
This was a workaround for older versions of Slaude that implemented SSE
improperly. This was fixed in Slaude 7 months ago, so the workaround can
be removed.
2023-12-07 18:00:36 -05:00
Cohee bca43b11fa Enable match whole words by default 2023-12-06 16:53:48 +02:00
Cohee 45df576f1c Re-add default presets for content manager 2023-12-03 15:07:21 +02:00
Cohee 478330542d Default to non-listen for new installs 2023-12-03 00:54:28 +02:00
Cohee 04ef9fba54 Disable context stop strings on pull but enable for new installs 2023-12-02 02:19:32 +02:00
Cohee 726bb2e041 #1405 Add formality config for deepl 2023-12-01 14:12:56 +02:00
Cohee 1ce009b84e [FEATURE_REQUEST] config.yaml basicAuthUser Default Setting Recommendation #1415 2023-11-29 14:05:19 +02:00
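
For context, the basic-auth options this feature request concerns live in config.yaml. The sketch below is purely illustrative TypeScript: the option names (basicAuthMode, basicAuthUser) and the check itself are assumptions inferred from the commit subject, not code taken from the repository.

```typescript
// Illustrative only: option names and shape are assumed from the commit subject,
// not copied from the shipped config.yaml.
interface BasicAuthConfig {
    basicAuthMode: boolean;                                  // opt-in switch
    basicAuthUser: { username: string; password: string };   // default credentials
}

// Minimal HTTP Basic Auth check against the configured credentials.
function checkBasicAuth(authHeader: string | undefined, cfg: BasicAuthConfig): boolean {
    if (!cfg.basicAuthMode) return true;                     // auth disabled: allow everything
    if (!authHeader?.startsWith('Basic ')) return false;
    const decoded = Buffer.from(authHeader.slice('Basic '.length), 'base64').toString('utf8');
    const separator = decoded.indexOf(':');
    if (separator < 0) return false;
    const user = decoded.slice(0, separator);
    const pass = decoded.slice(separator + 1);
    return user === cfg.basicAuthUser.username && pass === cfg.basicAuthUser.password;
}
```
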
Cohee e541c2b186 #1412 Add randomized user ids to OpenAI 2023-11-29 00:11:10 +02:00
Cohee a7024a1d34 Migrate to config.yaml 2023-11-25 23:45:33 +02:00
Cohee b24d4f2340 Add opt-in CORS bypass endpoint 2023-11-25 21:56:57 +02:00
Cohee 52d9855916 Code lint 2023-11-21 02:00:50 +02:00
LenAnderson 46cc04c798 add default comfy workflow 2023-11-20 15:59:38 +00:00
Cohee 28bb5db04f Add new settings to default/settings.json. 2023-11-11 16:21:20 +02:00
Cohee 2c7b954a8d #1328 New API schema for ooba / mancer / aphrodite 2023-11-08 00:17:13 +02:00
Cohee 9396ca585c #1287 Add user.css file 2023-10-28 12:48:42 +03:00
Cohee 52ecad1cdf Rework single-line mode, add section for Context Formatting settings 2023-10-27 21:02:03 +03:00
Cohee 5dbe2ebf29 Add chat file backups 2023-10-24 22:09:55 +03:00
Cohee c4e6b565a5 Add SD prompt expansion 2023-10-20 15:03:26 +03:00
SDS 5848ec498b Assorted fixes and improvements (#1208)
* Kobold Presets fixed
* Help texts fixed
* Scale API for connectAPISlash
* Quick Reply checkboxes fixed
* New Instruct Mode Presets
* More date/time macros
* ChatML context template and instruct prompt format
* Mistral context template and instruct prompt format
* Removed use_default_badwordsids from kobold presets
* Renamed ChatML to Mistral-OpenOrca (ChatML)
* Renamed Mistral-OpenOrca (removed ChatML)
* Removed single_line from kobold presets
* Removed obsolete use_stop_sequence setting
* Ban EOS Token off by default
* Split AI Resp. Conf. into global and preset-specific settings
* Resolve conflicts
* Fix title
* Add translations for new help texts
* Fix i18n.json whitespace
* Make Mistral-OpenOrca system prompt more generic
* Renamed "Mistral-OpenOrca" to "ChatML" again
* More (UI) fixes and improvements
* Sendas hint fixed

Co-authored-by: Cohee <18619528+Cohee1207@users.noreply.github.com>
2023-10-07 19:25:36 +03:00
Cohee ae4a9a7b14 Remove legacy chat lazy load 2023-09-21 22:07:56 +03:00
Cohee 2c84c93f3d Add thumbnails quality config 2023-09-16 21:53:30 +03:00
Cohee d34f7d3e1a Replace multigen with auto-continue 2023-09-15 21:34:41 +03:00
Cohee 6ad786f348 Add alternative local vectors source.
5x speed boost!!
2023-09-14 23:40:13 +03:00
Cohee 0cc048cb64 Refactor transformers.js usage 2023-09-14 23:12:33 +03:00
Cohee dc4a6e862b Add local caption pipeline to UI plugin 2023-09-12 00:15:21 +03:00
Cohee c76c76410c Add ability to override local classification model 2023-09-11 01:25:22 +03:00
Cohee 322511caa9 Remove legacy Pygmalion formatting, part 2 2023-09-06 14:19:29 +03:00
Cohee 5ef79bd64d Remove NSFW avoidance prompt from Prompt Manager 2023-09-05 18:14:56 +03:00
kingbri fce57b41dd Config: Indent by 4 spaces
2 spaces is too small for a config file like this.

Signed-off-by: kingbri <bdashore3@proton.me>
2023-08-31 00:43:45 -04:00
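
For context, a hypothetical config fragment at the 4-space indent this commit settles on; the keys below are invented for illustration and only the nesting depth matters here.

```typescript
// Hypothetical config fragment -- keys are illustrative, not the real config file.
// At two spaces the deeper levels crowd together; four spaces keeps nesting readable.
const config = {
    server: {
        listen: false,
        port: 8000,
    },
    thumbnails: {
        enabled: true,
        quality: 95,
    },
};

export default config;
```
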
kingbri 4e553cf6ab Server: Allow appending of additional headers for local backends
This is a useful feature for those who want to utilize APIs with
proxy middleware for adding extra features or security. For cloud
API safety and abiding by rate limits, this feature only applies to
local backends such as ooba or kobold.

Signed-off-by: kingbri <bdashore3@proton.me>
2023-08-31 00:15:07 -04:00
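
A rough sketch of how per-backend header injection along these lines could work; the type name, its fields, and the helper below are assumptions for illustration, not the code this commit added.

```typescript
// Illustrative only: the option name and shape are assumptions based on the
// commit description, not the exact keys the commit introduced.
interface RequestOverride {
    hosts: string[];                      // local backend base URLs this entry applies to
    headers: Record<string, string>;      // extra headers to append to matching requests
}

function applyExtraHeaders(
    targetUrl: string,
    baseHeaders: Record<string, string>,
    overrides: RequestOverride[],
): Record<string, string> {
    const targetHost = new URL(targetUrl).host;
    const match = overrides.find((o) =>
        o.hosts.some((h) => new URL(h).host === targetHost),
    );
    // Only local backends (ooba, kobold, ...) are expected to be listed here,
    // so cloud APIs and their rate limits remain untouched.
    return match ? { ...baseHeaders, ...match.headers } : baseHeaders;
}
```

A caller would merge the returned map into the outgoing request, e.g. applyExtraHeaders('http://127.0.0.1:5000/api/v1/generate', { 'Content-Type': 'application/json' }, overrides).
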
Cohee 44f88c61ff Add simplified UI switch 2023-08-29 18:04:10 +03:00
Stefan Daniel Schwarz 1d7165c047 context template preset manager 2023-08-26 12:09:47 +02:00
Stefan Daniel Schwarz 3e0ce12b23 first_output_sequence and system_sequence_prefix 2023-08-25 22:34:08 +02:00
Cohee f5fd15ffd2 #976 Return "Continue on send". Allow continuing the first chat message. Add debug function for i18n. 2023-08-24 15:13:04 +03:00
Cohee e2507e8840 #976 Add "quick continue" button. Remove "send to continue" option. 2023-08-24 01:37:44 +03:00
Cohee 1ce848c1c3 Move before / after char WI to story strings 2023-08-24 00:26:47 +03:00
Stefan Daniel Schwarz dd7b21c63d renamed roleplay instruct preset 2023-08-23 22:23:51 +02:00
Stefan Daniel Schwarz 252be20c16 Return of the Roleplay Context 2023-08-22 23:40:47 +02:00
Cohee d6b06d5828 Merge pull request #994 from StefanDanielSchwarz/roleplay-preset-hotfix
Roleplay Preset Hotfix
2023-08-22 02:41:32 +03:00
Cohee 57b126bfbf Save chat completions settings to an object. Update numeric setting types 2023-08-22 00:35:46 +03:00
Cohee 75db476f76 Update default settings 2023-08-22 00:24:29 +03:00
Stefan Daniel Schwarz c6ce06b339 Put "### Input:" back into Roleplay system prompt 2023-08-21 23:24:24 +02:00
Stefan Daniel Schwarz 9df4c51b07 Instruct Mode improvements 2023-08-21 22:32:58 +02:00
Stefan Daniel Schwarz 7b8d10d25e Added Include Names and Forced Names to all instruct presets 2023-08-20 16:10:22 +02:00
Cohee 80092b3170 #790 Simplify local prompt formatting. Use handlebars to render story string. 2023-08-17 22:47:34 +03:00
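
Rendering the story string through Handlebars works roughly as below; the template text and field names are placeholders for illustration, not the defaults this commit ships.

```typescript
import Handlebars from 'handlebars';

// Placeholder template and fields, for illustration only.
const storyStringTemplate =
    '{{#if system}}{{system}}\n{{/if}}' +
    "{{char}}'s description: {{description}}\n" +
    'Scenario: {{scenario}}';

// noEscape keeps quotes and apostrophes from being HTML-escaped in the prompt.
const renderStoryString = Handlebars.compile(storyStringTemplate, { noEscape: true });

const storyString = renderStoryString({
    system: 'You are a helpful roleplay assistant.',
    char: 'Alice',
    description: 'A curious adventurer.',
    scenario: 'A quiet tavern at dusk.',
});
```
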
Cohee 81ed4d8431 Reorderable samplers for Novel 2023-08-16 20:34:47 +03:00
Mike Weldon feb523bd01 NovelAI Kayra 1.1 update
* Updated some presets and added Cosmic Cube
* Change defaults for NovelAI to select Clio on cold start
* Automatically change the preset to an appropriate default whenever you change the model
* Removed deprecated Top G sampler
2023-08-15 18:52:29 -07:00
Mike Weldon 41ec7e5600 Many NovelAI fixes from dev guidance
* Remove AI Module "Autoselect" and make the auto-instruct work for all modules. This is how NAI is supposed to work.
* Log the response from the API.
* Move the AI Module setting up to the top of the settings window since it isn't part of the preset.
* Refactor phrase_rep_pen to use the actual API strings.
* Clamp the maximum token length to 150 before we call the API (see the sketch after this entry).
* Clamp the minimum token length in the UX from 1 to 150.
* Fix bug where the preamble was not initialized on cold start.
* Get rid of extra newline before dinkus.
* Make always_force_name2 default true.
2023-08-14 19:35:21 -07:00
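
The token-length clamp described in the bullets above boils down to something like this; the [1, 150] bounds come from the commit notes, while the function itself is an illustrative sketch.

```typescript
// Minimal sketch of the clamp described above: the requested NovelAI response
// length is kept within [1, 150] tokens before the API call.
const MIN_RESPONSE_TOKENS = 1;
const MAX_RESPONSE_TOKENS = 150;

function clampResponseLength(requested: number): number {
    const value = Math.trunc(requested);
    return Math.min(Math.max(value, MIN_RESPONSE_TOKENS), MAX_RESPONSE_TOKENS);
}

// clampResponseLength(400) === 150; clampResponseLength(0) === 1
```
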
Stefan Daniel Schwarz b407fe2388 custom_stopping_strings_macro toggleable option 2023-08-04 16:53:49 +02:00
Cohee 84283bc2b4 Add "Best match" tokenizer option 2023-08-04 14:17:05 +03:00