- Strip the URL of trailing slashes; this caused issues when the API URL
ended with a slash (which is the default, I think).
- Notify the user if they didn't launch SD WebUI with the --api argument,
which is required to use the API.
- Don't try to write to a non-existent art directory; just send the b64 image
the way the diffusers function does (sketched below). In the future we probably
want to save these, but in a standardized manner, most likely tied to
story files somehow.
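A rough sketch of the combined behavior described in the list above; the function name, timeout, and error message are illustrative, not the exact KoboldAI code:

```python
import requests

def text2img_api_sketch(prompt, base_url="http://127.0.0.1:7860/"):
    # Normalize the URL so a trailing slash (the default) doesn't break the path.
    base_url = base_url.rstrip("/")
    try:
        response = requests.post(
            f"{base_url}/sdapi/v1/txt2img",
            json={"prompt": prompt},
            timeout=300,
        )
        response.raise_for_status()
    except requests.RequestException as exc:
        # A 404 (or refused connection) usually means the webui was started
        # without the --api argument, so tell the user instead of failing silently.
        raise RuntimeError(
            "Could not reach the txt2img endpoint. Did you launch "
            "stable-diffusion-webui with the --api argument?"
        ) from exc
    # Return the base64-encoded image directly, like the diffusers path does,
    # instead of writing it to an art directory that may not exist.
    return response.json()["images"][0]
```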
The method lua_compute_context no longer ignores its arguments.
This restores support for passing overrides for the submitted text and
the allowed WI entries and folders to lua_compute_context.
Add optional allowed_wi_entries and allowed_wi_folders parameters to
calc_ai_text. These allow restricting which WI is added to the
context. The default is no restriction.
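A minimal sketch of how such a restriction could work; the entry structure and field names here are assumptions, not the actual KoboldAI data model:

```python
def filter_world_info(entries, allowed_wi_entries=None, allowed_wi_folders=None):
    # When a set of allowed entry UIDs or folder UIDs is given, keep only
    # matching WI entries; passing None (the default) applies no restriction.
    kept = []
    for entry in entries:  # each entry is assumed to be a dict with "uid"/"folder"
        if allowed_wi_entries is not None and entry["uid"] not in allowed_wi_entries:
            continue
        if allowed_wi_folders is not None and entry.get("folder") not in allowed_wi_folders:
            continue
        kept.append(entry)
    return kept

world_info = [
    {"uid": 1, "folder": None, "content": "The moon is made of glass."},
    {"uid": 2, "folder": 7, "content": "Dragons fear salt."},
]
print(filter_world_info(world_info, allowed_wi_entries={2}))
# [{'uid': 2, 'folder': 7, 'content': 'Dragons fear salt.'}]
```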
Fix an issue in which streaming tokens were returned by calc_ai_text,
which caused lua_compute_context to receive streamed tokens
(only when token streaming was turned on).
Add docstrings to calc_ai_text and to_sentences.
Improve the regular expression used to split actions by sentence.
The new regular expression allows quotation marks after punctuation
to be attached to the sentence.
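For illustration only (the actual pattern in the code differs), a pattern of this shape keeps a closing quotation mark after the punctuation attached to the preceding sentence:

```python
import re

# Pre-compiled, simplified sentence splitter: text up to sentence-ending
# punctuation, an optional closing quote, and any trailing whitespace.
# (Text with no terminal punctuation is dropped by this simplified version.)
sentence_re = re.compile(r'[^.!?]*[.!?]+["\'”]?\s*')

def to_sentences(text):
    return sentence_re.findall(text)

print(to_sentences('"Stop!" she said. He ran.'))
# ['"Stop!" ', 'she said. ', 'He ran.']
```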
Improve the interaction between AN and WI depth and actions split
by sentence. Previously, the author's note was added after the sentence
that pushed the action count up to or greater than the depth.
This included "empty" actions from undoing actions, as well as
continuation actions. As a result, the author's note text
was often inserted at the end of the context, which often prevented
the model from generating coherent continuation text.
Consider sentence count in addition to action count for AN and WI depth.
Ignore completely empty actions when determining AN and WI depth.
Insert AN text before the sentence that crosses the AN depth, instead of
after the sentence. This means that at least one sentence from the
action always appears at the end of the context, which gives the AI
something to continue from.
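A rough illustration of the new insertion rule (not the actual KoboldAI code), assuming sentences are ordered oldest-first and the depth counts back from the end:

```python
def insert_authors_note(sentences, an_text, an_depth):
    # Walk from the most recent sentence backwards, skipping empty sentences so
    # they don't count toward the depth. Insert the note *before* the sentence
    # that crosses the depth, leaving that sentence (and everything newer)
    # after the note for the model to continue from.
    depth = 0
    for i in range(len(sentences) - 1, -1, -1):
        if not sentences[i].strip():
            continue
        depth += 1
        if depth >= an_depth:
            return sentences[:i] + [an_text] + sentences[i:]
    # Fewer non-empty sentences than the depth: put the note at the front.
    return [an_text] + sentences

print(insert_authors_note(
    ["The knight rode on. ", "", "Night fell. ", "The camp was quiet."],
    "[Author's note: a storm approaches.]",
    an_depth=2,
))
# ['The knight rode on. ', '', "[Author's note: a storm approaches.]",
#  'Night fell. ', 'The camp was quiet.']
```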
A few extremely minor optimizations from reducing redundant work.
Pre-compile the sentence splitting regular expression.
Don't join action sentences multiple times just to compute their
length.
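For example (illustrative only), an action's total sentence length can be computed without re-joining the sentences on every check:

```python
sentences = ["The knight rode on. ", "Night fell."]

# Sum the per-sentence lengths once instead of calling "".join() repeatedly.
total_length = sum(len(s) for s in sentences)
assert total_length == len("".join(sentences))
```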
Fix a few typos and remove some commented out code.
- Refactored new configuration vars to be used directly instead of making copies in the function.
- Rewrote the POST assembly for text2img_api so that it passes values correctly.
Signed-off-by: viningr
koboldai_vars.img_gen_art_guide
koboldai_vars.img_gen_negative_prompt (local SD API only until Horde supports passing negative prompts.)
koboldai_vars.img_gen_steps
koboldai_vars.img_gen_cfg_scale
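A sketch of how these variables might feed the txt2img payload; the stand-in koboldai_vars object and the way the art guide is combined with the prompt are assumptions, not the exact KoboldAI code:

```python
from types import SimpleNamespace

# Stand-in for koboldai_vars, just for illustration.
koboldai_vars = SimpleNamespace(
    img_gen_art_guide="digital painting, detailed",
    img_gen_negative_prompt="blurry, low quality",
    img_gen_steps=30,
    img_gen_cfg_scale=7.0,
)

def build_txt2img_payload(prompt):
    # Assumed mapping onto stable-diffusion-webui's txt2img fields.
    return {
        "prompt": f"{prompt}, {koboldai_vars.img_gen_art_guide}",
        "negative_prompt": koboldai_vars.img_gen_negative_prompt,  # local SD API only
        "steps": koboldai_vars.img_gen_steps,
        "cfg_scale": koboldai_vars.img_gen_cfg_scale,
    }

print(build_txt2img_payload("a castle at dusk"))
```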
Added a toggle for the image_generating flag to allow resetting it in case of an interruption without needing to restart KoboldAI.
- Found an error in the payload format for the SD API POST request and resolved it by consolidating the prompt and settings values into a single object.
- Refactored new variables to be accessed directly in functions instead of copying them to local vars.
- New variables are now working correctly with SD API requests. Still needs testing with Horde and local.
Signed-off-by: viningr
- Using the Local SD-WebUI API option will send a POST request to stable-diffusion-webui running with the --api switch at http://127.0.0.1:7860/sdapi/v1/txt2img
Signed-off-by: Robert Vining
- Sends the prompt to the locally hosted stable-diffusion-webui API and retrieves
the image result. Info from the SD API is embedded in the PNG file as it is saved.
Image files are named with the current date at generation and saved
in /stories/art/ (see the sketch below).
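Roughly how that save step can look with PIL; the metadata key and filename format are assumptions, not the exact implementation:

```python
import base64
import io
from datetime import datetime

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_generated_image(b64_image, info_text, art_dir="stories/art"):
    # Decode the base64 image returned by the SD API, embed its info string
    # as a PNG text chunk, and save under a date-based filename.
    # Note: art_dir must already exist, which is why a later change drops
    # this in favor of returning the b64 image directly.
    image = Image.open(io.BytesIO(base64.b64decode(b64_image)))
    metadata = PngInfo()
    metadata.add_text("parameters", info_text)
    filename = datetime.now().strftime("%Y-%m-%d_%H-%M-%S") + ".png"
    path = f"{art_dir}/{filename}"
    image.save(path, pnginfo=metadata)
    return path
```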
Modified UI_2_generate_image() to call text2img_api() instead of text2img_horde
Signed-off-by: Robert Vining <offers@robertrvining.com>
Add the PhraseBiasLogitsProcessor to the logits processor list
Fix an issue with bias phrases that contain the start token
multiple times. Because we were searching backwards for the first
occurrence of the start token, we would restart the phrase when
we encountered a subsequent instance of the token. We now search
forwards from the maximum possible overlap to find the maximum overlap.
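One way to express the fixed overlap search (a sketch, not the project's exact implementation):

```python
def _find_phrase_overlap(sequence, phrase):
    # Hypothetical helper: return the length of the longest prefix of `phrase`
    # that matches the tail of `sequence`. Trying the largest possible overlap
    # first means a repeated start token inside the phrase can't restart the match.
    for overlap in range(min(len(sequence), len(phrase)), 0, -1):
        if sequence[-overlap:] == phrase[:overlap]:
            return overlap
    return 0

# The bias phrase starts with token 5 and contains it again later; matching the
# longest overlap first correctly continues the phrase instead of restarting it.
print(_find_phrase_overlap([1, 2, 5, 9, 5], [5, 9, 5, 7]))  # 3
```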
Fix an issue with the phrase bias token index not accounting for
non-matching words. Previously, once we found the start token,
we would apply the bias for each token in the bias phrase even if
subsequent tokens in the context didn't match the bias phrase.
Do not apply phrase completion if the bias score is negative.
If multiple phrases apply a score modifier to the same token, add
the scores rather than replacing the modifier with the last occurrence.
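A tiny sketch of the summing behavior; the (token_id, score) match list is a hypothetical intermediate representation, not the processor's real internals:

```python
from collections import defaultdict

def combine_phrase_biases(matches):
    # Sum the scores that different bias phrases assign to the same token
    # instead of letting the last phrase overwrite the earlier ones.
    combined = defaultdict(float)
    for token_id, score in matches:
        combined[token_id] += score
    return dict(combined)

print(combine_phrase_biases([(42, 3.0), (42, -1.5), (7, 2.0)]))
# {42: 1.5, 7: 2.0}
```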
Increase the maximum range of the bias slider. For extremely
repetitive text on large models, -12 is insufficient to break the
model out of its loop. -50 to 50 is potentially excessive, but it's
safer to give the user some additional control over the bias score.