Temperature
- Value from 0.1 to 2.0.
- A lower value makes the answers more logical, but less creative.
- A higher value makes the answers more creative, but less logical.
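A minimal sketch of how a temperature in this range is typically applied when sampling the next token; the backend's actual implementation may differ, and the scores below are made up for illustration:

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then sample from the softmax distribution.
    A lower temperature sharpens the distribution (more predictable output),
    a higher temperature flattens it (more varied output)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(logits)), weights=probs, k=1)[0]

# Made-up next-token scores: with temperature 0.1 index 0 is picked almost
# always; with temperature 2.0 the other tokens are chosen far more often.
logits = [4.0, 2.5, 1.0]
print(sample_with_temperature(logits, 0.1))
print(sample_with_temperature(logits, 2.0))
```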
For most KoboldAI models, the easiest way is to write the description in free form, and it is desirable to mention the character's name in each sentence. To import Character.AI chats, use this tool: https://github.com/0x000011b/characterai-dumper.
A supplemental text comment for your convenience, which is not utilized by the AI.
A brief description of the personality. It is added to the chat at a depth of 8-15 messages, so it has a significant impact on the character.
The range of influence of Repetition penalty, in tokens.
The maximum number of tokens that the AI will generate to respond. One word is approximately 3-4 tokens.
How much the AI will remember. Context size also affects the speed of generation: the larger the value, the longer generation takes.
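For orientation, these generation settings (response length, context size, repetition penalty and its range, plus the sampling values described above and below) end up in the request the frontend sends to the backend. A rough sketch of such a request for a KoboldAI-style API follows; the endpoint and field names are assumptions and may vary between backend versions:

```python
import requests  # assumed dependency

# Hypothetical request to a locally running KoboldAI-style backend.
# The endpoint and field names are assumptions for illustration and may
# differ in your backend version.
payload = {
    "prompt": "Chloe: Welcome, traveler!\nYou: ",
    "max_context_length": 2048,  # Context size: how much the AI "remembers"
    "max_length": 200,           # Amount generation: max tokens per response
    "rep_pen": 1.1,              # Repetition penalty
    "rep_pen_range": 1024,       # Repetition penalty range, in tokens
    "temperature": 0.7,
    "top_p": 0.9,
}
response = requests.post("http://127.0.0.1:5000/api/v1/generate", json=payload)
print(response.json())
```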
This setting controls how much of the generated text is based on the most likely options.
Only the words with the highest probabilities, together summing up to P, are considered. A word is then chosen at random, with a higher chance of selecting words with higher probabilities.
Set the value to 1 to disable its effect.
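A small sketch of that selection rule (often called nucleus sampling), assuming the next-token probabilities have already been computed; the numbers are made up for illustration:

```python
import random

def top_p_sample(probs, p):
    """Keep only the most likely tokens whose probabilities together sum to at
    least p, then pick among them at random, weighted by probability.
    With p = 1 every token stays eligible, i.e. the filter is disabled."""
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, running = [], 0.0
    for i in order:
        kept.append(i)
        running += probs[i]
        if running >= p:
            break
    weights = [probs[i] for i in kept]
    return random.choices(kept, weights=weights, k=1)[0]

# Made-up next-token probabilities: with p=0.9 the least likely token can
# never be chosen; with p=1.0 it still has a small chance.
probs = [0.5, 0.3, 0.15, 0.05]
print(top_p_sample(probs, 0.9))
print(top_p_sample(probs, 1.0))
```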
If your subscription tier is Paper, Tablet or Scroll, use only the Euterpe model; otherwise you will not get an answer from the NovelAI API.
Character Anchor - affects the character played by the AI by motivating it to write longer messages.
The second anchor is only turned on after 8-12 messages, because when the chat still has only a few messages, the first anchor creates enough effect on its own.
Sometimes an AI model may not perceive anchors correctly, or the AI model already generates sufficiently long messages.
The settings provided in this section allow for more control over the prompt building strategy.
Most specifics of the prompt building depend on whether a Pygmalion model is selected or special formatting is force-enabled.
The core differences between the formatting schemas are listed below.
Appends character's name to the prompt to force the model to complete the message as the character:
TL;DR: If you're working with an AI model with a 2048 context token limit, your 1000 token character definition is cutting the AI's 'memory' in half. To put this in perspective, a decent response from a good AI can easily be around 200-300 tokens. In this case, the AI would only be able to 'remember' about 3 exchanges worth of chat history. When we see your character has over 1000 tokens in its definitions, we highlight it for you because this can lower the AI's capabilities to provide an enjoyable conversation. Don't worry - it won't break anything. At worst, if the Character's permanent tokens are too large, it simply means there will be less room left in the context for other things (see below). The only negative side effect this can have is the AI will have less 'memory', as it will have less chat history available to process. This is because every AI model has a limit to the amount of context it can process at one time.
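The arithmetic behind that TL;DR, shown as a small sketch; the response and user-message sizes are illustrative assumptions, and real token counts depend on the tokenizer:

```python
# Illustrative context budget, using the numbers from the paragraph above.
context_limit = 2048       # model's maximum context, in tokens
character_tokens = 1000    # permanent tokens taken by the character definition
user_turn = 50             # assumed size of a typical user message
bot_turn = 250             # a decent AI reply is roughly 200-300 tokens

history_budget = context_limit - character_tokens  # tokens left for chat history
exchanges_remembered = history_budget // (user_turn + bot_turn)
print(history_budget, exchanges_remembered)  # -> 1048, about 3 exchanges
```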
*In the Pygmalion model, the Personality summary is used as the "Personality:" field.
The entire description should be written in one line, without line breaks.
For example:
Chloe is a female elf. Chloe wears a black-and-white maid dress with a green collar and red glasses. Chloe has medium-length black hair. Chloe's personality is...
Chat import
Import chats into TavernAI
Constant
Personality summary
*I noticed you came inside, I walked up and stood right in front of you* Welcome. I'm glad to see you here.
*I said with toothy smug sunny smile looking you straight in the eye* What brings you...
Repetition penalty range
Amount generation
Context size
Important: The Context Size setting in the TavernAI GUI overrides the setting in the KoboldAI GUI.
Advanced Settings
Top P Sampling
NovelAI Models
Character Anchor - affects the character played by the AI by motivating it to write longer messages.
Looks like:
[Elaborate speaker]
Sometimes an AI model may not perceive anchors correctly, or the AI model already generates sufficiently long messages.
For these cases, you can disable the anchors by unchecking their respective boxes.
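A purely hypothetical sketch of how anchors like these could be appended to the prompt; the function, the message-count threshold, and the second anchor's text are illustrative placeholders, not TavernAI's actual implementation:

```python
def apply_anchors(prompt, message_count, character_anchor=True, second_anchor=True):
    """Hypothetical helper: append anchor strings to the outgoing prompt."""
    if character_anchor:
        prompt += "\n[Elaborate speaker]"
    # The second anchor only kicks in once the chat has enough messages
    # (the notes say 8-12; 8 is used here as an illustrative threshold).
    if second_anchor and message_count >= 8:
        prompt += "\n[Second anchor text]"  # placeholder text
    return prompt

print(apply_anchors("...chat history...", message_count=10))
```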
Advanced Formatting
Custom Chat Separator
For Pygmalion formatting
Disable description formatting
NAME's Persona:
won't be prepended to the content of your character's Description box.
Disable scenario formatting
Scenario:
won't be prepended to the content of your character's Scenario box.
Disable personality formatting
Personality:
won't be prepended to the content of your character's Personality box.
Disable example chats formatting
Disable chat start formatting
<START>
is not added between the character card and the chat log.
(If custom separator is not set)
Always add character's name to prompt
Disable scenario formatting
Circumstances and context of the dialogue:
won't be prepended to the content of your character's Scenario box.
Disable personality formatting
NAME's personality:
won't be prepended to the content of your character's Personality box.
Disable example chats formatting
Disable chat start formatting
Then the roleplay chat between User and Character begins
is not added between the character card and the chat log.
(If custom separator is not set)
Always add character's name to prompt
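To make the effect of these checkboxes concrete, here is a rough sketch of how a prompt could be assembled for the default (non-Pygmalion) schema; it is not TavernAI's actual code, and the field names are illustrative:

```python
def build_prompt(card, chat_log, disable_scenario=False, disable_personality=False,
                 disable_chat_start=False, custom_separator=None,
                 always_add_name=False):
    """Illustrative prompt assembly for the default formatting schema."""
    parts = []
    if card.get("personality") and not disable_personality:
        parts.append(f"{card['name']}'s personality: {card['personality']}")
    if card.get("scenario") and not disable_scenario:
        parts.append(f"Circumstances and context of the dialogue: {card['scenario']}")
    # Separator between the character card and the chat log.
    if custom_separator:
        parts.append(custom_separator)
    elif not disable_chat_start:
        parts.append("Then the roleplay chat between User and Character begins")
    parts.extend(chat_log)
    if always_add_name:
        parts.append(f"{card['name']}:")  # force the model to answer as the character
    return "\n".join(parts)

card = {"name": "Chloe", "personality": "a cheerful maid", "scenario": "a quiet tavern"}
print(build_prompt(card, ["You: Hello", "Chloe: Welcome!"], always_add_name=True))
```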
Character Tokens
What happens if my Character has too many tokens?
'Context'?
A list of tags that are replaced when sending to generate:
{{user}} and <USER> are replaced by the User's Name
{{char}} and <BOT> are replaced by the Character's Name
*For Pygmalion, "{{user}}:" and "<USER>:" will be replaced by "You:"
<START>
{{user}}: Hello
{{char}}: *excitedly* Hello there, dear! Are you new to Axel? Don't worry, I, Aqua the goddess of water, am here to help you! Do you need any assistance? And may I say, I look simply radiant today! *strikes a pose and looks at you with puppy eyes*
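A minimal sketch of that substitution step; the tag names come from the list above, while the function itself is illustrative rather than TavernAI's actual implementation:

```python
def replace_tags(text, user_name, char_name, pygmalion=False):
    """Replace the placeholder tags before the text is sent to generate."""
    if pygmalion:
        # For Pygmalion, "{{user}}:" and "<USER>:" become "You:" (see the note
        # above), so handle the colon forms before the plain tags.
        text = text.replace("{{user}}:", "You:").replace("<USER>:", "You:")
    for tag, value in [("{{user}}", user_name), ("<USER>", user_name),
                       ("{{char}}", char_name), ("<BOT>", char_name)]:
        text = text.replace(tag, value)
    return text

print(replace_tags("{{user}}: Hello\n{{char}}: Hi!", "Alice", "Aqua"))
# -> Alice: Hello
#    Aqua: Hi!
```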
<START>
won't be added at the beginning of each example message block.
(If custom separator is not set)
<START>
won't be added between the character card and the chat log.
(If custom separator is not set)
This is how Character should talk
won't be added at the beginning of each example message block.
(If custom separator is not set)
Then the roleplay chat between User and Character begins
won't be added between the character card and the chat log.
(If custom separator is not set)
To use streaming with Text Generation Web UI, a Gradio function index needs to be provided. It cannot be determined programmatically and must be typed in manually.
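For illustration, this is roughly how a client could call a Gradio endpoint by function index once you have typed the number in; the URL and index are placeholders, and the exact request format depends on the Gradio version bundled with Text Generation Web UI:

```python
import requests  # assumed dependency

# Placeholder values: the server URL and the manually determined function index.
GRADIO_URL = "http://127.0.0.1:7860"
GRADIO_FN_INDEX = 12  # hypothetical; find it by inspecting the web UI's requests

# Older Gradio servers expose /api/predict/ and accept an explicit fn_index;
# the payload layout here is an assumption and may differ per version.
payload = {"fn_index": GRADIO_FN_INDEX, "data": ["Once upon a time"]}
response = requests.post(f"{GRADIO_URL}/api/predict/", json=payload)
print(response.json())
```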