Click slider numbers to input manually.

Kobold Presets ?

NovelAI Presets ?

Chat Completion Presets

Text Gen WebUI (ooba) presets


Response Length (tokens)
select
Context Size (tokens)
select
Only select models support context sizes greater than 2048 tokens. Increase only if you know what you're doing.

Temperature
select
Rep. Pen.
select
Rep. Pen. Range
select

Display the response bit by bit as it is generated.
When this is off, responses will be displayed all at once when they are complete.
Temperature
select
Rep. Pen.
select
Rep. Pen. Range
select
Rep. Pen. Slope
select
Rep. Pen. Freq.
select
Rep. Pen. Presence
select
Tail Free Sampling
select
Temperature
select
Rep. Pen.
select
Rep. Pen. Range
select
Encoder Rep. Pen.
select
No Repeat Ngram Size
select
Min Length
select
OpenAI / Claude Reverse Proxy
Alternative server URL (leave empty to use the default value).
Remove your real OAI API Key from the API panel BEFORE typing anything into this box.
We cannot provide support for problems encountered while using an unofficial OpenAI proxy.
Proxy Password
Will be used as a password for the proxy instead of an API key.
Enable this if streaming doesn't work with your proxy.
Unrestricted maximum value for the context size slider. Enable only if you know what you're doing.
Context Size (tokens)
select
Max Response Length (tokens)
Max prompt cost: Unknown

Temperature
select
Frequency Penalty
select
Presence Penalty
select
Top K
select
Top P
select

Top P
select
Top A
select
Top K
select
Typical Sampling
select
Tail Free Sampling
select
Rep. Pen. Slope
select
Samplers Order
Samplers will be applied in a top-down order. Use with caution.
  • Top K (0)
  • Top A (1)
  • Top P (2)
  • Tail Free Sampling (3)
  • Typical Sampling (4)
  • Temperature (5)
  • Repetition Penalty (6)
Preamble
Use style tags to modify the writing style of the output
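For example (illustrative, not a shipped default), a preamble of [ Style: chat, complex, sensory ] nudges NovelAI models toward a descriptive, chat-like style.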
Top P
select
Top A
select
Top K
select
Top G
select
Mirostat Tau
select
Mirostat LR
select
Typical P
select
CFG Scale
select
Phrase Repetition Penalty
select
Min Length
select
Top K
select
Top P
select
Typical P
select
Top A
select
Tail Free Sampling
select
Epsilon Cutoff
select
Eta Cutoff
select
Add the bos_token to the beginning of prompts. Disabling this can make the replies more creative.
Ban the eos_token. This forces the model to never end the generation prematurely.

Beam search

Number of Beams
select
Length Penalty
select

Contrastive search

Penalty Alpha
select

Mirostat (mode=1 is only for llama.cpp)

Mirostat Mode
select
Mirostat Tau
select
Mirostat Eta
select

Seed
Display the response bit by bit as it is generated.
When this is off, responses will be displayed all at once when they are complete.
Wrap entire user message in quotes before sending.
Leave off if you use quotes manually for speech.
Replace empty message
Send this text instead of nothing when the text box is empty.

Main prompt
Overridden by the Character Definitions.
The main prompt used to set the model's behavior.
NSFW prompt
NSFW avoidance prompt
Prompt that is used when the NSFW toggle is OFF
Assistant Prefill
Advanced prompt bits
Impersonation prompt
Prompt that is used for the Impersonation function.
World Info format template
Wraps activated World Info entries before inserting into the prompt. Use {0} to mark a place where the content is inserted.
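For example (illustrative): [Details of the fictional world the RP is set in: {0}], where {0} is replaced by the text of the activated entries.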
Logit Bias
Helps to ban or reinforce the usage of certain tokens. Confirm token parsing with Tiktokenizer.
View / Edit bias preset
Add bias entry
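For example, with OpenAI models a bias of -100 on a token effectively bans it from appearing, a value near +100 strongly favors it, and small values such as -5 to 5 gently adjust its likelihood.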

API

API key

Get it here: Register (View my Kudos)
Enter 0000000000 to use anonymous mode.
For privacy reasons, your API key will be hidden after you reload the page.

Models

Not connected

API url

Example: http://127.0.0.1:5000/api
Not connected
View hidden API keys

Advanced Formatting ?

AutoFormat Overrides

Custom Chat Separator

Non-markdown strings

Instruct mode ?

Overridden by the Character Definitions.

Context Formatting

Tokenizer ?

Token Padding ?

Context Templates

Start Reply With

Custom Stopping Strings (KoboldAI/TextGen/NovelAI)
JSON-serialized array of strings, for example:
["\n", "\nUser:", "\nChar:"]

Pygmalion Formatting

Multigen ?

Worlds/Lorebooks ?

Active World(s) for all chats
Scan Depth
depth
Context %
budget
Budget Cap
0
(0 = disabled)
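For example, with a 4096-token context and Context % set to 25, World Info entries can occupy roughly 1024 tokens; a non-zero Budget Cap further limits the entries to at most that many tokens regardless of the percentage.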

World/Lore Editor ?

 Editing:

User Settings

UI Colors

Main Text
Italics Text
Quote Text
Text Shadow
UI Background
User Message
AI Message
Font Scale
select
Blur Strength
select
Text Shadow Width
select

UI Theme Preset

MovingUI Preset

UI Customization

Chat Width (PC)
25% 50% 75%
Lazy Chat Loading
# of messages (0 = disabled)
0 50 100
Avatar Style:
Message Style:

Send on Enter

Power User Options

Auto-swipe
Minimum generated message length
Blacklisted words
Blacklisted word count to swipe

Extensions API: SillyTavern-extras

Not Connected

Persona Management

How do I use this?

Name

Persona Description

Your Persona


- Advanced Definitions

Prompt Overrides (For OpenAI/Claude/Scale APIs, Window/OpenRouter, and Instruct mode)

Insert {{original}} into either box to include the respective default prompt from system settings.
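For example, entering {{original}} Always write in the present tense. into the Main Prompt box keeps the default Main Prompt from system settings and appends the extra instruction after it.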

Main Prompt

Jailbreak


Creator's Metadata (Not sent with the AI Prompt)

Everything here is optional

Created by

Character Version

Creator's Notes

Tags to Embed


Personality summary ?

Scenario ?

Talkativeness

How often the character speaks in group chats!
Shy Normal Chatty

Examples of dialogue

Important to set the character's writing style. ?

Chat History ?

Context Template Editor

Substitution Parameters
Click to copy.
  • {{char}} - current character name
  • {{user}} - current user name
  • {{description}} - character description
  • {{scenario}} - character or group scenario
  • {{personality}} - character personality
  • {{mesExamples}} - message examples
  • {{wiBeforeCharacter}} - activated World Info entries (Before Char)
  • {{wiAfterCharacter}} - activated World Info entries (After Char)
  • {{instructSystemPrompt}} - system prompt (Instruct mode only)
Story String Template
Lines containing parameters resolving to an empty value will be removed from the template string.
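For example, an illustrative template (not necessarily the shipped default):
{{instructSystemPrompt}}
{{wiBeforeCharacter}}
{{description}}
{{personality}}
Scenario: {{scenario}}
{{wiAfterCharacter}}
{{mesExamples}}
If {{scenario}} resolves to an empty string, the entire "Scenario:" line is dropped from the assembled prompt.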
Chat Injections

Chat Scenario Override

Unique to this chat. The following scenario text will be used instead of the value set in the character card. In a group chat, all group members will use this scenario text instead of what is specified in their character cards. Bookmarks inherit the scenario override from their parent, and can be changed individually after that.

Select a World Info file for this character:

Primary Lorebook

A selected World Info will be bound to this character as its own Lorebook. When generating an AI reply, it will be combined with the entries from the global World Info selector. Exporting the character will also export the selected Lorebook file embedded in the JSON data.

Additional Lorebooks

Associate one or more auxiliary Lorebooks with this character.
NOTE: These choices are optional and won't be preserved on character export!
 entries
Probability:


Chat History ?
${characterName}

Welcome to SillyTavern!

Before you get started, you must select a user name. This can be changed at any time via the icon.

User Name:

Avatar

Alternate Greetings

These will be displayed as swipes on the first message when starting a new chat. Group members can select one of them to initiate the conversation.

Click the button to get started!
Alternate Greeting #
CHAR is typing
Author's Note
Unique to this chat.
Bookmarks inherit the Note from their parent, and can be changed individually after that.
Tokens: 0
(0 = Disable, 1 = Always)
User inputs until next insertion: (disabled)
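For example, a value of 4 inserts the Note once every 4 user inputs.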

Character Author's Note
Will be automatically added as the author's note for this character. Will be used in groups, but can't be modified when a group chat is open.
Tokens: 0

Default Author's Note
Will be automatically added as the Author's Note for all new chats.
Tokens: 0
(0 = Disable, 1 = Always)
PNG
JSON
WEBP
User Avatar