Click a slider's number to input a value manually.

Kobold Presets ?

NovelAI Presets ?

Chat Completion Presets

Text Gen WebUI (ooba/Mancer) presets


AI Module
Changes the style of the generated text.
Response Length (tokens)
select
Context Size (tokens)
select
Only select models support context sizes greater than 4096 tokens. Increase only if you know what you're doing.

Display the response bit by bit as it is generated.
When this is off, responses will be displayed all at once when they are complete.
Temperature
select
Repetition Penalty
select
Repetition Penalty Range
select
Repetition Penalty Slope
select
Display the response bit by bit as it is generated.
When this is off, responses will be displayed all at once when they are complete.
Temperature
select
Repetition Penalty
select
Repetition Penalty Range
select
Repetition Penalty Slope
select
Repetition Penalty Frequency
select
Repetition Penalty Presence
select
Tail Free Sampling
select
Phrase Repetition Penalty
Display the response bit by bit as it is generated.
When this is off, responses will be displayed all at once when they are complete.
Temperature
select
Repetition Penalty
select
Repetition Penalty Range
select
Encoder Repetition Penalty
select
No Repeat Ngram Size
select
Min Length
select
Unrestricted maximum value for the context size slider. Enable only if you know what you're doing.
Context Size (tokens)
select
Max Response Length (tokens)
Max prompt cost: Unknown

Display the response bit by bit as it is generated.
When this is off, responses will be displayed all at once when they are complete.
Temperature
select
Frequency Penalty
select
Presence Penalty
select
Count Penalty
select
Top K
select
Top P
select
Quick Prompts Edit
Main
NSFW
Jailbreak
Assistant Prefill
Utility Prompts
Impersonation prompt
Prompt that is used for the Impersonation function.
World Info format template
Wraps activated World Info entries before inserting them into the prompt. Use {0} to mark the place where the content is inserted.
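For example (illustrative), a format template of
[Details of the fictional world the RP is set in: {0}]
would wrap the activated entries in that bracketed note before they are inserted.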
New Chat
Set at the beginning of the chat history to indicate that a new chat is about to start.
New Group Chat
Set at the beginning of the chat history to indicate that a new group chat is about to start.
New Example Chat
Set at the beginning of Dialogue examples to indicate that a new example chat is about to start.
Continue nudge
Set at the end of the chat history when the continue button is pressed.
Replace empty message
Send this text instead of nothing when the text box is empty.

OpenAI / Claude Reverse Proxy
Alternative server URL (leave empty to use the default value).
Remove your real OAI API Key from the API panel BEFORE typing anything into this box.
We cannot provide support for problems encountered while using an unofficial OpenAI proxy.
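For example, an alternative URL might look like https://my-proxy.example.com/v1 (illustrative; use the base URL given by your proxy provider).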
Proxy Password
Will be used as a password for the proxy instead of the API key.
Enable this if streaming doesn't work with your proxy.

Top P
select
Top A
select
Top K
select
Typical Sampling
select
Tail Free Sampling
select
Generate only one line per request (KoboldAI only, ignored by KoboldCpp).
Ban the End-of-Sequence (EOS) token (with KoboldCpp, and possibly also other tokens with KoboldAI).
Good for story writing, but should not be used for chat or instruct mode.

Mirostat

Mirostat Mode
A value of 0 disables Mirostat entirely.
1 is for Mirostat 1.0, and 2 is for Mirostat 2.0.
select
Mirostat Tau
Controls variability of Mirostat outputs
select
Mirostat Eta
Controls learning rate of Mirostat
select

Grammar

Type in the desired custom grammar (GBNF).
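For example, a minimal GBNF grammar (illustrative) that restricts the output to a yes/no answer:
root ::= answer
answer ::= "yes" | "no"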

Samplers Order
Samplers will be applied in a top-down order. Use with caution.
Top K 0
Top A 1
Top P 2
Tail Free Sampling 3
Typical P Sampling 4
Temperature 5
Repetition Penalty 6
Preamble
Use style tags to modify the writing style of the output.
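For example (illustrative): [ Style: chat, complex, sensory ]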
Banned Tokens
Sequences you don't want to appear in the output. One per line. Text or [token ids].
Logit Bias
Add
Helps to ban or reinforce the usage of certain tokens.

CFG Scale
select
Negative Prompt
Used if the CFG Scale is not set globally, per chat, or per character.
Top P
select
Top A
select
Top K
select
Mirostat Tau
select
Mirostat LR
select
Typical P
select
Min Length
select
Samplers Order
Samplers will be applied in a top-down order. Use with caution.
Temperature 0
Top K Sampling 1
Nucleus Sampling 2
Tail Free Sampling 3
Top A Sampling 4
Typical Sampling 5
CFG 6
Mirostat 8
Top K
select
Top P
select
Typical P
select
Top A
select
Tail Free Sampling
select
Epsilon Cutoff
select
Eta Cutoff
select
Add the bos_token to the beginning of prompts. Disabling this can make the replies more creative.
Ban the eos_token. This forces the model to never end the generation prematurely.

Banned Tokens (LLaMA models)

Sequences you don't want to appear in the output. One per line. Text or [token ids].
Most tokens have a leading space.
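For example, one entry per line (values are illustrative):
 unfortunately
[420, 1337]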

CFG Scale

select
Negative Prompt
Used if the CFG Scale is not set globally, per chat, or per character.

Beam search

Number of Beams
select
Length Penalty
select

Contrastive search

Penalty Alpha
select

Mirostat (mode=1 is only for llama.cpp)

Mirostat Mode
select
Mirostat Tau
select
Mirostat Eta
select

Grammar

Type in the desired custom grammar (GBNF).

Seed
Wrap the entire user message in quotes before sending.
Leave off if you use quotes manually for speech.
Send names in the ChatML objects. Helps the model to associate messages with characters.
Use the appropriate tokenizer for Jurassic models, which is more efficient than GPT's.
Exclude the assistant suffix from being added to the end of the prompt (requires a jailbreak with 'Assistant:' in it).
Logit Bias
Helps to ban or reinforce the usage of certain tokens. Confirm token parsing with Tiktokenizer.
View / Edit bias preset
Add bias entry
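For example (illustrative), an entry for the text "Sure" with a bias of -100 effectively bans it, while a small positive value such as +5 makes it more likely to appear. Accepted bias values typically range from -100 to 100.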

API

Context: --, Response: --

API key

Get it here: Register (View my Kudos)
Enter 0000000000 to use anonymous mode.
For privacy reasons, your API key will be hidden after you reload the page.

Models

Not connected...

API url

Example: http://127.0.0.1:5000/api
Not connected...
View hidden API keys

Advanced Formatting ?

Context Template

Instruct Mode ?

Overridden by the Character Definitions.
Instruct Mode Sequences

Tokenizer ?

Token Padding ?

Start Reply With

Non-markdown strings

Custom Stopping Strings ?
JSON serialized array of strings, for example:
["\n", "\nUser:", "\nChar:"]

Auto-Continue

Worlds/Lorebooks ?

Active World(s) for all chats
Scan Depth
depth
Context %
budget
Budget Cap
0
(0 = disabled)

or

User Settings

Language:

Theme Preset

Theme Settings

Avatars:
Chat Style:
Main Text
Italics Text
Quote Text
Text Shadow
Chat Background
UI Background
UI Border
User Message
AI Message
Chat Width (PC)
select
Font Scale
select
Blur Strength
select
Text Shadow Width
select

Theme Toggles

Miscellaneous

MUI Preset:

Custom CSS

Character Handling

Chat/Message Handling

Enter to Send:
Auto-swipe
Minimum generated message length
Blacklisted words
Blacklisted word count to swipe

Extensions API: SillyTavern-extras

Not connected...

Persona Management

How do I use this?

Name

Persona Description

Tokens: 0

Your Persona

+

text

- Advanced Definitions

Prompt Overrides (For OpenAI/Claude/Scale APIs, Window/OpenRouter, and Instruct Mode)

Insert {{original}} into either box to include the respective default prompt from system settings.
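For example (illustrative), entering
{{original}} Always write in the present tense.
into the Main Prompt box keeps the default system prompt and appends the extra instruction after it.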

Main Prompt

Tokens: counting...

Jailbreak

Tokens: counting...

Creator's Metadata (Not sent with the AI Prompt)

Everything here is optional

Created by

Character Version

Creator's Notes

Tags to Embed


Personality summary ?

Tokens: counting...

Scenario ?

Tokens: counting...

Character's Note

@ Depth

Tokens: counting...

Talkativeness

How often the character speaks in group chats!
Shy Normal Chatty

Examples of dialogue

Important to set the character's writing style. ?
Tokens: counting...

Chat History ?

Chat Scenario Override

Unique to this chat. The following scenario text will be used instead of the value set in the character card. In a group chat, all members will use it instead of what is specified in their individual cards. Bookmarks inherit the scenario override from their parent, and can be changed individually after that.

Select a World Info file for :

Primary Lorebook

The selected World Info will be bound to this character as its own Lorebook. When generating an AI reply, it will be combined with the entries from the global World Info selector. Exporting the character will also export the selected Lorebook file embedded in the JSON data.

Additional Lorebooks

Associate one or more auxiliary Lorebooks with this character.
NOTE: These choices are optional and won't be preserved on character export!
entries
Primary Keywords (comma separated, required)
Logic
Optional Filter (ignored if empty)
img1
img1 img2
img1 img2 img3
img1 img2 img3 img4

Welcome to SillyTavern!

SillyTavern is aimed at advanced users.
If you're new to this, enable the simplified UI mode below.
Before you get started, you must select a user name. This can be changed at any time via the icon.

User Name:

Avatar

Alternate Greetings

These will be displayed as swipes on the first message when starting a new chat. Group members can select one of them to initiate the conversation.

Click the button to get started!
Alternate Greeting #
CHAR is typing
Author's Note
Unique to this chat.
Bookmarks inherit the Note from their parent, and can be changed individually after that.
Tokens: 0
(0 = Disable, 1 = Always)
User inputs until next insertion: (disabled)

Character Author's Note (Private)
Won't be shared with the character card on export.
Will be automatically added as the author's note for this character. Will be used in groups, but can't be modified when a group chat is open.
Tokens: 0

Default Author's Note
Will be automatically added as the Author's Note for all new chats.
Tokens: 0
(0 = Disable, 1 = Always)
PNG
JSON
User Avatar