<html>
<head>
<title>Character Tokens</title>
<link rel="stylesheet" href="/css/notes.css">
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link href="/webfonts/NotoSans/stylesheet.css" rel="stylesheet">
</head>
<body>
<div id="main">
<div id="content">
<h2>Character Tokens</h2>
<p><b>TL;DR: If you're working with an AI model that has a 2048-token context limit, a 1000-token character definition cuts the AI's 'memory' in half.</b></p>
<p>To put this in perspective, a decent response from a good AI can easily run 200-300 tokens. In that case, the AI would only be able to 'remember' about three exchanges' worth of chat history.</p>
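<p>The numbers above can be checked with simple arithmetic. A minimal sketch (the 300-token cost per exchange is an illustrative assumption covering one user message plus one AI reply; real counts depend on the tokenizer):</p>

```python
# Back-of-the-envelope context budget, using the numbers from the text above.
context_limit = 2048       # model's total context window, in tokens
definition_tokens = 1000   # permanent character definition
tokens_per_exchange = 300  # illustrative: one user message + one AI reply

history_budget = context_limit - definition_tokens
exchanges_remembered = history_budget // tokens_per_exchange

print(history_budget)        # 1048 tokens left for everything else
print(exchanges_remembered)  # about 3 exchanges of chat history
```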
<hr>

<h3>Why did my character's token counter turn red?</h3>
<p>When we see that your character has over 1000 tokens in its definitions, we highlight the counter for you, because definitions this large can reduce the AI's ability to provide an enjoyable conversation.</p>
<h3>What happens if my Character has too many tokens?</h3>
<p>Don't worry - it won't break anything. At worst, if the Character's permanent tokens are too large, it simply means there will be less room left in the context for other things (see below).</p>
<p>The only negative side effect is that the AI will have less 'memory': less chat history will fit into the context for it to process.</p>
<p>This is because every AI model can only process a limited amount of context at one time.</p>
<h3>'Context'?</h3>
<p>This is the information that gets sent to the AI each time you ask it to generate a response:</p>
<ul>
<li>Character definitions</li>
<li>Chat history</li>
<li>Author's Notes</li>
<li>Special Format strings</li>
<li>[bracket commands]</li>
</ul>
<p>SillyTavern automatically calculates the best way to allocate the available context tokens before sending the information to the AI model.</p>
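<p>A minimal sketch of what such an allocation can look like (this is not SillyTavern's actual code; the function and its token-counting shortcut are hypothetical): the permanent parts always go in, then chat history is filled newest-first until the budget runs out.</p>

```python
# Hypothetical, simplified context allocator. Uses character count (len) as a
# stand-in token counter purely for illustration; real code needs a tokenizer.
def build_context(permanent_parts, history, limit, count_tokens=len):
    budget = limit - sum(count_tokens(part) for part in permanent_parts)
    kept = []
    for message in reversed(history):  # walk from the newest message back
        cost = count_tokens(message)
        if cost > budget:
            break                      # the oldest messages fall out of 'memory'
        kept.append(message)
        budget -= cost
    return permanent_parts + list(reversed(kept))

# With a tight 40-'token' limit, only the newest message survives:
ctx = build_context(["description", "personality"],
                    ["hi", "hello there", "how are you?"], limit=40)
print(ctx)  # ['description', 'personality', 'how are you?']
```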
<h3>What are a Character's 'Permanent Tokens'?</h3>
<p>These will always be sent to the AI with every generation request:</p>
<ul>
<li>Character Name (keep the name short! Sent at the start of EVERY Character message)</li>
<li>Character Description Box</li>
<li>Character Personality Box</li>
<li>Scenario Box</li>
</ul>
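<p>As a rough illustration of how these fields add up (the card contents and the 4-characters-per-token ratio are illustrative assumptions; real counts depend on the model's tokenizer):</p>

```python
# Crude token estimate: ~4 characters per token for English text (heuristic only).
def estimate_tokens(text):
    return max(1, len(text) // 4)

card = {  # hypothetical character card
    "name": "Aria",
    "description": "A wandering bard who collects forgotten songs from ruined kingdoms.",
    "personality": "Cheerful, curious, and prone to answering questions in rhyme.",
    "scenario": "You meet her at a roadside inn during a storm.",
}
permanent_tokens = sum(estimate_tokens(field) for field in card.values())
print(permanent_tokens)  # every one of these tokens is sent with EVERY request
```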
<h3>What parts of a Character's Definitions are NOT permanent?</h3>
<ul>
<li>The first message box - only sent once, at the start of the chat.</li>
<li>Example messages box - only kept until chat history fills up the context (optionally, they can be forced to stay in context).</li>
</ul>
<h3>Popular AI Model Context Token Limits</h3>
<ul>
<li>Older models below 6B parameters - 1024</li>
<li>Pygmalion 6B - 2048</li>
<li>Poe.com (Claude-instant or ChatGPT) - 2048</li>
<li>OpenAI ChatGPT (gpt-3.5-turbo) - 4096</li>
<li>OpenAI GPT-4 - 8192</li>
</ul>
</div>
</div>
</body>

</html>