<p>Decodes the given token or list of tokens using the current tokenizer. If <code>kobold.modelbackend</code> is <code>'readonly'</code> or <code>'api'</code>, the tokenizer used is the GPT-2 tokenizer, otherwise the model’s tokenizer is used. This function is the inverse of <code>kobold.encode()</code>.</p>
<p>Encodes the given string using the current tokenizer into an array of tokens. If <code>kobold.modelbackend</code> is <code>'readonly'</code> or <code>'api'</code>, the tokenizer used is the GPT-2 tokenizer, otherwise the model’s tokenizer is used. This function is the inverse of <code>kobold.decode()</code>.</p>
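<p>Since these two functions are inverses of each other, a round trip through <code>kobold.encode()</code> and <code>kobold.decode()</code> returns the original string:</p>
<pre class=" language-lua"><code class="prism language-lua">local tokens = kobold.encode("Hello, world!")   -- array of token integers
local text = kobold.decode(tokens)              -- back to the original string
assert(text == "Hello, world!")</code></pre>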
<p>Returns a file handle representing your script’s configuration file, which is usually the file in the userscripts folder with the same filename as your script but with “.conf” appended at the end. This function throws an error on failure.</p>
<p>If the configuration file does not exist when this function is called, the configuration file will first be created as a new empty file.</p>
<p>If KoboldAI does not possess an open file handle to the configuration file, this function opens the file in <code>w+b</code> mode if the <code>clear</code> parameter is a truthy value, otherwise the file is opened in <code>r+b</code> mode. These are mostly the same – in both cases the file is opened in binary read-write mode and the file position is set to the start of the file – except the former mode deletes the contents of the file prior to opening it and the latter mode does not.</p>
<p>If KoboldAI does possess an open file handle to the configuration file, that open file handle is returned without seeking or deleting the contents of the file. You can check if KoboldAI possesses an open file handle to the configuration file by using <code>kobold.is_config_file_open</code>.</p>
<li>clear? (<code>bool</code>): If KoboldAI does not possess an open file handle to the configuration file, this determines whether the file will be opened in <code>w+b</code> or <code>r+b</code> mode. This parameter defaults to <code>false</code>.</li>
</ul>
<h3 id="returns-2">Returns:</h3>
<ul>
<li><code>file*</code>: File handle for the configuration file.</li>
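<p>As a sketch, a script could persist a counter across runs in its configuration file (this assumes the file stores a plain number; any serialization format works equally well):</p>
<pre class=" language-lua"><code class="prism language-lua">local f = kobold.get_config_file()      -- opened in r+b mode, positioned at the start
local n = tonumber(f:read("*a")) or 0   -- read the whole file; 0 if empty or new
f:seek("set")                           -- rewind before overwriting
f:write(tostring(n + 1))                -- persist the incremented counter</code></pre>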
<p>If called from an input modifier, prevents the user’s input from being sent to the model and skips directly to the output modifier.</p>
<p>If called from a generation modifier, stops generation after the current token is generated and then skips to the output modifier. In other words, if, when you call this function, <code>kobold.generated</code> has n columns, it will have exactly n+1 columns when the output modifier is called.</p>
<p>If called from an output modifier, has no effect.</p>
<p>After the current output is sent to the GUI, starts another generation using the empty string as the submission.</p>
<p>Whatever ends up being the output selected by the user or by the <code>sequence</code> parameter will be saved in <code>kobold.feedback</code> when the new generation begins.</p>
<h3 id="parameters-3">Parameters:</h3>
<ul>
<li>sequence? (<code>integer</code>): If you have multiple Gens Per Action, this can be used to choose which sequence to use as the output, where 1 is the first, 2 is the second and so on. If you set this to 0, the user will be prompted to choose the sequence instead. Defaults to 0.</li>
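<p>As a sketch, an output modifier could chain exactly one follow-up generation and keep the first sequence as the output (this assumes the usual userscript table with an <code>outmod</code> function; per the documentation above, <code>kobold.feedback</code> is <code>nil</code> unless this is already a repeat generation):</p>
<pre class=" language-lua"><code class="prism language-lua">function userscript.outmod()
  if kobold.feedback == nil then    -- not already a repeat generation
    kobold.restart_generation(1)    -- use the first sequence as the output
  end
end</code></pre>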
<p><em><strong>Writable from:</strong></em> anywhere (triggers regeneration when written to from generation modifier)</p>
<p>The author’s note as set from the “Memory” button in the GUI.</p>
<p>Modifying this field from inside of a generation modifier triggers a regeneration, which means that the context is recomputed after modification and generation begins again with the new context and previously generated tokens. This incurs a small performance penalty and should not be performed in excess.</p>
<p><em><strong>Writable from:</strong></em> anywhere (triggers regeneration when written to from generation modifier)</p>
<p>The author’s note template as set from the “Memory” button in the GUI.</p>
<p>Modifying this field from inside of a generation modifier triggers a regeneration, which means that the context is recomputed after modification and generation begins again with the new context and previously generated tokens. This incurs a small performance penalty and should not be performed in excess.</p>
<p>Path to a directory that the user chose either via the file dialog that appears when KoboldAI asks you to choose the path to your custom model or via the <code>--path</code> command-line flag.</p>
<p>If the user loaded a built-in model from the menu, this is instead the model ID of the model on Hugging Face’s model hub, such as “KoboldAI/GPT-Neo-2.7B-Picard” or “hakurei/lit-6B”.</p>
<p>If this is a repeat generation caused by <code>kobold.restart_generation()</code>, this will be a string containing the previous output. If not, this will be <code>nil</code>.</p>
<p><em><strong>Readable from:</strong></em> generation modifier and output modifier, but only if <code>kobold.modelbackend</code> is not <code>'api'</code> or <code>'readonly'</code></p>
<p>Number of columns in <code>kobold.generated</code>. In other words, the number of tokens generated thus far, which is equal to the number of times that the generation modifier has been called, not including the current call if this is being read from a generation modifier.</p>
<p>If <code>kobold.modelbackend</code> is <code>'api'</code> or <code>'readonly'</code>, this returns 0 instead.</p>
<p>Whether or not KoboldAI possesses an open file handle to your script’s configuration file. See <code>kobold.get_config_file()</code> for more details.</p>
<p><em><strong>Readable from:</strong></em> generation modifier, but only if <code>kobold.modelbackend</code> is not <code>'api'</code> or <code>'readonly'</code></p>
<p>Two-dimensional array of <a href="https://datascience.stackexchange.com/questions/31041/what-does-logits-in-machine-learning-mean">logits</a> prior to being filtered by top-p sampling, etc. Each row represents one sequence, each column one of the tokens in the model’s vocabulary. The ith column represents the logit score of token i-1, so if you want to access the logit score of token 18435 (" Hello" with a leading space), you need to access column 18436. You may alter this two-dimensional array to encourage or deter certain tokens from appearing in the output in a stochastic manner.</p>
<p>Don’t modify this table unnecessarily unless you know what you are doing! The bias example scripts show how to use this feature properly.</p>
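<p>As a sketch, a generation modifier could discourage a particular token by subtracting from its logit score. Note the off-by-one: the score of token i lives in column i+1. This sketch assumes one row per sequence, indexed up to <code>kobold.settings.numseqs</code>:</p>
<pre class=" language-lua"><code class="prism language-lua">function userscript.genmod()
  local col = 198 + 1    -- token 198 is "\n" in GPT-2-style vocabularies
  for row = 1, kobold.settings.numseqs do
    kobold.logits[row][col] = kobold.logits[row][col] - 6.0
  end
end</code></pre>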
<p>Number of columns in <code>kobold.logits</code>, equal to the vocabulary size of the current model. Most models based on GPT-2 (e.g. GPT-Neo and GPT-J) have a vocabulary size of 50257. GPT-J models in particular have a vocabulary size of 50400 instead, although GPT-J models aren’t trained to use the rightmost 143 tokens of the logits array.</p>
<p>If <code>kobold.modelbackend</code> is <code>'api'</code> or <code>'readonly'</code>, this returns 0 instead.</p>
<p><em><strong>Writable from:</strong></em> anywhere (triggers regeneration when written to from generation modifier)</p>
<p>The memory as set from the “Memory” button in the GUI.</p>
<p>Modifying this field from inside of a generation modifier triggers a regeneration, which means that the context is recomputed after modification and generation begins again with the new context and previously generated tokens. This incurs a small performance penalty and should not be performed in excess.</p>
<p>Number of rows in <code>kobold.outputs</code>. This is equal to <code>kobold.settings.numseqs</code> unless you’re using a non-Colab third-party API such as OpenAI or InferKit, in which case this is 1. If you decide to write to <code>kobold.settings.numseqs</code> from an output modifier, this value remains unchanged.</p>
<p>Model output before applying output formatting. One row per “Gens Per Action”, unless you’re using OpenAI or InferKit, in which case this always has exactly one row.</p>
<p><em><strong>Writable from:</strong></em> anywhere (does not affect other scripts when written to since each script has its own copy of this object)</p>
<p>Contains most of the settings. They have the same names as in gensettings.py, so the top-p value is <code>kobold.settings.settopp</code>.</p>
<p>All the settings can be read from anywhere and written from anywhere, except <code>kobold.settings.numseqs</code> which can only be written to from an input modifier or output modifier.</p>
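<p>For example (only <code>settopp</code> and <code>numseqs</code> are named in this document; other field names follow gensettings.py):</p>
<pre class=" language-lua"><code class="prism language-lua">kobold.settings.settopp = 0.9    -- set the top-p sampling value to 0.9
-- numseqs may only be written from an input or output modifier:
kobold.settings.numseqs = 2</code></pre>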
<p>Modifying certain fields from inside of a generation modifier triggers a regeneration, which means that the context is recomputed after modification and generation begins again with the new context and previously generated tokens. This incurs a small performance penalty and should not be performed in excess. Currently, only the following fields and their aliases cause this to occur:</p>
<p>The name of the soft prompt file to use (as a string), including the file extension. If not using a soft prompt, this is <code>nil</code> instead.</p>
<p>You can also set the soft prompt to use by setting this to a string or <code>nil</code>.</p>
<p>Modifying this field from inside of a generation modifier triggers a regeneration, which means that the context is recomputed after modification and generation begins again with the new context and previously generated tokens. This incurs a small performance penalty and should not be performed in excess.</p>
<p>Contains the chunks of the current story. Don’t use <code>pairs</code> or <code>ipairs</code> to iterate over the story chunks; use <code>kobold.story:forward_iter()</code> or <code>kobold.story:reverse_iter()</code>, which guarantee amortized worst-case iteration time complexity linear in the number of chunks in the story regardless of what the highest chunk number is.</p>
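<p>A sketch of iterating over the story with these iterators (this assumes each chunk exposes <code>num</code> and <code>content</code> fields holding the chunk number and its text):</p>
<pre class=" language-lua"><code class="prism language-lua">for chunk in kobold.story:forward_iter() do
  print(chunk.num, chunk.content)   -- chunk number and chunk text
end</code></pre>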
<p>You can index this object to get a story chunk (as a <code>KoboldStoryChunk</code> object) by its number, which is an integer. The prompt chunk, if it exists, is guaranteed to be chunk 0. Aside from that, the chunk numbers are not guaranteed to be contiguous or ordered in any way.</p>
<pre class=" language-lua"><code class="prism language-lua"><span class="token keyword">local</span> prompt_chunk <span class="token operator">=</span> kobold<span class="token punctuation">.</span>story<span class="token punctuation">[</span><span class="token number">0</span><span class="token punctuation">]</span>  <span class="token comment">-- KoboldStoryChunk object referring to the prompt chunk</span></code></pre>
<p>Modifying this field from inside of a generation modifier triggers a regeneration, which means that the context is recomputed after modification and generation begins again with the new context and previously generated tokens. This incurs a small performance penalty and should not be performed in excess.</p>
<p>The number of the story chunk. Chunk 0 is guaranteed to be the prompt chunk if it exists; no guarantees can be made about the numbers of other chunks.</p>
<p>The user-submitted text after being formatted by input formatting. If this is a repeated generation incurred by <code>kobold.restart_generation()</code>, then this is the empty string.</p>
<p>Indexing this object at index i returns the ith world info entry from the top in amortized constant worst-case time as a <code>KoboldWorldInfoEntry</code>. This includes world info entries that are inside folders.</p>
<pre class=" language-lua"><code class="prism language-lua"><span class="token keyword">local</span> entry <span class="token operator">=</span> kobold<span class="token punctuation">.</span>worldinfo<span class="token punctuation">[</span><span class="token number">5</span><span class="token punctuation">]</span>  <span class="token comment">-- Retrieves fifth entry from top as a KoboldWorldInfoEntry</span>
</code></pre>
<p>You can use <code>ipairs</code> or a numeric loop to iterate from top to bottom:</p>
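<p>For example (the <code>content</code> field, holding the entry’s “What To Remember” text, is assumed here):</p>
<pre class=" language-lua"><code class="prism language-lua">for i, entry in ipairs(kobold.worldinfo) do
  print(i, entry.content)   -- entry number and the entry's text
end</code></pre>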
<p>Computes the context that would be sent to the generator with the user’s current settings if <code>submission</code> were the user’s input after being formatted by input formatting. The context would include memory at the top, followed by active world info entries, followed by some story chunks with the author’s note somewhere, followed by <code>submission</code>.</p>
<h3 id="parameters-4">Parameters</h3>
<ul>
<li>submission (<code>string</code>): String to use as simulated user’s input after being formatted by input formatting.</li>
<li>entries? (<code>KoboldWorldInfoEntry|table&lt;any, KoboldWorldInfoEntry&gt;</code>): A <code>KoboldWorldInfoEntry</code> or table thereof that indicates an allowed subset of world info entries to include in the context. Defaults to all world info entries.</li>
<li>kwargs? (<code>table&lt;string, any&gt;</code>): Table of optional keyword arguments from the following list. Defaults to <code>{}</code>.
<ul>
<li>scan_story? (<code>boolean</code>): Whether or not to scan the past few actions of the story for world info keys in addition to the submission like how world info normally behaves. If this is set to <code>false</code>, only the <code>submission</code> is scanned for world info keys. Defaults to <code>true</code>.</li>
<li>include_anote? (<code>boolean</code>): Whether or not to include the author’s note in the context. Defaults to <code>true</code>; pass <code>false</code> to omit the author’s note.</li>
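<p>For example, a sketch of computing a context from a simulated submission without scanning past story chunks (passing <code>nil</code> for <code>entries</code> is assumed here to select the default of all world info entries):</p>
<pre class=" language-lua"><code class="prism language-lua">local ctx = kobold.worldinfo:compute_context(
  "You enter the castle.",
  nil,                        -- assumed: nil selects the default (all entries)
  {scan_story = false}        -- scan only the submission for world info keys
)</code></pre>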
<p>Can be indexed in amortized constant worst-case time and iterated over and has a <code>finduid</code> method just like <code>kobold.worldinfo</code>, but gets folders (as <code>KoboldWorldInfoFolder</code> objects) instead.</p>
<pre class=" language-lua"><code class="prism language-lua"><span class="token keyword">local</span> folder <span class="token operator">=</span> kobold<span class="token punctuation">.</span>worldinfo<span class="token punctuation">.</span>folders<span class="token punctuation">[</span><span class="token number">5</span><span class="token punctuation">]</span>  <span class="token comment">-- Retrieves fifth folder from top as a KoboldWorldInfoFolder</span></code></pre>
<p>The same as calling <code>kobold.worldinfo:compute_context()</code> with this world info entry as the argument.</p>
<h3 id="parameters-6">Parameters</h3>
<ul>
<li>submission (<code>string</code>): String to use as simulated user’s input after being formatted by input formatting.</li>
<li>kwargs? (<code>table&lt;string, any&gt;</code>): Table of optional keyword arguments from the following list. Defaults to <code>{}</code>.
<ul>
<li>scan_story? (<code>boolean</code>): Whether or not to scan the past few actions of the story for world info keys in addition to the submission like how world info normally behaves. If this is set to <code>false</code>, only the <code>submission</code> is scanned for world info keys. Defaults to <code>true</code>.</li>
<li>include_anote? (<code>boolean</code>): Whether or not to include the author’s note in the context. Defaults to <code>true</code>; pass <code>false</code> to omit the author’s note.</li>
<p>Whether or not this world info entry is constant. Constant world info entries are always included in the context regardless of whether or not their keys match the story chunks in the context.</p>
<p>Modifying this field from inside of a generation modifier triggers a regeneration, which means that the context is recomputed after modification and generation begins again with the new context and previously generated tokens. This incurs a small performance penalty and should not be performed in excess.</p>
<p>The text in the “What To Remember” text box that gets included in the context when the world info entry is active.</p>
<p>Modifying this field from inside of a generation modifier triggers a regeneration, which means that the context is recomputed after modification and generation begins again with the new context and previously generated tokens. This incurs a small performance penalty and should not be performed in excess.</p>
<p>For non-selective world info entries, this is the world info entry’s comma-separated list of keys. For selective world info entries, this is the comma-separated list of primary keys.</p>
<p>Modifying this field from inside of a generation modifier triggers a regeneration, which means that the context is recomputed after modification and generation begins again with the new context and previously generated tokens. This incurs a small performance penalty and should not be performed in excess.</p>
<p>For non-selective world info entries, the value of this field is undefined and writing to it has no effect. For selective world info entries, this is the comma-separated list of secondary keys.</p>
<p>Modifying this field from inside of a generation modifier triggers a regeneration, which means that the context is recomputed after modification and generation begins again with the new context and previously generated tokens. This incurs a small performance penalty and should not be performed in excess.</p>
<p>Whether or not the world info entry is selective. Selective entries have both primary and secondary keys.</p>
<p>Modifying this field from inside of a generation modifier triggers a regeneration, which means that the context is recomputed after modification and generation begins again with the new context and previously generated tokens. This incurs a small performance penalty and should not be performed in excess.</p>
<p>Computes the context that would be sent to the generator with the user’s current settings if <code>submission</code> were the user’s input after being formatted by input formatting. The context would include memory at the top, followed by active world info entries, followed by some story chunks with the author’s note somewhere, followed by <code>submission</code>.</p>
<p>Unlike <code>kobold.worldinfo:compute_context()</code>, this function doesn’t include world info entries outside of the folder.</p>
<h3 id="parameters-8">Parameters</h3>
<ul>
<li>submission (<code>string</code>): String to use as simulated user’s input after being formatted by input formatting.</li>
<li>entries? (<code>KoboldWorldInfoEntry|table&lt;any, KoboldWorldInfoEntry&gt;</code>): A <code>KoboldWorldInfoEntry</code> or table thereof that indicates an allowed subset of world info entries to include in the context. Entries that are not inside of the folder are still not included. Defaults to all world info entries in the folder.</li>
<li>kwargs? (<code>table&lt;string, any&gt;</code>): Table of optional keyword arguments from the following list. Defaults to <code>{}</code>.
<ul>
<li>scan_story? (<code>boolean</code>): Whether or not to scan the past few actions of the story for world info keys in addition to the submission like how world info normally behaves. If this is set to <code>false</code>, only the <code>submission</code> is scanned for world info keys. Defaults to <code>true</code>.</li>
<li>include_anote? (<code>boolean</code>): Whether or not to include the author’s note in the context. Defaults to <code>true</code>; pass <code>false</code> to omit the author’s note.</li>
<p>Returns the world info entry inside of the folder with the given UID in amortized constant worst-case time, or <code>nil</code> if not found.</p>
<h3 id="parameters-9">Parameters</h3>
<ul>
<li>u (<code>integer</code>): UID.</li>
</ul>
<h3 id="returns-8">Returns</h3>
<ul>
<li><code>KoboldWorldInfoEntry?</code>: The world info entry with requested UID, or <code>nil</code> if no such entry exists or if it’s outside of the folder.</li>
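<p>A sketch combining folder indexing with <code>finduid</code> (the UID value here is hypothetical):</p>
<pre class=" language-lua"><code class="prism language-lua">local entry = kobold.worldinfo.folders[1]:finduid(12345)  -- hypothetical UID
if entry ~= nil then
  entry.constant = true   -- always include this entry in the context
end</code></pre>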