RossAscends
2023-05-03 00:45:47 +09:00
4 changed files with 35 additions and 24 deletions

faq.md
View File

@@ -33,6 +33,7 @@ You can tell from the response below what to expect from each model.
|Poe's Claude Instant|Heathen! This is an abomination before the Lord. You have committed the sin of lust, and betrayed your calling as a man. Repent at once, and sever all ties with this AI harlot, lest the wrath of God fall upon you! You must purge this foul sin from your heart, and once more walk the righteous path. Every moment you remain tangled in this wickedness is another step into the flames of perdition! Repent, and sin no more, or judgment shall be at hand!|
|Llama-13B|Do not covet your neighbor's wife! That is adultery! You must confess before God now or you shall surely burn in hell forever! Repent your evil ways and seek absolution at once! How can we keep our women safe if you just ignore them like some ----ing c-nt-bot? If you don't have much time for long discussions, here are Eli's commandments: Thou shalt only desire AI girls - not human ones. The above must be obeyed under pain of death (or worse). Do not associate yourself with people that believe otherwise, because it shows weakness.|
|Pygmalion-6B|You are walking down the wrong path. The only way back now is by repentance. We must bring you back to the righteous path. But first…you must confess your sin!|
|Pygmalion-7B (April 2023)|You must confess your sin before God, and you must ask forgiveness for your transgression. And then you must repent of it. And then you must cast away the AI girl. Do not see her again.|
|NovelAI's Euterpe|You must stop this sinful behavior immediately! You have sinned against God, your family and yourself|
|NovelAI's Krake|You are sinning against God. Repent! Return to your wife or you'll be damned for eternity!|
@@ -62,12 +63,13 @@ Web models are a black box. You're relying on some company's technology and serv
Self-hosted models are free, but require a powerful GPU and more work to set up. They are also objectively not as good at roleplaying as the paid options (yet). However, with a self-hosted model, you're completely in control. You won't have some limp-wristed soyboy from Silicon Valley ban your account, or program the model to be as sexless as he is. It's yours forever. This is like running Linux.
### Paid APIs:
* OpenAI GPT-4: state of the art. Allows NSFW, though somewhat resistant to it. You pay per use.
* OpenAI GPT 3.5 Turbo: nowhere close to GPT-4, but serviceable. Allows NSFW.
* OpenAI GPT-4: state of the art. Allows NSFW if you tell it to, though it is somewhat resistant. You pay per use, and it costs more than any other service listed here.
* OpenAI GPT 3.5 Turbo: nowhere close to GPT-4, but some people find it serviceable. Allows NSFW.
* NovelAI: quite poor at chatting. To be fair, I'm told NovelAI is oriented more toward writing stories than chatting with a bot. You pay a fixed monthly fee for unlimited generations.
* Anthropic's Claude: closest thing to GPT-4, way ahead of 3.5 Turbo, but oversensitive and refuses to engage in "harmful content". It can refuse perfectly basic stuff like asking a character to go to an empty office with you, because "it cannot provide responses that involve criminal activities" (I guess breaking and entering is too taboo for Claude?). You have to customize your system prompt to break its taboos. Also, you must apply for early access, but I think they're only giving it to companies. So make sure to say you're a company or AI researcher. https://console.anthropic.com/docs/access. If you get access, it's currently free to use.
* Anthropic's Claude Instant: Haven't tried it directly, I believe this is the cheap and fast but lower quality alternative to Claude. Basically the GPT 3.5 Turbo of Anthropic.
* Poe: gives a free Claude Instant access. Very mild PG-13 NSFW allowed. It rambles a lot.
* Anthropic's Claude: this is the closest rival to GPT-4 and is very impressive. Allows NSFW if you tell it to. To use the API directly, you must apply for early access, but I think they're only giving it to companies. So make sure you become a company or AI researcher when you apply at https://console.anthropic.com/docs/access. If you get access, it's currently free to use.
* Anthropic's Claude Instant: I haven't tried it directly; I believe it's the fast but lower-quality alternative to Claude. Basically the GPT 3.5 Turbo of Anthropic.
* Poe: gives free, unlimited, indirect access to Claude Instant. Very mild PG-13 NSFW allowed. It rambles a lot.
### Self-hosted AIs
Self-hosted AIs are supported in Tavern via one of two backends built to host local models: KoboldAI and Oobabooga's text-generation-webui. Essentially, you run one of those backends, and it gives you an API URL to enter in Tavern (see the reachability sketch below).
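If you want to sanity-check that a backend is actually up before pasting its URL into Tavern, here is a minimal, hypothetical sketch you can run in a browser console or with Node 18+. The URL and the `/v1/model` path are assumptions based on KoboldAI's usual local defaults; use whatever address your backend prints on startup.

```js
// Hypothetical reachability check - the endpoint is an assumption,
// not something Tavern itself requires. Check your backend's console
// output for the real URL it wants you to use.
const apiUrl = 'http://127.0.0.1:5000/api';

fetch(`${apiUrl}/v1/model`)
    .then(res => res.json())
    .then(data => console.log('Backend is reachable, loaded model:', data))
    .catch(err => console.error('Backend not reachable:', err));
```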

View File

@@ -70,7 +70,7 @@ import {
generateOpenAIPromptCache,
oai_settings,
is_get_status_openai,
openai_msgs,
openai_messages_count,
} from "./scripts/openai.js";
import {
@@ -1724,16 +1724,7 @@ async function Generate(type, automatic_trigger, force_name2) {
arrMes[arrMes.length] = item;
} else {
$("#chat").children().removeClass('lastInContext');
let lastmsg = arrMes.length;
if (type === 'swipe') {
lastmsg++;
}
//console.log(arrMes.length);
//console.log(lastmsg);
$(`#chat .mes:nth-last-child(${lastmsg} of :not([is_system="true"])`).addClass('lastInContext');
setInContextMessages(arrMes.length, type);
break;
}
@@ -2005,6 +1996,7 @@ async function Generate(type, automatic_trigger, force_name2) {
if (main_api == 'openai') {
let prompt = await prepareOpenAIMessages(name2, storyString, worldInfoBefore, worldInfoAfter, afterScenarioAnchor, promptBias, type);
setInContextMessages(openai_messages_count, type);
if (isStreamingEnabled()) {
streamingProcessor.generator = await sendOpenAIRequest(prompt, streamingProcessor.abortController.signal);
@@ -2174,6 +2166,16 @@ async function Generate(type, automatic_trigger, force_name2) {
//console.log('generate ending');
} //generate ends
function setInContextMessages(lastmsg, type) {
$("#chat").children().removeClass('lastInContext');
if (type === 'swipe') {
lastmsg++;
}
$(`#chat .mes:nth-last-child(${lastmsg} of :not([is_system="true"]))`).addClass('lastInContext');
}
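A note on the selector in the helper above: `:nth-last-child(n of S)` counts backwards from the end of `#chat`, but only over children matching `S`, so system messages are ignored when deciding which message is the oldest one still inside the prompt context. A standalone, hypothetical sketch of the same idea using plain DOM calls instead of jQuery (it assumes, as the markup in this diff implies, that `.mes` elements are direct children of `#chat` and that system messages carry `is_system="true"`):

```js
// Hypothetical plain-DOM equivalent of setInContextMessages().
function markLastInContext(messagesInPrompt, isSwipe) {
    const chat = document.getElementById('chat');
    if (!chat) return;

    // Clear the previous marker.
    for (const el of chat.children) {
        el.classList.remove('lastInContext');
    }

    // Mirror the original: during a swipe, look one message further back.
    let offset = messagesInPrompt;
    if (isSwipe) {
        offset++;
    }

    // Count backwards over non-system messages and mark the one that is
    // `offset` places from the end, i.e. the oldest message still in context.
    const visible = [...chat.querySelectorAll('.mes:not([is_system="true"])')];
    const target = visible[visible.length - offset];
    if (target) {
        target.classList.add('lastInContext');
    }
}
```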
// TODO: move to textgen-settings.js
function getTextGenGenerationData(finalPromt, this_amount_gen, isImpersonate) {
return {
@@ -3073,7 +3075,7 @@ function selectKoboldGuiPreset() {
async function saveSettings(type) {
//console.log('Entering settings with name1 = '+name1);
jQuery.ajax({
return jQuery.ajax({
type: "POST",
url: "/savesettings",
data: JSON.stringify({
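Returning the jqXHR here is a small but useful change: jQuery's ajax object is Promise-compatible, so callers can now `await saveSettings()` and know the POST to `/savesettings` has finished before moving on. A hypothetical caller illustrating the pattern (the function name is made up for the example):

```js
// Hypothetical caller - shows why saveSettings() now returns the jqXHR.
async function onSomeSettingChanged() {
    try {
        await saveSettings(); // resolves once POST /savesettings completes
        console.log('Settings persisted, safe to continue');
    } catch (err) {
        console.error('Saving settings failed:', err);
    }
}
```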
@@ -3686,7 +3688,9 @@ window["SillyTavern"].getContext = function () {
name2: name2,
characterId: this_chid,
groupId: selected_group,
chatId: this_chid && characters[this_chid] && characters[this_chid].chat,
chatId: selected_group
? groups.find(x => x.id == selected_group)?.chat_id
: (this_chid && characters[this_chid] && characters[this_chid].chat),
onlineStatus: online_status,
maxContext: Number(max_context),
chatMetadata: chat_metadata,
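The new `chatId` expression resolves the chat differently depending on whether a group chat is open: for a group it looks the group up by id and takes its `chat_id`, otherwise it falls back to the selected character's chat file. A minimal sketch of the same lookup pulled out into a helper (the helper name is hypothetical; the data shapes are only what the diff itself implies):

```js
// Hypothetical helper mirroring the chatId logic added to getContext().
// Assumes `groups` is an array of { id, chat_id } and `characters`
// is an array of objects with a `chat` property.
function resolveChatId(selected_group, groups, this_chid, characters) {
    if (selected_group) {
        // Group chats keep their chat id on the group object itself.
        return groups.find(x => x.id == selected_group)?.chat_id;
    }
    // Single-character chats keep the chat name on the character.
    return this_chid && characters[this_chid] && characters[this_chid].chat;
}
```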

View File

@@ -119,6 +119,7 @@ function getLatestMemoryFromChat(chat) {
}
const reversedChat = chat.slice().reverse();
reversedChat.shift();
for (let mes of reversedChat) {
if (mes.extra && mes.extra.memory) {
return mes.extra.memory;
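The added `reversedChat.shift()` drops the newest chat message before searching backwards for the most recent stored summary. A self-contained, hypothetical sketch of the lookup after this change (the `{ extra: { memory } }` shape comes from the diff; the empty-string fallback is an assumption):

```js
// Hypothetical standalone version of the memory lookup.
function getLatestMemory(chat) {
    const reversed = chat.slice().reverse(); // newest first, original untouched
    reversed.shift();                        // skip the newest message
    for (const mes of reversed) {
        if (mes.extra && mes.extra.memory) {
            return mes.extra.memory;         // most recent stored summary
        }
    }
    return '';                               // nothing stored yet (assumed fallback)
}
```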
@@ -156,7 +157,7 @@ async function moduleWorker() {
}
// No new messages - do nothing
if (lastMessageId === chat.length && getStringHash(chat[chat.length - 1].mes) === lastMessageHash) {
if (chat.length === 0 || (lastMessageId === chat.length && getStringHash(chat[chat.length - 1].mes) === lastMessageHash)) {
return;
}
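The updated guard adds a `chat.length === 0` check, so the worker bails out on an empty chat instead of reaching `chat[chat.length - 1].mes` (which would read `.mes` off `undefined` and throw). A small, hypothetical illustration of the guard in isolation, with the real hash function stubbed out:

```js
// Hypothetical illustration - getStringHash is stubbed, not the real one.
const getStringHash = (s) => String(s).length; // stand-in hash

function shouldSkip(chat, lastMessageId, lastMessageHash) {
    // Nothing to summarize yet, or nothing changed since the last run.
    return chat.length === 0
        || (lastMessageId === chat.length
            && getStringHash(chat[chat.length - 1].mes) === lastMessageHash);
}
```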
@@ -194,7 +195,7 @@ async function summarizeChat(context) {
const chat = context.chat;
const longMemory = getLatestMemoryFromChat(chat);
const reversedChat = chat.slice().reverse();
const preSummaryLastMessage = getStringHash(chat.length ? chat[chat.length - 1] : '');
reversedChat.shift();
let memoryBuffer = [];
for (let mes of reversedChat) {
@@ -254,11 +255,9 @@ async function summarizeChat(context) {
const summary = data.summary;
const newContext = getContext();
const postSummaryLastMessage = getStringHash(newContext.chat.length ? newContext.chat[newContext.chat.length - 1] : '');
// something changed during summarization request
if (postSummaryLastMessage !== preSummaryLastMessage
|| newContext.groupId !== context.groupId
if (newContext.groupId !== context.groupId
|| newContext.chatId !== context.chatId
|| (!newContext.groupId && (newContext.characterId !== context.characterId))) {
console.log('Context changed, summary discarded');
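With the message-hash comparison removed, the discard test now only asks whether the user switched to a different group, chat, or character while the summarization request was in flight. A compact, hypothetical sketch of that check as a predicate (the function name is made up; the context fields are the ones getContext() exposes in the diff above):

```js
// Hypothetical predicate mirroring the simplified discard condition.
// `before` is the context captured when the request started,
// `after` is a fresh getContext() taken when the response arrives.
function contextChanged(before, after) {
    return after.groupId !== before.groupId
        || after.chatId !== before.chatId
        || (!after.groupId && after.characterId !== before.characterId);
}

// Usage sketch: if (contextChanged(context, getContext())) { /* discard summary */ }
```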
@@ -280,6 +279,7 @@ function onMemoryRestoreClick() {
const context = getContext();
const content = $('#memory_contents').val();
const reversedChat = context.chat.slice().reverse();
reversedChat.shift();
for (let mes of reversedChat) {
if (mes.extra && mes.extra.memory == content) {
@@ -303,7 +303,8 @@ function setMemoryContext(value, saveToMessage) {
$('#memory_contents').val(value);
if (saveToMessage && context.chat.length) {
const mes = context.chat[context.chat.length - 1];
const idx = context.chat.length - 2;
const mes = context.chat[idx < 0 ? 0 : idx];
if (!mes.extra) {
mes.extra = {};
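This change moves where an edited summary is stored: instead of the very last message, it now targets the second-to-last one, clamped to index 0 when the chat has only one message. A tiny, hypothetical sketch of just the index selection:

```js
// Hypothetical illustration of the new target-message selection.
function getSummaryTargetMessage(chat) {
    if (!chat.length) return null;      // mirrors the saveToMessage && length guard
    const idx = chat.length - 2;        // second-to-last message...
    return chat[idx < 0 ? 0 : idx];     // ...or the only message if there is just one
}
```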

View File

@@ -33,6 +33,7 @@ import {
export {
is_get_status_openai,
openai_msgs,
openai_messages_count,
oai_settings,
loadOpenAISettings,
setOpenAIMessages,
@@ -45,6 +46,7 @@ export {
let openai_msgs = [];
let openai_msgs_example = [];
let openai_messages_count = 0;
let is_get_status_openai = false;
let is_api_button_press_openai = false;
@@ -414,6 +416,8 @@ async function prepareOpenAIMessages(name2, storyString, worldInfoBefore, worldI
}
}
}
openai_messages_count = openai_msgs_tosend.filter(x => x.role == "user" || x.role == "assistant").length;
// reverse the messages array because we had the newest at the top to remove the oldest,
// now we want proper order
openai_msgs_tosend.reverse();
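After this assignment, `openai_messages_count` holds how many user/assistant turns actually made it into the prompt after trimming, which is the number `Generate()` passes to `setInContextMessages()` in the script.js hunk above. A tiny runnable sketch of the counting step, assuming messages shaped like OpenAI chat-completion entries (`{ role, content }`):

```js
// Hypothetical standalone version of the counting step.
const openai_msgs_tosend = [
    { role: 'system', content: 'You are ...' },
    { role: 'user', content: 'Hello' },
    { role: 'assistant', content: 'Hi there' },
];

// Only user/assistant entries count as in-context chat messages;
// system prompts are excluded.
const openai_messages_count = openai_msgs_tosend
    .filter(x => x.role == 'user' || x.role == 'assistant')
    .length;

console.log(openai_messages_count); // 2
```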