Merge pull request #8 from henk717/united

Merge united (including UI2)
Author: Llama
Date: 2022-12-22 10:03:35 -08:00
Committed by: GitHub
71 changed files with 26859 additions and 3384 deletions

30
.github/workflows/docker-image.yml vendored Normal file

@@ -0,0 +1,30 @@
name: Build Docker
on:
schedule:
- cron: "0 0 * * *"
jobs:
build:
runs-on: ubuntu-latest
steps:
-
name: Checkout
uses: actions/checkout@v3
-
name: Login to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
-
name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
-
name: Build and push
uses: docker/build-push-action@v3
with:
context: .
file: ./docker-standalone/Dockerfile.koboldai
push: true
tags: ${{ secrets.DOCKER_HUB_USERNAME }}/koboldai:nightly
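
Once a scheduled run has pushed, the nightly image should be pullable with `docker pull <DOCKER_HUB_USERNAME>/koboldai:nightly`, where `<DOCKER_HUB_USERNAME>` is a placeholder for whatever the secret above resolves to.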

13
.gitignore vendored

@@ -23,12 +23,19 @@ userscripts
!userscripts/api_documentation.*
softprompts
models
functional_models
!models/models go here.txt
Uninstall
flask_session
accelerate-disk-cache
.ipynb_checkpoints
# Temporary until HF port
!models/RWKV-v4
models/RWKV-v4/20B_tokenizer.json
models/RWKV-v4/src/__pycache__
models/RWKV-v4/models
# Ignore PyCharm project files.
.idea
@@ -36,4 +43,8 @@ accelerate-disk-cache
*.pyc
# Don't ignore defaults
!defaults/*
flask_session/1074228e7055acfb7de9d07a471d0b92
.gitignore
flask_session/2029240f6d1128be89ddc32729463129
flask_session

6
.gitmodules vendored Normal file

@@ -0,0 +1,6 @@
[submodule "KoboldAI-Horde"]
path = KoboldAI-Horde
url = https://github.com/db0/KoboldAI-Horde-Bridge
[submodule "KoboldAI-Horde-Bridge"]
path = KoboldAI-Horde-Bridge
url = https://github.com/db0/KoboldAI-Horde-Bridge
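
Because the bridge now lives in a submodule, a plain `git clone` will leave the KoboldAI-Horde-Bridge directory empty; cloning with `git clone --recursive`, or running `git submodule update --init` afterwards, pulls it in.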

1
KoboldAI-Horde-Bridge Submodule

Submodule KoboldAI-Horde-Bridge added at 8fd65b60e9

README.md

@@ -105,7 +105,7 @@ KoboldAI has a large number of dependencies you will need to install on your com
### Downloading the latest version of KoboldAI
KoboldAI is a rolling release on our github, the code you see is also the game. You can the software by clicking on the green Code button at the top of the page and clicking Download ZIP.
KoboldAI is a rolling release on our GitHub; the code you see is also the game. You can download the software by clicking on the green Code button at the top of the page and clicking Download ZIP, or use the `git clone` command instead. Then, on Windows, run install_requirements.bat (running it as administrator is recommended to avoid errors); once it is done, or if you are on Linux, run either play.bat/sh or remote-play.bat/sh to start it.
The easiest way for Windows users is to use the [offline installer](https://sourceforge.net/projects/koboldai/files/latest/download) below.
@@ -228,4 +228,4 @@ Did we miss your contribution? Feel free to issue a commit adding your name to t
KoboldAI is licensed with an AGPL license; in short, this means that it can be used by anyone for any purpose. However, if you decide to make a publicly available instance, your users are entitled to a copy of the source code including all modifications that you have made (which needs to be available through an interface such as a button on your website). You may also not distribute this project in a form that does not contain the source code (such as compiling / encrypting the code and distributing this version without also distributing the source code that includes the changes that you made; you are allowed to distribute this in a closed form if you also provide a separate archive with the source code).
umamba.exe is bundled for convenience because we observed that many of our users had trouble with command line download methods, it is not part of our project and does not fall under the AGPL license. It is licensed under the BSD-3-Clause license. Other files with differing licenses will have a reference or embedded version of this license within the file. It has been sourced from https://anaconda.org/conda-forge/micromamba/files and its source code can be found here : https://github.com/mamba-org/mamba/tree/master/micromamba

204
RWKV4/LICENSE.txt Normal file

@@ -0,0 +1,204 @@
Code in this directory is taken from https://github.com/BlinkDL/RWKV-LM.
The license for this code is as follows:
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

125
RWKV4/cuda/wkv_cuda.cu Normal file

@@ -0,0 +1,125 @@
#include <stdio.h>
#include <assert.h>
#define MIN_VALUE (-1e38)
template <typename F>
__global__ void kernel_forward(const int B, const int T, const int C,
const F *__restrict__ const _w, const F *__restrict__ const _u, const F *__restrict__ const _k, const F *__restrict__ const _v,
F *__restrict__ const _y) {
const int idx = blockIdx.x * blockDim.x + threadIdx.x;
const int _b = idx / C;
const int _c = idx % C;
const int _offset = _b * T * C + _c;
F u = _u[_c];
F w = _w[_c];
const F *__restrict__ const k = _k + _offset;
const F *__restrict__ const v = _v + _offset;
F *__restrict__ const y = _y + _offset;
F p = 0, q = 0, o = MIN_VALUE;
// p and q are running sums divided by exp(o) (to avoid overflows)
for (int i = 0; i < T; i++) {
const int ii = i * C;
F no = max(o, u + k[ii]);
F A = exp(o - no);
F B = exp(u + k[ii] - no);
y[ii] = (A * p + B * v[ii]) / (A * q + B);
no = max(w + o, k[ii]);
A = exp(w + o - no);
B = exp(k[ii] - no);
p = A * p + B * v[ii];
q = A * q + B;
o = no;
}
}
template <typename F>
__global__ void kernel_backward(const int B, const int T, const int C,
const F *__restrict__ const _w, const F *__restrict__ const _u, const F *__restrict__ const _k, const F *__restrict__ const _v, const F *__restrict__ const _gy,
F *__restrict__ const _gw, F *__restrict__ const _gu, F *__restrict__ const _gk, F *__restrict__ const _gv) {
const int idx = blockIdx.x * blockDim.x + threadIdx.x;
const int _b = idx / C;
const int _c = idx % C;
const int _offset = _b * T * C + _c;
F u = _u[_c];
F w = _w[_c];
const F *__restrict__ const k = _k + _offset;
const F *__restrict__ const v = _v + _offset;
const F *__restrict__ const gy = _gy + _offset;
F *__restrict__ const gk = _gk + _offset;
F *__restrict__ const gv = _gv + _offset;
F y[Tmax], z[Tmax], zexp[Tmax];
F gw = 0, gu = 0;
F p = 0, q = 0;
F dpdw = 0, dqdw = 0;
F o = MIN_VALUE;
for (int i = 0; i < T; i++) {
const int ii = i * C;
F no = max(o, k[ii] + u);
F A = exp(o - no);
F B = exp(k[ii] + u - no);
F num = A * p + B * v[ii];
F iden = 1 / (A * q + B);
y[i] = num * iden;
z[i] = iden;
zexp[i] = k[ii] + u - no;
gw += gy[ii] * (dpdw - dqdw * y[i]) * iden * A;
gu += gy[ii] * (v[ii] - y[i]) * B * iden;
no = max(w + o, k[ii]);
A = exp(w + o - no);
B = exp(k[ii] - no);
dpdw = A * (p + dpdw);
dqdw = A * (q + dqdw);
p = A * p + B * v[ii];
q = A * q + B;
o = no;
}
F gp = 0, gq = 0;
o = MIN_VALUE;
for (int i = T - 1; i >= 0; i--) {
const int ii = i * C;
F A = gy[ii] * z[i] * exp(zexp[i]);
F B = exp(k[ii] + o);
gk[ii] = A * (v[ii] - y[i]) + B * (gp * v[ii] + gq);
gv[ii] = A + B * gp;
F no = max(w + o, zexp[i] - k[ii] - u);
A = exp(w + o - no);
B = gy[ii] * z[i] * exp(zexp[i] - k[ii] - u - no);
gp = A * gp + B;
gq = A * gq - B * y[i];
o = no;
}
// Multiply by w because the w -> -exp(w) preprocessing is halfway in the backwards pass, even though it's not in the forward pass
const int _offsetBC = _b * C + _c;
_gw[_offsetBC] += gw * _w[_c];
_gu[_offsetBC] += gu;
}
void cuda_forward(int B, int T, int C, float *w, float *u, float *k, float *v, float *y) {
dim3 threadsPerBlock( min(C, 32) ); // requires --maxrregcount 60 for optimal performance
assert(B * C % threadsPerBlock.x == 0);
dim3 numBlocks(B * C / threadsPerBlock.x);
kernel_forward<<<numBlocks, threadsPerBlock>>>(B, T, C, w, u, k, v, y);
}
void cuda_backward(int B, int T, int C, float *w, float *u, float *k, float *v, float *gy, float *gw, float *gu, float *gk, float *gv) {
dim3 threadsPerBlock( min(C, 32) ); // requires --maxrregcount 60 for optimal performance
assert(B * C % threadsPerBlock.x == 0);
dim3 numBlocks(B * C / threadsPerBlock.x);
kernel_backward<<<numBlocks, threadsPerBlock>>>(B, T, C, w, u, k, v, gy, gw, gu, gk, gv);
}
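
For readers, a minimal NumPy sketch of the numerically stable recurrence kernel_forward computes for one (batch, channel) slice; the function name is illustrative, and w is assumed to already be the negated decay -exp(time_decay) that the Python wrapper passes in:

import numpy as np

def wkv_forward_ref(w, u, k, v):
    # w, u: per-channel scalars; k, v: length-T arrays for one (batch, channel) slice
    T = len(k)
    y = np.empty(T)
    p, q, o = 0.0, 0.0, -1e38  # p, q are running sums scaled by exp(-o) to avoid overflow
    for i in range(T):
        no = max(o, u + k[i])
        A, B = np.exp(o - no), np.exp(u + k[i] - no)
        y[i] = (A * p + B * v[i]) / (A * q + B)    # output mixes history with the u-boosted current token
        no = max(w + o, k[i])
        A, B = np.exp(w + o - no), np.exp(k[i] - no)
        p, q, o = A * p + B * v[i], A * q + B, no  # decay the state by exp(w) and absorb token i
    return y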

21
RWKV4/cuda/wkv_op.cpp Normal file

@@ -0,0 +1,21 @@
#include <torch/extension.h>
void cuda_forward(int B, int T, int C, float *w, float *u, float *k, float *v, float *y);
void cuda_backward(int B, int T, int C, float *w, float *u, float *k, float *v, float *gy, float *gw, float *gu, float *gk, float *gv);
void forward(int64_t B, int64_t T, int64_t C, torch::Tensor &w, torch::Tensor &u, torch::Tensor &k, torch::Tensor &v, torch::Tensor &y) {
cuda_forward(B, T, C, w.data_ptr<float>(), u.data_ptr<float>(), k.data_ptr<float>(), v.data_ptr<float>(), y.data_ptr<float>());
}
void backward(int64_t B, int64_t T, int64_t C, torch::Tensor &w, torch::Tensor &u, torch::Tensor &k, torch::Tensor &v, torch::Tensor &gy, torch::Tensor &gw, torch::Tensor &gu, torch::Tensor &gk, torch::Tensor &gv) {
cuda_backward(B, T, C, w.data_ptr<float>(), u.data_ptr<float>(), k.data_ptr<float>(), v.data_ptr<float>(), gy.data_ptr<float>(), gw.data_ptr<float>(), gu.data_ptr<float>(), gk.data_ptr<float>(), gv.data_ptr<float>());
}
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
m.def("forward", &forward, "wkv forward");
m.def("backward", &backward, "wkv backward");
}
TORCH_LIBRARY(wkv, m) {
m.def("forward", forward);
m.def("backward", backward);
}

416
RWKV4/src/model.py Normal file

@@ -0,0 +1,416 @@
# File from RWKV-v4 Repo - Small changes made for compatibility
########################################################################################################
# The RWKV Language Model - https://github.com/BlinkDL/RWKV-LM
########################################################################################################
import math, os
import numpy as np
import logging
import torch
import torch.nn as nn
from torch.nn import functional as F
try:
from deepspeed.ops.adam import FusedAdam
except:
pass # some poor Windows users can't install DeepSpeed
logger = logging.getLogger(__name__)
RWKV_HEAD_QK_DIM = 0
print(f'\nRWKV_HEAD_QK_DIM {RWKV_HEAD_QK_DIM}\n')
class L2Wrap(torch.autograd.Function):
@staticmethod
def forward(ctx, loss, y):
ctx.save_for_backward(y)
return loss
@staticmethod
def backward(ctx, grad_output):
y = ctx.saved_tensors[0]
# to encourage the logits to be close to 0
factor = 1e-4 / (y.shape[0] * y.shape[1])
maxx, ids = torch.max(y, -1, keepdim=True)
gy = torch.zeros_like(y)
gy.scatter_(-1, ids, maxx * factor)
return (grad_output, gy)
########################################################################################################
# CUDA Kernel
########################################################################################################
T_MAX = 1024 # increase this if your ctx_len is long [NOTE: TAKES LOTS OF VRAM!]
# it's possible to go beyond CUDA limitations if you slice the ctx and pass the hidden state in each slice
from torch.utils.cpp_extension import load
wkv_cuda = load(name="wkv", sources=["RWKV4/cuda/wkv_op.cpp", "RWKV4/cuda/wkv_cuda.cu"],
verbose=True, extra_cuda_cflags=['-res-usage', '--maxrregcount 60', '--use_fast_math', '-O3', '-Xptxas -O3', f'-DTmax={T_MAX}'])
class WKV(torch.autograd.Function):
@staticmethod
def forward(ctx, B, T, C, w, u, k, v):
ctx.B = B
ctx.T = T
ctx.C = C
assert T <= T_MAX
assert B * C % min(C, 1024) == 0
if '32' in os.environ['RWKV_FLOAT_MODE']:
w = -torch.exp(w.contiguous())
u = u.contiguous()
k = k.contiguous()
v = v.contiguous()
else:
w = -torch.exp(w.float().contiguous())
u = u.float().contiguous()
k = k.float().contiguous()
v = v.float().contiguous()
ctx.save_for_backward(w, u, k, v)
y = torch.empty((B, T, C), device='cuda', memory_format=torch.contiguous_format)
wkv_cuda.forward(B, T, C, w, u, k, v, y)
if '32' in os.environ['RWKV_FLOAT_MODE']:
return y
elif os.environ['RWKV_FLOAT_MODE'] == 'fp16':
return y.half()
elif os.environ['RWKV_FLOAT_MODE'] == 'bf16':
return y.bfloat16()
@staticmethod
def backward(ctx, gy):
B = ctx.B
T = ctx.T
C = ctx.C
assert T <= T_MAX
assert B * C % min(C, 1024) == 0
w, u, k, v = ctx.saved_tensors
gw = torch.zeros((B, C), device='cuda').contiguous()
gu = torch.zeros((B, C), device='cuda').contiguous()
gk = torch.zeros((B, T, C), device='cuda').contiguous()
gv = torch.zeros((B, T, C), device='cuda').contiguous()
if '32' in os.environ['RWKV_FLOAT_MODE']:
wkv_cuda.backward(B, T, C, w, u, k, v, gy.contiguous(), gw, gu, gk, gv)
else:
wkv_cuda.backward(B, T, C, w, u, k, v, gy.float().contiguous(), gw, gu, gk, gv)
gw = torch.sum(gw, dim=0)
gu = torch.sum(gu, dim=0)
if '32' in os.environ['RWKV_FLOAT_MODE']:
return (None, None, None, gw, gu, gk, gv)
elif os.environ['RWKV_FLOAT_MODE'] == 'fp16':
return (None, None, None, gw.half(), gu.half(), gk.half(), gv.half())
elif os.environ['RWKV_FLOAT_MODE'] == 'bf16':
return (None, None, None, gw.bfloat16(), gu.bfloat16(), gk.bfloat16(), gv.bfloat16())
def RUN_CUDA(B, T, C, w, u, k, v):
return WKV.apply(B, T, C, w.cuda(), u.cuda(), k.cuda(), v.cuda())
########################################################################################################
# RWKV: RWKV Time-mix + RWKV Channel-mix
########################################################################################################
def RWKV_Init(model, args): # fancy initialization of all lin & emb layer in the model
print("\n[--> first run, init model params (very slow for large models) <--]")
print("[so you shall only do it for 1 single GPU and save the checkpt and load it when using multiple GPU]\n")
for mm in model.modules():
if "RecursiveScriptModule" in str(type(mm)):
if mm.original_name not in ["Linear"]:
continue
ww = None
for name, param in mm.named_parameters():
if name == "weight":
ww = param
else:
m = mm
if not isinstance(m, (nn.Linear, nn.Embedding)):
continue
ww = m.weight
with torch.no_grad():
name = "[unknown weight]"
for name, parameter in model.named_parameters(): # find the name of the weight
if id(ww) == id(parameter):
break
shape = ww.shape
gain = 1.0
scale = 1.0 # extra scale for gain
if isinstance(m, nn.Embedding):
gain = math.sqrt(max(shape[0], shape[1]))
if shape[0] == args.vocab_size and shape[1] == args.n_embd: # token emb?
scale = 1e-4
else:
scale = 0
if isinstance(m, nn.Linear):
if shape[0] > shape[1]:
gain = math.sqrt(shape[0] / shape[1])
if shape[0] == args.vocab_size and shape[1] == args.n_embd: # final projection?
scale = 0.5
if hasattr(m, "scale_init"):
scale = m.scale_init
# print(f"{str(shape[0]).ljust(5)} {str(shape[1]).ljust(5)} {str(scale).ljust(4)} {name}")
gain *= scale
if scale == -999:
nn.init.eye_(ww)
elif gain == 0:
# zero init is great for some RWKV matrices
nn.init.zeros_(ww)
elif gain > 0:
nn.init.orthogonal_(ww, gain=gain)
else:
nn.init.normal_(ww, mean=0.0, std=-scale)
class RWKV_TimeMix(torch.jit.ScriptModule):
def __init__(self, config, layer_id):
super().__init__()
self.layer_id = layer_id
self.ctx_len = config.ctx_len
self.n_embd = config.n_embd
attn_sz = config.n_embd
with torch.no_grad(): # fancy init
ratio_0_to_1 = (layer_id / (config.n_layer - 1)) # 0 to 1
ratio_1_to_almost0 = (1.0 - (layer_id / config.n_layer)) # 1 to ~0
# fancy time_decay
decay_speed = torch.ones(attn_sz)
for h in range(attn_sz):
decay_speed[h] = -5 + 8 * (h / (attn_sz-1)) ** (0.7 + 1.3 * ratio_0_to_1)
self.time_decay = nn.Parameter(decay_speed)
# print(layer_id, self.time_decay.flatten()[:3].cpu().numpy(), '...', self.time_decay.flatten()[-3:].cpu().numpy())
# fancy time_first
zigzag = (torch.tensor([(i+1)%3 - 1 for i in range(attn_sz)]) * 0.5)
self.time_first = nn.Parameter(torch.ones(attn_sz) * math.log(0.3) + zigzag)
# fancy time_mix
x = torch.ones(1, 1, config.n_embd)
for i in range(config.n_embd):
x[0, 0, i] = i / config.n_embd
self.time_mix_k = nn.Parameter(torch.pow(x, ratio_1_to_almost0))
self.time_mix_v = nn.Parameter(torch.pow(x, ratio_1_to_almost0) + 0.3 * ratio_0_to_1)
self.time_mix_r = nn.Parameter(torch.pow(x, 0.5 * ratio_1_to_almost0))
self.time_shift = nn.ZeroPad2d((0, 0, 1, -1))
self.key = nn.Linear(config.n_embd, attn_sz, bias=False)
self.value = nn.Linear(config.n_embd, attn_sz, bias=False)
self.receptance = nn.Linear(config.n_embd, attn_sz, bias=False)
self.output = nn.Linear(attn_sz, config.n_embd, bias=False)
self.key.scale_init = 0
self.receptance.scale_init = 0
self.output.scale_init = 0
@torch.jit.script_method
def jit_func(self, x):
# Mix x with the previous timestep to produce xk, xv, xr
xx = self.time_shift(x)
xk = x * self.time_mix_k + xx * (1 - self.time_mix_k)
xv = x * self.time_mix_v + xx * (1 - self.time_mix_v)
xr = x * self.time_mix_r + xx * (1 - self.time_mix_r)
# Use xk, xv, xr to produce k, v, r
k = self.key(xk)
v = self.value(xv)
r = self.receptance(xr)
sr = torch.sigmoid(r)
return sr, k, v
def forward(self, x):
B, T, C = x.size() # x = (Batch,Time,Channel)
sr, k, v = self.jit_func(x)
rwkv = sr * RUN_CUDA(B, T, C, self.time_decay, self.time_first, k, v)
rwkv = self.output(rwkv)
return rwkv
class RWKV_ChannelMix(torch.jit.ScriptModule):
def __init__(self, config, layer_id):
super().__init__()
self.layer_id = layer_id
self.time_shift = nn.ZeroPad2d((0, 0, 1, -1))
with torch.no_grad(): # fancy init of time_mix
ratio_1_to_almost0 = (1.0 - (layer_id / config.n_layer)) # 1 to ~0
x = torch.ones(1, 1, config.n_embd)
for i in range(config.n_embd):
x[0, 0, i] = i / config.n_embd
self.time_mix_k = nn.Parameter(torch.pow(x, ratio_1_to_almost0))
self.time_mix_r = nn.Parameter(torch.pow(x, ratio_1_to_almost0))
hidden_sz = 4 * config.n_embd
self.key = nn.Linear(config.n_embd, hidden_sz, bias=False)
self.receptance = nn.Linear(config.n_embd, config.n_embd, bias=False)
self.value = nn.Linear(hidden_sz, config.n_embd, bias=False)
self.value.scale_init = 0
self.receptance.scale_init = 0
@torch.jit.script_method
def forward(self, x):
xx = self.time_shift(x)
xk = x * self.time_mix_k + xx * (1 - self.time_mix_k)
xr = x * self.time_mix_r + xx * (1 - self.time_mix_r)
k = self.key(xk)
k = torch.square(torch.relu(k))
kv = self.value(k)
rkv = torch.sigmoid(self.receptance(xr)) * kv
return rkv
########################################################################################################
# The GPT Model with our blocks
########################################################################################################
class GPTConfig:
def __init__(self, vocab_size, ctx_len, **kwargs):
self.vocab_size = vocab_size
self.ctx_len = ctx_len
for k, v in kwargs.items():
setattr(self, k, v)
class Block(nn.Module):
def __init__(self, config, layer_id):
super().__init__()
self.config = config
self.layer_id = layer_id
self.ln1 = nn.LayerNorm(config.n_embd)
self.ln2 = nn.LayerNorm(config.n_embd)
if self.layer_id == 0:
self.ln0 = nn.LayerNorm(config.n_embd)
if self.layer_id == 0 and self.config.model_type == 'RWKV-ffnPre':
self.ffnPre = RWKV_ChannelMix(config, 0)
else:
self.att = RWKV_TimeMix(config, layer_id)
self.ffn = RWKV_ChannelMix(config, layer_id)
def forward(self, x):
if self.layer_id == 0:
x = self.ln0(x)
if self.layer_id == 0 and self.config.model_type == 'RWKV-ffnPre':
x = x + self.ffnPre(self.ln1(x)) # better in some cases
else:
x = x + self.att(self.ln1(x))
x = x + self.ffn(self.ln2(x))
return x
class GPT(nn.Module):
def __init__(self, config):
super().__init__()
self.step = 0
self.config = config
self.emb = nn.Embedding(config.vocab_size, config.n_embd)
self.blocks = nn.Sequential(*[Block(config, i)
for i in range(config.n_layer)])
self.ln_out = nn.LayerNorm(config.n_embd)
self.head = nn.Linear(config.n_embd, config.vocab_size, bias=False)
if RWKV_HEAD_QK_DIM > 0:
self.head_q = nn.Linear(config.n_embd, RWKV_HEAD_QK_DIM, bias=False)
self.head_q.scale_init = 0
self.head_k = nn.Linear(config.n_embd, RWKV_HEAD_QK_DIM, bias=False)
self.head_k.scale_init = 0.1
self.register_buffer("copy_mask", torch.tril(
torch.ones(config.ctx_len, config.ctx_len)))
self.ctx_len = config.ctx_len
try:
if os.environ['RWKV_LOAD_MODEL'] == str(False):
RWKV_Init(self, config)
except:
pass
logger.info("number of parameters: %e", sum(p.numel()
for p in self.parameters()))
def get_ctx_len(self):
return self.ctx_len
def _init_weights(self, module):
if isinstance(module, (nn.Linear)):
module.weight.data.normal_(mean=0.0, std=0.01)
if isinstance(module, (nn.Embedding)):
module.weight.data.normal_(mean=0.0, std=1e-5)
if isinstance(module, nn.Linear) and module.bias is not None:
module.bias.data.zero_()
def configure_optimizers(self, train_config):
no_decay = set()
for mn, m in self.named_modules(): # here we disable weight_decay
for pn, p in m.named_parameters():
fpn = '%s.%s' % (mn, pn) if mn else pn # full param name
no_decay.add(fpn)
param_dict = {pn: p for pn, p in self.named_parameters()}
optim_groups = [
{"params": [param_dict[pn]
for pn in sorted(list(no_decay))], "weight_decay": 0.0},
]
try:
optimizer = FusedAdam(optim_groups, lr=train_config.learning_rate, betas=train_config.betas, eps=train_config.eps, bias_correction=True, adam_w_mode=False, weight_decay=0, amsgrad=False)
except:
print('\n\nDeepSpeed not found. Using torch optimizer instead (probably slower)\n\n')
optimizer = torch.optim.Adam(optim_groups, lr=train_config.learning_rate, betas=train_config.betas, eps=train_config.eps)
return optimizer
def forward(self, idx, targets=None):
idx = idx.to(self.emb.weight.device)
self.step += 1
B, T = idx.size()
assert T <= self.ctx_len, "Cannot forward, because len(input) > model ctx_len."
x = self.emb(idx)
x = self.blocks(x)
x = self.ln_out(x)
if RWKV_HEAD_QK_DIM > 0:
q = self.head_q(x)[:, :T, :]
k = self.head_k(x)[:, :T, :]
c = (q @ k.transpose(-2, -1)) * (1.0 / RWKV_HEAD_QK_DIM)
c = c.masked_fill(self.copy_mask[:T, :T] == 0, 0)
if '32' in os.environ['RWKV_FLOAT_MODE']:
c = c @ F.one_hot(idx, num_classes=self.config.vocab_size)
elif os.environ['RWKV_FLOAT_MODE'] == 'fp16':
c = c @ F.one_hot(idx, num_classes=self.config.vocab_size).half()
elif os.environ['RWKV_FLOAT_MODE'] == 'bf16':
c = c @ F.one_hot(idx, num_classes=self.config.vocab_size).bfloat16()
x = self.head(x) + c
else:
x = self.head(x)
loss = None
if targets is not None:
loss = F.cross_entropy(x.view(-1, x.size(-1)), targets.to(x.device).view(-1))
return L2Wrap.apply(loss, x)
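
A minimal sketch of driving this class for a training step, under explicit assumptions: a CUDA device and nvcc are available (importing the module JIT-compiles the kernel above), the environment variables are set before import, and the config sizes are illustrative:

import os, torch
os.environ['RWKV_FLOAT_MODE'] = 'fp32'  # the '32' substring selects the float32 paths above
os.environ['RWKV_LOAD_MODEL'] = 'True'  # anything other than 'False' skips the slow RWKV_Init
from RWKV4.src.model import GPT, GPTConfig

cfg = GPTConfig(vocab_size=50277, ctx_len=1024,
                model_type='RWKV', n_layer=12, n_embd=768)  # illustrative sizes
model = GPT(cfg).cuda()
idx = torch.randint(0, cfg.vocab_size, (1, 128), device='cuda')
targets = torch.randint(0, cfg.vocab_size, (1, 128), device='cuda')
loss = model(idx, targets)  # cross-entropy, wrapped by L2Wrap to keep logits near 0
loss.backward()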

392
RWKV4/src/model_run.py Normal file

@@ -0,0 +1,392 @@
########################################################################################################
# The RWKV Language Model - https://github.com/BlinkDL/RWKV-LM
########################################################################################################
import types
import copy
import torch
import math, os
from torch.nn import functional as F
import torch.nn as nn
RWKV_HEAD_QK_DIM = 0
print(f'\nRWKV_HEAD_QK_DIM {RWKV_HEAD_QK_DIM}\n')
DEBUG_TIME = False # True False - show trained time-coeffs
########################################################################################################
# CUDA Kernel
########################################################################################################
if os.environ['RWKV_RUN_DEVICE'] == 'cuda':
T_MAX = 1024 # increase this if your ctx_len is long [NOTE: TAKES LOTS OF VRAM!]
# it's possible to go beyond CUDA limitations if you slice the ctx and pass the hidden state in each slice
from torch.utils.cpp_extension import load
wkv_cuda = load(name="wkv", sources=["RWKV4/cuda/wkv_op.cpp", "RWKV4/cuda/wkv_cuda.cu"],
verbose=True, extra_cuda_cflags=['-res-usage', '--maxrregcount 60', '--use_fast_math', '-O3', '-Xptxas -O3', f'-DTmax={T_MAX}'])
class WKV(torch.autograd.Function):
@staticmethod
def forward(ctx, B, T, C, w, u, k, v):
ctx.B = B
ctx.T = T
ctx.C = C
assert T <= T_MAX
assert B * C % min(C, 1024) == 0
if '32' in os.environ['RWKV_FLOAT_MODE']:
w = -torch.exp(w.contiguous())
u = u.contiguous()
k = k.contiguous()
v = v.contiguous()
else:
w = -torch.exp(w.float().contiguous())
u = u.float().contiguous()
k = k.float().contiguous()
v = v.float().contiguous()
ctx.save_for_backward(w, u, k, v)
y = torch.empty((B, T, C), device='cuda', memory_format=torch.contiguous_format)
wkv_cuda.forward(B, T, C, w, u, k, v, y)
if '32' in os.environ['RWKV_FLOAT_MODE']:
return y
elif os.environ['RWKV_FLOAT_MODE'] == 'fp16':
return y.half()
elif os.environ['RWKV_FLOAT_MODE'] == 'bf16':
return y.bfloat16()
@staticmethod
def backward(ctx, gy):
B = ctx.B
T = ctx.T
C = ctx.C
assert T <= T_MAX
assert B * C % min(C, 1024) == 0
w, u, k, v = ctx.saved_tensors
gw = torch.zeros((B, C), device='cuda').contiguous()
gu = torch.zeros((B, C), device='cuda').contiguous()
gk = torch.zeros((B, T, C), device='cuda').contiguous()
gv = torch.zeros((B, T, C), device='cuda').contiguous()
if '32' in os.environ['RWKV_FLOAT_MODE']:
wkv_cuda.backward(B, T, C, w, u, k, v, gy.contiguous(), gw, gu, gk, gv)
else:
wkv_cuda.backward(B, T, C, w, u, k, v, gy.float().contiguous(), gw, gu, gk, gv)
gw = torch.sum(gw, dim=0)
gu = torch.sum(gu, dim=0)
if '32' in os.environ['RWKV_FLOAT_MODE']:
return (None, None, None, gw, gu, gk, gv)
elif os.environ['RWKV_FLOAT_MODE'] == 'fp16':
return (None, None, None, gw.half(), gu.half(), gk.half(), gv.half())
elif os.environ['RWKV_FLOAT_MODE'] == 'bf16':
return (None, None, None, gw.bfloat16(), gu.bfloat16(), gk.bfloat16(), gv.bfloat16())
def RUN_CUDA(B, T, C, w, u, k, v):
return WKV.apply(B, T, C, w.cuda(), u.cuda(), k.cuda(), v.cuda())
############################################################################################################
RWKV_CFG = types.SimpleNamespace()
class RWKV_ChannelMix(nn.Module):
def __init__(self, layer_id):
super().__init__()
self.layer_id = layer_id
self.time_shift = nn.ZeroPad2d((0,0,1,-1))
self.time_mix_k = nn.Parameter(torch.ones(1, 1, RWKV_CFG.n_embd))
self.time_mix_r = nn.Parameter(torch.ones(1, 1, RWKV_CFG.n_embd))
hidden_sz = 4 * RWKV_CFG.n_embd
self.key = nn.Linear(RWKV_CFG.n_embd, hidden_sz, bias=False)
self.receptance = nn.Linear(RWKV_CFG.n_embd, RWKV_CFG.n_embd, bias=False)
self.value = nn.Linear(hidden_sz, RWKV_CFG.n_embd, bias=False)
def forward(self, x):
xx = self.time_shift(x)
xk = x * self.time_mix_k + xx * (1 - self.time_mix_k)
xr = x * self.time_mix_r + xx * (1 - self.time_mix_r)
k = self.key(xk)
k = torch.square(torch.relu(k))
kv = self.value(k)
rkv = torch.sigmoid(self.receptance(xr)) * kv
return rkv
class RWKV_TimeMix(nn.Module):
def __init__(self, layer_id):
super().__init__()
self.layer_id = layer_id
self.time_decay = nn.Parameter(torch.ones(RWKV_CFG.n_embd))
self.time_first = nn.Parameter(torch.ones(RWKV_CFG.n_embd) * math.log(0.3))
self.time_shift = nn.ZeroPad2d((0,0,1,-1))
self.time_mix_k = nn.Parameter(torch.ones(1,1,RWKV_CFG.n_embd))
self.time_mix_v = nn.Parameter(torch.ones(1,1,RWKV_CFG.n_embd))
self.time_mix_r = nn.Parameter(torch.ones(1,1,RWKV_CFG.n_embd))
self.key = nn.Linear(RWKV_CFG.n_embd, RWKV_CFG.n_embd, bias=False)
self.value = nn.Linear(RWKV_CFG.n_embd, RWKV_CFG.n_embd, bias=False)
self.receptance = nn.Linear(RWKV_CFG.n_embd, RWKV_CFG.n_embd, bias=False)
self.output = nn.Linear(RWKV_CFG.n_embd, RWKV_CFG.n_embd, bias=False)
def forward(self, x):
B, T, C = x.size()
xx = self.time_shift(x)
xk = x * self.time_mix_k + xx * (1 - self.time_mix_k)
xv = x * self.time_mix_v + xx * (1 - self.time_mix_v)
xr = x * self.time_mix_r + xx * (1 - self.time_mix_r)
k = self.key(xk)
v = self.value(xv)
r = self.receptance(xr)
rwkv = torch.sigmoid(r) * RUN_CUDA(B, T, C, self.time_decay, self.time_first, k, v)
rwkv = self.output(rwkv)
return rwkv
class Block(nn.Module):
def __init__(self, layer_id):
super().__init__()
self.layer_id = layer_id
self.ln1 = nn.LayerNorm(RWKV_CFG.n_embd)
self.ln2 = nn.LayerNorm(RWKV_CFG.n_embd)
if self.layer_id == 0:
self.ln0 = nn.LayerNorm(RWKV_CFG.n_embd)
if self.layer_id == 0 and RWKV_CFG.model_type == 'RWKV-ffnPre':
self.ffnPre = RWKV_ChannelMix(layer_id+1000)
else:
self.att = RWKV_TimeMix(layer_id)
self.ffn = RWKV_ChannelMix(layer_id)
def forward(self, x):
if self.layer_id == 0:
x = self.ln0(x)
if self.layer_id == 0 and RWKV_CFG.model_type == 'RWKV-ffnPre':
x = x + self.ffnPre(self.ln1(x))
else:
x = x + self.att(self.ln1(x))
x = x + self.ffn(self.ln2(x))
return x
class RWKV_GPT(nn.Module):
def __init__(self, MODEL_NAME, RUN_DEVICE, model_type, vocab_size, n_layer, n_embd, ctx_len):
global RWKV_CFG
super().__init__()
RWKV_CFG.RUN_DEVICE = RUN_DEVICE
RWKV_CFG.model_type = model_type
RWKV_CFG.vocab_size = vocab_size
RWKV_CFG.n_layer = n_layer
RWKV_CFG.n_embd = n_embd
RWKV_CFG.ctx_len = ctx_len
print('\nloading RWKV-GPT', MODEL_NAME)
self.emb = nn.Embedding(vocab_size, n_embd)
self.blocks = nn.Sequential(*[Block(i) for i in range(n_layer)])
self.ln_out = nn.LayerNorm(n_embd)
self.head = nn.Linear(n_embd, vocab_size, bias=False)
if RWKV_HEAD_QK_DIM > 0:
self.head_q = nn.Linear(n_embd, RWKV_HEAD_QK_DIM, bias=False)
self.head_q.scale_init = 0
self.head_k = nn.Linear(n_embd, RWKV_HEAD_QK_DIM, bias=False)
self.head_k.scale_init = 0.1
self.register_buffer("copy_mask", torch.tril(
torch.ones(ctx_len, ctx_len)))
self.ctx_len = ctx_len
self.eval()
self.load_state_dict(torch.load(MODEL_NAME + '.pth'))
self.eval()
def forward(self, idx):
B, T = idx.size()
assert T <= self.ctx_len, "Cannot forward, because len(input) > model ctx_len."
x = self.emb(idx)
x = self.blocks(x)
x = self.ln_out(x)
if RWKV_HEAD_QK_DIM > 0:
q = self.head_q(x)[:, :T, :]
k = self.head_k(x)[:, :T, :]
c = (q @ k.transpose(-2, -1)) * (1.0 / RWKV_HEAD_QK_DIM)
c = c.masked_fill(self.copy_mask[:T, :T] == 0, 0)
if '32' in os.environ['RWKV_FLOAT_MODE']:
c = c @ F.one_hot(idx, num_classes=RWKV_CFG.vocab_size)
elif os.environ['RWKV_FLOAT_MODE'] == 'fp16':
c = c @ F.one_hot(idx, num_classes=RWKV_CFG.vocab_size).half()
elif os.environ['RWKV_FLOAT_MODE'] == 'bf16':
c = c @ F.one_hot(idx, num_classes=RWKV_CFG.vocab_size).bfloat16()
x = self.head(x) + c
else:
x = self.head(x)
return x
############################################################################################################
class RWKV_RNN(): # this is running in FP32 at this moment
def __init__(self, MODEL_NAME, RUN_DEVICE, model_type, n_layer, n_embd, ctx_len):
self.RUN_DEVICE = RUN_DEVICE
self.model_type = model_type
self.n_layer = n_layer
self.n_embd = n_embd
self.ctx_len = ctx_len
self.w = types.SimpleNamespace()
w = torch.load(MODEL_NAME + '.pth',
map_location=torch.device(RUN_DEVICE))
for x in w.keys():
w[x] = w[x].float()
if '.time_' in x:
w[x] = w[x].squeeze()
if '.time_decay' in x:
w[x] = -torch.exp(w[x])
if DEBUG_TIME and '.time_' in x:
print(x, w[x].squeeze().cpu().numpy())
xx = x.split('.')
here = self.w
for i in range(len(xx)):
if xx[i].isdigit():
ii = int(xx[i])
if ii not in here:
here[ii] = types.SimpleNamespace()
here = here[ii]
else:
if i == len(xx) - 1:
setattr(here, xx[i], w[x])
elif not hasattr(here, xx[i]):
if xx[i+1].isdigit():
setattr(here, xx[i], {})
else:
setattr(here, xx[i], types.SimpleNamespace())
here = getattr(here, xx[i])
self.clear()
def clear(self):
self.xx = {}
self.aa = {}
self.bb = {}
self.pp = {}
self.hk = None
def save(self, target):
target.xx = copy.deepcopy(self.xx)
target.aa = copy.deepcopy(self.aa)
target.bb = copy.deepcopy(self.bb)
target.pp = copy.deepcopy(self.pp)
target.hk = copy.deepcopy(self.hk)
def load(self, target):
self.xx = copy.deepcopy(target.xx)
self.aa = copy.deepcopy(target.aa)
self.bb = copy.deepcopy(target.bb)
self.pp = copy.deepcopy(target.pp)
self.hk = copy.deepcopy(target.hk)
def LN(self, xx, w):
return F.layer_norm(xx, (self.n_embd,), weight=w.weight, bias=w.bias)
def FF(self, xx, w, name):
if name not in self.xx:
self.xx[name] = torch.zeros(self.n_embd, device=self.RUN_DEVICE)
xk = xx * w.time_mix_k + self.xx[name] * (1 - w.time_mix_k)
xr = xx * w.time_mix_r + self.xx[name] * (1 - w.time_mix_r)
self.xx[name] = xx
r = torch.sigmoid(w.receptance.weight @ xr)
k = torch.square(torch.relu(w.key.weight @ xk))
kv = w.value.weight @ k
return r * kv
def SA(self, xx, w, name):
if name not in self.xx:
self.xx[name] = torch.zeros(self.n_embd, device=self.RUN_DEVICE)
self.aa[name] = torch.zeros(self.n_embd, device=self.RUN_DEVICE)
self.bb[name] = torch.zeros(self.n_embd, device=self.RUN_DEVICE)
self.pp[name] = torch.zeros(self.n_embd, device=self.RUN_DEVICE) - 1e30
xk = xx * w.time_mix_k + self.xx[name] * (1 - w.time_mix_k)
xv = xx * w.time_mix_v + self.xx[name] * (1 - w.time_mix_v)
xr = xx * w.time_mix_r + self.xx[name] * (1 - w.time_mix_r)
self.xx[name] = xx
r = torch.sigmoid(w.receptance.weight @ xr)
k = w.key.weight @ xk
v = w.value.weight @ xv
pp = self.pp[name]
aa = self.aa[name]
bb = self.bb[name]
ww = w.time_first + k
p = torch.maximum(pp, ww)
e1 = torch.exp(pp - p)
e2 = torch.exp(ww - p)
a = e1 * aa + e2 * v
b = e1 * bb + e2
ww = pp + w.time_decay
p = torch.maximum(ww, k)
e1 = torch.exp(ww - p)
e2 = torch.exp(k - p)
self.aa[name] = e1 * aa + e2 * v
self.bb[name] = e1 * bb + e2
self.pp[name] = p
rwkv = r * a / b
return w.output.weight @ rwkv
def run(self, ctx):
w = self.w
x = w.emb.weight[ctx[-1]]
for i in range(self.n_layer):
if i == 0:
x = self.LN(x, w.blocks[i].ln0)
if i == 0 and self.model_type == 'RWKV-ffnPre':
x = x + self.FF(self.LN(x, w.blocks[i].ln1), w.blocks[i].ffnPre, f'ffnPre.{i}')
else:
x = x + self.SA(self.LN(x, w.blocks[i].ln1), w.blocks[i].att, f'att.{i}')
x = x + self.FF(self.LN(x, w.blocks[i].ln2), w.blocks[i].ffn, f'ffn.{i}')
x = self.LN(x, w.ln_out)
if RWKV_HEAD_QK_DIM > 0:
if self.hk == None:
self.hk = (w.head_k.weight @ x).unsqueeze(0)
else:
self.hk = torch.cat(
[self.hk, (w.head_k.weight @ x).unsqueeze(0)], dim=0)
if self.hk.shape[0] > self.ctx_len:
self.hk = self.hk[-self.ctx_len:, :]
q = w.head_q.weight @ x
x = w.head.weight @ x
x = x.cpu().numpy().tolist()
c = (self.hk @ q) / RWKV_HEAD_QK_DIM
for i in range(len(c)):
x[ctx[i]] += c[i]
else:
x = w.head.weight @ x
x = x.cpu().numpy().tolist()
return x
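
A sketch of a greedy generation loop around the RNN class above; the checkpoint path, sizes, and argmax decoding are illustrative assumptions, and RWKV_RUN_DEVICE must be set before import because the CUDA-kernel branch is chosen at module import time:

import os
os.environ['RWKV_RUN_DEVICE'] = 'cpu'  # skip building the CUDA kernel for RNN mode
from RWKV4.src.model_run import RWKV_RNN

model = RWKV_RNN('path/to/model', 'cpu', 'RWKV',
                 n_layer=24, n_embd=1024, ctx_len=1024)  # illustrative checkpoint and sizes
ctx = [187, 4299]  # some prompt token ids (illustrative)
model.clear()
for i in range(len(ctx)):  # run() consumes one token per call, keeping state internally
    out = model.run(ctx[:i + 1])
for _ in range(32):
    nxt = max(range(len(out)), key=lambda j: out[j])  # greedy argmax over the logits list
    ctx.append(nxt)
    out = model.run(ctx)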

95
RWKV4/src/utils.py Normal file

@@ -0,0 +1,95 @@
########################################################################################################
# The RWKV Language Model - https://github.com/BlinkDL/RWKV-LM
########################################################################################################
import os
try:
NUM_GPUS = int(os.environ['RWKV_NUM_GPUS'])
except:
NUM_GPUS = 1
import json
import random
import numpy as np
import torch
from torch.nn import functional as F
class TOKENIZER():
def __init__(self, WORD_NAME, UNKNOWN_CHAR='\ue083'):
if 'list' in str(type(WORD_NAME)):
self.charMode = False
if WORD_NAME[0] == WORD_NAME[1]:
from transformers import PreTrainedTokenizerFast
self.tokenizer = PreTrainedTokenizerFast(tokenizer_file=WORD_NAME[0])
else:
from transformers import GPT2TokenizerFast
self.tokenizer = GPT2TokenizerFast(WORD_NAME[0], WORD_NAME[1])
self.vocab_size = len(self.tokenizer)
else:
self.charMode = True
with open(WORD_NAME + '.json', "r", encoding="utf-16") as result_file:
self.word_table = json.load(result_file)
self.vocab_size = len(self.word_table)
self.stoi = {v: int(k) for k, v in self.word_table.items()}
self.itos = {int(k): v for k, v in self.word_table.items()}
self.UNKNOWN_CHAR = self.stoi[UNKNOWN_CHAR]
def refine_context(self, context):
context = context.strip().split('\n')
for c in range(len(context)):
context[c] = context[c].strip().strip('\u3000').strip('\r')
context = list(filter(lambda c: c != '', context))
context = '\n' + ('\n'.join(context)).strip()
if context == '':
context = '\n'
return context
def sample_logits(self, out, x, ctx_len, temperature=1.0, top_p_usual=None, top_p_newline=None):
# out[self.UNKNOWN_CHAR] = -float('Inf')
lastChar = int(x[-1])
probs = F.softmax(torch.tensor(out), dim=-1)
if self.charMode:
if self.itos[lastChar] == '\n':
top_p = top_p_newline
else:
top_p = top_p_usual
else:
top_p = top_p_usual
sorted_probs, s_index = torch.sort(probs, descending=True)
# for j in range(30):
# pp = sorted_probs[j].item()
# if pp < 0.005:
# break
# ss = self.itos[int(s_index[j])].replace('\n','_')
# print(f'{math.floor(pp*100):>3.0f}{ss}', end='')
# print('')
cumulative_probs = torch.cumsum(sorted_probs, dim=-1).numpy()
cutoff = float(sorted_probs[np.argmax(cumulative_probs > top_p)])
probs[probs < cutoff] = 0
# print("[" + str(round(cutoff,4)) + ' ' + str(round(to_float(sum(probs)),3)) + "]", end = "")
if temperature != 1.0:
probs = probs.pow(1.0 / temperature)
return torch.multinomial(probs, num_samples=1)[0]
def to_float(x):
return x.cpu().detach().numpy().flatten()[0].astype(float)
def set_seed(seed):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed_all(seed)
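
A sketch of combining the tokenizer and sampler above; the tokenizer file name and the uniform stand-in logits are illustrative assumptions (in practice out would come from RWKV_RNN.run). Passing the same file name twice selects the PreTrainedTokenizerFast branch:

from RWKV4.src.utils import TOKENIZER, set_seed

set_seed(42)
tok = TOKENIZER(['20B_tokenizer.json', '20B_tokenizer.json'])  # same name twice -> tokenizer-file branch
ctx = tok.tokenizer.encode('The quick brown fox')
out = [0.0] * tok.vocab_size  # stand-in logits; normally the list returned by RWKV_RNN.run(ctx)
next_id = tok.sample_logits(out, ctx, ctx_len=1024, temperature=1.0,
                            top_p_usual=0.9, top_p_newline=0.9)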

File diff suppressed because it is too large

173
attention_bias.py Normal file

@@ -0,0 +1,173 @@
# All OPT code is under the following license:
# Copyright 2022 The Fairseq Authors and The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import torch
from torch import nn
import transformers
from typing import Optional, Tuple
has_harassed_user = False
attention_bias = None
# Attention patch for attention bias
def OPTAttention_forward(
self,
hidden_states: torch.Tensor,
key_value_states: Optional[torch.Tensor] = None,
past_key_value: Optional[Tuple[torch.Tensor]] = None,
attention_mask: Optional[torch.Tensor] = None,
layer_head_mask: Optional[torch.Tensor] = None,
output_attentions: bool = False,
) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
global attention_bias
global has_harassed_user
"""Input shape: Batch x Time x Channel"""
# if key_value_states are provided this layer is used as a cross-attention layer
# for the decoder
is_cross_attention = key_value_states is not None
bsz, tgt_len, _ = hidden_states.size()
# get query proj
query_states = self.q_proj(hidden_states) * self.scaling
# get key, value proj
if is_cross_attention and past_key_value is not None:
# reuse k,v, cross_attentions
key_states = past_key_value[0]
value_states = past_key_value[1]
elif is_cross_attention:
# cross_attentions
key_states = self._shape(self.k_proj(key_value_states), -1, bsz)
value_states = self._shape(self.v_proj(key_value_states), -1, bsz)
elif past_key_value is not None:
# reuse k, v, self_attention
key_states = self._shape(self.k_proj(hidden_states), -1, bsz)
value_states = self._shape(self.v_proj(hidden_states), -1, bsz)
key_states = torch.cat([past_key_value[0], key_states], dim=2)
value_states = torch.cat([past_key_value[1], value_states], dim=2)
else:
# self_attention
key_states = self._shape(self.k_proj(hidden_states), -1, bsz)
value_states = self._shape(self.v_proj(hidden_states), -1, bsz)
if self.is_decoder:
# if cross_attention save Tuple(torch.Tensor, torch.Tensor) of all cross attention key/value_states.
# Further calls to cross_attention layer can then reuse all cross-attention
# key/value_states (first "if" case)
# if uni-directional self-attention (decoder) save Tuple(torch.Tensor, torch.Tensor) of
# all previous decoder key/value_states. Further calls to uni-directional self-attention
# can concat previous decoder key/value_states to current projected key/value_states (third "elif" case)
# if encoder bi-directional self-attention `past_key_value` is always `None`
past_key_value = (key_states, value_states)
proj_shape = (bsz * self.num_heads, -1, self.head_dim)
query_states = self._shape(query_states, tgt_len, bsz).view(*proj_shape)
key_states = key_states.view(*proj_shape)
value_states = value_states.view(*proj_shape)
src_len = key_states.size(1)
attn_weights = torch.bmm(query_states, key_states.transpose(1, 2))
if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
raise ValueError(
f"Attention weights should be of size {(bsz * self.num_heads, tgt_len, src_len)}, but is"
f" {attn_weights.size()}"
)
if attention_mask is not None:
if attention_mask.size() != (bsz, 1, tgt_len, src_len):
raise ValueError(
f"Attention mask should be of size {(bsz, 1, tgt_len, src_len)}, but is {attention_mask.size()}"
)
attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) + attention_mask
attn_weights = torch.max(attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min))
attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
dtype_attn_weights = attn_weights.dtype
# upcast to fp32 if the weights are in fp16. Please see https://github.com/huggingface/transformers/pull/17437
if dtype_attn_weights == torch.float16:
attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(dtype_attn_weights)
else:
attn_weights = nn.functional.softmax(attn_weights, dim=-1)
if layer_head_mask is not None:
if layer_head_mask.size() != (self.num_heads,):
raise ValueError(
f"Head mask for a single layer should be of size {(self.num_heads,)}, but is"
f" {layer_head_mask.size()}"
)
attn_weights = layer_head_mask.view(1, -1, 1, 1) * attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len)
if output_attentions:
# this operation is a bit awkward, but it's required to
# make sure that attn_weights keeps its gradient.
# In order to do so, attn_weights have to be reshaped
# twice and have to be reused in the following
attn_weights_reshaped = attn_weights.view(bsz, self.num_heads, tgt_len, src_len)
attn_weights = attn_weights_reshaped.view(bsz * self.num_heads, tgt_len, src_len)
else:
attn_weights_reshaped = None
attn_probs = nn.functional.dropout(attn_weights, p=self.dropout, training=self.training)
# BEGIN ATTENTION BIAS
if attention_bias is not None and self.is_decoder:
if not has_harassed_user:
print("[attention] Applying attention bias (will not show this again!!!!!!!!!!!)")
has_harassed_user = True
extra_tokens = attn_probs.shape[2] - attention_bias.shape[0]
# Obviously we add tokens during generation
att = nn.functional.pad(attention_bias, pad=(0, extra_tokens), value=1)
# May be slow, not sure
att = att.to(attn_probs.device)
attn_probs[:, :, :] *= att
attn_probs = nn.functional.normalize(attn_probs, p=1, dim=2)
# END ATTENTION BIAS
attn_output = torch.bmm(attn_probs, value_states)
if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
raise ValueError(
f"`attn_output` should be of size {(bsz, self.num_heads, tgt_len, self.head_dim)}, but is"
f" {attn_output.size()}"
)
attn_output = attn_output.view(bsz, self.num_heads, tgt_len, self.head_dim)
attn_output = attn_output.transpose(1, 2)
# Use the `embed_dim` from the config (stored in the class) rather than `hidden_state` because `attn_output` can be
# partitioned across GPUs when using tensor-parallelism.
attn_output = attn_output.reshape(bsz, tgt_len, self.embed_dim)
attn_output = self.out_proj(attn_output)
return attn_output, attn_weights_reshaped, past_key_value
# Patch patch patch!
def do_patches():
transformers.models.opt.modeling_opt.OPTAttention.forward = OPTAttention_forward
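
A sketch of how the patch appears meant to be used (the checkpoint and bias values are illustrative assumptions): do_patches() swaps in the forward above, the module-level attention_bias is set to one weight per source token, and every decoder self-attention then rescales and renormalizes its attention probabilities:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import attention_bias

attention_bias.do_patches()  # replace OPTAttention.forward with the biased version

tokenizer = AutoTokenizer.from_pretrained('facebook/opt-125m')  # illustrative OPT checkpoint
model = AutoModelForCausalLM.from_pretrained('facebook/opt-125m')
ids = tokenizer('KoboldAI is', return_tensors='pt').input_ids

# Down-weight attention to the first prompt token, leave the rest at full strength;
# tokens added during generation are padded with 1.0 inside the patched forward.
attention_bias.attention_bias = torch.tensor([0.25] + [1.0] * (ids.shape[1] - 1))
out = model.generate(ids, max_new_tokens=8)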

bridge.lua

@@ -380,7 +380,7 @@ return function(_python, _bridged)
---@return boolean
function KoboldWorldInfoEntry:is_valid()
return _python.as_attrgetter(bridged.vars.worldinfo_u).get(rawget(self, "_uid")) ~= nil
return _python.as_attrgetter(bridged.koboldai_vars.worldinfo_u).get(rawget(self, "_uid")) ~= nil
end
---@param submission? string
@@ -475,7 +475,7 @@ return function(_python, _bridged)
if not check_validity(self) or type(u) ~= "number" then
return
end
local query = _python.as_attrgetter(bridged.vars.worldinfo_u).get(u)
local query = _python.as_attrgetter(bridged.koboldai_vars.worldinfo_u).get(u)
if query == nil or (rawget(self, "_name") == "KoboldWorldInfoFolder" and self.uid ~= _python.as_attrgetter(query).get("folder")) then
return
end
@@ -522,7 +522,7 @@ return function(_python, _bridged)
---@return boolean
function KoboldWorldInfoFolder:is_valid()
return _python.as_attrgetter(bridged.vars.wifolders_d).get(rawget(self, "_uid")) ~= nil
return _python.as_attrgetter(bridged.koboldai_vars.wifolders_d).get(rawget(self, "_uid")) ~= nil
end
---@param t KoboldWorldInfoFolder
@@ -531,7 +531,7 @@ return function(_python, _bridged)
if not check_validity(t) then
return 0
end
return math.tointeger(_python.builtins.len(_python.as_attrgetter(bridged.vars.wifolders_u).get(t.uid))) - 1
return math.tointeger(_python.builtins.len(_python.as_attrgetter(bridged.koboldai_vars.wifolders_u).get(t.uid))) - 1
end
KoboldWorldInfoFolder_mt._kobold_next = KoboldWorldInfoEntry_mt._kobold_next
@@ -548,7 +548,7 @@ return function(_python, _bridged)
elseif rawget(t, "_name") == "KoboldWorldInfoFolder" and k == "name" then
return bridged.folder_get_attr(t.uid, k)
elseif type(k) == "number" then
local query = rawget(t, "_name") == "KoboldWorldInfoFolder" and _python.as_attrgetter(bridged.vars.wifolders_u).get(t.uid) or bridged.vars.worldinfo_i
local query = rawget(t, "_name") == "KoboldWorldInfoFolder" and _python.as_attrgetter(bridged.koboldai_vars.wifolders_u).get(t.uid) or bridged.koboldai_vars.worldinfo_i
k = math.tointeger(k)
if k == nil or k < 1 or k > #t then
return
@@ -599,7 +599,7 @@ return function(_python, _bridged)
if not check_validity(self) or type(u) ~= "number" then
return
end
local query = _python.as_attrgetter(bridged.vars.wifolders_d).get(u)
local query = _python.as_attrgetter(bridged.koboldai_vars.wifolders_d).get(u)
if query == nil then
return
end
@@ -619,7 +619,7 @@ return function(_python, _bridged)
if not check_validity(t) then
return 0
end
return _python.builtins.len(bridged.vars.wifolders_l)
return _python.builtins.len(bridged.koboldai_vars.wifolders_l)
end
KoboldWorldInfoFolderSelector_mt._kobold_next = KoboldWorldInfoEntry_mt._kobold_next
@@ -633,7 +633,7 @@ return function(_python, _bridged)
return
end
local folder = deepcopy(KoboldWorldInfoFolder)
rawset(folder, "_uid", math.tointeger(bridged.vars.wifolders_l[k-1]))
rawset(folder, "_uid", math.tointeger(bridged.koboldai_vars.wifolders_l[k-1]))
return folder
end
@@ -672,7 +672,7 @@ return function(_python, _bridged)
if not check_validity(t) then
return 0
end
return math.tointeger(_python.builtins.len(bridged.vars.worldinfo)) - math.tointeger(_python.builtins.len(bridged.vars.wifolders_l)) - 1
return math.tointeger(_python.builtins.len(bridged.koboldai_vars.worldinfo)) - math.tointeger(_python.builtins.len(bridged.koboldai_vars.wifolders_l)) - 1
end
KoboldWorldInfo_mt._kobold_next = KoboldWorldInfoEntry_mt._kobold_next
@@ -725,12 +725,12 @@ return function(_python, _bridged)
end
if k == "content" then
if rawget(t, "_num") == 0 then
if bridged.vars.gamestarted then
local prompt = koboldbridge.userstate == "genmod" and bridged.vars._prompt or bridged.vars.prompt
if bridged.koboldai_vars.gamestarted then
local prompt = koboldbridge.userstate == "genmod" and bridged.koboldai_vars._prompt or bridged.koboldai_vars.prompt
return prompt
end
end
local actions = koboldbridge.userstate == "genmod" and bridged.vars._actions or bridged.vars.actions
local actions = koboldbridge.userstate == "genmod" and bridged.koboldai_vars._actions or bridged.koboldai_vars.actions
return _python.as_attrgetter(actions).get(math.tointeger(rawget(t, "_num")) - 1)
end
end
@@ -752,7 +752,7 @@ return function(_python, _bridged)
error("Attempted to set the prompt chunk's content to the empty string; this is not allowed")
return
end
local actions = koboldbridge.userstate == "genmod" and bridged.vars._actions or bridged.vars.actions
local actions = koboldbridge.userstate == "genmod" and bridged.koboldai_vars._actions or bridged.koboldai_vars.actions
if _k ~= 0 and _python.as_attrgetter(actions).get(_k-1) == nil then
return
end
@@ -777,11 +777,11 @@ return function(_python, _bridged)
---@return fun(): KoboldStoryChunk, table, nil
function KoboldStory:forward_iter()
local actions = koboldbridge.userstate == "genmod" and bridged.vars._actions or bridged.vars.actions
local actions = koboldbridge.userstate == "genmod" and bridged.koboldai_vars._actions or bridged.koboldai_vars.actions
local nxt, iterator = _python.iter(actions)
local run_once = false
local function f()
if not bridged.vars.gamestarted then
if not bridged.koboldai_vars.gamestarted then
return
end
local chunk = deepcopy(KoboldStoryChunk)
@@ -805,11 +805,11 @@ return function(_python, _bridged)
---@return fun(): KoboldStoryChunk, table, nil
function KoboldStory:reverse_iter()
local actions = koboldbridge.userstate == "genmod" and bridged.vars._actions or bridged.vars.actions
local actions = koboldbridge.userstate == "genmod" and bridged.koboldai_vars._actions or bridged.koboldai_vars.actions
local nxt, iterator = _python.iter(_python.builtins.reversed(actions))
local last_run = false
local function f()
if not bridged.vars.gamestarted or last_run then
if not bridged.koboldai_vars.gamestarted or last_run then
return
end
local chunk = deepcopy(KoboldStoryChunk)
@@ -1039,7 +1039,7 @@ return function(_python, _bridged)
---@param t KoboldLib
---@return string
function KoboldLib_getters.submission(t)
return bridged.vars.submission
return bridged.koboldai_vars.submission
end
---@param t KoboldLib
@@ -1051,11 +1051,11 @@ return function(_python, _bridged)
elseif type(v) ~= "string" then
error("`KoboldLib.submission` must be a string; you attempted to set it to a " .. type(v))
return
elseif not bridged.vars.gamestarted and v == "" then
elseif not bridged.koboldai_vars.gamestarted and v == "" then
error("`KoboldLib.submission` must not be set to the empty string when the story is empty")
return
end
bridged.vars.submission = v
bridged.koboldai_vars.submission = v
end
@@ -1100,7 +1100,7 @@ return function(_python, _bridged)
---@param t KoboldLib
---@return string
function KoboldLib_getters.model(t)
return bridged.vars.model
return bridged.koboldai_vars.model
end
---@param t KoboldLib
@@ -1136,7 +1136,7 @@ return function(_python, _bridged)
---@param t KoboldLib
---@return string
function KoboldLib_getters.custmodpth(t)
return bridged.vars.custmodpth
return bridged.koboldai_vars.custmodpth
end
---@param t KoboldLib
@@ -2013,7 +2013,7 @@ return function(_python, _bridged)
koboldbridge.userstate = "genmod"
if koboldbridge.genmod ~= nil then
local _generated = deepcopy(koboldbridge.generated)
if not bridged.vars.nogenmod then
if not bridged.koboldai_vars.nogenmod then
r = koboldbridge.genmod()
end
setmetatable(koboldbridge.logits, nil)

View File

@@ -6,7 +6,6 @@
"name": "ColabKobold GPU",
"private_outputs": true,
"provenance": [],
"collapsed_sections": [],
"include_colab_link": true
},
"kernelspec": {
@@ -40,7 +39,23 @@
"\n",
"For more information about KoboldAI check our our Github readme : https://github.com/KoboldAI/KoboldAI-Client/blob/main/readme.md\n",
"\n",
"For the larger AI models (That are typically more coherent) check out our **[TPU edition](https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/TPU.ipynb)**!"
"For the larger AI models (That are typically more coherent) check out our **[TPU edition](https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/TPU.ipynb)**!\n",
"\n",
"---\n",
"## How to load KoboldAI: Everything you need to know\n",
"1. On a phone? First put your browser in desktop mode because of a Google Colab bug. Otherwise nothing will happen when you click the play button. Then tap the play button next to \"<-- Tap This if you play on Mobile\", you will see an audio player. Keep the audio player playing so Colab does not get shut down in the background.\n",
"2. Select the desired model, you will find a description of all the available models further down the page.\n",
"3. Click the play button next to \"<-- Select your model below and then click this to start KoboldAI\".\n",
"4. Got a message saying no accelerator is available? Click cancel, and try again in a few minutes. If you do not manage to get a session when you frequently try again try at a different time of day, colab can be busy or your priority may have been lowered by frequent usage.\n",
"5. After everything is done loading you will get a link that you can use to open KoboldAI. In case of Localtunnel you will also be warned that some people are abusing Localtunnel for phishing, once you acknowledge this warning you will be taken to KoboldAI's interface. If you picked Cloudflare and get a 1033 error refresh the error page after waiting one minute.\n",
"\n",
"---\n",
"\n",
"Further down the page you can find descriptions of the models, and tips to get the most out of your Google Colab experience.\n",
"\n",
"Make sure to keep this page open while you are using KoboldAI, and check back regularly to see if you got a Captcha. Failure to complete the captcha's in time can result in termination of your session or a lower priority towards the TPUs.\n",
"\n",
"Firefox users need to disable the enhanced tracking protection or use a different browser in order to be able to use Google Colab without errors (This is not something we can do anything about, the cookie blocker breaks the Google Drive integration because it uses different domains)."
]
},
{
@@ -67,23 +82,55 @@
"#@title <b><-- Select your model below and then click this to start KoboldAI</b>\n",
"#@markdown You can find a description of the models below along with instructions on how to start KoboldAI.\n",
"\n",
"Model = \"Nerys 2.7B\" #@param [\"Nerys 2.7B\", \"AID 2.7B\", \"Erebus 2.7B\", \"Janeway 2.7B\", \"Picard 2.7B\", \"Horni LN 2.7B\", \"Horni 2.7B\", \"Shinen 2.7B\", \"OPT 2.7B\", \"Fairseq Dense 2.7B\", \"Neo 2.7B\"] {allow-input: true}\n",
"Model = \"Nerys V2 6B\" #@param [\"Nerys V2 6B\", \"Erebus 6B\", \"Skein 6B\", \"Janeway 6B\", \"Adventure 6B\", \"Lit V2 6B\", \"Lit 6B\", \"Shinen 6B\", \"Nerys 2.7B\", \"AID 2.7B\", \"Erebus 2.7B\", \"Janeway 2.7B\", \"Picard 2.7B\", \"Horni LN 2.7B\", \"Horni 2.7B\", \"Shinen 2.7B\", \"OPT 2.7B\", \"Fairseq Dense 2.7B\", \"Neo 2.7B\"] {allow-input: true}\n",
"Version = \"Official\" #@param [\"Official\", \"United\"] {allow-input: true}\n",
"Provider = \"Localtunnel\" #@param [\"Localtunnel\", \"Cloudflare\"]\n",
"use_google_drive = True #@param {type:\"boolean\"}\n",
"use_google_drive = True #@param {type:\"boolean\"}\n",
"\n",
"!nvidia-smi\n",
"from google.colab import drive\n",
"if use_google_drive:\n",
" drive.mount('/content/drive/')\n",
"else:\n",
" import os\n",
" if not os.path.exists(\"/content/drive\"):\n",
" os.mkdir(\"/content/drive\")\n",
" if not os.path.exists(\"/content/drive/MyDrive/\"):\n",
" os.mkdir(\"/content/drive/MyDrive/\")\n",
" drive.mount('/content/drive/')\n",
"else:\n",
" import os\n",
" if not os.path.exists(\"/content/drive\"):\n",
" os.mkdir(\"/content/drive\")\n",
" if not os.path.exists(\"/content/drive/MyDrive/\"):\n",
" os.mkdir(\"/content/drive/MyDrive/\")\n",
"\n",
"if Model == \"Nerys 2.7B\":\n",
"if Model == \"Nerys V2 6B\":\n",
" Model = \"KoboldAI/OPT-6B-nerys-v2\"\n",
" path = \"\"\n",
" download = \"\"\n",
"elif Model == \"Erebus 6B\":\n",
" Model = \"KoboldAI/OPT-6.7B-Erebus\"\n",
" path = \"\"\n",
" download = \"\"\n",
"elif Model == \"Skein 6B\":\n",
" Model = \"KoboldAI/GPT-J-6B-Skein\"\n",
" path = \"\"\n",
" download = \"\"\n",
"elif Model == \"Janeway 6B\":\n",
" Model = \"KoboldAI/GPT-J-6B-Janeway\"\n",
" path = \"\"\n",
" download = \"\"\n",
"elif Model == \"Adventure 6B\":\n",
" Model = \"KoboldAI/GPT-J-6B-Adventure\"\n",
" path = \"\"\n",
" download = \"\"\n",
"elif Model == \"Lit V2 6B\":\n",
" Model = \"hakurei/litv2-6B-rev3\"\n",
" path = \"\"\n",
" download = \"\"\n",
"elif Model == \"Lit 6B\":\n",
" Model = \"hakurei/lit-6B\"\n",
" path = \"\"\n",
" download = \"\"\n",
"elif Model == \"Shinen 6B\":\n",
" Model = \"KoboldAI/GPT-J-6B-Shinen\"\n",
" path = \"\"\n",
" download = \"\"\n",
"elif Model == \"Nerys 2.7B\":\n",
" Model = \"KoboldAI/fairseq-dense-2.7B-Nerys\"\n",
" path = \"\"\n",
" download = \"\"\n",

View File

@@ -98,6 +98,9 @@ mkdir /content/drive/MyDrive/KoboldAI/models/
mkdir /content/drive/MyDrive/KoboldAI/settings/
mkdir /content/drive/MyDrive/KoboldAI/softprompts/
mkdir /content/drive/MyDrive/KoboldAI/userscripts/
mkdir /content/drive/MyDrive/KoboldAI/presets/
mkdir /content/drive/MyDrive/KoboldAI/themes/
if [ "$init" == "drive" ]; then
echo Google Drive folders created.
exit 0
@@ -136,11 +139,15 @@ if [ "$init" != "skip" ]; then
git reset --hard origin/$(git_default_branch)
fi
git submodule update --init --recursive
cd /content/KoboldAI-Client
cp -rn stories/* /content/drive/MyDrive/KoboldAI/stories/
cp -rn userscripts/* /content/drive/MyDrive/KoboldAI/userscripts/
cp -rn softprompts/* /content/drive/MyDrive/KoboldAI/softprompts/
cp -rn presets/* /content/drive/MyDrive/KoboldAI/presets/
cp -rn themes/* /content/drive/MyDrive/KoboldAI/themes/
rm stories
rm -rf stories/
rm userscripts
@@ -149,11 +156,17 @@ if [ "$init" != "skip" ]; then
rm -rf softprompts/
rm models
rm -rf models/
rm presets
rm -rf presets/
rm themes
rm -rf themes/
ln -s /content/drive/MyDrive/KoboldAI/stories/ stories
ln -s /content/drive/MyDrive/KoboldAI/settings/ settings
ln -s /content/drive/MyDrive/KoboldAI/softprompts/ softprompts
ln -s /content/drive/MyDrive/KoboldAI/userscripts/ userscripts
ln -s /content/drive/MyDrive/KoboldAI/models/ models
ln -s /content/drive/MyDrive/KoboldAI/presets/ presets
ln -s /content/drive/MyDrive/KoboldAI/themes/ themes
if [ -n "${COLAB_TPU_ADDR+set}" ]; then
pip install -r requirements_mtj.txt

BIN
data/empty_audio.ogg Normal file

Binary file not shown.

307
data/genres.json Normal file
View File

@@ -0,0 +1,307 @@
[
"Absurdist",
"Action & Adventure",
"Adaptations & Pastiche",
"African American & Black/General",
"African American & Black/Christian",
"African American & Black/Erotica",
"African American & Black/Historical",
"African American & Black/Mystery & Detective",
"African American & Black/Urban & Street Lit",
"African American & Black/Women",
"Alternative History",
"Amish & Mennonite",
"Animals",
"Anthologies (multiple authors)",
"Asian American",
"Biographical",
"Buddhist",
"Christian/General",
"Christian/Biblical",
"Christian/Classic & Allegory",
"Christian/Collections & Anthologies",
"Christian/Contemporary",
"Christian/Fantasy",
"Christian/Futuristic",
"Christian/Historical",
"Christian/Romance/General",
"Christian/Romance/Historical",
"Christian/Romance/Suspense",
"Christian/Suspense",
"Christian/Western",
"City Life",
"Classics",
"Coming of Age",
"Crime",
"Cultural Heritage",
"Disabilities & Special Needs",
"Disaster",
"Dystopian",
"Epistolary",
"Erotica/General",
"Erotica/BDSM",
"Erotica/Collections & Anthologies",
"Erotica/Historical",
"Erotica/LGBTQ+/General",
"Erotica/LGBTQ+/Bisexual",
"Erotica/LGBTQ+/Gay",
"Erotica/LGBTQ+/Lesbian",
"Erotica/LGBTQ+/Transgender",
"Erotica/Science Fiction, Fantasy & Horror",
"Fairy Tales, Folk Tales, Legends & Mythology",
"Family Life/General",
"Family Life/Marriage & Divorce",
"Family Life/Siblings",
"Fantasy/General",
"Fantasy/Action & Adventure",
"Fantasy/Arthurian",
"Fantasy/Collections & Anthologies",
"Fantasy/Contemporary",
"Fantasy/Dark Fantasy",
"Fantasy/Dragons & Mythical Creatures",
"Fantasy/Epic",
"Fantasy/Gaslamp",
"Fantasy/Historical",
"Fantasy/Humorous",
"Fantasy/Military",
"Fantasy/Paranormal",
"Fantasy/Romance",
"Fantasy/Urban",
"Feminist",
"Friendship",
"Ghost",
"Gothic",
"Hispanic & Latino",
"Historical/General",
"Historical/Ancient",
"Historical/Civil War Era",
"Historical/Colonial America & Revolution",
"Historical/Medieval",
"Historical/Renaissance",
"Historical/World War I",
"Historical/World War II",
"Holidays",
"Horror",
"Humorous/General",
"Humorous/Black Humor",
"Indigenous",
"Jewish",
"Legal",
"LGBTQ+/General",
"LGBTQ+/Bisexual",
"LGBTQ+/Gay",
"LGBTQ+/Lesbian",
"LGBTQ+/Transgender",
"Literary",
"LitRPG (Literary Role-Playing Game)",
"Magical Realism",
"Mashups",
"Media Tie-In",
"Medical",
"Multiple Timelines",
"Muslim",
"Mystery & Detective/General",
"Mystery & Detective/Amateur Sleuth",
"Mystery & Detective/Collections & Anthologies",
"Mystery & Detective/Cozy/General",
"Mystery & Detective/Cozy/Animals",
"Mystery & Detective/Cozy/Crafts",
"Mystery & Detective/Cozy/Culinary",
"Mystery & Detective/Cozy/Holidays & Vacation",
"Mystery & Detective/Cozy/Paranormal",
"Mystery & Detective/Hard-Boiled",
"Mystery & Detective/Historical",
"Mystery & Detective/International Crime & Mystery",
"Mystery & Detective/Jewish",
"Mystery & Detective/Police Procedural",
"Mystery & Detective/Private Investigators",
"Mystery & Detective/Traditional",
"Mystery & Detective/Women Sleuths",
"Nature & the Environment",
"Noir",
"Occult & Supernatural",
"Own Voices",
"Political",
"Psychological",
"Religious",
"Romance/General",
"Romance/Action & Adventure",
"Romance/African American & Black",
"Romance/Billionaires",
"Romance/Clean & Wholesome",
"Romance/Collections & Anthologies",
"Romance/Contemporary",
"Romance/Erotic",
"Romance/Fantasy",
"Romance/Firefighters",
"Romance/Historical/General",
"Romance/Historical/American",
"Romance/Historical/Ancient World",
"Romance/Historical/Gilded Age",
"Romance/Historical/Medieval",
"Romance/Historical/Regency",
"Romance/Historical/Renaissance",
"Romance/Historical/Scottish",
"Romance/Historical/Tudor",
"Romance/Historical/20th Century",
"Romance/Historical/Victorian",
"Romance/Historical/Viking",
"Romance/Holiday",
"Romance/Later in Life",
"Romance/LGBTQ+/General",
"Romance/LGBTQ+/Bisexual",
"Romance/LGBTQ+/Gay",
"Romance/LGBTQ+/Lesbian",
"Romance/LGBTQ+/Transgender",
"Romance/Medical",
"Romance/Military",
"Romance/Multicultural & Interracial",
"Romance/New Adult",
"Romance/Paranormal/General",
"Romance/Paranormal/Shifters",
"Romance/Paranormal/Vampires",
"Romance/Paranormal/Witches",
"Romance/Police & Law Enforcement",
"Romance/Polyamory",
"Romance/Rock Stars",
"Romance/Romantic Comedy",
"Romance/Royalty",
"Romance/Science Fiction",
"Romance/Sports",
"Romance/Suspense",
"Romance/Time Travel",
"Romance/Western",
"Romance/Workplace",
"Sagas",
"Satire",
"Science Fiction/General",
"Science Fiction/Action & Adventure",
"Science Fiction/Alien Contact",
"Science Fiction/Apocalyptic & Post-Apocalyptic",
"Science Fiction/Collections & Anthologies",
"Science Fiction/Crime & Mystery",
"Science Fiction/Cyberpunk",
"Science Fiction/Genetic Engineering",
"Science Fiction/Hard Science Fiction",
"Science Fiction/Humorous",
"Science Fiction/Military",
"Science Fiction/Space Exploration",
"Science Fiction/Space Opera",
"Science Fiction/Steampunk",
"Science Fiction/Time Travel",
"Sea Stories",
"Short Stories (single author)",
"Small Town & Rural",
"Southern",
"Sports",
"Superheroes",
"Thrillers/General",
"Thrillers/Crime",
"Thrillers/Domestic",
"Thrillers/Espionage",
"Thrillers/Historical",
"Thrillers/Legal",
"Thrillers/Medical",
"Thrillers/Military",
"Thrillers/Political",
"Thrillers/Psychological",
"Thrillers/Supernatural",
"Thrillers/Suspense",
"Thrillers/Technological",
"Thrillers/Terrorism",
"Urban & Street Lit",
"Visionary & Metaphysical",
"War & Military",
"Westerns",
"Women",
"World Literature/Africa/General",
"World Literature/Africa/East Africa",
"World Literature/Africa/Nigeria",
"World Literature/Africa/Southern Africa",
"World Literature/Africa/West Africa",
"World Literature/American/General",
"World Literature/American/Colonial & Revolutionary Periods",
"World Literature/American/19th Century",
"World Literature/American/20th Century",
"World Literature/American/21st Century",
"World Literature/Argentina",
"World Literature/Asia (General)",
"World Literature/Australia",
"World Literature/Austria",
"World Literature/Brazil",
"World Literature/Canada/General",
"World Literature/Canada/Colonial & 19th Century",
"World Literature/Canada/20th Century",
"World Literature/Canada/21st Century",
"World Literature/Caribbean & West Indies",
"World Literature/Central America",
"World Literature/Chile",
"World Literature/China/General",
"World Literature/China/19th Century",
"World Literature/China/20th Century",
"World Literature/China/21st Century",
"World Literature/Colombia",
"World Literature/Czech Republic",
"World Literature/Denmark",
"World Literature/England/General",
"World Literature/England/Early & Medieval Periods",
"World Literature/England/16th & 17th Century",
"World Literature/England/18th Century",
"World Literature/England/19th Century",
"World Literature/England/20th Century",
"World Literature/England/21st Century",
"World Literature/Europe (General)",
"World Literature/Finland",
"World Literature/France/General",
"World Literature/France/18th Century",
"World Literature/France/19th Century",
"World Literature/France/20th Century",
"World Literature/France/21st Century",
"World Literature/Germany/General",
"World Literature/Germany/20th Century",
"World Literature/Germany/21st Century",
"World Literature/Greece",
"World Literature/Hungary",
"World Literature/India/General",
"World Literature/India/19th Century",
"World Literature/India/20th Century",
"World Literature/India/21st Century",
"World Literature/Ireland/General",
"World Literature/Ireland/19th Century",
"World Literature/Ireland/20th Century",
"World Literature/Ireland/21st Century",
"World Literature/Italy",
"World Literature/Japan",
"World Literature/Korea",
"World Literature/Mexico",
"World Literature/Middle East/General",
"World Literature/Middle East/Arabian Peninsula",
"World Literature/Middle East/Egypt & North Africa",
"World Literature/Middle East/Israel",
"World Literature/Netherlands",
"World Literature/New Zealand",
"World Literature/Norway",
"World Literature/Oceania",
"World Literature/Pakistan",
"World Literature/Peru",
"World Literature/Poland",
"World Literature/Portugal",
"World Literature/Russia/General",
"World Literature/Russia/19th Century",
"World Literature/Russia/20th Century",
"World Literature/Russia/21st Century",
"World Literature/Scotland/General",
"World Literature/Scotland/19th Century",
"World Literature/Scotland/20th Century",
"World Literature/Scotland/21st Century",
"World Literature/South America (General)",
"World Literature/Southeast Asia",
"World Literature/Spain/General",
"World Literature/Spain/19th Century",
"World Literature/Spain/20th Century",
"World Literature/Spain/21st Century",
"World Literature/Sweden",
"World Literature/Turkey",
"World Literature/Uruguay",
"World Literature/Wales"
]

35
data/wi_fewshot.txt Normal file
View File

@@ -0,0 +1,35 @@
Title: Note Day
Type: Holiday
Description: On Note Day, everyone in the nation celebrates the day when they were born. This is also the day when children are given their first notebook and pencils. It is considered a national holiday. Note Day is usually celebrated on the third Saturday of every month.
Title: Oaksville
Type: Town
Description: Oaksville is a small town located in the northeastern part of the United States. Oaksville has a population of around 8,000. It also has a high crime rate, and the town is also home to a large prison. In the past, the town was a center of manufacturing, but now that manufacturing has moved elsewhere, the town has fallen on hard times.
Title: Lemonfooted Gnome
Type: Species
Description: Lemonfooted Gnomes are small, slender, and very nimble creatures, which are usually found in dry, arid areas. They are often mistaken for leprechauns, but they are actually a separate species of gnome. They have small, pointed ears and a set of yellow eyes. They also have a mohawk-like hairstyle, but the hair is not as curly as a typical leprechaun's. Their skin is pinkish-white, and their clothing is typically made out of a light fabric that glows in the dark.
Title: Wild Yaerylberry
Type: Fruit
Description: A wild yaerylberry is a fruit native to the southern continent. It resembles a small, spherical blueberry. It is said to be delicious, but is not eaten often because it is difficult to find. The fruits are also very poisonous. When eaten, it causes nausea, vomiting, and diarrhea. The fruit is the only known source of the drug amylase, which is used in the creation of the drug "Yaeryl". The fruit has a high concentration of amylase, so it is used in the production of Yaeryl by humans.
Title: Golem
Type: Species
Description: Golem are the product of magical experimentation, and they are generally considered to be dangerous creatures. They are usually created by wizards or sorcerers to do their bidding. They are often used in combat, although they can also be used for other purposes. They are tall, bulky, and have large heads. They are usually made out of stone, metal, and often times, other materials as well. They are usually very powerful, but they are also very durable. They are often used in conjunction with other creatures. Golems are known to be extremely intelligent and very loyal to their masters.
Title: KR-23
Type: Firearm
Description: The KR-23 is a high-power, medium-range, gas-operated semi-automatic rifle. It fires a .45 ACP round, and is chambered for the standard 7.62x51mm NATO cartridge. The KR-23 has a slightly thicker barrel than the AKM, and has a more modern design. The KR-23 is a very reliable weapon. It is also a lot more accurate than the AKM, and can be fitted with a suppressor. It is an extremely powerful weapon, and can easily kill anything it hits, including armored opponents.
Title: Brillnor
Type: Species
Description: Brillnor are a race of humanoids who dwell deep in the mountains of Aurthiem. They resemble dwarves, but are much taller, with longer, thicker necks and a single horn on the top of their head. They are also smaller in stature than dwarves, despite being able to reach much higher heights. Brillnor have a strong affinity for metal and tools, and they are skilled at crafting them. They are also very strong and durable, which is why they are often employed as miners.
Title: Michael
Type: Person
Description: Michael is a young man living in a small town in New Zealand. He's studying to become a teacher, and he's very interested in history and politics. He enjoys playing Dungeons & Dragons, and he's got a wonderful group of friends.
Title: Milvoch
Type: Kingdom
Description: Milvoch is a large kingdom located in the southern part of the continent. It is the largest kingdom in the entire continent, and it is home to a sizable population of elves. The city is surrounded by a large wall, which is topped by a massive watchtower. The majority of the people of Milvoch are elves, but the city is also home to a number of dwarves and humans. The government of the city is a hereditary monarchy, with King Avantur being the current ruler.

View File

@@ -0,0 +1,9 @@
FROM debian
WORKDIR /opt/koboldai
COPY ./environments /opt/koboldai/environments
COPY ./install_requirements.sh /opt/koboldai
USER root
RUN apt update && apt install wget aria2 git bzip2 python3 python3-venv -y
RUN ./install_requirements.sh cuda;rm -rf ~/.cache/pip
RUN git clone https://github.com/db0/KoboldAI-Horde-Bridge /opt/koboldai/KoboldAI-Horde-Bridge
ENV PATH=/opt/conda/bin/:$PATH

View File

@@ -0,0 +1,10 @@
FROM ebolam/koboldai_base
EXPOSE 5000/tcp
ENV remote=true
ENV quiet=true
ENV override_delete=true
ENV override_rename=true
ENV update=true
WORKDIR /opt/koboldai
COPY . /opt/koboldai
CMD ./docker-standalone/docker-helper_new.sh

View File

@@ -1,7 +1,7 @@
#!/bin/bash
cd /opt/koboldai
git pull
#./install_requirements.sh cuda
git pull --recurse-submodules && ./install_requirements.sh cuda
if [[ ! -v KOBOLDAI_DATADIR ]];then
mkdir /content

View File

@@ -0,0 +1,21 @@
#!/bin/bash
cd /opt/koboldai
if [[ -n "$update" ]];then
git pull --recurse-submodules
fi
#The goal here is to allow any directory in /content to be mapped to the appropriate dir in the koboldai dir
if [[ ! -d "/content" ]];then
mkdir /content
fi
for FILE in /content/*
do
FILENAME="$(basename "$FILE")"
rm -rf "/opt/koboldai/$FILENAME"
ln -s "$FILE" /opt/koboldai/
done
#Previous parameters are now env vars in the docker container so they can be overwritten as desired
PYTHONUNBUFFERED=1 ./play.sh

View File

@@ -21,12 +21,22 @@ dependencies:
- apispec-webframeworks
- loguru
- termcolor
- Pillow
- diffusers
- psutil
- pip:
- flask-cloudflared
- flask-ngrok
- lupa==1.10
- transformers>=4.20.1
- transformers==4.24.0
- huggingface_hub>=0.10.1
- accelerate
- git+https://github.com/VE-FORBRYDERNE/mkultra
- flask-session
- python-socketio[client]
- ansi2html
- flask_compress
- ijson
- bitsandbytes
- ftfy
- pydub

View File

@@ -18,6 +18,8 @@ dependencies:
- apispec-webframeworks
- loguru
- termcolor
- Pillow
- diffusers
- psutil
- pip:
- --extra-index-url https://download.pytorch.org/whl/rocm5.1.1
@@ -26,7 +28,12 @@ dependencies:
- flask-cloudflared
- flask-ngrok
- lupa==1.10
- transformers>=4.20.1
- transformers==4.24.0
- huggingface_hub>=0.10.1
- accelerate
- git+https://github.com/VE-FORBRYDERNE/mkultra
- ansi2html
- flask_compress
- ijson
- ftfy
- pydub

View File

@@ -66,6 +66,9 @@ def getdirpath(dir, title):
# Returns the path (as a string) to the given story by its name
#==================================================================#
def storypath(name):
if os.path.exists("stories/{}".format(name)) and os.path.isdir("stories/{}".format(name)):
if os.path.exists("stories/{}/story.json".format(name)):
return "stories/{}/story.json".format(name)
return path.join("stories", name + ".json")
#==================================================================#
@@ -86,7 +89,7 @@ def uspath(filename):
def getstoryfiles():
list = []
for file in listdir("stories"):
if file.endswith(".json") and not file.endswith(".v2.json"):
if file.endswith(".json") and not file.endswith(".v2.json") and not os.path.isdir("stories/{}".format(file.replace(".json", ""))):
ob = {}
ob["name"] = file.replace(".json", "")
f = open("stories/"+file, "r")
@@ -97,12 +100,44 @@ def getstoryfiles():
f.close()
continue
f.close()
try:
ob["actions"] = len(js["actions"])
except TypeError:
print(f"Browser loading error: {file} has incorrect format.")
continue
if 'file_version' in js:
try:
ob["actions"] = int(js["actions"]["action_count"])+1
except TypeError:
print(f"Browser loading error: {file} has incorrect format.")
continue
else:
try:
ob["actions"] = len(js["actions"])
except TypeError:
print(f"Browser loading error: {file} has incorrect format.")
continue
list.append(ob)
elif os.path.isdir("stories/{}".format(file)):
if os.path.exists("stories/{}/story.json".format(file)):
ob = {}
ob["name"] = file
f = open("stories/{}/story.json".format(file), "r")
try:
js = json.load(f)
except:
print(f"Browser loading error: {file} is malformed or not a JSON file.")
f.close()
continue
f.close()
if 'file_version' in js:
try:
ob["actions"] = int(js["actions"]["action_count"])+1
except TypeError:
print(f"Browser loading error: {file} has incorrect format.")
continue
else:
try:
ob["actions"] = len(js["actions"])
except TypeError:
print(f"Browser loading error: {file} has incorrect format.")
continue
list.append(ob)
return list
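The two branches above distinguish the original flat story format (a plain list under "actions") from the new UI2 format (a "file_version" key plus an explicit action count). A condensed restatement of that version check as a helper, for illustration only:

import json

def story_action_count(js):
    # New (UI2) format stores actions as a dict with an explicit
    # action_count; the old format stores actions as a plain list.
    if "file_version" in js:
        return int(js["actions"]["action_count"]) + 1
    return len(js["actions"])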
#==================================================================#
@@ -113,7 +148,7 @@ def checksp(filename: str, model_dimension: int) -> Tuple[Union[zipfile.ZipFile,
if 'np' not in globals():
import numpy as np
try:
z = zipfile.ZipFile("softprompts/"+filename)
z = zipfile.ZipFile(filename)
with z.open('tensor.npy') as f:
# Read only the header of the npy file, for efficiency reasons
version: Tuple[int, int] = np.lib.format.read_magic(f)
@@ -148,7 +183,7 @@ def getspfiles(model_dimension: int):
for file in listdir("softprompts"):
if not file.endswith(".zip"):
continue
z, version, shape, fortran_order, dtype = checksp(file, model_dimension)
z, version, shape, fortran_order, dtype = checksp("./softprompts/"+file, model_dimension)
if z == 1:
logger.warning(f"Softprompt {file} is malformed or not a soft prompt ZIP file.")
continue

View File

@@ -0,0 +1,2 @@
Place the extracted models here, each in its own subfolder.
Currently this is only used for stable diffusion and summarization models.

View File

@@ -0,0 +1 @@
If you want to use local image generation, you have to download the full stable diffusion model and put all the files here
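Since this PR adds diffusers (and Pillow) to the dependencies, loading a model placed in this folder presumably looks something like the sketch below; the folder name and the prompt are assumptions for illustration, not the exact call KoboldAI makes:

import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# Assumed layout: the full pipeline (model_index.json, unet/, vae/, ...)
# extracted into this directory.
pipe = StableDiffusionPipeline.from_pretrained(
    "models/stable-diffusion",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
)
pipe = pipe.to(device)

image = pipe("a watercolor castle on a cliff").images[0]
image.save("preview.png")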

File diff suppressed because it is too large

2633
koboldai_settings.py Normal file

File diff suppressed because it is too large

164
presets/Custom.presets Normal file
View File

@@ -0,0 +1,164 @@
[
{
"preset": "Godlike",
"description": "Makes AI give a descriptive and sensual output.",
"Preset Category": "Custom",
"Model Type": "opt",
"Model Size": "13B",
"Model Name": "facebook/opt-13B",
"temp": 0.7,
"genamt": 80,
"top_k": 0,
"top_p": 0.5,
"top_a": 0.75,
"typical": 0.19,
"tfs": 0.97,
"rep_pen": 1.1,
"rep_pen_range": 1024,
"rep_pen_slope": 0.7,
"sampler_order": [
6,
5,
4,
3,
2,
1,
0
]
},
{
"preset": "Godlike",
"description": "Makes AI give a descriptive and sensual output.",
"Preset Category": "Custom",
"Model Type": "xglm",
"Model Size": "13B",
"Model Name": "KoboldAI/fairseq-dense-13B",
"temp": 0.8,
"genamt": 80,
"top_k": 0,
"top_p": 0.5,
"top_a": 0.8,
"typical": 0.19,
"tfs": 0.97,
"rep_pen": 1.1,
"rep_pen_range": 1024,
"rep_pen_slope": 0.7,
"sampler_order": [
6,
5,
4,
3,
1,
2,
0
]
},
{
"preset": "Light Breeze",
"description": "Staying on track, not complex stories, directing the plot yourself.",
"Preset Category": "Custom",
"Model Type": "gpt_neo",
"Model Size": "6B",
"Model Name": "KoboldAI/GPT-J-6B-Skein",
"temp": 0.6,
"genamt": 80,
"top_k": 0,
"top_p": 0.85,
"top_a": 0.0,
"typical": 1.0,
"tfs": 1.0,
"rep_pen": 1.2,
"rep_pen_range": 1024,
"rep_pen_slope": 0.7,
"sampler_order": [
6,
0,
1,
2,
3,
4,
5
]
},
{
"preset": "Mayday",
"description": "Wacky plot, creativity from AI, crazy stories you want AI to weird out.",
"Preset Category": "Custom",
"Model Type": "gpt_neo",
"Model Size": "6B",
"Model Name": "KoboldAI/GPT-J-6B-Adventure",
"temp": 1.05,
"genamt": 120,
"top_k": 0,
"top_p": 0.95,
"top_a": 0.0,
"typical": 1.0,
"tfs": 1.0,
"rep_pen": 1.1,
"rep_pen_range": 1024,
"rep_pen_slope": 0.7,
"sampler_order": [
6,
0,
1,
2,
3,
4,
5
]
},
{
"preset": "Good Winds",
"description": "Let AI direct the plot, but still stay logical.",
"Preset Category": "Custom",
"Model Type": "gpt_neo",
"Model Size": "6B",
"Model Name": "KoboldAI/GPT-J-6B-Adventure",
"temp": 0.7,
"genamt": 100,
"top_k": 0,
"top_p": 1.0,
"top_a": 0.0,
"typical": 1.0,
"tfs": 0.9,
"rep_pen": 1.1,
"rep_pen_range": 1024,
"rep_pen_slope": 0.7,
"sampler_order": [
6,
0,
1,
2,
3,
4,
5
]
},
{
"preset": "Liminal Drift",
"description": "Drives coherent dialogue, responses, and behavior, sometimes surreal situations arise based on information already present in the story.",
"Preset Category": "Custom",
"Model Type": "opt",
"Model Size": "13B",
"Model Name": "facebook/opt-13b",
"temp": 0.66,
"genamt": 80,
"top_k": 0,
"top_p": 1.0,
"top_a": 0.96,
"typical": 0.6,
"tfs": 1.0,
"rep_pen": 1.1,
"rep_pen_range": 1024,
"rep_pen_slope": 0.7,
"sampler_order": [
6,
4,
5,
1,
0,
2,
3
]
}
]
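Each .presets file is a plain JSON array, so applying one presumably amounts to reading the file and copying the sampler fields onto the active settings. A hypothetical sketch (the field names come from the entries above; apply_preset and the settings object are illustrative, not KoboldAI's actual API):

import json

SAMPLER_FIELDS = (
    "temp", "genamt", "top_k", "top_p", "top_a", "typical",
    "tfs", "rep_pen", "rep_pen_range", "rep_pen_slope", "sampler_order",
)

def apply_preset(settings, path, preset_name):
    # Matching by name alone is ambiguous here: "Godlike" appears twice,
    # once per model type, so a real loader would also key on the model.
    with open(path, encoding="utf-8") as f:
        presets = json.load(f)
    for preset in presets:
        if preset["preset"] == preset_name:
            for field in SAMPLER_FIELDS:
                setattr(settings, field, preset[field])
            return preset
    raise KeyError(f"No preset named {preset_name!r} in {path}")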

View File

@@ -0,0 +1,272 @@
[
{
"preset": "Genesis",
"description": "Stable and logical, but with scattered creativity.",
"Preset Category": "Official",
"Model Type": "xglm",
"Model Size": "13B",
"Model Name": "KoboldAI/fairseq-dense-13B",
"temp": 0.63,
"genamt": 50,
"top_k": 0,
"top_p": 0.98,
"top_a": 0.0,
"typical": 1.0,
"tfs": 0.98,
"rep_pen": 1.05,
"rep_pen_range": 2048,
"rep_pen_slope": 0.1,
"sampler_order": [
6,
2,
0,
3,
5,
1,
4
]
},
{
"preset": "Basic Coherence",
"description": "Keep things on track.",
"Preset Category": "Official",
"Model Type": "xglm",
"Model Size": "13B",
"Model Name": "KoboldAI/fairseq-dense-13B",
"temp": 0.59,
"genamt": 50,
"top_k": 0,
"top_p": 1.0,
"top_a": 0.0,
"typical": 1.0,
"tfs": 0.87,
"rep_pen": 1.1,
"rep_pen_range": 2048,
"rep_pen_slope": 0.3,
"sampler_order": [
6,
5,
0,
2,
3,
1,
4
]
},
{
"preset": "Ouroboros",
"description": "Versatile, conforms well to poems, lists, chat, etc.",
"Preset Category": "Official",
"Model Type": "xglm",
"Model Size": "13B",
"Model Name": "KoboldAI/fairseq-dense-13B",
"temp": 1.07,
"genamt": 50,
"top_k": 100,
"top_p": 1.0,
"top_a": 0.0,
"typical": 1.0,
"tfs": 0.93,
"rep_pen": 1.05,
"rep_pen_range": 404,
"rep_pen_slope": 0.8,
"sampler_order": [
6,
0,
5,
3,
2,
1,
4
]
},
{
"preset": "Ace of Spades",
"description": "Expressive, while still staying focused.",
"Preset Category": "Official",
"Model Type": "xglm",
"Model Size": "13B",
"Model Name": "KoboldAI/fairseq-dense-13B",
"temp": 1.15,
"genamt": 50,
"top_k": 0,
"top_p": 0.95,
"top_a": 0.0,
"typical": 1.0,
"tfs": 0.8,
"rep_pen": 1.05,
"rep_pen_range": 2048,
"rep_pen_slope": 7,
"sampler_order": [
6,
3,
2,
0,
5,
1,
4
]
},
{
"preset": "Moonlit Chronicler",
"description": "Tells a tale with confidence, but variety where it matters.",
"Preset Category": "Official",
"Model Type": "xglm",
"Model Size": "13B",
"Model Name": "KoboldAI/fairseq-dense-13B",
"temp": 1.25,
"genamt": 50,
"top_k": 100,
"top_p": 1.0,
"top_a": 0.78,
"typical": 0.95,
"tfs": 0.8,
"rep_pen": 1.05,
"rep_pen_range": 0,
"rep_pen_slope": 0.0,
"sampler_order": [
6,
0,
4,
1,
3,
5,
2
]
},
{
"preset": "Fandango",
"description": "A rhytmic dance of prose, whoever takes the lead.",
"Preset Category": "Official",
"Model Type": "xglm",
"Model Size": "13B",
"Model Name": "KoboldAI/fairseq-dense-13B",
"temp": 0.86,
"genamt": 50,
"top_k": 20,
"top_p": 0.95,
"top_a": 0.0,
"typical": 1.0,
"tfs": 1.0,
"rep_pen": 1.05,
"rep_pen_range": 2048,
"rep_pen_slope": 0.1,
"sampler_order": [
6,
5,
0,
2,
3,
1,
4
]
},
{
"preset": "All-Nighter",
"description": "Creative diction with room for embellishments.",
"Preset Category": "Official",
"Model Type": "xglm",
"Model Size": "13B",
"Model Name": "KoboldAI/fairseq-dense-13B",
"temp": 1.33,
"genamt": 50,
"top_k": 13,
"top_p": 1.0,
"top_a": 0.0,
"typical": 1.0,
"tfs": 0.84,
"rep_pen": 1.05,
"rep_pen_range": 400,
"rep_pen_slope": 0.3,
"sampler_order": [
6,
5,
0,
3,
2,
1,
4
]
},
{
"preset": "Low Rider",
"description": "Reliable, aimed at story development.",
"Preset Category": "Official",
"Model Type": "xglm",
"Model Size": "13B",
"Model Name": "KoboldAI/fairseq-dense-13B",
"temp": 0.94,
"genamt": 50,
"top_k": 12,
"top_p": 1.0,
"top_a": 0.0,
"typical": 1.0,
"tfs": 0.94,
"rep_pen": 1.05,
"rep_pen_range": 2048,
"rep_pen_slope": 0.2,
"sampler_order": [
6,
5,
0,
2,
3,
1,
4
]
},
{
"preset": "Morpho",
"description": "Let the AI generate without constraints.",
"Preset Category": "Official",
"Model Type": "xglm",
"Model Size": "13B",
"Model Name": "KoboldAI/fairseq-dense-13B",
"temp": 0.69,
"genamt": 50,
"top_k": 0,
"top_p": 1.0,
"top_a": 0.0,
"typical": 1.0,
"tfs": 1.00,
"rep_pen": 1.05,
"rep_pen_range": 2048,
"rep_pen_slope": 0.0,
"sampler_order": [
6,
5,
0,
2,
3,
1,
4
]
},
{
"preset": "Pro Writer",
"description": "Optimal setting for readability, based on AI-powered mass statistical analysis of Euterpe output.",
"Preset Category": "Official",
"Model Type": "xglm",
"Model Size": "13B",
"Model Name": "KoboldAI/fairseq-dense-13B",
"temp": 1.35,
"genamt": 50,
"top_k": 0,
"top_p": 1.0,
"top_a": 0.0,
"typical": 1.0,
"tfs": 0.69,
"rep_pen": 1.15,
"rep_pen_range": 2048,
"rep_pen_slope": 0.1,
"sampler_order": [
6,
3,
2,
5,
0,
1,
4
]
}
]

191
presets/Official_6B.presets Normal file
View File

@@ -0,0 +1,191 @@
[
{
"Model Type": "gpt_neo",
"Model Size": "6B",
"Model Name": "EleutherAI/gpt-j-6B",
"genamt": 50,
"rep_pen": 1.1,
"rep_pen_range": 2048,
"rep_pen_slope": 0.2,
"sampler_order": [
6,
5,
0,
2,
3,
1,
4
],
"temp": 0.72,
"tfs": 1.0,
"top_a": 0.0,
"top_k": 0,
"top_p": 0.73,
"typical": 1.0,
"preset": "Storywriter",
"description": "Optimized settings for relevant output.",
"Preset Category": "Official"
},
{
"Model Type": "gpt_neo",
"Model Size": "6B",
"Model Name": "EleutherAI/gpt-j-6B",
"genamt": 50,
"rep_pen": 1.2,
"rep_pen_range": 2048,
"rep_pen_slope": 0.0,
"sampler_order": [
6,
5,
0,
2,
3,
1,
4
],
"temp": 0.51,
"tfs": 0.99,
"top_a": 0.0,
"top_k": 0,
"top_p": 1.0,
"typical": 1.0,
"preset": "Coherent Creativity",
"description": "A good balance between coherence, creativity, and quality of prose.",
"Preset Category": "Official"
},
{
"preset": "Luna Moth",
"description": "A great degree of creativity without losing coherency.",
"Preset Category": "Official",
"Model Type": "gpt_neo",
"Model Size": "6B",
"Model Name": "EleutherAI/gpt-j-6B",
"temp": 2,
"genamt": 50,
"top_k": 85,
"top_p": 0.24,
"top_a": 0.0,
"typical": 1.0,
"tfs": 1.0,
"rep_pen": 1.1,
"rep_pen_range": 2048,
"rep_pen_slope": 0.0,
"sampler_order": [
6,
5,
0,
2,
3,
1,
4
]
},
{
"preset": "Sphinx Moth",
"description": "Maximum randomness while still being plot relevant. Like Sphinx riddles!",
"Preset Category": "Official",
"Model Type": "gpt_neo",
"Model Size": "6B",
"Model Name": "EleutherAI/gpt-j-6B",
"temp": 2,
"genamt": 50,
"top_k": 30,
"top_p": 0.18,
"top_a": 0.0,
"typical": 1.0,
"tfs": 1.0,
"rep_pen": 1.15,
"rep_pen_range": 2048,
"rep_pen_slope": 0.0,
"sampler_order": [
6,
5,
0,
2,
3,
1,
4
]
},
{
"preset": "Emperor Moth",
"description": "Medium randomness with a decent bit of creative writing.",
"Preset Category": "Official",
"Model Type": "gpt_neo",
"Model Size": "6B",
"Model Name": "EleutherAI/gpt-j-6B",
"temp": 1.25,
"genamt": 50,
"top_k": 0,
"top_p": 0.24,
"top_a": 0.0,
"typical": 1.0,
"tfs": 1.0,
"rep_pen": 1.1,
"rep_pen_range": 2048,
"rep_pen_slope": 0.0,
"sampler_order": [
6,
5,
0,
2,
3,
1,
4
]
},
{
"preset": "Best Guess",
"description": "A subtle change with alternative context settings.",
"Preset Category": "Official",
"Model Type": "gpt_neo",
"Model Size": "6B",
"Model Name": "EleutherAI/gpt-j-6B",
"temp": 0.8,
"genamt": 50,
"top_k": 100,
"top_p": 0.9,
"top_a": 0.0,
"typical": 1.0,
"tfs": 1.0,
"rep_pen": 1.15,
"rep_pen_range": 2048,
"rep_pen_slope": 3.4,
"sampler_order": [
6,
5,
0,
2,
3,
1,
4
]
},
{
"preset": "Pleasing Results",
"description": "Expectable output with alternative context settings.",
"Preset Category": "Official",
"Model Type": "gpt_neo",
"Model Size": "6B",
"Model Name": "EleutherAI/gpt-j-6B",
"temp": 0.44,
"genamt": 50,
"top_k": 0,
"top_p": 1.0,
"top_a": 0.0,
"typical": 1.0,
"tfs": 0.9,
"rep_pen": 1.15,
"rep_pen_range": 2048,
"rep_pen_slope": 6.8,
"sampler_order": [
6,
5,
0,
2,
3,
1,
4
]
}
]

View File

@@ -3,7 +3,7 @@ import os
import sys
import math
import numpy as np
import termcolor
from logger import logger
import contextlib
import traceback
import random
@@ -70,21 +70,24 @@ def patch_transformers_download():
class Send_to_socketio(object):
def write(self, bar):
bar = bar.replace("\r", "").replace("\n", "")
if bar != "":
if bar != "" and [ord(num) for num in bar] != [27, 91, 65]: #No idea why we're getting the 27, 1, 65 character set, just killing to so we can move on
try:
print(bar, end="\r")
if utils.emit is not None:
utils.emit('from_server', {'cmd': 'model_load_status', 'data': bar.replace(" ", "&nbsp;")}, broadcast=True)
print('\r' + bar, end='')
socketio.emit('from_server', {'cmd': 'model_load_status', 'data': bar.replace(" ", "&nbsp;")}, broadcast=True, room="UI_1")
eventlet.sleep(seconds=0)
except:
pass
def flush(self):
pass
def http_get(
url: str,
temp_file: transformers.utils.hub.BinaryIO,
temp_file,
proxies=None,
resume_size=0,
headers: transformers.utils.hub.Optional[transformers.utils.hub.Dict[str, str]] = None,
file_name: transformers.utils.hub.Optional[str] = None,
headers=None,
file_name=None,
):
"""
Download remote file. Do not gobble up errors.
@@ -108,13 +111,18 @@ def patch_transformers_download():
desc=f"Downloading {file_name}" if file_name is not None else "Downloading",
file=Send_to_socketio(),
)
koboldai_vars.status_message = "Download Model"
koboldai_vars.total_download_chunks = total
for chunk in r.iter_content(chunk_size=1024):
if chunk: # filter out keep-alive new chunks
if url[-11:] != 'config.json':
progress.update(len(chunk))
koboldai_vars.downloaded_chunks += len(chunk)
temp_file.write(chunk)
if url[-11:] != 'config.json':
progress.close()
koboldai_vars.status_message = ""
transformers.utils.hub.http_get = http_get
@@ -195,18 +203,18 @@ def device_list(n_layers, primary=None, selected=None):
if(device_count < 2):
primary = None
gpu_blocks = breakmodel.gpu_blocks + (device_count - len(breakmodel.gpu_blocks))*[0]
print(f"{colors.YELLOW} DEVICE ID | LAYERS | DEVICE NAME{colors.END}")
logger.info(" DEVICE ID | LAYERS | DEVICE NAME{colors.END}")
for i in range(device_count):
name = torch.cuda.get_device_name(i)
if(len(name) > 47):
name = "..." + name[-44:]
row_color = colors.END
sep_color = colors.YELLOW
print(f"{row_color}{colors.YELLOW + '->' + row_color if i == selected else ' '} {'(primary)' if i == primary else ' '*9} {i:3} {sep_color}|{row_color} {gpu_blocks[i]:3} {sep_color}|{row_color} {name}{colors.END}")
logger.info(f"{'(primary)' if i == primary else ' '*9} {i:3} | {gpu_blocks[i]:3} | {name}")
row_color = colors.END
sep_color = colors.YELLOW
print(f"{row_color}{colors.YELLOW + '->' + row_color if -1 == selected else ' '} {' '*9} N/A {sep_color}|{row_color} {breakmodel.disk_blocks:3} {sep_color}|{row_color} (Disk cache){colors.END}")
print(f"{row_color} {' '*9} N/A {sep_color}|{row_color} {n_layers:3} {sep_color}|{row_color} (CPU){colors.END}")
logger.info(f" {' '*9} N/A | {breakmodel.disk_blocks:3} | (Disk cache)")
logger.info(f" {' '*9} N/A | {n_layers:3} | (CPU)")
def move_model_to_devices(model, usegpu, gpu_device):
@@ -440,12 +448,12 @@ class TrainerBase(abc.ABC):
@property
def lazy_load_spec(self):
print("WARNING: `TrainerData.lazy_load_spec` is currently unused", file=sys.stderr)
logger.warning("WARNING: `TrainerData.lazy_load_spec` is currently unused")
return self.__lazy_load_spec
@lazy_load_spec.setter
def lazy_load_spec(self, value: Optional[dict]):
print("WARNING: `TrainerData.lazy_load_spec` is currently unused", file=sys.stderr)
logger.warning("WARNING: `TrainerData.lazy_load_spec` is currently unused")
self.__lazy_load_spec = value
@property
@@ -465,7 +473,7 @@ class TrainerBase(abc.ABC):
self.data = self.TrainerData()
self._spmodule: Optional[str] = None
if universe is not None:
print("WARNING: The `universe` argument of `TrainerBase.__init__` is currently unused", file=sys.stderr)
logger.warning("WARNING: The `universe` argument of `TrainerBase.__init__` is currently unused")
def raise_configuration_error(self, msg, **kwargs):
if "quiet" not in kwargs:
@@ -608,14 +616,11 @@ class TrainerBase(abc.ABC):
self.data.params["max_batch_size"] - self.data.soft_in_dim,
)
assert batch_size >= 0
print(
termcolor.colored(
"\nIf you see a warning somewhere below about token indices, ignore it. That warning is normal.\n",
"magenta",
)
logger.info(
"\nIf you see a warning somewhere below about token indices, ignore it. That warning is normal.\n"
)
print("Batch size:", batch_size)
print(termcolor.colored("Tokenizing your dataset...\n", "magenta"))
logger.info("Batch size: {}".format(batch_size))
logger.info("Tokenizing your dataset...\n")
if not isinstance(dataset_path, str):
files = [dataset_path]
@@ -632,7 +637,7 @@ class TrainerBase(abc.ABC):
eos = tokenizer.decode(self.data.params["eos_token"])
for path in files:
if isinstance(path, str):
f = open(path)
f = open(path, 'r', encoding='utf-8')
else:
f = path
try:
@@ -645,7 +650,7 @@ class TrainerBase(abc.ABC):
if isinstance(path, str):
f.close()
print("Dataset size (in tokens):", len(tokens))
logger.info("Dataset size (in tokens): {}".format(len(tokens)))
if len(tokens) < batch_size + 1:
self.raise_configuration_error(
"Your dataset is too small! The number of tokens has to be greater than the batch size. Try increasing the epochs.",
@@ -653,7 +658,7 @@ class TrainerBase(abc.ABC):
)
tail = len(tokens) % (batch_size + 1)
if tail:
print(
logger.info(
f"We're removing the last {tail} tokens from your dataset to make the length a multiple of {batch_size+1}."
)
tokens = tokens[:-tail]
@@ -671,7 +676,7 @@ class TrainerBase(abc.ABC):
axis=0,
)
tokens = tokens[: math.ceil(epochs * sequences_per_epoch)]
print(f"Total sequences in your dataset: {tokens.shape[0]}")
logger.info(f"Total sequences in your dataset: {tokens.shape[0]}")
if isinstance(output_file, str):
f = open(output_file, "w")
@@ -698,7 +703,7 @@ class TrainerBase(abc.ABC):
self.data.params["max_batch_size"] = 2048
if not os.path.exists(self.data.save_file):
print("We are starting a brand new soft-tuning session.\n")
logger.info("We are starting a brand new soft-tuning session.\n")
self.startup(step=-1)
if self.data.soft_in_dim <= 0:
self.raise_configuration_error(
@@ -718,7 +723,7 @@ class TrainerBase(abc.ABC):
opt_state = z["opt_state"]
except AssertionError:
self.raise_configuration_error("MKUSP file is corrupted.", code=14)
print(f"We're resuming a previous soft-tuning session at step {step+1}.\n")
logger.info(f"We're resuming a previous soft-tuning session at step {step+1}.\n")
self.startup(step=step + 1)
soft_embeddings = z["tensor"]
@@ -785,7 +790,7 @@ class TrainerBase(abc.ABC):
num_tensors = len(utils.get_sharded_checkpoint_num_tensors(utils.from_pretrained_model_name, utils.from_pretrained_index_filename, **utils.from_pretrained_kwargs))
else:
num_tensors = len(device_map)
print(flush=True)
#print(flush=True)
utils.bar = tqdm(total=num_tensors, desc="Loading model tensors", file=Send_to_socketio())
with zipfile.ZipFile(f, "r") as z:

View File

@@ -1,2 +1,2 @@
[pytest]
addopts = --ignore=miniconda3 --ignore=runtime --html=unit_test_report.html --self-contained-html -v
addopts = --ignore=miniconda3 --ignore=runtime --html=unit_test_report.html --self-contained-html -vv

View File

@@ -1,4 +1,4 @@
transformers>=4.20.1
transformers==4.24.0
huggingface_hub>=0.10.1
Flask
Flask-SocketIO
@@ -13,9 +13,18 @@ bleach==4.1.0
sentencepiece
protobuf
accelerate
flask-session
flask_session
marshmallow>=3.13
apispec-webframeworks
loguru
termcolor
git+https://github.com/VE-FORBRYDERNE/mkultra
git+https://github.com/VE-FORBRYDERNE/mkultra
Pillow
diffusers
psutil
ansi2html
flask_compress
ijson
bitsandbytes
ftfy
pydub

View File

@@ -5,7 +5,7 @@ requests
dm-haiku == 0.0.5
jax == 0.2.21
jaxlib >= 0.1.69, <= 0.3.7
transformers >= 4.20.1
transformers == 4.24.0
huggingface_hub >= 0.10.1
progressbar2
git+https://github.com/VE-FORBRYDERNE/mesh-transformer-jax@ck
@@ -21,3 +21,11 @@ flask-session
marshmallow>=3.13
apispec-webframeworks
loguru
Pillow
diffusers
psutil
ansi2html
flask_compress
ijson
ftfy
pydub

Binary file not shown.

BIN
static/Welcome_Logo.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 261 KiB

View File

@@ -12,7 +12,7 @@ var button_newgame;
var button_rndgame;
var button_save;
var button_saveas;
var button_savetofile;
//var button_savetofile;
var button_load;
var button_import;
var button_importwi;
@@ -79,6 +79,7 @@ var rs_close;
var seqselmenu;
var seqselcontents;
var stream_preview;
var stream_preview_text;
var token_prob_container;
var storyname = null;
@@ -2151,6 +2152,7 @@ function endStream() {
if (stream_preview) {
stream_preview.remove();
stream_preview = null;
stream_preview_text = null;
}
}
@@ -2237,7 +2239,7 @@ $(document).ready(function(){
button_rndgame = $('#btn_rndgame');
button_save = $('#btn_save');
button_saveas = $('#btn_saveas');
button_savetofile = $('#btn_savetofile');
//button_savetofile = $('#btn_savetofile');
button_download = $('#btn_download');
button_downloadtxt= $('#btn_downloadtxt');
button_load = $('#btn_load');
@@ -2318,7 +2320,9 @@ $(document).ready(function(){
token_prob_menu = $("#token_prob_menu");
// Connect to SocketIO server
socket = io.connect(window.document.origin, {transports: ['polling', 'websocket'], closeOnBeforeunload: false});
socket = io.connect(window.document.origin, {transports: ['polling', 'websocket'], closeOnBeforeunload: false, query:{"ui": "1"}});
socket.on("message", function(data){show_message(data);});
//console.log(socket);
socket.on('load_popup', function(data){load_popup(data);});
socket.on('popup_items', function(data){popup_items(data);});
socket.on('popup_breadcrumbs', function(data){popup_breadcrumbs(data);});
@@ -2388,10 +2392,14 @@ $(document).ready(function(){
if (!stream_preview && streamingEnabled) {
stream_preview = document.createElement("span");
game_text.append(stream_preview);
stream_preview_text = "";
}
for (const token of msg.data) {
if (streamingEnabled) stream_preview.innerText += token.decoded;
if (streamingEnabled) {
stream_preview_text += token.decoded;
stream_preview.innerText = stream_preview_text;
}
if (probabilitiesEnabled) {
// Probability display
@@ -2455,6 +2463,7 @@ $(document).ready(function(){
all_modified_chunks = new Set();
modified_chunks = new Set();
empty_chunks = new Set();
console.log(msg.data);
game_text.html(msg.data);
if(game_text[0].lastChild !== null && game_text[0].lastChild.tagName === "CHUNK") {
game_text[0].lastChild.appendChild(document.createElement("br"));
@@ -2725,6 +2734,10 @@ $(document).ready(function(){
} else {
token_prob_menu.addClass("hidden");
}
} else if(msg.cmd == "updatealt_text_gen") {
$("#alttextgen").prop('checked', msg.data).change();
} else if(msg.cmd == "updatealt_multi_gen") {
$("#alt_multi_gen").prop('checked', msg.data).change();
} else if(msg.cmd == "allowtoggle") {
// Allow toggle change states to propagate
allowtoggle = msg.data;
@@ -2904,7 +2917,8 @@ $(document).ready(function(){
$("#setfulldeterminism").prop('checked', msg.data).change();
} else if(msg.cmd == "runs_remotely") {
remote = true;
hide([button_savetofile, button_import, button_importwi]);
//hide([button_savetofile, button_import, button_importwi]);
hide([button_import, button_importwi]);
} else if(msg.cmd == "debug_info") {
$("#debuginfo").val(msg.data);
} else if(msg.cmd == "set_debug") {
@@ -3019,8 +3033,9 @@ $(document).ready(function(){
$("#showmodelnamecontainer").addClass("hidden");
$(window).off('beforeunload');
location.reload();
//console.log("Closing window");
console.log("Closing window");
} else if(msg.cmd == 'model_load_status') {
console.log(msg.data);
$("#showmodelnamecontent").html("<div class=\"flex\"><div class=\"loadlistpadding\"></div><div class=\"loadlistitem\" style='align: left'>" + msg.data + "</div></div>");
$("#showmodelnamecontainer").removeClass("hidden");
//console.log(msg.data);
@@ -3186,9 +3201,9 @@ $(document).ready(function(){
socket.send({'cmd': 'memory', 'data': ''});
});
button_savetofile.on("click", function(ev) {
socket.send({'cmd': 'savetofile', 'data': ''});
});
//button_savetofile.on("click", function(ev) {
// socket.send({'cmd': 'savetofile', 'data': ''});
//});
button_loadfrfile.on("click", function(ev) {
if(remote) {
@@ -3492,28 +3507,26 @@ $(document).ready(function(){
// Shortcuts
$(window).keydown(function (ev) {
// Only ctrl prefixed (for now)
if (!ev.ctrlKey) return;
let handled = true;
switch (ev.key) {
// Ctrl+Z - Back
case "z":
button_actback.click();
break;
// Ctrl+Y - Forward
case "y":
button_actfwd.click();
break;
// Ctrl+E - Retry
case "e":
button_actretry.click();
break;
default:
handled = false;
if (ev.altKey)
switch (ev.key) {
// Alt+Z - Back
case "z":
button_actback.click();
break;
// Alt+Y - Forward
case "y":
button_actfwd.click();
break;
// Alt+R - Retry
case "r":
button_actretry.click();
break;
default:
return;
} else {
return;
}
if (handled) ev.preventDefault();
ev.preventDefault();
});
$("#anotetemplate").on("input", function() {
@@ -3796,4 +3809,23 @@ function getSelectedOptions(element) {
output.push(item.value);
}
return output;
}
}
function show_message(data) {
const message_box_data = document.getElementById('message-popup').querySelector("#popup_list_area");
const message_box_title = document.getElementById('message-popup').querySelector("#popup_title");
const message_box_ok = document.getElementById('message-popup').querySelector("#ok");
//clear out the error box
while (message_box_data.firstChild) {
message_box_data.removeChild(message_box_data.firstChild);
}
let div = document.createElement('div');
div.innerHTML = data['message'];
div.classList.add('console_text');
message_box_data.append(div);
message_box_title.innerText = data['title'];
message_box_ok.setAttribute("message_id", data['id'])
document.getElementById('message-popup').classList.remove('hidden');
}

BIN
static/default_pfp.png Normal file

Binary file not shown.

After

Width:  |  Height:  |  Size: 192 KiB

View File

@@ -61,7 +61,9 @@ var favicon = {
this.run = false;
this.change(fav_icon);
if (typeof submit_start !== 'undefined') {
$("#runtime")[0].innerHTML = `Execution time: ${Math.round((Date.now() - submit_start)/1000)} sec`;
if (document.getElementById("runtime")) {
$("#runtime")[0].innerHTML = `Execution time: ${Math.round((Date.now() - submit_start)/1000)} sec`;
}
delete submit_start;
}
},

3374
static/koboldai.css Normal file

File diff suppressed because it is too large

6932
static/koboldai.js Normal file

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

59
static/survey.css Normal file
View File

@@ -0,0 +1,59 @@
:root {
--card-radius: 6px;
}
body {
color: white;
background-color: #111820;
font-family: Arial, Helvetica, sans-serif
}
.hidden {
display: none;
}
#prompt-container {
border: 2px solid #36485e;
border-radius: 8px;
background-color: #273141;
padding: 5px;
padding-top: 1.5em;
}
#prompt-container>.loopy-label {
position: absolute;
top: 0em;
background-color: #273141;
padding: 0px 3px;
border-top: 2px solid #36485e;
}
#question {
text-align: center;
}
.choice {
font-size: 1.2em;
background-color: #273141;
padding: 10px 5px;
margin-bottom: 3px;
cursor: pointer;
}
.meta-choice {
font-style: italic;
}
.meta-choice>.choice-text {
opacity: 0.5;
}
.choice:first-child {
border-top-left-radius: var(--card-radius);
border-top-right-radius: var(--card-radius);
}
.choice:last-child {
border-bottom-left-radius: var(--card-radius);
border-bottom-right-radius: var(--card-radius);
}

26
static/survey.js Normal file
View File

@@ -0,0 +1,26 @@
function $el(selector) { return document.querySelector(selector) }
var socket = io.connect(window.location.origin, { transports: ['polling', 'websocket'], closeOnBeforeunload: false });
var question = $el("#question").value;
var model = $el("#model").value;
function send_results() {
var answer = "";
for (const answerId of ["A", "B", "C", "D", "E", "F"]) {
let checkbox = document.querySelector(`[answer="${answerId}"]`).querySelector("input");
if (checkbox.checked) {
answer = answerId;
break;
}
}
console.log(answer);
socket.emit("answer", { "question": question, "answer": answer, "id": document.getElementById("id").value, "model": model});
}
for (const child of $el("#choices-container").children) {
let radioButton = child.querySelector("input");
child.addEventListener("click", function () { radioButton.click() });
radioButton.addEventListener("click", send_results);
}
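The script above emits an "answer" event carrying the question, the chosen answer, the id field, and the model. A matching server-side handler, sketched with Flask-SocketIO (which this PR's requirements include); the handler name and the JSONL storage are hypothetical:

import json

from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app)

@socketio.on("answer")
def record_answer(data):
    # Fields sent by static/survey.js.
    row = {key: data.get(key) for key in ("question", "answer", "id", "model")}
    # Hypothetical storage: one JSON line per survey response.
    with open("survey_results.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(row) + "\n")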

View File

View File

@@ -52,7 +52,7 @@
<div class="dropdown-menu">
<a class="dropdown-item" href="#" id="btn_save">Save</a>
<a class="dropdown-item" href="#" id="btn_saveas">Save As</a>
<a class="dropdown-item" href="#" id="btn_savetofile">Save To File...</a>
<!--<a class="dropdown-item" href="#" id="btn_savetofile">Save To File...</a>-->
<a class="dropdown-item always-available" href="#" id="btn_download">Download Story as JSON</a>
<a class="dropdown-item" href="#" id="btn_downloadtxt">Download Story as Plaintext</a>
</div>
@@ -94,6 +94,7 @@
<div id="connectstatusdiv" class="flex-row-container">
<span id="connectstatus" class="color_orange flex-row">Waiting for connection...</span>
<div class="layer-container status-container flex-push-left" style="color: #FFFFFF;" id="runtime"></div>
<span><button class="btn btn-primary" onclick="window.location='/new_ui'">Try New UI</button></span>
<div class="layer-container status-container flex-push-right">
<span class="oi oi-puzzle-piece statusicon layer-bottom" aria-hidden="true">
<div class="statustext statustext-wide">
@@ -495,5 +496,22 @@
</div>
</div>
</div>
<!---------------- message screen ---------------------->
<div id="message-popup" class="popupcontainer hidden">
<div class="new_popup">
<div>
<div class="title" id="popup_title">
Popup Title
</div>
</div>
<div id="popup_list_area" class="popup_list_area" style="overflow-x: scroll;"></div>
<div class="popup_load_cancel" id="popup_load_cancel">
<button id="ok" type="button" class="btn btn-primary popup_load_cancel_button" onclick="document.getElementById('message-popup').classList.add('hidden');socket.emit('check_messages', this.getAttribute('message_id'));">Ok</button>
</div>
</div>
</div>
</div>
</body>
</html>

146
templates/index_new.html Normal file
View File

@@ -0,0 +1,146 @@
<!DOCTYPE html>
<html>
<head>
<script>
// Error handler before all
var debug_info = {errors: []};
window.addEventListener('error', function(event) {
let fileName = event.filename.split("/").splice(-1)[0];
reportError(`Error at ${fileName}:${event.lineno}`, `${event.message}\n--\nPlease report this error to the developers.`);
debug_info.errors.push({msg: event.message, url: event.filename, line: event.lineno});
});
</script>
<!---<script type="module">import {encode, decode} from "./static/tokenizer.js";window.encode = encode;window.decode = decode;</script>--->
<link href="static/open-iconic/css/open-iconic.css" rel="stylesheet">
<link rel="stylesheet" href="static/bootstrap-toggle.min.css">
<link rel="stylesheet" href="static/bootstrap.min.css">
<script src="static/jquery-3.6.0.min.js"></script>
<script src="static/bootstrap-toggle.min.js"></script>
<script src="static/socket.io.min.js"></script>
<title>KoboldAI Client</title>
<meta charset="utf-8">
<link href="themes/Monochrome.css" rel="stylesheet" id="CSSTheme_A">
<link href="static/koboldai.css" rel="stylesheet">
<script defer src="static/koboldai.js"></script>
<script src="static/favicon.js"></script>
<meta name="viewport" content="width=device-width, initial-scale=1">
</head>
<body>
<span class="hidden var_sync_system_on_colab" id="on_colab">false</span>
<!------------ Left Flyout Menu--------------------->
<div id="SideMenu" class="SideMenu pinned">
<!------------ Menu Pin --------------------->
<span id="menu_pin" class="oi menu_pin" onclick="toggle_settings_pin_flyout()" data-glyph="pin"></span>
{% include 'settings flyout.html' %}
</div>
<!------------ Left Menu Icon--------------------->
<div id="setting_menu_icon" class="menu_icon hidden changed" onclick="toggle_flyout(this)">
<div class="menubar1"></div>
<div class="menubar2"></div>
<div class="menubar3"></div>
</div>
<!------------ Main Screen--------------------->
<div id="main-grid" class="main-grid settings_pinned var_sync_alt_model_numseqs" onclick="close_menus();" option_length="0">
<!------------ Game Text Screen--------------------->
<div class="gamescreen" id="gamescreen" context-menu="gamescreen">
<div id="disconnect_message"><center><h1>Disconnected</h1></center></div>
<div id="welcome_container" class="welcome_container">
<div id="welcome_text" class="var_sync_model_welcome" draggable="False"></div>
</div>
<div class="gametext" id="Selected Text" contenteditable=false onblur="select_game_text(null);" onclick="select_game_text(null);" onkeyup="select_game_text(event);">
<span id="story_prompt" class="var_sync_story_prompt var_sync_alt_story_prompt_in_ai rawtext hidden" chunk="-1"></span></div><!--don't move the /div down or it'll cause odd spacing issues in the UI--->
</div>
<!------------ Sequences --------------------->
<div id="action_count" class="var_sync_actions_Action_Count hidden"></div>
<div id="Select Options" class="sequence_area"></div>
<!-- Story Review -->
<div id="story-review" class="hidden">
<img id="story-review-img">
<span id="story-review-author">Bob McBobhead</span>
<span id="story-review-content">Wow, this is a great story. And I mean that. It's positively stelar.</span>
</div>
<!------------ Theme Area--------------------->
<div class="themerow" id="themerow">
<div class="tabrow nomenu_icon">
<span class="prompt_menu selected" id="prompt_menu_random" onclick="this.classList.add('selected');
document.getElementById('prompt_menu_normal').classList.remove('selected');
document.getElementById('random_game_prompt').classList.add('hidden');
document.getElementById('input_text').classList.remove('hidden');">Prompt</span>
<span class="prompt_menu" id="prompt_menu_normal" onclick="this.classList.add('selected');
document.getElementById('prompt_menu_random').classList.remove('selected');
document.getElementById('random_game_prompt').classList.remove('hidden');
document.getElementById('input_text').classList.add('hidden');">Random Prompt</span>
</div>
</div>
<!------------ Input Area--------------------->
<div class="inputrow var_sync_alt_story_storymode" id="inputrow_container">
<div id="random_game_prompt" class="hidden">
<input type="text" autocomplete="off" id="themetext" placeholder="Theme for Random Story" oninput='if (this.value != "") {
document.getElementById("input_text").value = "";}'/>
<span class="help_text" style="margin:0px;margin-top:5px;">The AI can create a prompt for you! Optionally type in one or more themes above, or let the AI do it's thing.</span>
</div>
<button class="var_sync_alt_story_storymode action_button" id="adventure_mode" onclick="toggle_adventure_mode(this);"><span style="font-weight: bold;">Mode: </span><span>Story</span></button>
<textarea autocomplete="off" row=5 id="input_text" placeholder="Enter Prompt Here (shift+enter for new line)" oninput='if (this.value != "") {
document.getElementById("themetext").value = "";
}'
onkeyup="update_token_lengths()"></textarea>
<div class="statusbar_outer hidden var_sync_alt_system_aibusy" id="status_bar" onclick="socket.emit('abort','');">
<div class="statusbar_inner" style="width:0%" onclick="socket.emit('abort','');">
<div id="status_bar_percent">0%</div>
<div class="var_sync_system_status_message" style="width:90px;"></div>
</div>
</div><br>
<div class="statusbar_outer_horde var_sync_alt_system_aibusy var_sync_alt_model_horde_wait_time" id="status_bar_horde">
<div class="statusbar_inner_horde" style="width:100%">
<div>&nbsp;</div>
<div>Queue <span class="var_sync_model_horde_queue_position"></span> of <span class="var_sync_model_horde_queue_size"></span></div>
<div><span class="var_sync_model_horde_wait_time"></span> sec left</div>
</div>
</div><br>
<span class="tts_controls hidden var_sync_alt_story_gen_audio">
<button type="button" class="btn action_button" style="width: 30px; padding: 0px;" onclick='play_pause_tts()' aria-label="play"><span id="play_tts" class="material-icons-outlined" style="font-size: 1.4em;">play_arrow</span></button>
<button type="button" class="btn action_button" style="width: 30px; padding: 0px;" onclick='stop_tts()' aria-label="play"><span id="stop_tts" class="material-icons-outlined" style="font-size: 1.4em;">stop</span></button>
</span>
<button type="button" class="btn action_button submit var_sync_alt_system_aibusy" system_aibusy=False id="btnsubmit" onclick="storySubmit();">Submit</button>
<button type="button" class="btn action_button submited var_sync_alt_system_aibusy" system_aibusy=False id="btnsent"><img id="thinking" src="static/thinking.gif" class="force_center" onclick="socket.emit('abort','');"></button>
<button type="button" class="btn action_button back" onclick="storyBack();" aria-label="undo"><span class="material-icons-outlined" style="font-size: 1.4em;">replay</span></button>
<button type="button" class="btn action_button redo" onclick="storyRedo();" aria-label="redo"><span class="material-icons-outlined" style="font-size: 1.4em;">arrow_forward</span></button>
<button type="button" class="btn action_button retry" onclick="storyRetry();" aria-label="retry"><span class="material-icons-outlined" style="font-size: 1.4em;">autorenew</span></button>
</div>
</div>
<!------------ Right Menu Icon--------------------->
<div id="story_menu_icon" class="right_menu_icon" onclick="toggle_flyout_right(this)">
<div class="menubar1"></div>
<div class="menubar2"></div>
<div class="menubar3"></div>
</div>
<!------------ Right Flyout Menu--------------------->
<div id="rightSideMenu" class="rightSideMenu">
<span id="story_menu_pin" class="oi story_menu_pin" onclick="toggle_story_pin_flyout()" data-glyph="pin"></span>
{% include 'story flyout.html' %}
</div>
<!------------- Pop-Ups ------------------------------->
{% include 'popups.html' %}
<!------------- Templates ------------------------------->
<div class="hidden">
{% include 'templates.html' %}
</div>
<iframe id="download_iframe" style="display:none;"></iframe>
<div id="file-upload-notice" class="hidden">
<span class="material-icons-outlined">upload_file</span>
</div>
</body>
</html>

327
templates/popups.html Normal file
View File

@@ -0,0 +1,327 @@
<div id="popup-container" class="hidden">
<!------------------File Browser--------------->
<div id="file-browser" class="popup-window popup">
<div class="title" id="popup_title">
Popup Title
</div>
<div id="popup_breadcrumbs"></div>
<div id="popup_column_titles"></div>
<div class="popup_list_area" id="popup_list"></div>
<div class="popup_load_cancel hidden" id="popup_upload">
<input type=file id="popup_upload_file">
</div>
<div style="display:flex;justify-content: space-between;">
<span>Drag file(s) above or click here to Upload File<input id="popup_upload_input" type=file onchange="upload_file(this)"></span>
<!---<span>
Upload file without saving to google:
<input type=checkbox id="upload_no_save" class="setting_item_input"
data-size="mini" data-onstyle="success" data-toggle="toggle">
</span>--->
<span>
<button class="settings_button" id="upload_without_save_button" onclick="document.getElementById('upload_without_save').click()">
<span class="material-icons-outlined cursor" tooltip="Load from Local">browser_updated</span>
<span class="button_label">Load from Local</span>
</button>
<input class="hidden" type=file id="upload_without_save" onchange="upload_file_without_save(this)">
<button id="import_story_button" class="settings_button hidden" onclick="openClubImport();">
<span class="material-icons-outlined cursor" tooltip="Import Story">cloud_download</span>
<span class="button_label">Import Story</span>
</button>
</span>
</div>
<div class="popup_load_cancel" id="popup_load_cancel">
<button class="btn popup_load_cancel_button action_button" id="popup_accept" disabled>Load</button>
<button class="btn popup_load_cancel_button" id="popup_cancel" onclick='closePopups();'>Cancel</button>
</div>
</div>
<!---------------- Model Load Screen ---------------------->
<div id="load-model" class="popup-window popup">
<div class="title">
Select A Model To Load
</div>
<div id="loadmodellistbreadcrumbs">
</div>
<div id="loadmodellistcontent" class="popup_list_area"></div>
<div class="popup_load_cancel">
<div>
<input class="hidden fullwidth" type="text" placeholder="key" id="modelkey" onchange="socket.emit('OAI_Key_Update', {'model': document.getElementById('btn_loadmodelaccept').getAttribute('selected_model'), 'key': this.value});">
<input class="hidden fullwidth" type="text" placeholder="Enter the URL of the server (For example a trycloudflare link)" id="modelurl" onchange="check_enable_model_load()">
<input class="hidden fullwidth" type="text" placeholder="Model Path or Hugging Face Name" id="custommodelname" menu="" onblur="socket.send({'cmd': 'selectmodel', 'data': $(this).attr('menu'), 'path_modelname': $('#custommodelname')[0].value});">
<select class="hidden fullwidth settings_select" id="oaimodel"><option value="">Select OAI Model</option></select>
</div>
<div class="hidden" id=modellayers>
<div class="justifyleft">
GPU/Disk Layers<span class="material-icons-outlined helpicon" tooltip="Number of layers to assign to GPUs and to disk cache. Remaining layers will be put into CPU RAM.">help_icon</span>
</div>
<div class="justifyright"><span id="gpu_layers_current">0</span>/<span id="gpu_layers_max">0</span></div>
<div id=model_layer_bars style="color: white"></div>
<input type=hidden id='gpu_count' value=0/>
</div>
<div class="box flex-push-right hidden" id=use_gpu_div>
<input type="checkbox" data-toggle="toggle" data-onstyle="success" id="use_gpu" checked>
<div class="box-label">Use GPU</div>
</div>
<div class="box flex-push-right hidden" id=use_8_bit_div>
<input type="checkbox" data-toggle="toggle" data-onstyle="success" id="use_8_bit" checked>
<div class="box-label">Use 8 bit mode</div>
</div>
<button type="button" class="btn popup_load_cancel_button action_button disabled" onclick="load_model()" id="btn_loadmodelaccept" disabled>Load</button>
<button type="button" class="btn popup_load_cancel_button" onclick='closePopups();' id="btn_loadmodelclose">Cancel</button>
</div>
</div>
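As the GPU/Disk Layers tooltip above states, layers assigned to GPUs and to disk cache count against the model's total, and whatever remains is kept in CPU RAM. A hypothetical helper sketching that arithmetic against the element ids in this popup (the function itself is not part of the app):

// Hypothetical sketch mirroring the tooltip: GPU and disk layers count
// against the model total; the remainder runs from CPU RAM.
function summarizeLayers(totalLayers, gpuLayers, diskLayers) {
    const assigned = gpuLayers.reduce((a, b) => a + b, 0) + diskLayers;
    document.getElementById("gpu_layers_current").innerText = assigned;
    document.getElementById("gpu_layers_max").innerText = totalLayers;
    return { assigned, cpuLayers: Math.max(totalLayers - assigned, 0) };
}

// e.g. a 32-layer model with two GPUs taking 20 and 8 layers plus 2 on disk
// leaves 2 layers in CPU RAM.
summarizeLayers(32, [20, 8], 2);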
<!---------------- Story overwrite screen ---------------------->
<div id="save-confirm" class="popup-window popup">
<div class="title">
<div class="popuptitletext">Overwrite</div>
</div>
<div id="popup_list_area" class="popup_list_area">
The story name you have entered already exists. Would you like to overwrite?
</div>
<div class="popup_load_cancel">
<button type="button" class="btn btn-primary popup_load_cancel_button" onclick='socket.emit("save_story", "overwrite");closePopups();'>Overwrite</button>
<button type="button" class="btn btn-primary popup_load_cancel_button" onclick="closePopups();">Cancel</button>
</div>
</div>
<!---------------- Private Mode Unlock screen ---------------------->
<div id="privacy_mode" class="popup-window popup">
<div class="title">
<div class="popuptitletext">Locked</div>
</div>
<div id="popup_list_area" class="popup_list_area">
This story is in private mode. Please enter password to unlock<br/>
<input type="password" id="privacy_password"/>
</div>
<div class="popup_load_cancel">
<button type="button" class="btn btn-primary popup_load_cancel_button" onclick='socket.emit("privacy_mode", {"enabled": false, "password": document.getElementById("privacy_password").value});'>Unlock</button>
</div>
</div>
<!---------------- Save Preset screen ---------------------->
<div id="save-preset" class="popup-window popup">
<div class="title">
<div class="popuptitletext">Save Preset</div>
</div>
<div id="popup_list_area" class="popup_list_area">
<table>
<tr>
<td>
Name:
</td>
<td>
<input type="text" id="new_preset_name"/>
</td>
</tr>
<tr>
<td>
Description:
</td>
<td>
<input type="text" id="new_preset_description"/>
</td>
</tr>
</table>
</div>
<div class="popup_load_cancel">
<button type="button" class="btn btn-primary popup_load_cancel_button" onclick='save_preset()'>Save</button>
<button type="button" class="btn btn-primary popup_load_cancel_button" onclick="closePopups();">Cancel</button>
</div>
</div>
<!---------------- Import aidg.club Prompt ---------------------->
<div id="aidg-import-popup" class="popup-window popup">
<div class="title">
<div class="popuptitletext">Enter the Prompt Number</div>
</div>
<div class="popup_list_area">
<br/>
<div style="text-align: center;"><a href="https://aetherroom.club/" target="_blank" rel="noopener noreferrer">https://aetherroom.club/</a></div>
<br/>
<input autocomplete="off" class="form-control focus-on-me" type="text" placeholder="Prompt ID or URL" id="aidgpromptnum">
</div>
<div class="popup_load_cancel">
<button type="button" class="btn btn-primary popup_load_cancel_button" onclick="attemptClubLoad();">Accept</button>
<button type="button" class="btn btn-primary popup_load_cancel_button" onclick="closePopups();">Cancel</button>
</div>
</div>
<!---------------- error screen ---------------------->
<div id="error-popup" class="popup-window popup">
<div class="title">
<div class="popuptitletext">Error</div>
</div>
<div id="popup_list_area" class="popup_list_area" style="overflow-x: scroll;">
</div>
<div class="popup_load_cancel">
<button type="button" class="btn btn-primary popup_load_cancel_button" onclick="closePopups();">Ok</button>
</div>
</div>
<!---------------- message screen ---------------------->
<div id="message-popup" class="popup-window popup">
<div class="title">
<div class="popuptitletext" id="popup_title"></div>
</div>
<div id="popup_list_area" class="popup_list_area" style="overflow-x: scroll;">
</div>
<div class="popup_load_cancel">
<button id="ok" type="button" class="btn btn-primary popup_load_cancel_button" onclick="closePopups();socket.emit('check_messages', this.getAttribute('message_id'));">Ok</button>
</div>
</div>
<!---------------- log screen ---------------------->
<div id="log-popup" class="popup-window popup">
<div class="title">
<div class="popuptitletext">Log</div>
</div>
<div id="popup_list_area" class="popup_list_area" style="overflow-x: scroll;">
</div>
<div class="popup_load_cancel">
<span class="cursor debug-dump-log" onclick="openPopup('debug-file-prompt');">Download debug dump</span>
<button type="button" class="btn btn-primary popup_load_cancel_button" onclick="closePopups();">Ok</button>
</div>
</div>
<!---------------- Advanced Theme Editor ---------------------->
<div id="advanced_theme_editor" class="popup-window popup">
<div class="title">
<div class="popuptitletext">Advanced Theme Editor</div>
</div>
<div id="popup_list_area" class="popup_list_area">
<table border=1 id="advanced_theme_editor_table"></table>
</div>
<div class="popup_load_cancel">
<button type="button" class="btn btn-primary popup_load_cancel_button" onclick="closePopups();">Ok</button>
</div>
</div>
<!---------------- Context Viewer ---------------------->
<div id="context-viewer" class="popup-window">
<div id="context-viewer-header" class="popup-header">
<h3 class="noselect">Context Viewer</h3>
<div id="context-viewer-header-right">
<div>
<span class="noselect">Key:</span>
<div>
<span class="noselect context-block-example context-sp">Soft Prompt</span>
<span class="noselect context-block-example context-genre">Genre</span>
<span class="noselect context-block-example context-memory">Memory</span>
<span class="noselect context-block-example context-wi">World Info</span>
<span class="noselect context-block-example context-an">Author's Note</span>
<span class="noselect context-block-example context-prompt">Prompt</span>
<span class="noselect context-block-example context-action">Action</span>
<span class="noselect context-block-example context-submit">Submit</span>
</div>
</div>
<span id="context-viewer-close" class="material-icons-outlined" onclick='closePopups();'>close</span>
</div>
</div>
<span class="help_text">
Context is the text the AI is sent when you ask it to generate text.
As the context is limited in size, you can use the Context Viewer to check if things you want the AI to know are in the context.
</span>
<div id="context-container"></div>
</div>
<!---------------- Finder ---------------------->
<div id="finder" class="popup-window">
<div id="finder-bar">
<input id="finder-input" placeholder="Search for something..."></input>
<span id="finder-icon" class="material-icons-outlined">search</span>
</div>
</div>
<div id="finder-wi-carousel" class="hidden"></div>
<div id="finder-scratchpad" class="hidden">
<span id="finder-scratchpad-info" class="noselect">AI Output:</span>
<span id="finder-scratchpad-prompt">
A man with 20 fingers is
</span>
<span id="finder-scratchpad-response">
a threat to society
</span>
</div>
<!---------------- Debug File ---------------------->
<div id="debug-file-prompt" class="popup-window">
<div id="redacted" onclick="downloadDebugFile(true);">
<span class="focus">Download debug file</span>
<span class="help_text">Sensitive information, including any story data, is redacted.</span>
</div>
<div id="unredacted" onclick="downloadDebugFile(false);">
<span class="focus">Download unredacted debug file</span>
<span class="help_text">
Extremely sensitive information, such as API keys or information about other stories, is still redacted.
<br><b>This file will contain your story.</b>
</span>
</div>
</div>
<!---------------- Prompt Config ------------------->
<div id="prompt-config" class="popup-window">
<div id="prompt-config-header">
<h3 class="noselect">Prompt Configuration</h3>
<span class="help_text">This prompt has placeholders you need to fill in before starting.</span>
</div>
<!-- Order, default, title, description -->
<div id="prompt-config-placeholders"></div>
<div id="prompt-config-done" onclick="sendPromptConfiguration();">Done</div>
</div>
<!---------------- Shortcuts ------------------->
<div id="shortcuts-popup" class="popup-window">
<div class="popup-header">
<h2>Shortcuts</h2>
<span class="popup-close material-icons-outlined" onclick='closePopups();'>close</span>
</div>
<div id="shortcut-container"></div>
</div>
<!---------------- Softprompt Trainer ------------------->
<div id="sp-trainer-popup" class="popup-window popup">
<div class="title">
<div class="popuptitletext">Softprompt Trainer</div>
</div>
<div id="popup_list_area" class="popup_list_area" style="overflow-x: scroll;">
<form action="sp_tunning" method="post">
Soft Prompt Title: <input type=text id="sp_title"/><br/>
Initial Prompt: <input type=text id="sp_prompt"/><br/>
Dataset Location: <input type=text id="sp_dataset"/><br/>
Author: <input type=text id="sp_author"/><br/>
Description: <input type=text id="sp_description"/><br/>
</form>
</div>
<div class="popup_load_cancel">
<button type="button" class="btn btn-primary popup_load_cancel_button" onclick="create_new_softprompt();">Ok</button>
<button type="button" class="btn btn-primary popup_load_cancel_button" onclick="closePopups();">Cancel</button>
</div>
</div>
<!-- WI deletion confirmation -->
<div id="confirm-delete-dialog">
<span id="confirm-text">You're about to delete World Info folder <b>evil and malice</b> and the <b>infinite</b> entries inside it. Are you sure?</span>
<div id="confirm-buttons">
<div id="confirm-deny-button" class="confirm-button">
<span class="material-icons-outlined">close</span>
<span class="text">I've changed my mind!</span>
</div>
<div id="confirm-confirm-button" class="confirm-button">
<span class="material-icons-outlined">delete</span>
<span class="text">Go for it.</span>
</div>
</div>
</div>
<!-- Big Image -->
<img id="big-image"></img>
</div>
<div id="notification-container"></div>

View File

@@ -0,0 +1,483 @@
<style>
#Model_Info {
width: 100%;
}
#Story_Info {
width: 100%;
}
#save_story[story_gamesaved="true"] {
color: var(--disabled_button_text);
background-color: var(--disabled_button_background_color);
border-color: var(--disabled_button_border_color);
filter: brightness(85%);
}
</style>
<!-- top menu bar-->
<div class="menu_pin_area"></div>
<div class="tabrow">
<span id="settings_flyout_tab_home" class="setting_menu_button tab tab-settings selected" tab-target="setting_menu_home" onclick="selectTab(this);">Home</span>
<span id="settings_flyout_tab_settings" class="setting_menu_button tab tab-settings" tab-target="setting_menu_settings" onclick="selectTab(this);">Settings</span>
<span id="settings_flyout_tab_interface" class="setting_menu_button tab tab-settings" tab-target="setting_menu_interface" onclick="selectTab(this);">Interface</span>
<span style="float: right;margin-right: 30px;padding: 0px 10px;" onclick="window.open('https://github.com/KoboldAI/KoboldAI-Client/wiki');">
Help
<icon class="material-icons-outlined" style="font-size:14px;position:relative;top:2px;">open_in_new</icon>
</span>
</div>
<span class="material-icons-outlined cursor search_icon" tooltip="Finder" onclick="open_finder();">search</span>
<div class="flyout_menu_contents">
<div id="setting_menu_home" class="settings_category_area tab-target tab-target-settings">
<div class="Model_Info">
<div id="model_title">
<span>
<span class="var_sync_model_model">ReadOnly</span>
</span>
</div>
<div id="text_runningmodel">
<b class="noselect">1) Model: </b>
</div>
<div style="text-align: center;">
<button class="settings_button" onclick="socket.emit('load_model_button', {});">
<span class="material-icons-outlined cursor" tooltip="Load Model" style="font-size: 1.4em;">folder_open</span>
<span class="button_label">Load Model</span>
</button>
<select class="var_sync_model_selected_preset settings_select presets" onchange='sync_to_server(this)'><option>Preset</option></select>
{% if not on_colab %}
<div class="horde_trigger var_sync_alt_model_model">
<input type=checkbox data-size="mini" data-onstyle="success" data-toggle="toggle" class='var_sync_system_horde_share' onchange='sync_to_server(this)'> Share with Horde
<span class="helpicon material-icons-outlined" tooltip="Shares your GPU with other KoboldAI users. Does not share data/stories.">help_icon</span>
</div>
{% endif %}
</div>
</div>
<div id="Story_Info">
<hr/>
<div class="story_title_area">
<div class="story_title">
<span>
<span class="var_sync_story_story_name fullwidth" contenteditable=true onblur="sync_to_server(this);"></span>
</span>
<span>
<span class="material-icons-outlined cursor" style="padding-top: 8px;" tooltip="Download Story" onclick="download_story();">file_download</span>
</span>
</div>
<div id="text_storyname">
<b class="noselect">2) Story: </b>
</div>
<div class="story_title_icons">
<button class="settings_button" onclick="socket.emit('new_story', '');">
<span class="material-icons-outlined cursor" tooltip="New Story">description</span>
<span class="button_label">New Story</span>
</button>
<button class="settings_button" onclick="load_story_list();">
<span class="material-icons-outlined cursor" tooltip="Load Story">folder_open</span>
<span class="button_label">Load Story</span>
</button>
<button class="settings_button var_sync_alt_story_gamesaved" onclick='save_story();'>
<span class="material-icons-outlined cursor var_sync_alt_story_gamesaved" tooltip="Save Story">save</span>
<span class="button_label">Save Story</span>
</button>
</div>
</div>
<hr/>
</div>
<div id="tts" class="var_sync_alt_system_experimental_features">
<audio id="reader" preload=none src="/audio" onended="finished_tts()" onplay="tts_playing()" />
</div>
<div class="setting_tile_area">
{% with menu='Home' %}
{% with sub_path='' %}
{% include 'settings item.html' %}
{% endwith %}
{% endwith %}
<div class="setting_container chat_mode var_sync_alt_story_chatmode" ui_level=0>
<!---Top Row---->
<span class="setting_label">
<span style="white-space: pre-wrap;">Chat Name: </span>
<span class="helpicon material-icons-outlined" tooltip="Your name for chat mode.">help_icon</span>
</span>
<!---Bottom Row---->
<span class="setting_item" style="height: 25px;">
<input autocomplete="off" id="var_sync_story_chatname" class="var_sync_story_chatname settings_select" onclick="sync_to_server(this);">
</span>
<!---Slider Labels--->
<span class="setting_minlabel"><span style="top: -4px; position: relative;"></span></span>
<span class="setting_maxlabel"><span style="top: -4px; position: relative;"></span></span>
</div>
</div>
<span id="debug-dump" class="cursor" onclick="openPopup('debug-file-prompt');">Download debug dump</span>
<div id="Images">
<button id="generate-image-button" class="settings_button var_sync_alt_system_generating_image">Generate Image</button>
<div id="action image">
<div id="image-loading" class="hidden">
<div class="spinner material-icons-outlined">autorenew</div>
</div>
</div>
</div>
</div>
<div id="setting_menu_settings" class="hidden settings_category_area tab-target tab-target-settings">
<div class="preset_area">
<button class="settings_button" onclick="show_save_preset();">
<span class="material-icons-outlined cursor" tooltip="Save Preset">save</span>
<span class="button_label">Save Preset</span>
</button>
<select class="var_sync_model_selected_preset settings_select presets" onchange='sync_to_server(this)'><option>Preset</option></select>
</div>
{% with menu='Settings' %}
<div class="collapsable_header" onclick="toggle_setting_category(this);">
<h4 style="width:var(--flyout_menu_width);"><span class="material-icons-outlined cursor">expand_more</span> Generation</h4>
</div>
<div class="setting_tile_area">
{% with sub_path='Generation' %}
{% include 'settings item.html' %}
{% endwith %}
</div>
<div class="collapsable_header" onclick="toggle_setting_category(this);">
<h4 style="width:var(--flyout_menu_width);"><span class="material-icons-outlined cursor">expand_more</span> Sampling</h4>
</div>
<div class="setting_tile_area">
<div class="help_text">Change how the AI decides what to say.</div>
{% with sub_path='Sampling' %}
{% include 'settings item.html' %}
{% endwith %}
<div class="setting_container_single" ui_level=2>
<!---Top Row---->
<span class="setting_label">
<span>Samplers Order:&nbsp;</span><span class="helpicon material-icons-outlined" tooltip="Changes the order in which the samplers are applied. A different order can have a considerably different effect than the default.">help_icon</span>
</span>
<!---Bottom Row---->
<span class="setting_item">
<div style="display:flex;flex-direction:row;">
<ul id="sample_order_list" style="width:calc(var(--flyout_menu_width) - 60px);list-style-position: inside; padding: 0;">
<li class="sample_order cursor" onclick="select_sample(this);">Repetition Penalty</li>
<li class="sample_order cursor" onclick="select_sample(this);">Top K Sampling</li>
<li class="sample_order cursor" onclick="select_sample(this);">Top A Sampling</li>
<li class="sample_order cursor" onclick="select_sample(this);">Top P Sampling</li>
<li class="sample_order cursor" onclick="select_sample(this);">Tail Free Sampling</li>
<li class="sample_order cursor" onclick="select_sample(this);">Typical Sampling</li>
<li class="sample_order cursor" onclick="select_sample(this);">Temperature</li>
</ul>
<div style="display:flex;flex-direction:column;margin-top: 25px;">
<div class="material-icons-outlined cursor" onclick="move_sample('up');">arrow_upward</div>
<div class="material-icons-outlined cursor" onclick="move_sample('down');">arrow_downward</div>
</div>
</div>
</span>
</div>
</div>
<div class="collapsable_header" onclick="toggle_setting_category(this);">
<h4 style="width:var(--flyout_menu_width);"><span class="material-icons-outlined cursor">expand_more</span> Repetition</h4>
</div>
<div class="setting_tile_area">
<div class="help_text">Change how the AI combats repetition.</div>
{% with sub_path='Repetition' %}
{% include 'settings item.html' %}
{% endwith %}
</div>
<div class="collapsable_header" onclick="toggle_setting_category(this);">
<h4 style="width:var(--flyout_menu_width);"><span class="material-icons-outlined cursor">expand_more</span> Other</h4>
</div>
<div class="setting_tile_area">
{% with sub_path='Other' %}
{% include 'settings item.html' %}
{% endwith %}
</div>
<div class="collapsable_header" onclick="toggle_setting_category(this);">
<h4 style="width:var(--flyout_menu_width);"><span class="material-icons-outlined cursor">expand_more</span> Modifiers</h4>
</div>
<div class="setting_tile_area">
{% with sub_path='Modifiers' %}
{% include 'settings item.html' %}
{% endwith %}
<div class="setting_container_sp" ui_level=1>
<!---Top Row---->
<span class="setting_label">
<span style="white-space: pre-wrap;">Soft Prompt: </span>
<span class="helpicon material-icons-outlined" tooltip="Changes how the AI decides what to generate.">help_icon</span>
</span>
<!---Bottom Row---->
<span class="setting_item">
<div style="display: flex;flex-direction: row;">
<select autocomplete="off" id="sp" class="var_sync_system_splist var_sync_system_spfilename settings_select" style="width: 140px;margin-right:0px;padding-bottom: 0px;" onclick="socket.emit('load_softprompt', this.value);">
</select>
<div style="display: flex;flex-direction: column;">
<span class="material-icons-outlined cursor" style="font-size: 18px;" tooltip="Refresh List" onclick="socket.emit('sp_list_refresh', '');">autorenew</span>
<span class="material-icons-outlined cursor var_sync_alt_system_experimental_features" style="font-size: 18px;" tooltip="Create New Soft Prompt" onclick="openPopup('sp-trainer-popup');">note_add</span>
</div>
</div>
</span>
</div>
<div class="setting_container_single_wide" ui_level=2>
<!---Top Row---->
<span class="setting_label">
<span style="white-space: pre-wrap;">Enabled Userscripts: </span>
<span class="helpicon material-icons-outlined" tooltip="Allows to activate userscripts that modify the game's functionality.">help_icon</span>
</span>
<!---Bottom Row---->
<span class="setting_item">
<div style="display: flex;flex-direction: row;">
<select id="loaded_userscripts" class="var_sync_system_userscripts settings_select" multiple style="width:300px;"></select>
<div style="display: flex;flex-direction: column;">
<span class="material-icons-outlined cursor" style="font-size: 24px;" tooltip="Unload File(s)" onclick="unload_userscripts();">delete</span>
<span class="material-icons-outlined cursor" style="font-size: 24px;" tooltip="Load File" onclick="socket.emit('load_userscripts_list', '');">upload_file</span>
</div>
</div>
</span>
<!---Slider Labels--->
<span class="setting_minlabel"><span style="top: -4px; position: relative;"></span></span>
<span class="setting_maxlabel"><span style="top: -4px; position: relative;"></span></span>
</div>
</div>
{% endwith %}
<div class="collapsable_header" onclick="toggle_setting_category(this);">
<h4 style="width:var(--flyout_menu_width);"><span class="material-icons-outlined cursor">expand_more</span> Biasing</h4>
</div>
<div id="biasing" class="biasing" ui_level=1>
<div class="help_text">Influence the likelihood for the AI to output certain phrases.</div>
<div class="bias_header">
<div class="bias_header_phrase">Phrase</div>
<div class="bias_header_score">Score</div>
<div class="bias_header_comp_threshold">
Completion Threshold
<span class="helpicon material-icons-outlined" tooltip="Amount of tokens that must match the phrase before it is force-completed.">help_icon</span>
</div>
</div>
</div>
<!-- Story Review Bot -->
<div class="collapsable_header" onclick="toggle_setting_category(this);">
<h4 style="width: var(--flyout_menu_width);"><span class="material-icons-outlined cursor">expand_more</span> Story Commentary</h4>
</div>
<div id="story-commentary" class="story-commentary setting_tile_area" ui_level=1>
<div class="help_text">Allow custom characters to comment on the story.</div>
<div id="story-commentary-enable" class="wide-boolean-setting">
<span>Enable Story Commentary</span>
<input type=checkbox class="setting_item_input" data-size="mini" data-onstyle="success" data-toggle="toggle">
</div>
<div id="story-commentary-settings">
<div class="dynamic-setting-container">
<div class="dynamic-setting-top">
<span class="setting_label"><span>Commentary Chance Percent</span></span>
<input sync-proxy-host="story.commentary_chance" class="setting_value" inputmode="numeric" value="0">
</div>
<span class="setting_item">
<input type="range" sync-var="story.commentary_chance" class="setting_item_input" min="0" max="100" step="1" value="0">
</span>
<div class="dynamic-setting-bounds">
<span class="setting_minlabel">0</span>
<span class="setting_maxlabel">100</span>
</div>
</div>
</div>
</div>
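The Commentary Chance Percent slider above is a 0-100 probability per opportunity. A hypothetical illustration of how such a chance could gate a comment (the actual roll is handled elsewhere in the app, not in this template):

// Hypothetical: roll a 0-100 chance before posting a commentary message.
function maybeComment(chancePercent, postComment) {
    if (Math.random() * 100 < chancePercent) postComment();
}

maybeComment(25, () => console.log("Bob McBobhead: Wow, this is a great story."));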
</div>
<div id="setting_menu_interface" class="hidden settings_category_area tab-target tab-target-settings">
<div class="collapsable_header" onclick="toggle_setting_category(this);">
<h4 style="width:var(--flyout_menu_width);"><span class="material-icons-outlined cursor">expand_more</span> UI</h4>
</div>
<div class="setting_tile_area">
{% with menu='Interface' %}
{% with sub_path='UI' %}
{% include 'settings item.html' %}
{% endwith %}
{% endwith %}
<div class="setting_container" ui_level=1>
<span class="setting_label">
<span>Max Game Screen: &nbsp;</span><span class="helpicon material-icons-outlined" tooltip="If enabled and both menus are unpinned, the game screen will take up all available space. When disabled, the game screen will be centered.">help_icon</span>
</span>
<span class="setting_item">
<input type=checkbox id="preserve_game_space_setting" data-size=mini data-onstyle=success data-toggle=toggle onchange="preserve_game_space(this.checked)"/>
</span>
<!---Slider Labels--->
<span class="setting_minlabel"><span style="top: -4px; position: relative;"></span></span>
<span class="setting_maxlabel"><span style="top: -4px; position: relative;"></span></span>
</div>
<div class="setting_container" ui_level=1>
<span class="setting_label">
<span>Options on Right:&nbsp;</span><span class="helpicon material-icons-outlined" tooltip="If enabled and only the right menu is pinned, the options will be shown on the right instead of the left of the game text.">help_icon</span>
</span>
<span class="setting_item">
<input type=checkbox id="options_on_right" data-size=mini data-onstyle=success data-toggle=toggle onchange="options_on_right(this.checked)"/>
</span>
<!---Slider Labels--->
<span class="setting_minlabel"><span style="top: -4px; position: relative;"></span></span>
<span class="setting_maxlabel"><span style="top: -4px; position: relative;"></span></span>
</div>
<div class="setting_container" ui_level=1>
<!---Top Row---->
<span class="setting_label">
<span>Game Text Size:&nbsp;</span><span class="helpicon material-icons-outlined" tooltip="Changes the font size of the game text.">help_icon</span>
</span>
<input autocomplete="off" class="setting_value" id="font_size_cur"
value="1" item_id="font_size"
inputmode="decimal"
onchange="document.getElementById(this.getAttribute('item_id')).value = this.value; set_font_size(this)">
<!---Bottom Row---->
<span class="setting_item">
<input type="range" min="0.1" max="2" step="0.05"
value="1" id="font_size" class="setting_item_input "
oninput="document.getElementById(this.id+'_cur').value = this.value;set_font_size(this);"
onclick="">
</span>
<!---Slider Labels--->
<span class="setting_minlabel"><span style="position: relative;">0.1</span></span>
<span class="setting_maxlabel"><span style="position: relative;">2</span></span>
</div>
</div>
<div class="collapsable_header" onclick="toggle_setting_category(this);">
<h4 style="width:var(--flyout_menu_width);"><span class="material-icons-outlined cursor">expand_more</span> Formatting</h4>
</div>
<div class="setting_tile_area">
{% with menu='Interface' %}
{% with sub_path='Formatting' %}
{% include 'settings item.html' %}
{% endwith %}
{% endwith %}
</div>
<div class="collapsable_header" onclick="toggle_setting_category(this);">
<h4 style="width:var(--flyout_menu_width);"><span class="material-icons-outlined cursor">expand_more</span> Substitutions</h4>
</div>
<div>
<div class="setting_tile_area setting_container_single_wide" id="Substitutions" ui_level=2>
<div class="help_text">
Automatically replaces phrases that you or the AI insert.
<span class="helpicon material-icons-outlined" tooltip="Can be used to help you insert special characters or automatically correct the AI. The pencil button toggles if a substitution is active or not.">help_icon</span>
</div>
<div id="substitution-header" class="noselect">
<b>Replace</b> <b>With</b>
</div>
<div id="substitution-container"></div>
<div id="new-sub-card" class="cursor" tooltip="Add Substitution">
<span class="material-icons-outlined">
add
</span>
</div>
</div>
</div>
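The substitution pass itself runs in the client scripts; below is a minimal sketch of applying a list of entries to a piece of text, assuming each entry is an {enabled, target, substitution} record. The field names are assumptions for illustration, not the app's actual schema.

// Minimal sketch of a substitution pass; field names are assumed.
function applySubstitutions(text, substitutions) {
    for (const sub of substitutions) {
        if (!sub.enabled) continue; // pencil toggle off -> entry is skipped
        text = text.split(sub.target).join(sub.substitution);
    }
    return text;
}

// e.g. replacing a typed "--" with a dash character:
applySubstitutions("wait--what", [{ enabled: true, target: "--", substitution: "\u2013" }]);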
<div class="collapsable_header" onclick="toggle_setting_category(this);">
<h4 style="width:var(--flyout_menu_width);"><span class="material-icons-outlined cursor">expand_more</span> Images</h4>
</div>
<div class="setting_tile_area">
{% with menu='Interface' %}
{% with sub_path='Images' %}
{% include 'settings item.html' %}
{% endwith %}
{% endwith %}
</div>
<div class="collapsable_header" onclick="toggle_setting_category(this);">
<h4 style="width:var(--flyout_menu_width);"><span class="material-icons-outlined cursor">expand_more</span> Theme</h4>
</div>
<div class="setting_tile_area" id="Theme">
<select id="selected_theme" class="var_sync_system_theme_list" autocomplete="off" onchange="Change_Theme(this.value);">
</select><span class="material-icons-outlined cursor" tooltip="Refresh List" onclick="socket.emit('theme_list_refresh', '');">autorenew</span>
<div id="palette_area" class="palette_area" ui_level=2>
<b style="font-size: 20px;">Palette</b>
<div id="save_theme_area" >
<input type="text" id="save_theme_name" autocomplete="off" placeholder="New Theme Name"/>
<span class="material-icons-outlined cursor" tooltip="Save Theme" onclick='save_theme();'>save</span>
</div>
<div class="setting_tile_area" id="Palette">
<table id="Palette_Table" border=1 style="border-color: var(--palette_table_border);">
<tr>
<td style="border-top-color: transparent; border-left-color: transparent; border-bottom-color: transparent;"></td>
<td colspan=2><b>Main</b></td>
<td colspan=2><b>Alternative</b></td>
</tr>
<tr>
<td style="border-top-color: transparent; border-left-color: transparent;"></td>
<td>Background</td>
<td>Text</td>
<td>Background</td>
<td>Text</td>
</tr>
<tr>
<td>Primary</td>
<td><input class="Theme_Input" autocomplete="off" type=color id="primary_palette" onchange="palette_color(this)"></td>
<td><input class="Theme_Input" autocomplete="off" type=color id="on_primary_palette" onchange="palette_color(this)"></td>
<td><input class="Theme_Input" autocomplete="off" type=color id="primary_container_palette" onchange="palette_color(this)"></td>
<td><input class="Theme_Input" autocomplete="off" type=color id="on_primary_container_palette" onchange="palette_color(this)"></td>
</tr>
<tr>
<td>Secondary</td>
<td><input class="Theme_Input" autocomplete="off" type=color id="secondary_palette" onchange="palette_color(this)"></td>
<td><input class="Theme_Input" autocomplete="off" type=color id="on_secondary_palette" onchange="palette_color(this)"></td>
<td><input class="Theme_Input" autocomplete="off" type=color id="secondary_container_palette" onchange="palette_color(this)"></td>
<td><input class="Theme_Input" autocomplete="off" type=color id="on_secondary_container_palette" onchange="palette_color(this)"></td>
</tr>
<tr>
<td>Tertiary</td>
<td><input class="Theme_Input" autocomplete="off" type=color id="tertiary_palette" onchange="palette_color(this)"></td>
<td><input class="Theme_Input" autocomplete="off" type=color id="on_tertiary_palette" onchange="palette_color(this)"></td>
<td><input class="Theme_Input" autocomplete="off" type=color id="tertiary_container_palette" onchange="palette_color(this)"></td>
<td><input class="Theme_Input" autocomplete="off" type=color id="on_tertiary_container_palette" onchange="palette_color(this)"></td>
</tr>
</table>
<table id="Palette_Table" border=1 style="border-color: var(--palette_table_border);">
<tr>
<td colspan=5 style="text-align: center;">Backgrounds</td>
</tr>
<tr>
<td>Base</td>
<td>Layer 1</td>
<td>Layer 2</td>
<td>Layer 3</td>
<td>Layer 4</td>
</tr>
<tr>
<td><input class="Theme_Input" autocomplete="off" type=color id="background_palette" onchange="palette_color(this)"></td>
<td><input class="Theme_Input" autocomplete="off" type=color id="layer1_palette" onchange="palette_color(this)"></td>
<td><input class="Theme_Input" autocomplete="off" type=color id="layer2_palette" onchange="palette_color(this)"></td>
<td><input class="Theme_Input" autocomplete="off" type=color id="layer3_palette" onchange="palette_color(this)"></td>
<td><input class="Theme_Input" autocomplete="off" type=color id="layer4_palette" onchange="palette_color(this)"></td>
</tr>
</table>
</div>
</div>
<div class="advanced_theme cursor" onclick='openPopup("advanced_theme_editor");'>
<span>Advanced Theme</span>
</div>
</div>
<div class="collapsable_header" onclick="toggle_setting_category(this);">
<h4 style="width:var(--flyout_menu_width);"><span class="material-icons-outlined cursor">navigate_next</span> Tweaks</h4>
</div>
<div class="setting_tile_area hidden" id="Tweaks">
<div class="help_text">Small UI changes that can be mixed and matched.</div>
<div class="wide-boolean-setting tweak-container" tweak-path="hide-timing">
<span>Hide timing information</span>
<input type=checkbox class="setting_item_input" data-size="mini" data-onstyle="success" data-toggle="toggle">
</div>
<div class="wide-boolean-setting tweak-container" tweak-path="hide-token-bar">
<span>Hide token bar</span>
<input type=checkbox class="setting_item_input" data-size="mini" data-onstyle="success" data-toggle="toggle">
</div>
<div class="wide-boolean-setting tweak-container" tweak-path="hide-max-length">
<span>Hide text highlighting</span>
<input type=checkbox class="setting_item_input" data-size="mini" data-onstyle="success" data-toggle="toggle">
</div>
<div class="wide-boolean-setting tweak-container" tweak-path="hide-welcome-logo">
<span>Hide welcome logo</span>
<input type=checkbox class="setting_item_input" data-size="mini" data-onstyle="success" data-toggle="toggle">
</div>
</div>
</div>
<div id="settings_footer" class="settings_footer">
<span>Execution Time: <span id="Execution Time"></span></span> |
<span>Remaining Time: <span class="var_sync_model_tqdm_rem_time"></span></span> |
<a onclick='socket.emit("get_log", {});openPopup("log-popup");' class='cursor'>Log</a>
</div>
</div>

View File

@@ -0,0 +1,43 @@
{% for item in settings %}
{% if item["menu_path"] == menu and item['sub_path'] == sub_path %}
{% if 'extra_classes' in item %}
<div id="{{ item['name'] }}_card" class="setting_container {{ item['extra_classes'] }}" ui_level="{{ item['ui_level'] }}">
{% else %}
<div id="{{ item['name'] }}_card" class="setting_container" ui_level="{{ item['ui_level'] }}">
{% endif %}
<!---Top Row---->
<span class="setting_label">
<span>{{ item['label'] }}:&nbsp;</span><span class="helpicon material-icons-outlined" tooltip="{{ item['tooltip'] }}">help_icon</span>
</span>
{% if (item['unit'] != 'bool') and (item['unit'] != 'text') and (item['uitype'] != 'dropdown') %}
<input autocomplete="off" class="setting_value var_sync_{{ item['classname'] }}_{{ item['name'] }}" id="{{ item['classname'] }}_{{ item['name'] }}_cur"
value="{{ item['default'] }}" item_id="{{ item['classname'] }}_{{ item['name'] }}"
{% if item['unit'] == 'float' %} inputmode="decimal"{% elif item['unit'] == 'int' %} inputmode="numeric"{% endif %}
onchange="document.getElementById(this.getAttribute('item_id')).value = this.value; socket.emit('var_change', {'ID': this.id.replace('_cur', ''), 'value': this.value});">
{% endif %}
<!---Bottom Row---->
<span class="setting_item">
{% if item["uitype"] == "slider" %}
<input type="range" min="{{ item['min'] }}" max="{{ item['max'] }}" step="{{ item['step'] }}"
value="{{ item['default'] }}" id="{{ item['classname'] }}_{{ item['name'] }}" class="setting_item_input var_sync_{{ item['classname'] }}_{{ item['name'] }}"
oninput="document.getElementById(this.id+'_cur').value = this.value;"
onclick="sync_to_server(this);">
{% elif item["uitype"] == "toggle" %}
<input type=checkbox id="{{ item['classname'] }}_{{ item['name'] }}" class="setting_item_input var_sync_{{ item['classname'] }}_{{ item['name'] }}"
data-size="mini" data-onstyle="success" data-toggle="toggle" onchange='sync_to_server(this);'>
{% elif item['uitype'] == "dropdown" %}
<select id="{{ item['classname'] }}_{{ item['name'] }}" class="settings_select var_sync_{{ item['classname'] }}_{{ item['name'] }}" onchange='sync_to_server(this);'>
{% for option in item['children'] %}
<option value="{{ option['value'] }}">{{ option["text"] }}</option>
{% endfor %}
</select>
{% elif item['uitype'] == "text" %}
<input id="{{ item['classname'] }}_{{ item['name'] }}" class="settings_select var_sync_{{ item['classname'] }}_{{ item['name'] }}" onchange='sync_to_server(this);' value='{{ item['default'] }}'>
{% endif %}
</span>
<!---Slider Labels--->
<span class="setting_minlabel"><span style="position: relative;">{% if item["uitype"] == "slider" %}{{ item['min'] }}{% endif %}</span></span>
<span class="setting_maxlabel"><span style="position: relative;">{% if item["uitype"] == "slider" %}{{ item['max'] }}{% endif %}</span></span>
</div>
{% endif %}
{% endfor %}
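Each card this template renders is driven by one entry in the settings list it iterates over. A hypothetical slider entry, limited to the fields the template actually reads (the values are illustrative, not KoboldAI's real defaults):

// Hypothetical settings entry consumed by the template above.
// Values are illustrative; only fields the template reads are shown.
const exampleItem = {
    menu_path: "Settings",
    sub_path: "Sampling",
    name: "temp",
    classname: "model",
    label: "Temperature",
    tooltip: "Randomness of the output.",
    uitype: "slider",
    unit: "float",   // selects inputmode="decimal" on the numeric box
    min: 0.1,
    max: 2,
    step: 0.05,
    default: 0.5,
    ui_level: 0,
};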

138
templates/story flyout.html Normal file
View File

@@ -0,0 +1,138 @@
<div class="tabrow nomenu_icon">
<span id="story_flyout_tab_memory" class="story_menu_button tab tab-story selected" tab-target="story_menu_memory" onclick="selectTab(this);">Memory</span>
<span id="story_flyout_tab_author" class="story_menu_button tab tab-story" tab-target="story_menu_author" onclick="selectTab(this);">Author's Note</span>
<span id="story_flyout_tab_notes" class="story_menu_button tab tab-story" tab-target="story_menu_notes" onclick="selectTab(this);">Notes</span>
<span id="story_flyout_tab_wi" class="story_menu_button tab tab-story" tab-target="story_menu_wi" onclick="selectTab(this);">World Info</span>
</div>
<div class="flyout_menu_contents">
<div id="story_menu_memory" class="story_category_area tab-target tab-target-story">
<div id="Memory">
<h4 class="section_header"><label for="memory">Memory</label></h4>
<div class="help_text">
Important information the AI should always keep in mind.
</div>
<textarea rows=20 id="memory" class="var_sync_story_memory var_sync_alt_story_memory_length fullwidth" onchange='sync_to_server(this);' oninput="autoResize(this)" autocomplete="off"></textarea>
<div id="Mem-Attention-Bias" class="var_sync_alt_system_experimental_features">
<h4 class="section_header"><label for="Mem-Attention-Bias">Attention Bias Test</label></h4>
<span class="help_text">
<b>Note: This only works on OPT models for now! Patches will be written for other models once it's known this actually has a positive effect. Upon first use of this bias, you should see "Applying attention bias" in the console.</b><br>
This setting <i>may</i> change how the AI pays attention to memory. Values in the ballpark of 15 or higher may cause incoherence; the option to select higher numbers is present for experimentation.
</span>
<input type="range" oninput="memAttnInput(this.value);" onchange="sync_to_server(this);" class="var_sync_story_memory_attn_bias sync_as_float" value="1" min="1" max="50" step="0.1"></input>
<span id="memattnbiaslabel">1</span>
<script>
const memAttn = document.querySelector("#memattnbiaslabel");
function memAttnInput(val) {
memAttn.innerText = val;
}
</script>
</div>
</div>
<div id="Auto-Memory" class="var_sync_alt_system_experimental_features">
<h4 class="section_header"><label for="auto_memory">Auto-Memory (non-functional)</label></h4>
<div class="help_text">
What the system would use for automatic memory, summarized from the game.
</div>
<button class="settings_button" onclick="socket.emit('refresh_auto_memory', {});">Generate</button>
<textarea rows=20 id="auto_memory" class="var_sync_story_auto_memory fullwidth" oninput="autoResize(this)" autocomplete="off"></textarea>
</div>
</div>
<div id="story_menu_author" class="story_category_area tab-target tab-target-story hidden">
<div id="author_notes">
<h4 class="section_header">Author's Note</h4>
<div class="help_text">
Heavily influences the direction and style of the AI's writing.
</div>
<label for="authors_notes_template">Template:</label><br/>
<input autocomplete="off" id=authors_notes_template type=text class="var_sync_story_authornotetemplate fullwidth" onchange='sync_to_server(this);'><br/>
<label for="authors_notes">Author's Notes:</label><br/>
<textarea autocomplete="off" rows=16 id="authors_notes" class="var_sync_story_authornote var_sync_alt_story_authornote_length fullwidth" oninput="autoResize(this)" onchange='sync_to_server(this);'></textarea><br/>
<h4 class="section_header">Genre</h4>
<div class="help_text">Styles the AI will attempt to imitate. Effectiveness depends on model.</div>
<input id="genre-input" class="fullwidth" autocomplete="off" spellcheck="false">
<div id="genre-suggestion-super-container">
<div id="genre-suggestion-container">
<span class="genre-suggestion">Animals</span>
<span class="genre-suggestion">Amish</span>
<span class="genre-suggestion">Asian</span>
<span class="genre-suggestion">Buddhist</span>
</div>
</div>
<div id="genre-container"></div>
<div id="An-Attention-Bias" class="var_sync_alt_system_experimental_features">
<h4 class="section_header"><label for="An-Attention-Bias">Attention Bias Test</label></h4>
<span class="help_text">See disclaimer in memory page.</span>
<input type="range" oninput="anAttnInput(this.value);" onchange="sync_to_server(this);" class="var_sync_story_an_attn_bias sync_as_float" value="1" min="1" max="50" step="0.1"></input>
<span id="anattnbiaslabel">1</span>
<script>
const anAttn = document.querySelector("#anattnbiaslabel");
function anAttnInput(val) {
anAttn.innerText = val;
}
</script>
</div>
<div class="setting_tile_area">
{% with menu='author_notes' %}
{% with sub_path='' %}
{% include 'settings item.html' %}
{% endwith %}
{% endwith %}
</div>
</div>
</div>
<div id="story_menu_notes" class="story_category_area tab-target tab-target-story hidden">
<div id="Notes">
<h4 class="section_header"><label for="notes">Notes</label></h4>
<div class="help_text">
Notes about the story. These notes are not read by the AI!
</div>
<textarea id="notes" autocomplete="off" rows=20 class="var_sync_story_notes fullwidth" onchange='sync_to_server(this);' oninput="autoResize(this)"></textarea>
</div>
</div>
<div id="story_menu_wi" class="story_category_area tab-target tab-target-story hidden">
<h4 class="section_header" style="margin-left: 12px;">World Info</h4>
<div class="help_text" style="margin-left: 20px;">
Lore information, which the AI recalls by certain words.
<span class="helpicon material-icons-outlined" tooltip="Use this instead of Memory for information on things like characters, objects, events, places, and anything else with detail.">help_icon</span>
</div>
<div class="setting_tile_area wi_settings">
{% with menu='World Info' %}
{% with sub_path='' %}
{% include 'settings item.html' %}
{% endwith %}
{% endwith %}
</div>
<div id="WI_Area">
<span id="world_info_folder_New Folder" class="WI_Folder">
<h2>
<span id="world_info_folder_collapse_root" class="wi_folder_collapser material-icons-outlined" folder="root">expand_more</span>
<span id="world_info_folder_expand_root" class="wi_folder_collapser material-icons-outlined hidden" folder="root">chevron_right</span>
<span class="material-icons-outlined" folder="root">folder</span>
<span class="wi_title" original_text="root" contenteditable="true">root</span>
</h2>
<span class="wi_add_button " onclick='create_new_wi_entry(this.getAttribute("folder"));'>
<span class="material-icons-outlined">post_add</span>
<span class="wi_add_text" folder="root">Add World Info Entry</span>
</span>
</span>
</div>
</div>
<div id="token-breakdown-container" class="settings_footer" style="padding-top: 10px;">
<div class="token_breakdown" onclick='socket.emit("update_tokens", document.getElementById("input_text").value);openPopup("context-viewer");'>
<div id="soft_prompt_tokens" style="width:0%; background-color: var(--context_colors_soft_prompt);"></div>
<div id="genre_tokens" style="width:40%; background-color: var(--context_colors_genre);"></div>
<div id="memory_tokens" style="width:40%; background-color: var(--context_colors_memory);"></div>
<div id="authors_notes_tokens" style="width:10%; background-color: var(--context_colors_authors_notes);"></div>
<div id="world_info_tokens" style="width:20%; background-color: var(--context_colors_world_info);"></div>
<div id="prompt_tokens" style="width:40%; background-color: var(--context_colors_prompt);"></div>
<div id="game_text_tokens" style="width:10%; background-color: var(--context_colors_game_text);"></div>
<div id="submit_tokens" style="width:20%; background-color: var(--context_colors_submit);"></div>
<div id="unused_tokens" style="width:20%; background-color: var(--context_colors_unused);"></div>
</div>
</div>
</div>
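Each bar in the token breakdown above is sized as a percentage of the context budget. A sketch of that sizing, using the element ids from this markup; the counts object and the function itself are hypothetical stand-ins for the app's real update path:

// Sketch: size each breakdown bar as a percentage of the context budget.
// Element ids come from the markup above; the counts are hypothetical.
function updateTokenBreakdown(counts, maxContextTokens) {
    for (const [id, tokens] of Object.entries(counts)) {
        const bar = document.getElementById(id);
        if (bar) bar.style.width = (100 * tokens / maxContextTokens) + "%";
    }
}

updateTokenBreakdown({
    soft_prompt_tokens: 0,
    memory_tokens: 410,
    authors_notes_tokens: 60,
    world_info_tokens: 200,
    prompt_tokens: 330,
    game_text_tokens: 800,
    submit_tokens: 48,
}, 2048);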

143
templates/templates.html Normal file
View File

@@ -0,0 +1,143 @@
<!---------------- World Info Card ---------------------->
<div draggable="true" class="world_info_card" id="world_info_">
<div class="world_info_title_area">
<div>
<div class="world_info_image_container">
<span
class="placeholder material-icons-outlined"
context-menu="wi-img-upload-button"
tooltip="Upload a picture for this World Info entry"
>add</span>
<img class="world_info_image">
</div>
<div>
<h4
draggable="true"
ondragstart="event.preventDefault();event.stopPropagation();"
id="world_info_title_"
class="world_info_title"
onfocus="this.parentElement.parentElement.setAttribute('draggable', 'false');this.setAttribute('draggable', 'false');"
></h4>
<span class="wi-type-leadup">is a</span>
<span
class="world_info_item_type"
contenteditable="true"
data-placeholder="Person"
spellcheck="false"
></span>
</div>
</div>
<span id="world_info_delete_" class="world_info_delete">X</span>
</div>
<div class="world_info_upper_container world_info_tag_area">
<div class="world_info_wpp_toggle_area" id="world_info_wpp_toggle_area_">
Use W++
</div>
<select class="world_info_type settings_select">
<option>Keywords</option>
<option>Always On</option>
<option>Chat Character</option>
<option>Commentator</option>
</select>
</div>
<div id="world_info_tags_" class="world_info_tag_area world_info_tag_primary_area">
<div>Requires one of:</div>
</div>
<div id="world_info_secondtags_" class="world_info_tag_area world_info_tag_secondary_area">
<div>And (if present):</div>
</div>
<div class="world_info_tag_area hidden world_info_wpp_area" id="world_info_wpp_area_">
<!--this part is very sensitive to location. The JavaScript uses parentElement chains to find the enclosing WPP area for each of these inputs, so don't add elements here without also updating the JS-->
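<!-- Illustrative sketch, not part of the template: the handlers below hardcode the
     traversal depth, calling
         do_wpp(this.parentElement.parentElement.parentElement.parentElement.parentElement)
     so that five parentElement hops (td, tr, the implicit tbody, table, then this div)
     land on the world_info_wpp_area_ container. Wrapping an input in a new element
     changes the hop count and silently breaks the lookup. -->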
<select id="wpp_format_" class="settings_select wpp_format">
<option>W++</option>
<option>Square Bracket Format (SBF)</option>
</select>
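<!-- Illustrative sketch, not part of the template: with Type "Person", Name "Emily",
     and an attribute Personality = "kind" + "shy", W++ serializes roughly as
         [Person("Emily"){ Personality("kind" + "shy") }]
     while SBF flattens the same fields into bracketed key/value pairs; the exact SBF
     punctuation is an assumption here, since do_wpp() in the JS owns the real formatting. -->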
<table>
<tr>
<td>Type:</td>
<td><input draggable="true" type=text class="wpp_type" id="wpp_type_"
onchange="do_wpp(this.parentElement.parentElement.parentElement.parentElement.parentElement)"
onfocus="console.log('focus');this.parentElement.parentElement.parentElement.parentElement.parentElement.setAttribute('draggable', 'false');this.setAttribute('draggable', 'false');"
onblur="console.log('blur');this.parentElement.parentElement.parentElement.parentElement.parentElement.setAttribute('draggable', 'true');this.setAttribute('draggable', 'true');"
ondragstart="event.preventDefault();event.stopPropagation();" ></td>
</tr>
<tr>
<td>Name:</td>
<td><input draggable="true" ondragstart="event.preventDefault();event.stopPropagation();" type=text class="wpp_name" id="wpp_name_"
onchange="do_wpp(this.parentElement.parentElement.parentElement.parentElement.parentElement)"
onfocus="this.parentElement.parentElement.parentElement.parentElement.parentElement.setAttribute('draggable', 'false');this.setAttribute('draggable', 'false');"
onblur="this.parentElement.parentElement.parentElement.parentElement.parentElement.setAttribute('draggable', 'true');this.setAttribute('draggable', 'true');"></td>
</tr>
</table>
<div class="wpp_attributes_area"></div>
</div>
<div id="world_info_basic_text_" class="world_info_basic_text_area">
<div class="world_info_label_container wi-lc-text">
<span class="wi-ui-label">Text</span>
<span class="material-icons-outlined generate-button" tooltip="Generate Content">
smart_toy
</span>
</div>
<textarea draggable="true" ondragstart="event.preventDefault();event.stopPropagation();" id="world_info_entry_text_" class="world_info_text world_info_entry_text fullwidth" oninput="autoResize(this, 60)"
onfocus="this.parentElement.parentElement.setAttribute('draggable', 'false');this.setAttribute('draggable', 'false');autoResize(this, 60)"
onblur="this.parentElement.parentElement.setAttribute('draggable', 'true');this.setAttribute('draggable', 'true');this.style.height='60px';"></textarea>
</div>
<div id="world_info_entry_w++_" class="hidden">
<input draggable="true" ondragstart="event.preventDefault();event.stopPropagation();" type=text placeholder="Type"
onfocus="this.parentElement.parentElement.setAttribute('draggable', 'false');this.setAttribute('draggable', 'false');"
onblur="this.parentElement.parentElement.setAttribute('draggable', 'true');this.setAttribute('draggable', 'true');"/>
<input type=text placeholder="Name"
onfocus="this.parentElement.parentElement.setAttribute('draggable', 'false');"
onblur="this.parentElement.parentElement.setAttribute('draggable', 'true');"/>
<div>
<input draggable="true" ondragstart="event.preventDefault();event.stopPropagation();" type=text placeholder="attribute"
onfocus="this.parentElement.parentElement.parentElement.setAttribute('draggable', 'false');this.setAttribute('draggable', 'false');"
onblur="this.parentElement.parentElement.parentElement.setAttribute('draggable', 'true');this.setAttribute('draggable', 'true');"/>
<input type=text placeholder="value"/>
</div>
</div>
<div>
<div class="world_info_label_container wi-lc-comment">
<span class="wi-ui-label">Comment</span>
<padding></padding>
</div>
<textarea draggable="true" ondragstartondragstart="event.preventDefault();event.stopPropagation();" rows=1 id="world_info_comment_"
class="world_info_text world_info_comment fullwidth" oninput="autoResize(this, 60)" onfocus="autoResize(this, 60)" onblur="this.style.height='60px';"
onfocus="this.parentElement.parentElement.setAttribute('draggable', 'false');this.setAttribute('draggable', 'false');;"
onblur="this.parentElement.parentElement.setAttribute('draggable', 'true');this.setAttribute('draggable', 'true');"></textarea>
</div>
</div>
<!---------------- Phrase Bias ---------------------->
<div id="empty_bias">
<div class="bias_phrase">
<input type=text placeholder="Word or Phrase to Bias" onchange="save_bias(this);"/>
</div>
<div class="bias_score">
<div class="bias_slider">
<div class="bias_slider_bar">
<input type="range" min="-50" max="50" step="0.01" value="0" class="setting_item_input"
oninput="update_bias_slider_value(this);"
onchange="save_bias(this);"/>
</div>
<div class="bias_slider_min">-50</div>
<div class="bias_slider_cur">0</div>
<div class="bias_slider_max">50</div>
</div>
</div>
<div class="bias_comp_threshold">
<div class="bias_slider">
<div class="bias_slider_bar">
<input type="range" min="0" max="10" step="1" value="10" class="setting_item_input"
oninput="update_bias_slider_value(this);"
onchange="save_bias(this);"/>
</div>
<div class="bias_slider_min">0</div>
<div class="bias_slider_cur">10</div>
<div class="bias_slider_max">10</div>
</div>
</div>
</div>

View File

@@ -13,7 +13,7 @@ def client_data():
app = aiserver.app
#app.test_client_class = FlaskLoginClient
client_conn = app.test_client()
socketio_client = aiserver.socketio.test_client(app, flask_test_client=client_conn)
socketio_client = aiserver.socketio.test_client(app, flask_test_client=client_conn, query_string="ui=1")
#Clear out the connection message
response = socketio_client.get_received()
return (client_conn, app, socketio_client)
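# Usage sketch, not part of the test file: this assumes client_data above is
# registered as a pytest fixture. is_connected() and get_received() are real
# SocketIOTestClient methods from flask-socketio.
def test_ui2_client_connects(client_data):
    client_conn, app, socketio_client = client_data
    # The fixture already drained the connection message, so the UI2 client
    # should be connected with a list-typed (likely empty) receive queue.
    assert socketio_client.is_connected()
    assert isinstance(socketio_client.get_received(), list)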

507
themes/Darkness.css Normal file
View File

@@ -0,0 +1,507 @@
/*
Name: Darkness
Author: LightSaveUs
Version: 0.4.1
Description: A theme inspired by the AI Dungeon interface.
*/
:root {
/*----------------Palette Theme--------------------*/
--primary_palette: #000000;
--on_primary_palette: #000000;
--primary_container_palette: #000000;
--on_primary_container_palette: #000000;
--secondary_palette: #000000;
--on_secondary_palette: #000000;
--secondary_container_palette: #000000;
--on_secondary_container_palette: #000000;
--tertiary_palette: #000000;
--on_tertiary_palette: #000000;
--tertiary_container_palette: #000000;
--on_tertiary_container_palette: #000000;
--background_palette: #000000;
--on_background_palette:#000000;
--layer1_palette: #000000;
--layer2_palette: #000000;
--layer3_palette: #000000;
--layer4_palette: #000000;
--outline_palette: #000000;
--middle_palette: #000000;
--on_middle_palette: #000000;
--surface_palette: #000000;
--on_surface_palette: #000000;
/*----------------Advanced Theme--------------------*/
/*General*/
--background: #141414;
--gamescreen_background: #141414;
--input_background: #141414;
--text: #e0e0e0;
--text_to_ai_color: #e0e0e0;
--text_edit: #34dae7;
--action_mode_input: #33E978;
--statusbar_color: #1e6d6aa1;
--statusbar_text_color: #e0e0e0;
--scrollbar-color: #1a615e00;
/*Buttons*/
/*General*/
--enabled_button_text: #e0e0e0;
--enabled_button_background_color: #3c3c3c;
--enabled_button_border_color: #2e2e2fe8;
--disabled_button_text: #303030;
--disabled_button_background_color: #495762;
--disabled_button_border_color: #686c68;
/*Home Tab*/
--button_text: #e0e0e0;
--button_background: #424243;
/*Alternate Button*/
--alternate_button_text: #e0e0e0;
--alternate_button_background: #424243;
/*Sequence, AKA Gens Per Action*/
--sequence_area_background: #141414;
--sequence_background: #141414;
--sequence_text: #e0e0e0;
/*Side Menus*/
--tab_color: #141414;
--flyout_background: #1c1c1c;
--flyout_background_pinned: #1c1c1c;
--setting_background: #242424;
--setting_text: #e0e0e0;
--sample_order_select_color: #148883;
--sample_order_select_color_text: #141414;
--dropdown_text: #e0e0e0;
--dropdown_background: #202020;
--rangeslider_background_color: #1c1c1c;
--rangeslider_color: #1c1c1c;
--rangeslider_circle_color: #148883;
--help_icon: #6e6e6e;
--tooltip_text: #e0e0e0;
--tooltip_background: #3b3b3b;
--setting_category_help_text_color: #e0e0e0;
--setting_footer_border_color: #333333;
--setting_footer_text_color: #e0e0e0;
--setting_footer_background_color: #1c1c1c;
/*Palette Card*/
--palette_card_background: #242424;
--palette_card_text: #e0e0e0;
--palette_table_border: #6e6e6e;
/*World Info*/
--wi_card_border_color: #333333;
--wi_card_border_color_to_ai: #1e6d6aa1;
--wi_card_bg_color: #242424;
--wi_card_text_color: #e0e0e0;
--wi_card_tag_bg_color: #3b3b3b;
--wi_card_tag_text_color: #e0e0e0;
--wi_tag_color: #555555;
--wi_tag_text_color: #e0e0e0;
/*Popup*/
--popup_background_color: #212121;
--popup_title_bar_color: #343434;
--popup_title_bar_color_text: #e0e0e0;
--popup_item_color: #212121;
--popup_item_color_text: #e0e0e0;
--popup_hover_color: #212121;
--popup_hover_color_text: #e0e0e0;
--popup_selected_color: #212121;
--popup_selected_color_text: #85858580;
--popup_button_color: #424243;
--popup_button_color_text: #e0e0e0;
--popup_cancel_button_color: #424243;
--popup_cancel_button_color_text: #e0e0e0;
--error: #242424;
--error_text: #e0e0e0;
--error_title: #343434;
--error_title_text: #e0e0e0;
/*Context Bar Colors*/
--context_colors_memory: #242424;
--context_colors_authors_notes: #333333;
--context_colors_world_info: #484848;
--context_colors_prompt: #2d9cac;
--context_colors_game_text: #148883;
--context_colors_submit: #ffffff00;
--context_colors_unused: #ffffff11;
--context_colors_soft_prompt: #141414;
/*Parameters*/
--scrollbar-size: 6px;
--palette_card_shadow: 0;
--wi_card_shadow: 0;
--light_shadow_value: 0;
--left_menu_strong_shadow: 0;
--right_menu_light_shadow: 0;
--right_menu_strong_shadow: 0;
--radius_inputbox: 10px;
--radius_unpinned_menu: 0px;
--radius_sequence: 10px;
--radius_settings_background: 10px;
--radius_button: 5px;
--radius_alternate_button: 5px;
--radius_item_popup: 5px;
--radius_wi_card: 10px;
--radius_palette_card: 10px;
--radius_settings_button: 5px;
--tabs_rounding: 5px;
/*----------------VARIABLES--------------------*/
--flyout_menu_closed_width: 0px;
--setting_menu_closed_width_no_pins_width: 0px;
--story_options_size: 30%;
--story_pinned_areas_left:"menuicon options gamescreen lefticon"
"menuicon theme theme lefticon"
"menuicon inputrow inputrow lefticon";
--story_pinned_areas_right:"menuicon gamescreen options lefticon"
"menuicon theme theme lefticon"
"menuicon inputrow inputrow lefticon";
--story_pinned_area_widths_left: 30px var(--story_options_size) auto 30px;
--story_pinned_area_widths_right: 30px auto var(--story_options_size) 30px;
--story_pinned_areas: var(--story_pinned_areas_left);
--story_pinned_area_widths: var(--story_pinned_area_widths_left);
--font_size_adjustment: 0px;
--game_screen_font_size_adjustment: 1;
}
/*----------------Custom CSS--------------------*/
/* Boxes */
#input_text, #themetext {
border-color: #b6b6b6 !important;
}
.fullwidth {
border-radius: 5px;
border-color: #808080 !important;
}
.bias_phrase input {
border-color: #808080 !important;
border-radius:5px;
}
.sequence {
border: 1px solid #b6b6b6 !important;
}
.sequence:hover {
filter: brightness(100%) !important;
background-color: #2a2a2a;
}
.substitution-card input {
border-radius: 5px;
border-color: #535353 !important;
}
/* Buttons */
#adventure_mode {
border: 1px solid;
border-color: var(--enabled_button_border_color);
}
.action_button:hover {
filter: brightness(100%) !important;
}
.action_button:active {
color: #e0e0e0;
filter: brightness(60%) !important;
}
.action_button:focus {
color: #e0e0e0;
box-shadow: none;
outline: none !important;
}
.settings_button {
border: none !important;
}
.settings_button:active {
filter: brightness(60%) !important;
}
.settings_button:focus {
outline: none !important;
}
.btn-success {
color: #e0e0e0 !important;
border-color: #148883 !important;
background-color: #148883 !important;
}
.toggle-off.btn {
color: #e0e0e0 !important;
border-color: #242424 !important;
background-color: #242424 !important;
}
.advanced_theme:hover {
filter: brightness(100%) !important;
}
.advanced_theme:active {
filter: brightness(60%) !important;
}
.wi_add_button:active {
filter: brightness(60%) !important;
}
#new-sub-card:active {
filter: brightness(60%) !important;
}
#debug-dump:active {
filter: brightness(60%) !important;
}
/* Context Menu */
#context-menu {
border-color: #333333 !important;
background-color: #1c1c1c !important;
}
#context-menu > hr {
border-color: #333333 !important;
}
.context-menu-item:hover {
background-color:#333333 !important;
}
/* Context Viewer */
.popup-header {
background-color: #141414 !important;
}
#context-container {
background-color: #2b2b2b !important;
}
/* Finder */
.result-highlight {
background-color: #148883 !important;
}
/* Icons */
span.material-icons-outlined.cursor.search_icon {
filter: brightness(120%) !important;
}
.helpicon:hover {
color: #848484 !important;
}
.collapsable_header .material-icons-outlined {
transform: translateY(7px) translateX(4px);
}
.material-icons-outlined:hover {
filter: brightness(100%) !important;
}
.substitution-card > .card-left > .material-icons-outlined {
color: #999 !important;
}
.substitution-card > .card-left > .material-icons-outlined:hover {
color: #e0e0e0 !important;
}
.true-t + label::before {
filter: brightness(85%);
color: #999 !important;
}
.true-t:checked + label::before {
filter: brightness(100%);
color: #e0e0e0 !important;
}
/* Import */
a {
color: #148883 !important;
}
.form-control {
border: none;
border-radius: 0px;
color: #e0e0e0 !important;
transform: translateX(5px)!important;
background-color: #333333 !important;
}
/* Lines */
.story_title, hr {
border-color: #333333 !important;
}
/* Lists */
.settings_select, .var_sync_system_theme_list {
border-radius:5px !important;
border-color: #535353 !important;
}
/* Palette */
#palette_area {
border: 1px solid #535353 !important;
}
/* Popup */
.popup .model_item {
border: 1px solid #333333;
}
.popup .item {
border: 1px solid #333333;
}
.popup .action_button {
border-radius: 5px;
border-color:#535353;
}
.popup_load_cancel_button {
border-radius: 5px;
border-color:#535353 !important;
}
.popup .action_button:hover {
filter: brightness(100%) !important;
background-color: #424243 !important;
}
.popup_load_cancel_button:hover {
outline: none !important;
color: #e0e0e0 !important;
border-color: #535353 !important;
background-color: #424243 !important;
}
.popup .action_button:focus {
outline: none;
box-shadow: none;
}
.popup_load_cancel_button:focus {
outline: none;
box-shadow: none;
color: #e0e0e0 !important;
border-color: #535353 !important;
background-color: #424243 !important;
}
.popup .action_button:active {
filter: brightness(60%) !important;
}
.popup_load_cancel_button:active {
outline: none !important;
filter: brightness(60%) !important;
}
#error_message.popup .btn-primary {
color: #e0e0e0 !important;
border-color: #535353 !important;
background-color: #424243 !important;
}
/* Tabs */
.tabrow span {
border: none !important;
box-shadow: inset 0px 0px 1px !important;
}
.tabrow span::before, .tabrow span::after {
display: none;
border: none !important;
}
.tabrow span.selected {
color: #e0e0e0 !important;
background: #1c1c1c !important;
}
.tabrow span:active {
filter: brightness(60%) !important;
}
/* Text */
.rawtext {
font-family: helvetica !important;
}
.rawtext:focus {
outline: none !important;
}
/* Tooltips */
.tooltip-standard {
border: none !important;
}
/* World Info */
.world_info_text {
border-color: #242424 !important;
background-color: #3b3b3b !important;
}
.world_info_title:focus {
outline: none !important;
}
.WI_Folder_Header .title:focus {
outline: none !important;
}

323
themes/Gruvbox Dark.css Normal file
View File

@@ -0,0 +1,323 @@
/*
Name: Gruvbox Dark
Author: one-some
Description: Port of the standard dark Gruvbox theme.
*/
:root {
--gruvbox-dark0-hard: #1d2021;
--gruvbox-dark0: #282828;
--gruvbox-dark0-soft: #32302f;
--gruvbox-dark1: #3c3836;
--gruvbox-dark2: #504945;
--gruvbox-dark3: #665c54;
--gruvbox-dark4: #7c6f64;
--gruvbox-gray-245: #928374;
--gruvbox-gray-244: #928374;
--gruvbox-light0-hard: #f9f5d7;
--gruvbox-light0: #fbf1c7;
--gruvbox-light0-soft: #f2e5bc;
--gruvbox-light1: #ebdbb2;
--gruvbox-light2: #d5c4a1;
--gruvbox-light3: #bdae93;
--gruvbox-light4: #a89984;
--gruvbox-bright-red: #fb4934;
--gruvbox-bright-green: #b8bb26;
--gruvbox-bright-yellow: #fabd2f;
--gruvbox-bright-blue: #83a598;
--gruvbox-bright-purple: #d3869b;
--gruvbox-bright-aqua: #8ec07c;
--gruvbox-bright-orange: #fe8019;
--gruvbox-neutral-red: #cc241d;
--gruvbox-neutral-green: #98971a;
--gruvbox-neutral-yellow: #d79921;
--gruvbox-neutral-blue: #458588;
--gruvbox-neutral-purple: #b16286;
--gruvbox-neutral-aqua: #689d6a;
--gruvbox-neutral-orange: #d65d0e;
--gruvbox-faded-red: #9d0006;
--gruvbox-faded-green: #79740e;
--gruvbox-faded-yellow: #b57614;
--gruvbox-faded-blue: #076678;
--gruvbox-faded-purple: #8f3f71;
--gruvbox-faded-aqua: #427b58;
--gruvbox-faded-orange: #af3a03;
--main_accent: var(--gruvbox-neutral-purple);
/* Palette */
--primary_palette: #000000;
--on_primary_palette: var(--gruvbox-light0);
--primary_container_palette: #000000;
--on_primary_container_palette: #000000;
--secondary_palette: #000000;
--on_secondary_palette: #000000;
--secondary_container_palette: #000000;
--on_secondary_container_palette: #000000;
--tertiary_palette: #000000;
--on_tertiary_palette: #000000;
--tertiary_container_palette: #000000;
--on_tertiary_container_palette: #000000;
--error_palette: #000000;
--on_error_palette: #000000;
--error_container_palette: #000000;
--on_error_container_palette: #000000;
--background_palette: var(--gruvbox-dark0-soft);
--on_background_palette: var(--gruvbox-light0);
--layer1_palette: var(--gruvbox-dark1);
--layer2_palette: var(--gruvbox-dark2);
--layer3_palette: var(--gruvbox-dark3);
--layer4_palette: var(--gruvbox-dark4);
--outline_palette: #8e9099;
--middle_palette: #232328;
--on_middle_palette: #86868e;
--surface_palette: #2f2f36;
--on_surface_palette: #e3e2e6;
/* Advanced */
/* General */
--background: var(--gruvbox-dark0);
--gamescreen_background: var(--gruvbox-dark0-soft);
--input_background: var(--gruvbox-dark1);
--text: var(--gruvbox-light0);
--text_to_ai_color: var(--gruvbox-bright-blue);
--text_edit: var(--gruvbox-neutral-aqua);
--action_mode_input: #33E978;
--statusbar_color: #002d6ca1;
--statusbar_text_color: white;
--scrollbar-color: #3e536780;
/* Buttons */
--enabled_button_text: var(--gruvbox-light0);
--enabled_button_background_color: var(--gruvbox-dark1);
--enabled_button_border_color: transparent;
--disabled_button_text: var(--gruvbox-dark4);
--disabled_button_background_color: var(--gruvbox-dark1);
--disabled_button_border_color: transparent;
/* Sequence, AKA Gens Per Action */
--sequence_area_background: var(--gruvbox-dark1);
--sequence_background: var(--gruvbox-dark2);
--sequence_text: var(--gruvbox-light0);
/* Side Menus */
--tab_color: var(--gruvbox-dark2);
--flyout_background: var(--gruvbox-dark0-soft);
--flyout_background_pinned: var(--gruvbox-dark0-soft);
--setting_background: var(--gruvbox-dark1);
--setting_text: var(--gruvbox-light0);
--button_text: var(--setting_text);
--button_background: var(--gruvbox-dark2);
--sample_order_select_color: var(--gruvbox-dark2);
--sample_order_select_color_text: var(--gruvbox-light2);
--dropdown_text: var(--gruvbox-light0);
--dropdown_background: var(--gruvbox-dark0-soft);
--rangeslider_background_color: var(--gruvbox-dark0-soft);
--rangeslider_color: var(--main_accent);
--rangeslider_circle_color: var(--gruvbox-light2);
--help_icon: var(--gruvbox-light0);
--help_icon_text: #282c2c;
--tooltip_text:var(--gruvbox-light0);
--tooltip_background:var(--secondary_palette);
/* World Info */
--wi_card_border_color: var(--gruvbox-dark3);
--wi_card_border_color_to_ai:var(--gruvbox-light0);
--wi_card_bg_color: var(--gruvbox-dark0);
--wi_card_text_color:var(--gruvbox-light0);
--wi_card_tag_bg_color: var(--gruvbox-dark0);
--wi_card_tag_text_color: var(--gruvbox-light0);
--wi_tag_color: var(--gruvbox-neutral-blue);
--wi_tag_text_color:var(--gruvbox-light0);
--wi_folder_background: var(--gruvbox-dark0);
/* Popup */
--popup_background_color: var(--gruvbox-dark0);
--popup_title_bar_color: var(--gruvbox-dark2);
--popup_title_bar_color_text: var(--gruvbox-light0);
--popup_item_color: var(--gruvbox-dark1);
--popup_item_color_text: var(--gruvbox-light0);
--popup_hover_color: var(--gruvbox-dark1);
--popup_hover_color_text: var(--gruvbox-light0);
--popup_selected_color: var(--gruvbox-dark3);
--popup_selected_color_text: var(--on_secondary_container_palette);
--popup_button_color: var(--gruvbox-dark1);
--popup_button_color_text: var(--gruvbox-light0);
--popup_cancel_button_color: var(--gruvbox-dark1);
--popup_cancel_button_color_text: var(--gruvbox-light0);
/* Parameters */
--scrollbar-size: 6px;
--light_shadow_value: 0;
--left_menu_strong_shadow: 0;
--right_menu_light_shadow: 0;
--right_menu_strong_shadow: 0;
--radius_inputbox: 0px;
--radius_unpinned_menu: 0px;
--radius_sequence: 0px;
--radius_settings_background: 0px;
--radius_button: 0px;
--radius_item_popup: 0px;
--radius_wi_card: 0px;
--radius_settings_button: 0px;
--radius_alternate_button: 0px;
--radius_palette_card: 0px;
--tabs_rounding: 6px;
/* Variables */
--flyout_menu_closed_width: 0px;
--setting_menu_closed_width_no_pins_width: 0px;
--story_options_size: 30%;
--story_pinned_areas_left: "menuicon options gamescreen lefticon"
"menuicon theme theme lefticon"
"menuicon inputrow inputrow lefticon";
--story_pinned_areas_right: "menuicon gamescreen options lefticon"
"menuicon theme theme lefticon"
"menuicon inputrow inputrow lefticon";
--story_pinned_area_widths_left: 30px var(--story_options_size) auto 30px;
--story_pinned_area_widths_right: 30px auto var(--story_options_size) 30px;
--story_pinned_areas: var(--story_pinned_areas_left);
--story_pinned_area_widths: var(--story_pinned_area_widths_left);
--font_size_adjustment: 0px;
--game_screen_font_size_adjustment: 1;
/*Context Bar Colors*/
--context_colors_memory: var(--gruvbox-faded-blue);
--context_colors_authors_notes: var(--gruvbox-faded-yellow);
--context_colors_world_info: var(--gruvbox-faded-purple);
--context_colors_prompt: var(--gruvbox-faded-orange);
--context_colors_game_text: var(--gruvbox-neutral-purple);
--context_colors_submit: var(--gruvbox-light4);
--context_colors_unused: var(--gruvbox-light0);
--context_colors_soft_prompt: var(--gruvbox-neutral-aqua);
}
/* Overrides */
input, textarea{
border: 1px solid var(--gruvbox-dark3);
border-color: var(--gruvbox-dark3) !important;
}
select {
border: 1px solid var(--gruvbox-dark2);
border-radius: 0px !important;
padding: 8px;
}
hr, #token-breakdown-container, .tabrow::before {
border-color: var(--gruvbox-light4) !important;
}
.setting_item > select {
background-color: var(--gruvbox-dark1);
}
#gamescreen {
color: var(--gruvbox-light0);
margin-bottom: 7px;
}
#input_text, #themetext {
border: none !important;
}
.settings_category_area .action_button {
--enabled_button_background_color: var(--gruvbox-dark0-soft);
--disabled_button_background_color: var(--gruvbox-dark0-soft);
}
.action_button {
border: 1px solid var(--gruvbox-dark2);
border-radius: 0px;
}
.btn {
color: var(--gruvbox-light0);
}
.btn-success {
background-color: var(--gruvbox-neutral-aqua);
border-color: var(--gruvbox-neutral-aqua);
}
.settings_button {
border: 0px !important;
}
/* Firefox hacks, add this to upstream when done testing */
input[type="range"]::-moz-range-thumb {
background-color: var(--rangeslider_circle_color);
border-radius: 50%;
border-color: transparent;
appearance: none;
width: 16px;
height: 16px;
}
/* Sequence stuff */
.sequence {
border: 2px solid var(--gruvbox-dark3) !important;
}
.sequence_row {
/* TODO: Upstream */
grid-gap: 5px;
padding-right: 5px;
}
/* Tabs */
.tabrow span {
border: none !important;
border-radius: 0px !important;
box-shadow: none !important;
}
.tabrow span.selected {
background-color: var(--gruvbox-light0) !important;
}
.tabrow span::before, .tabrow span::after {
display: none;
border: none !important;
}
.menubar1, .menubar2, .menubar3 {
background-color: var(--gruvbox-light4) !important;
}
#loadmodelcontainer, #popup {
border: 2px solid var(--gruvbox-dark3);
border-radius: 0px !important;
}
.popup_load_cancel_button {
border: none;
border-radius: 0px;
}

204
themes/Material You.css Normal file
View File

@@ -0,0 +1,204 @@
/*
Name: Material You
Author: GuiAworld
Description: A theme that demonstrates the power of the Palette system. Based on Google's Material You.
*/
:root {
/*----------------Palette Theme--------------------*/
--primary_palette: #afc6ff;
--on_primary_palette: #002d6c;
--primary_container_palette: #004397;
--on_primary_container_palette: #d9e2ff;
--secondary_palette: #f7c5ee;
--on_secondary_palette: #5c0059;
--secondary_container_palette: #d663bd;
--on_secondary_container_palette: #4e0039;
--tertiary_palette: #a8d473;
--on_tertiary_palette: #1f3700;
--tertiary_container_palette: #2f4f00;
--on_tertiary_container_palette: #c3f18c;
--background_palette: #1b1b1f;
--on_background_palette:#e3e2e6;
--layer1_palette: #212126;
--layer2_palette: #28282D;
--layer3_palette: #2F2F35;
--layer4_palette: #35353D;
--outline_palette: #8e9099;
--middle_palette: #232328;
--on_middle_palette: #86868e;
--surface_palette: #2f2f36;
--on_surface_palette: #e3e2e6;
/*----------------Advanced Theme--------------------*/
/*General*/
--background: var(--background_palette);
--gamescreen_background: var(--layer2_palette);
--gamescreen_text: var(--on_background_palette);
--input_background: var(--layer3_palette);
--input_text: var(--on_background_palette);
--text: var(--on_background_palette);
--text_to_ai_color: var(--on_background_palette);
--text_edit: var(--on_background_palette);
--action_mode_input: #33E978;
--statusbar_color: #002d6ca1;
--statusbar_text_color: white;
--scrollbar-color: #9b9b9b80;
/*Buttons*/
/*General*/
--enabled_button_text: var(--on_primary_palette);
--enabled_button_background_color: var(--primary_palette);
--enabled_button_border_color: var(--on_primary_palette);
--disabled_button_text: #303030;
--disabled_button_background_color: #495762;
--disabled_button_border_color: #686c68;
/*Home Tab*/
--button_text: var(--on_primary_palette);
--button_background: var(--primary_palette);
/*Alternate Button*/
--alternate_button_text: var(--on_primary_container_palette);
--alternate_button_background: var(--primary_container_palette);
/*Sequence, AKA Gens Per Action*/
--sequence_area_background: var(--layer1_palette);
--sequence_background: var(--primary_palette);
--sequence_text: var(--on_primary_palette);
/*Side Menus*/
--tab_color: var(--primary_container_palette);
--tab_text: var(--on_background_palette);
--flyout_background: var(--layer1_palette);
--flyout_background_pinned: var(--layer1_palette);
--flyout_text: var(--on_background_palette);
--setting_background: var(--surface_palette);
--setting_text: var(--on_surface_palette);
--sample_order_select_color: var(--primary_palette);
--sample_order_select_color_text: var(--on_primary_palette);
--dropdown_text: var(--on_secondary_palette);
--dropdown_background: var(--secondary_palette);
--rangeslider_background_color: var(--on_primary_container_palette);
--rangeslider_color: var(--primary_container_palette);
--rangeslider_circle_color: var(--on_primary_palette);
--help_icon: var(--secondary_palette);
/*--tooltip_text: var(--on_secondary_palette);
--tooltip_background: var(--secondary_palette);*/
--setting_category_help_text_color: #E0E0E0;
--setting_footer_border_color: grey;
--setting_footer_text_color: var(--on_surface_palette);
--setting_footer_background_color: var(--layer1_palette);
/*Palette Card*/
--palette_card_background: var(--primary_palette);
--palette_card_text: var(--on_primary_palette);
--palette_table_border: var(--on_primary_palette);
/*World Info*/
--wi_card_border_color: var(--on_primary_palette);
--wi_card_border_color_to_ai: var(--on_secondary_palette);
--wi_card_bg_color: var(--primary_palette);
--wi_card_text_color: var(--on_primary_palette);
--wi_card_tag_bg_color: var(--primary_palette);
--wi_card_tag_text_color:var(--on_primary_palette);
--wi_tag_color: var(--primary_container_palette);
--wi_tag_text_color: var(--on_primary_container_palette);
/*Popup*/
--popup_background_color: var(--layer4_palette);
--popup_title_bar_color: var(--primary_palette);
--popup_title_bar_color_text: var(--on_primary_palette);
--popup_item_color: var(--surface_palette);
--popup_item_color_text: var(--on_surface_palette);
--popup_hover_color: var(--secondary_palette);
--popup_hover_color_text: var(--on_secondary_palette);
--popup_selected_color: var(--secondary_container_palette);
--popup_selected_color_text: var(--on_secondary_container_palette);
--popup_button_color: var(--primary_container_palette);
--popup_button_color_text: var(--on_primary_container_palette);
--popup_cancel_button_color: var(--primary_palette);
--popup_cancel_button_color_text: var(--on_primary_palette);
--error: #ffb4ab;
--error_text: #690005;
--error_title: #93000a;
--error_title_text: #ffdad6;
/*Context Bar Colors*/
--context_colors_memory: #3572A5;
--context_colors_authors_notes: #f1e05a;
--context_colors_world_info: #563d7c;
--context_colors_prompt: #e34c26;
--context_colors_game_text: #c6538c;
--context_colors_submit: #ededed;
--context_colors_unused: white;
--context_colors_soft_prompt: #000080;
/*Parameters*/
--scrollbar-size: 6px;
--light_shadow_value: 0 0px 8px 0 rgba(0, 0, 0, 0.2), 0px 0px 20px 0 rgba(0, 0, 0, 0.19);
--palette_card_shadow: 0 0px 8px 0 rgb(0 0 0 / 20%), 0px 11px 20px 8px rgb(0 0 0 / 19%);
--wi_card_shadow: 0 0px 8px 0 rgb(0 0 0 / 20%), 0px 11px 20px 8px rgb(0 0 0 / 19%);
--left_menu_light_shadow: 0 0px 8px 0 rgba(0, 0, 0, 0.2), 0px 0px 20px 0 rgba(0, 0, 0, 0.19);
--left_menu_strong_shadow: 0 0px 8px 0 rgba(0, 0, 0, 0.2), 25px 0px 20px 0 rgba(0, 0, 0, 0.19);
--right_menu_light_shadow: 0 0px 8px 0 rgba(0, 0, 0, 0.2), 0px 0px 20px 0 rgba(0, 0, 0, 0.19);
--right_menu_strong_shadow: 0 0px 8px 0 rgba(0, 0, 0, 0.2), -25px 0px 20px 0 rgba(0, 0, 0, 0.19);
--popup_shadow: 0 0 35px 20px #1b1b1f;
--radius_inputbox: 10px;
--radius_unpinned_menu: 20px;
--radius_sequence: 10px;
--radius_settings_background: 5px;
--radius_button: 5px;
--radius_alternate_button: 5px;
--radius_item_popup: 2px;
--radius_wi_card: 5px;
--radius_palette_card: 5px;
--radius_settings_button: 5px;
--tabs_rounding: 6px;
/*----------------VARIABLES--------------------*/
--flyout_menu_closed_width: 0px;
--setting_menu_closed_width_no_pins_width: 0px;
--story_options_size: 30%;
--story_pinned_areas_left: "menuicon options gamescreen lefticon"
"menuicon theme theme lefticon"
"menuicon inputrow inputrow lefticon";
--story_pinned_areas_right: "menuicon gamescreen options lefticon"
"menuicon theme theme lefticon"
"menuicon inputrow inputrow lefticon";
--story_pinned_area_widths_left: 30px var(--story_options_size) auto 30px;
--story_pinned_area_widths_right: 30px auto var(--story_options_size) 30px;
--story_pinned_areas: var(--story_pinned_areas_left);
--story_pinned_area_widths: var(--story_pinned_area_widths_left);
--font_size_adjustment: 0px;
--game_screen_font_size_adjustment: 1;
}
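/* Illustrative sketch, not part of the shipped theme: because every Advanced value
   above is derived via var() from the Palette block, a full recolor only needs to
   override the palette entries, e.g.

   :root {
       --primary_palette: #ffb4ab;
       --on_primary_palette: #690005;
   }

   and every button, tab, card, and popup that references the palette picks up the
   new colors automatically. */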

541
themes/Monochrome.css Normal file
View File

@@ -0,0 +1,541 @@
/*
Name: Monochrome
Author: LightSaveUs
Version: 0.7.1
Description: A theme inspired by the NovelAI interface.
*/
:root {
/*----------------Palette Theme--------------------*/
--primary_palette: #000000;
--on_primary_palette: #000000;
--primary_container_palette: #000000;
--on_primary_container_palette: #000000;
--secondary_palette: #000000;
--on_secondary_palette: #000000;
--secondary_container_palette: #000000;
--on_secondary_container_palette: #000000;
--tertiary_palette: #000000;
--on_tertiary_palette: #000000;
--tertiary_container_palette: #000000;
--on_tertiary_container_palette: #000000;
--background_palette: #000000;
--on_background_palette:#000000;
--layer1_palette: #000000;
--layer2_palette: #000000;
--layer3_palette: #000000;
--layer4_palette: #000000;
--outline_palette: #000000;
--middle_palette: #000000;
--on_middle_palette: #000000;
--surface_palette: #000000;
--on_surface_palette: #000000;
/*----------------Advanced Theme--------------------*/
/*General*/
--background: #252e3b;
--gamescreen_background: #111820;
--input_background: #111820;
--text: #e0e0e0;
--text_to_ai_color: #e0e0e0;
--text_edit: #9cc3ee;
--action_mode_input: #33E978;
--statusbar_color: #eedcb880;
--statusbar_text_color: #e0e0e0;
--scrollbar-color: #2f3b4bdb;
/*Buttons*/
/*General*/
--enabled_button_text: #e0e0e0;
--enabled_button_background_color: #2d3d52;
--enabled_button_border_color: #253446;
--disabled_button_text: #303030;
--disabled_button_background_color: #495762;
--disabled_button_border_color: #686c68;
/*Home Tab*/
--button_text: #e0e0e0;
--button_background: #283445;
/*Alternate Button*/
--alternate_button_text: #e0e0e0;
--alternate_button_background: #283445;
/*Sequence, AKA Gens Per Action*/
--sequence_area_background: #111820;
--sequence_background: #eedcb8;
--sequence_text: #e0e0e0;
/*Side Menus*/
--tab_color: #243047;
--flyout_background: #18222d;
--flyout_background_pinned: #18222d;
--setting_background: #273141;
--setting_text: #e0e0e0;
--sample_order_select_color: #1f2934;
--sample_order_select_color_text: #eedcb8;
--dropdown_text: #e0e0e0;
--dropdown_background: #212935;
--rangeslider_background_color: #1f2934;
--rangeslider_color: #1f2934;
--rangeslider_circle_color: #404d64;
--help_icon: #7c8389;
--tooltip_text: #e0e0e0;
--tooltip_background: #303c50;
--setting_category_help_text_color: #E0E0E0;
--setting_footer_border_color: #334552;
--setting_footer_text_color: #e0e0e0;
--setting_footer_background_color: #18222d;
/*Palette Card*/
--palette_card_background: #273141;
--palette_card_text: #e0e0e0;
--palette_table_border: #607c90;
/*World Info*/
--wi_card_border_color: #334552;
--wi_card_border_color_to_ai: #eedcb880;
--wi_card_bg_color: #223040;
--wi_card_text_color: #e0e0e0;
--wi_card_tag_bg_color: #1d2835;
--wi_card_tag_text_color: #e0e0e0;
--wi_tag_color: #283445;
--wi_tag_text_color: #e0e0e0;
/*Popup*/
--popup_background_color: #1a2530;
--popup_title_bar_color: #283445;
--popup_title_bar_color_text: #e0e0e0;
--popup_item_color: #1a2530;
--popup_item_color_text: #e0e0e0;
--popup_hover_color: #1e2733;
--popup_hover_color_text: #e0e0e0;
--popup_selected_color: #242d3c;
--popup_selected_color_text: #eedcb8;
--popup_button_color: #283445;
--popup_button_color_text: #e0e0e0;
--popup_cancel_button_color: #25364a;
--popup_cancel_button_color_text: #e0e0e0;
--error: #19242c;
--error_text: #e0e0e0;
--error_title: #25364a;
--error_title_text: #e0e0e0;
/*Context Bar Colors*/
--context_colors_memory: #04325c;
--context_colors_authors_notes: #165a62;
--context_colors_world_info: #1864a3;
--context_colors_prompt: #868686;
--context_colors_game_text: #63710e;
--context_colors_submit: #ffffff00;
--context_colors_unused: #ffffff24;
--context_colors_soft_prompt: #141414;
--context_colors_genre: #2c5c88;
/*Parameters*/
--scrollbar-size: 6px;
--palette_card_shadow: 0;
--wi_card_shadow: 0;
--light_shadow_value: 0;
--left_menu_strong_shadow: 0;
--right_menu_light_shadow: 0;
--right_menu_strong_shadow: 0;
--radius_inputbox: 2px;
--radius_unpinned_menu: 2px;
--radius_sequence: 5px;
--radius_settings_background: 2px;
--radius_button: 2px;
--radius_alternate_button: 2px;
--radius_item_popup: 2px;
--radius_wi_card: 5px;
--radius_palette_card: 5px;
--radius_settings_button: 2px;
--tabs_rounding: 2px;
/*----------------VARIABLES--------------------*/
--flyout_menu_closed_width: 0px;
--setting_menu_closed_width_no_pins_width: 0px;
--story_options_size: 30%;
--story_pinned_areas_left:"menuicon options gamescreen lefticon"
"menuicon theme theme lefticon"
"menuicon inputrow inputrow lefticon";
--story_pinned_areas_right:"menuicon gamescreen options lefticon"
"menuicon theme theme lefticon"
"menuicon inputrow inputrow lefticon";
--story_pinned_area_widths_left: 30px var(--story_options_size) auto 30px;
--story_pinned_area_widths_right: 30px auto var(--story_options_size) 30px;
--story_pinned_areas: var(--story_pinned_areas_left);
--story_pinned_area_widths: var(--story_pinned_area_widths_left);
--font_size_adjustment: 0px;
--game_screen_font_size_adjustment: 1;
}
/*----------------Custom CSS--------------------*/
/* Boxes */
.gamescreen {
border: 1px solid #2f3b4b;
}
.sequence_area {
border-top: 1px solid #2f3b4b;
border-right: 1px solid #2f3b4b;
border-bottom: 1px solid #2f3b4b;
}
#input_text, #themetext {
border-color: #2f3b4b !important;
}
.fullwidth {
border-radius: 2px;
border-color: #2f3b4b !important;
}
.bias_phrase input {
border-radius:2px !important;
border-color: #2f3b4b !important;
}
.substitution-card input {
border-radius:2px !important;
border-color: #2f3b4b !important;
}
.sequence {
color: #121c22 !important;
border: 1px solid #eedcb8 !important;
}
.sequence:hover {
filter: brightness(100%) !important;
background-color: #e0e0e0 !important;
}
/* Buttons */
#adventure_mode {
border: 1px solid;
border-color: var(--enabled_button_border_color);
}
.action_button:focus {
color: #e0e0e0;
outline: none !important;
box-shadow: none !important;
}
.settings_button {
border: none !important;
}
.settings_button:hover {
filter: brightness(120%);
}
.settings_button:focus {
outline: none !important;
}
.btn-success {
color: #121c22 !important;
border-color: #eedcb8 !important;
background-color: #eedcb8 !important;
}
.toggle-off.btn {
color: #eedcb8 !important;
border-color: #131c22 !important;
background-color: #131c22 !important;
}
.toggle-off.btn:hover {
border-color: #eedcb8 !important;
}
.wi_add_button:hover {
filter: brightness(120%);
}
.advanced_theme:hover {
filter: brightness(120%) !important;
}
.settings_button[story_gamesaved="true"] {
filter: brightness(40%);
}
#new-sub-card:hover {
filter: brightness(120%) !important;
}
#debug-dump:hover {
filter: brightness(120%) !important;
}
/* Context Menu */
#context-menu {
border-color: #2f3b4b !important;
background-color: #18222d !important;
}
#context-menu > hr {
border-color: #2f3b4b !important;
}
.context-menu-item:hover {
background-color:#2f3b4b !important;
}
/* Context Viewer */
.popup-header {
background-color: #1e2f44 !important;
}
#context-container {
background-color: #273141 !important;
}
/* Finder */
.result-highlight {
background-color: #aea186 !important;
}
/* Icons */
.material-icons-outlined.cursor:hover {
transform: scale(1.1);
filter: brightness(100%) !important;
}
span.helpicon.material-icons-outlined:hover:not(::after) {
transform: scale(1) !important;
}
.collapsable_header .material-icons-outlined {
transform: translateY(7px) translateX(4px);
}
.collapsable_header .material-icons-outlined:hover {
transform: translateY(7px) translateX(4px);
}
#context-viewer-close:hover {
transform: scale(1) !important;
}
.substitution-card > .card-left > .material-icons-outlined {
color: #999 !important;
}
.substitution-card > .card-left > .material-icons-outlined:hover {
color: #e0e0e0 !important;
}
.true-t + label::before {
filter: brightness(85%);
color: #999 !important;
}
.true-t:checked + label::before {
filter: brightness(100%);
color: #e0e0e0 !important;
}
/* Import */
a {
color: #e0e0e0 !important;
}
.form-control {
border: none;
border-radius: 0px;
color: #e0e0e0 !important;
transform: translateX(5px)!important;
background-color: #25323d !important;
}
/* Lines */
.story_title, hr {
border-color: #2f3b4b !important;
}
.rightSideMenu {
border-left: 1px solid #2f3b4b;
}
.SideMenu {
border-right: 1px solid #2f3b4b;
}
/* Lists */
.settings_select, .var_sync_system_theme_list {
border-radius:2px !important;
border-color: #2f3b4b !important;
}
.settings_footer {
border-top: 1px solid #2f3b4b !important;
}
/* Palette */
#palette_area {
border-color: #2f3b4b !important;
}
/* Popup */
.popup .model_item {
border: 1px solid #2f3b4b;
}
.popup .item {
border: 1px solid #2f3b4b;
}
.popup .item.selected {
filter: brightness(110%) !important;
}
.popup .item:hover{
filter: brightness(110%) !important;
}
.popup .action_button {
border-radius: 2px;
border-color:#2f3b4b;
background-color:#283445 !important;
}
.popup_load_cancel_button {
border-radius: 2px;
border-color:#2f3b4b !important;
background-color:#283445 !important;
}
.popup .action_button:hover {
border-color: #2f3b4b00 !important;
filter: brightness(120%) !important;
background-color: #283445 !important;
}
.popup_load_cancel_button:hover {
color: #e0e0e0 !important;
filter: brightness(120%) !important;
border-color: #2f3b4b00 !important;
background-color: #283445 !important;
}
.popup .action_button:focus {
outline: none !important;
box-shadow: none !important;
color: #e0e0e0 !important;
}
.popup_load_cancel_button:focus {
outline: none !important;
box-shadow: none !important;
color: #e0e0e0 !important;
}
/* Tabs */
.tabrow span {
border: none !important;
box-shadow: inset 0px 0px 2px #101214 !important;
}
.tabrow span::before, .tabrow span::after {
display: none;
border: none !important;
}
.tabrow span:hover {
background: #374357 !important;
}
.tabrow span.selected {
color: #eedcb8 !important;
background: #374357 !important;
}
/* Text */
.rawtext {
font-family: sans-serif !important;
}
.rawtext:focus {
outline: none !important;
}
/* Tooltips */
.tooltip-standard {
border: none !important;
}
/* World Info */
.tag:hover {
filter: brightness(120%) !important;
}
.world_info_tag_area {
filter: brightness(100%) !important;
background-color: #1d2835 !important;
}
.world_info_text {
border-color: #2f3b4b !important;
filter: brightness(100%) !important;
background-color: #1d2835 !important;
}
.world_info_title:hover {
filter: brightness(120%) !important;
}
.world_info_title:focus {
color: #eedcb8 !important;
filter: brightness(100%) !important;
outline: none !important;
}
.WI_Folder_Header .title:focus {
outline: none !important;
}

508
themes/Nostalgia.css Normal file
View File

@@ -0,0 +1,508 @@
/*
Name: Nostalgia
Author: LightSaveUs
Version: 0.4
Description: A theme inspired by the old KoboldAI interface.
*/
:root {
/*----------------Palette Theme--------------------*/
--primary_palette: #000000;
--on_primary_palette: #000000;
--primary_container_palette: #000000;
--on_primary_container_palette: #000000;
--secondary_palette: #000000;
--on_secondary_palette: #000000;
--secondary_container_palette: #000000;
--on_secondary_container_palette: #000000;
--tertiary_palette: #000000;
--on_tertiary_palette: #000000;
--tertiary_container_palette: #000000;
--on_tertiary_container_palette: #000000;
--background_palette: #000000;
--on_background_palette:#000000;
--layer1_palette: #000000;
--layer2_palette: #000000;
--layer3_palette: #000000;
--layer4_palette: #000000;
--outline_palette: #000000;
--middle_palette: #000000;
--on_middle_palette: #000000;
--surface_palette: #000000;
--on_surface_palette: #000000;
/*----------------Advanced Theme--------------------*/
/*General*/
--background: #303030;
--gamescreen_background: #262626;
--input_background: #404040;
--text: #fff;
--text_to_ai_color: #fff;
--text_edit: #cdf;
--action_mode_input: #33E978;
--statusbar_color: #3bf72380;
--statusbar_text_color: #ffffff;
--scrollbar-color: #74747400;
/*Buttons*/
/*General*/
--enabled_button_text: #ffffff;
--enabled_button_background_color: #337ab7;
--enabled_button_border_color: #2e6da4;
--disabled_button_text: #303030;
--disabled_button_background_color: #495762;
--disabled_button_border_color: #686c68;
/*Home Tab*/
--button_text: #ffffff;
--button_background: #337ab7;
/*Alternate Button*/
--alternate_button_text: #ffffff;
--alternate_button_background: #3379b7;
/*Sequence, AKA Gens Per Action*/
--sequence_area_background: #262626;
--sequence_background: #262626;
--sequence_text: #ffffff;
/*Side Menus*/
--tab_color: #4787be;
--flyout_background: #295071;
--flyout_background_pinned: #295071;
--setting_background: #295071;
--setting_text: #ffffff;
--sample_order_select_color: #688f1f;
--sample_order_select_color_text: #ffffff;
--dropdown_text: #ffffff;
--dropdown_background: #1f3a50;
--rangeslider_background_color: #ffffff;
--rangeslider_color: #0075ff;
--rangeslider_circle_color: #2285f9;
--help_icon: #ffffff;
--tooltip_text: #ffffff;
--tooltip_background: #1f2931;
--setting_category_help_text_color: #3bf723;
--setting_footer_border_color: grey;
--setting_footer_text_color: #ffffff;
--setting_footer_background_color: #242424;
/*Palette Card*/
--palette_card_background: #295071;
--palette_card_text: #ffffff;
--palette_table_border: #12324f;
/*World Info*/
--wi_card_border_color: #1e1e1e;
--wi_card_border_color_to_ai: #3bf72380;
--wi_card_bg_color: #212122;
--wi_card_text_color: #ffffff;
--wi_card_tag_bg_color: #295071;
--wi_card_tag_text_color: #ffffff;
--wi_tag_color: #337ab7;
--wi_tag_text_color: #ffffff;
/*Popup*/
--popup_background_color: #262626;
--popup_title_bar_color: #337ab7;
--popup_title_bar_color_text: #ffffff;
--popup_item_color: #262626;
--popup_item_color_text: #ffffff;
--popup_hover_color: #688f1f;
--popup_hover_color_text: #ffffff;
--popup_selected_color: #688f1f;
--popup_selected_color_text: #ffffff;
--popup_button_color: #337ab7;
--popup_button_color_text: #ffffff;
--popup_cancel_button_color: #337ab7;
--popup_cancel_button_color_text: #ffffff;
--error: #333;
--error_text: #ffffff;
--error_title: #337ab7;
--error_title_text: #ffffff;
/*Context Bar Colors*/
--context_colors_memory: #303030;
--context_colors_authors_notes: #404040;
--context_colors_world_info: #295071;
--context_colors_prompt: #3379b7;
--context_colors_game_text: #26d721ba;
--context_colors_submit: #ffffff00;
--context_colors_unused: #ffffff24;
--context_colors_soft_prompt: #262626;
/*Parameters*/
--scrollbar-size: 6px;
--palette_card_shadow: 0;
--wi_card_shadow: 0;
--light_shadow_value: 0;
--left_menu_strong_shadow: 0;
--right_menu_light_shadow: 0;
--right_menu_strong_shadow: 0;
--radius_inputbox: 5px;
--radius_unpinned_menu: 5px;
--radius_sequence: 5px;
--radius_settings_background: 5px;
--radius_button: 5px;
--radius_alternate_button: 5px;
--radius_item_popup: 5px;
--radius_wi_card: 5px;
--radius_palette_card: 5px;
--radius_settings_button: 5px;
--tabs_rounding: 5px;
/*----------------VARIABLES--------------------*/
--flyout_menu_closed_width: 0px;
--setting_menu_closed_width_no_pins_width: 0px;
--story_options_size: 30%;
--story_pinned_areas_left:"menuicon options gamescreen lefticon"
"menuicon theme theme lefticon"
"menuicon inputrow inputrow lefticon";
--story_pinned_areas_right:"menuicon gamescreen options lefticon"
"menuicon theme theme lefticon"
"menuicon inputrow inputrow lefticon";
--story_pinned_area_widths_left: 30px var(--story_options_size) auto 30px;
--story_pinned_area_widths_right: 30px auto var(--story_options_size) 30px;
--story_pinned_areas: var(--story_pinned_areas_left);
--story_pinned_area_widths: var(--story_pinned_area_widths_left);
--font_size_adjustment: 0px;
--game_screen_font_size_adjustment: 1;
}
/*----------------Custom CSS--------------------*/
/* Boxes */
#input_text, #themetext {
border-color: #999999 !important;
}
.fullwidth {
border-radius: 5px;
border-color: #cccccc !important;
}
.bias_phrase input {
border-radius: 5px;
border-color: #376590;
background-color:var(--dropdown_background);
}
.substitution-card input {
border-radius: 5px;
border-color: #376590;
background-color:var(--dropdown_background);
}
.sequence {
border-color: #959595 !important;
}
.sequence:hover {
filter: brightness(120%) !important;
}
/* Buttons */
#adventure_mode {
border: 1px solid;
border-color: var(--enabled_button_border_color);
}
.action_button:focus {
box-shadow: none;
outline: none !important;
color: #ffffff !important;
}
.settings_button {
border: none !important;
}
.settings_button:hover {
filter: brightness(80%) !important;
}
.settings_button:focus {
outline: none !important;
}
.wi_add_button:hover {
filter: brightness(80%);
}
.settings_button[story_gamesaved="true"] {
filter: brightness(40%) !important;
}
#new-sub-card:hover {
filter: brightness(120%) !important;
}
#debug-dump:hover {
filter: brightness(80%) !important;
}
/* Context Menu */
#context-menu {
border-color: #12324f !important;
background-color: #295071 !important;
}
#context-menu > hr {
border-color: #12324f !important;
}
.context-menu-item:hover {
background-color:#12324f !important;
}
/* Context Viewer */
.popup-header {
background-color: #262626 !important;
}
#context-container {
background-color: #404040 !important;
}
/* Finder */
.result-highlight {
background-color: #688f1f !important;
}
/* Flyout */
.rightSideMenu {
background-color: #242424 !important;
}
/* Icons */
.collapsable_header .material-icons-outlined {
transform: translateY(7px) translateX(4px);
}
.substitution-card > .card-left > .material-icons-outlined {
color: #999 !important;
}
.substitution-card > .card-left > .material-icons-outlined:hover {
color: #fff !important;
}
.true-t + label::before {
filter: brightness(85%);
color: #999 !important;
}
.true-t:checked + label::before {
filter: brightness(100%);
color: #fff !important;
}
/* Import */
a {
color: #3bf723 !important;
}
.form-control {
border: none;
border-radius: 0px;
color: #ffffff !important;
transform: translateX(5px)!important;
background-color: #404040 !important;
}
/* Lines */
.story_title, hr {
border-color: #12324f !important;
}
.setting_container {
border: 1px solid #12324f;
}
.setting_container_single, .setting_container_single_wide {
border: 1px solid #12324f;
}
/* Lists */
.settings_select, .var_sync_system_theme_list {
border-radius: 5px !important;
border-color: #2e6594 !important;
}
/* Palette */
#palette_area {
border: 1px solid #12324f;
}
/* Samplers Order */
.setting_container_single:hover {
background-color: #262626 !important;
}
/* Popup */
.popup .action_button {
border-radius: 5px;
border-color:#337ab7;
}
.popup_load_cancel_button {
border-radius: 5px;
border-color:#337ab7;
}
.popup .action_button:hover {
background-color: #337ab7 !important;
}
.popup_load_cancel_button:hover {
filter: brightness(80%);
color: #ffffff !important;
border-color: #337ab7 !important;
background-color: #337ab7 !important;
}
.popup .action_button:focus {
outline: none !important;
color: #ffffff;
}
.popup_load_cancel_button:focus {
outline: none !important;
box-shadow: none !important;
color: #ffffff !important;
border-color: #337ab7 !important;
background-color: #337ab7 !important;
}
.popup .popup_load_cancel {
background-color: #295071 !important;
}
#error_message.popup .btn-primary {
color: #ffffff !important;
border-color: #ffffff !important;
background-color: #337ab7 !important;
}
.oi[folder]:hover {
color: #ef2929;
}
.rename_icon:hover {
color: #fce94f !important;
}
/* Tabs */
.tabrow span {
border: none !important;
box-shadow: inset 0px 0px 2px #20344e !important;
}
.tabrow span::before, .tabrow span::after {
display: none;
border: none !important;
}
.tabrow span:hover {
background: #98bcdb !important;
}
.tabrow span.selected {
color: #ffffff !important;
background: #98bcdb !important;
}
/* Text */
.help_text {
opacity: 1.0 !important;
}
.rawtext {
font-family: helvetica !important;
}
.rawtext:focus {
outline: none !important;
}
/* Timer Bar */
div#settings_footer.settings_footer {
color: #ffffff !important;
border-color: #12324f !important;
background-color: #295071 !important;
}
/* Tooltips */
.tooltip-standard {
border: 1px solid #3379b7 !important;
border-radius: 5px;
}
/* World Info */
.tag:hover {
filter: brightness(80%);
}
.world_info_tag_area {
border-color: #999999 !important;
filter: brightness(100%) !important;
background-color: #404040 !important;
}
.world_info_text {
border: 1px solid #cccccc !important;
filter: brightness(100%) !important;
background-color: #404040 !important;
}
.world_info_title:focus {
outline: none !important;
}
.WI_Folder_Header .title:focus {
outline: none !important;
}

592
themes/Unicorn.css Normal file
View File

@@ -0,0 +1,592 @@
/*
Name: Unicorn
Author: LightSaveUs
Version: 0.4.1
Description: A theme inspired by the DreamilyAI interface.
*/
:root {
/*----------------Palette Theme--------------------*/
--primary_palette:#000000;
--on_primary_palette: #000000;
--primary_container_palette: #000000;
--on_primary_container_palette: #000000;
--secondary_palette: #000000;
--on_secondary_palette: #000000;
--secondary_container_palette: #000000;
--on_secondary_container_palette: #000000;
--tertiary_palette: #000000;
--on_tertiary_palette: #000000;
--tertiary_container_palette: #000000;
--on_tertiary_container_palette: #000000;
--background_palette: #000000;
--on_background_palette:#000000;
--layer1_palette: #000000;
--layer2_palette: #000000;
--layer3_palette: #000000;
--layer4_palette: #000000;
--outline_palette: #000000;
--middle_palette: #000000;
--on_middle_palette: #000000;
--surface_palette: #000000;
--on_surface_palette: #000000;
/*----------------Advanced Theme--------------------*/
/*General*/
--background: #f0f0f0;
--gamescreen_background: #f0f0f0;
--gamescreen_text: #3d3d3d;
--input_background: #f0f0f0;
--input_text: #3d3d3d;
--text: #f2f1f1;
--text_to_ai_color: #3d3d3d;
--text_edit: #3d3d3d;
--action_mode_input: #33E978;
--statusbar_color: #fde0e080;
--statusbar_text_color: white;
--scrollbar-color: #da4f5a00;
/*Buttons*/
/*General*/
--enabled_button_text: #f2f1f1;
--enabled_button_background_color: #e26771;
--enabled_button_border_color: #99454c;
--disabled_button_text: #303030;
--disabled_button_background_color: #495762;
--disabled_button_border_color: #686c68;
/*Home Tab*/
--button_text: #f2f1f1;
--button_background: #e26771;
/*Alternate Button*/
--alternate_button_text: #f2f1f1;
--alternate_button_background: #e26771;
/*Sequence, AKA Gens Per Action*/
--sequence_area_background: #ebebeb;
--sequence_background: #f0f0f0;
--sequence_text: #3d3d3d;
/*Side Menus*/
--tab_color: #eee;
--tab_text: #e26771;
--flyout_background: #ebebeb;
--flyout_background_pinned: #ebebeb;
--flyout_text: #3d3d3d;
--setting_background: #eed9d9;
--setting_text: #3d3d3d;
--sample_order_select_color: #e26771;
--sample_order_select_color_text: #f2f1f1;
--dropdown_text: #e26771;
--dropdown_background: #eeeeee;
--rangeslider_background_color: #828282;
--rangeslider_color: #e26771;
--rangeslider_circle_color: #f2f1f1;
--help_icon: #f6f6f6;
--tooltip_text: #3d3d3d;
--tooltip_background: #f0f0f0;
--setting_category_help_text_color: #3d3d3d;
--setting_footer_border_color: #acacac;
--setting_footer_text_color: #3d3d3d;
--setting_footer_background_color: #e1d1d1;
/*Palette Card*/
--palette_card_background: #e9cece;
--palette_card_text: #5c5c5c;
--palette_table_border: #5c5c5c;
/*World Info*/
--wi_card_border_color: #acacac;
--wi_card_border_color_to_ai: #e2677180;
--wi_card_bg_color: #ebebeb;
--wi_card_text_color: #5c5c5c;
--wi_card_tag_bg_color: #ebebeb;
--wi_card_tag_text_color: #5c5c5c;
--wi_tag_color: #ebebeb;
--wi_tag_text_color: #e26771;
/*Popup*/
--popup_background_color: #eeeeee;
--popup_title_bar_color: #eeeeee;
--popup_title_bar_color_text: #e26771;
--popup_item_color: #eeeeee;
--popup_item_color_text: #3d3d3d;
--popup_hover_color: #e9cece;
--popup_hover_color_text: #e26771;
--popup_selected_color: #e9cece;
--popup_selected_color_text: #e26771;
--popup_button_color: #e26771;
--popup_button_color_text: #eeeeee;
--popup_cancel_button_color: #eeeeee;
--popup_cancel_button_color_text: #e26771;
--error: #eeeeee;
--error_text: #3d3d3d;
--error_title: #eeeeee;
--error_title_text: #e26771;
/*Context Bar Colors*/
--context_colors_memory: #3d3d3d;
--context_colors_authors_notes: #5c5c5c;
--context_colors_world_info: #b1535a;
--context_colors_prompt: #e1636e;
--context_colors_game_text: #c89797;
--context_colors_submit: #ffffff00;
--context_colors_unused: #ffffff5c;
--context_colors_soft_prompt: #222224;
/*Parameters*/
--scrollbar-size: 6px;
--palette_card_shadow: 0;
--wi_card_shadow: 0;
--light_shadow_value: 0;
--left_menu_light_shadow: 0;
--left_menu_strong_shadow: 0;
--right_menu_light_shadow: 0;
--right_menu_strong_shadow: 0;
--popup_shadow: 0;
--radius_inputbox: 15px;
--radius_unpinned_menu: 0px;
--radius_sequence: 15px;
--radius_settings_background: 10px;
--radius_button: 10px;
--radius_alternate_button: 10px;
--radius_item_popup: 10px;
--radius_wi_card: 10px;
--radius_palette_card: 10px;
--radius_settings_button: 10px;
--tabs_rounding: 10px;
/*----------------VARIABLES--------------------*/
--flyout_menu_closed_width: 0px;
--setting_menu_closed_width_no_pins_width: 0px;
--story_options_size: 30%;
--story_pinned_areas_left:"menuicon options gamescreen lefticon"
"menuicon theme theme lefticon"
"menuicon inputrow inputrow lefticon";
--story_pinned_areas_right:"menuicon gamescreen options lefticon"
"menuicon theme theme lefticon"
"menuicon inputrow inputrow lefticon";
--story_pinned_area_widths_left: 30px var(--story_options_size) auto 30px;
--story_pinned_area_widths_right: 30px auto var(--story_options_size) 30px;
--story_pinned_areas: var(--story_pinned_areas_left);
--story_pinned_area_widths: var(--story_pinned_area_widths_left);
--font_size_adjustment: 0px;
--game_screen_font_size_adjustment: 1;
}
/*----------------Custom CSS--------------------*/
/* Boxes */
#input_text, #themetext {
border-color: #c9c9c9 !important;
}
.fullwidth {
border-radius: 10px;
border-color: #c9c9c9 !important;
}
.fullwidth:focus {
border-color: #e26771 !important;
}
.bias_phrase input {
border-color: #c9c9c9 !important;
border-radius: 10px;
}
.bias_phrase input:focus {
border-color: #e26771 !important;
}
.sequence {
border-color: #dddddd !important;
}
.sequence:hover {
filter: brightness(100%) !important;
}
.substitution-card input {
border-radius: 10px;
}
.substitution-card input:focus {
border-color: #e26771 !important;
}
/* Buttons */
#adventure_mode {
border: 1px solid;
border-color: var(--enabled_button_border_color);
}
.action_button:hover {
filter: brightness(100%) !important;
}
.action_button:active {
color: #f2f1f1 !important;
}
.action_button:focus {
box-shadow: none;
outline: none !important;
color: #f2f1f1 !important;
}
.settings_button {
border: none !important;
}
.settings_button:hover {
background-color: #ee7e7e !important;
}
.settings_button:focus {
outline: none !important;
}
.btn-success {
color: #f2f1f1 !important;
border-color: #e26771 !important;
background-color: #e26771 !important;
}
.toggle-off.btn {
color: #f2f1f1 !important;
border-color: #828282 !important;
background-color: #828282 !important;
}
.advanced_theme:hover {
filter: brightness(100%) !important;
background-color: #ee7e7e !important;
}
.wi_add_button {
color: #e26771 !important;
background: #ebebeb !important;
border: 1px solid #e26771 !important;
}
.wi_add_button:hover {
color: #ebebeb !important;
background: #e26771 !important;
border: 1px solid #e26771 !important;
}
#new-sub-card {
border-radius: 10px !important;
}
#debug-dump:hover {
color: #ee7e7e !important;
}
/* Context Menu */
#context-menu {
color: #3d3d3d;
border-color: #c9c9c9 !important;
background-color: #ebebeb !important;
}
#context-menu > hr {
border-color: #c9c9c9 !important;
}
.context-menu-item:hover {
color: #3d3d3d;
background-color: #c9c9c9 !important;
}
/* Context Viewer */
.popup-header {
background-color: #a04a51 !important;
}
#context-container {
background-color: #dedede !important;
}
/* Finder */
#finder {
color: #3d3d3d !important;
}
.result-highlight {
color: #f0f0f0;
background-color: #e26771 !important;
}
.finder-wi-block {
color: #3d3d3d;
}
/* Icons */
.pinned .menu_pin, .pinned .story_menu_pin {
color: #e26771 !important;
filter: brightness(100%) !important;
}
.collapsable_header .material-icons-outlined {
transform: translateY(7px) translateX(4px);
}
.material-icons-outlined:hover {
filter: brightness(100%) !important;
}
.substitution-card > .card-left > .material-icons-outlined {
color: #999 !important;
}
.substitution-card > .card-left > .material-icons-outlined:hover {
color: #3d3d3d !important;
}
.true-t + label::before {
filter: brightness(85%);
color: #999 !important;
}
.true-t:checked + label::before {
filter: brightness(100%);
color: #e26771 !important;
}
.oi[folder] {
color: #e26771 !important;
}
.rename_icon {
color: #e26771 !important;
}
.search_icon {
color: #e26771 !important;
}
.search_icon:hover {
border-radius: 5px;
background-color: #e0e0e0 !important;
}
/* Import */
a {
color: #5c5c5c !important;
}
a:hover, a:focus {
color: #e26771 !important;
}
.form-control {
border-radius: 0px;
color: #3d3d3d !important;
transform: translateX(5px) !important;
border: 1px solid #c9c9c9 !important;
background-color: #f0f0f0 !important;
}
/* Lines */
.story_title, hr {
border-color: #c9c9c9 !important;
}
.settings_footer {
border-color: #c9c9c9 !important;
}
/* Lists */
.settings_select, .var_sync_system_theme_list {
border-radius: 10px !important;
border-color: #c9c9c9 !important;
}
/* Palette */
#palette_area {
border: 1px solid #d9b7ba !important;
}
/* Popup */
.popup .model_item {
border: 1px solid #c9c9c9;
}
.popup .popup_load_cancel {
border: 1px solid #c9c9c9 !important;
}
.popup .item, .popup .title, .popup .popup_list_area {
border: 1px solid #c9c9c9;
}
.popup .item:hover {
border-color: #e26771;
}
.popup .item.selected {
border-color: #e26771;
}
.popup .action_button {
border-radius: 10px;
border-color: #e26771;
}
.popup_load_cancel_button {
border-radius: 10px;
}
.popup .action_button:hover {
color: #eeeeee;
border-color: #ee7e7e !important;
filter: brightness(100%) !important;
background-color: #ee7e7e !important;
}
.popup_load_cancel_button:hover {
color: #e26771;
border-color: #e26771 !important;
background-color: #e9cece !important;
}
.popup .action_button:focus {
box-shadow: none;
outline: none !important;
color: #eeeeee !important;
}
.popup_load_cancel_button:focus {
box-shadow: none;
outline: none !important;
color: #e26771 !important;
}
#error_message.popup .btn-primary {
color: #e26771 !important;
border-color: #e26771 !important;
background-color: #eeeeee !important;
}
#error_message.popup .btn-primary:hover {
background-color: #e9cece !important;
}
.btn-primary {
color: #e26771 !important;
border-color: #e26771 !important;
background-color: #eeeeee !important;
}
/* Tabs */
.tabrow span {
border: none !important;
box-shadow: inset 0px 0px 2px !important;
}
.tabrow span::before, .tabrow span::after {
display: none;
border: none !important;
}
.tabrow span:hover {
background: #e0e0e0 !important;
}
.tabrow span.selected {
color: #e26771 !important;
background: #eeeeee !important;
}
/* Text */
.rawtext {
font-family: helvetica !important;
}
.rawtext:focus {
outline: none !important;
}
/* Tooltips */
.tooltip-standard {
border: none !important;
}
/* World Info */
.tag {
border: 1px solid #e26771 !important;
}
.tag:hover {
color: #ebebeb !important;
background: #e26771 !important;
border: 1px solid #e26771 !important;
}
.world_info_tag_area {
filter: brightness(100%) !important;
}
.world_info_delete {
color: #999 !important;
}
.world_info_text {
color: #3d3d3d !important;
border-color: #acacac !important;
filter: brightness(100%) !important;
background-color: #ebebeb !important;
}
.world_info_title:focus {
outline: none !important;
color: #e26771 !important;
}
.WI_Folder_Header .title:focus {
outline: none !important;
}

View File

@@ -0,0 +1,14 @@
.within_max_length,
#story_prompt[story_prompt_in_ai="true"] {
color: inherit;
font-weight: inherit;
}
.world_info_card.used_in_game {
border: 2px outset var(--wi_card_border_color);
}
.wi_match {
font-style: normal;
pointer-events: none;
}

View File

@@ -0,0 +1,2 @@
.show_footer_icon { display: none; }
#settings_footer { display: none; }

View File

@@ -0,0 +1 @@
#token-breakdown-container { display: none; }

View File

@@ -0,0 +1 @@
#welcome_text { display: none; pointer-events: none; }

View File

@@ -54,6 +54,7 @@ import numpy as np
import collections
import _codecs
import utils
import os
from torch.nn import Module
from typing import Any, Callable, Dict, Optional, Tuple, Type, Union
@@ -93,12 +94,16 @@ class LazyTensor:
def __repr__(self):
return self.__view(repr)
def materialize(self, checkpoint: Union[zipfile.ZipFile, zipfile.ZipExtFile], map_location=None, no_grad=True) -> torch.Tensor:
def materialize(self, checkpoint: Union[zipfile.ZipFile, zipfile.ZipExtFile], map_location=None, no_grad=True, filename="pytorch_model.bin") -> torch.Tensor:
filename = os.path.basename(os.path.normpath(filename)).split('.')[0]
size = reduce(lambda x, y: x * y, self.shape, 1)
dtype = self.dtype
nbytes = size if dtype is torch.bool else size * ((torch.finfo if dtype.is_floating_point else torch.iinfo)(dtype).bits >> 3)
if isinstance(checkpoint, zipfile.ZipFile):
f = checkpoint.open(f"archive/data/{self.key}", "r")
try:
f = checkpoint.open(f"archive/data/{self.key}", "r")
except:
f = checkpoint.open(f"{filename}/data/{self.key}", "r")
f.read(self.seek_offset)
else:
f = checkpoint
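
The try/except added above exists because torch.save() names the checkpoint zip's top-level folder "archive" on most exports, but some checkpoints name it after the file itself (e.g. pytorch_model/data/0). A minimal standalone sketch of the same lookup; open_storage is an illustrative name, and KeyError is what zipfile actually raises for a missing member (the diff's bare except: catches it too):

import os
import zipfile

def open_storage(checkpoint: zipfile.ZipFile, key: str, filename: str):
    # Tensor storages live under "<root>/data/<key>" inside the zip.
    # <root> is normally "archive", but may be the checkpoint's base name.
    root = os.path.basename(os.path.normpath(filename)).split('.')[0]
    try:
        return checkpoint.open(f"archive/data/{key}", "r")
    except KeyError:
        return checkpoint.open(f"{root}/data/{key}", "r")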

View File

@@ -30,6 +30,7 @@ SOFTWARE.
import utils
import multiprocessing
import threading
from typing import Any, Callable, Dict, List, NamedTuple, Optional, Tuple, TypeVar
import progressbar
import time
@@ -51,8 +52,11 @@ from tokenizers import Tokenizer
from mesh_transformer.checkpoint import read_ckpt_lowmem
from mesh_transformer.transformer_shard import CausalTransformer, CausalTransformerShard, PlaceholderTensor
from mesh_transformer.util import to_bf16
import time
socketio = None
params: Dict[str, Any] = {}
__seed = random.randrange(2**64)
@@ -111,14 +115,33 @@ def compiling_callback() -> None:
pass
def show_spinner():
def show_spinner(queue):
bar = progressbar.ProgressBar(max_value=progressbar.UnknownLength, widgets=[progressbar.Timer(), ' ', progressbar.BouncingBar(left='[', right=']', marker='')])
i = 0
while True:
if i % 2 == 0:
queue.put(["from_server", {'cmd': 'model_load_status', 'data': "Connecting to TPU..." }, {"broadcast":True, "room":"UI_1"}])
else:
queue.put(["from_server", {'cmd': 'model_load_status', 'data': "Connecting to TPU...." }, {"broadcast":True, "room":"UI_1"}])
bar.update(i)
time.sleep(0.1)
i += 1
class Send_to_socketio(object):
def write(self, bar):
bar = bar.replace("\r", "").replace("\n", "").replace(chr(0), "")
if bar != "" and [ord(num) for num in bar] != [27, 91, 65]: #No idea why we're getting the 27, 1, 65 character set, just killing to so we can move on
#logger.info(bar)
print('\r' + bar, end='')
time.sleep(0.01)
try:
socketio.emit('from_server', {'cmd': 'model_load_status', 'data': bar.replace(" ", "&nbsp;")}, broadcast=True, room="UI_1")
except:
pass
def flush(self):
pass
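
Send_to_socketio works because tqdm (and progressbar) will render into any object passed as file= that exposes write() and flush(); the class above just forwards the rendered bar to the UI_1 socket room instead of stdout. A reduced sketch of the pattern with an illustrative ProgressRelay sink:

import time
from tqdm import tqdm

class ProgressRelay:
    # tqdm only needs write() and flush() on its output target.
    def write(self, text):
        text = text.replace("\r", "").replace("\n", "")
        if text:
            print('\r' + text, end='')  # forward anywhere: socket, log, UI
    def flush(self):
        pass

for _ in tqdm(range(50), desc="demo", file=ProgressRelay()):
    time.sleep(0.02)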
__F = TypeVar("__F", bound=Callable)
__T = TypeVar("__T")
@@ -578,7 +601,7 @@ class PenalizingCausalTransformer(CausalTransformer):
compiling_callback()
numseqs = numseqs_aux.shape[0]
# These are the tokens that we don't want the AI to ever write
badwords = jnp.array(vars.badwordsids).squeeze()
badwords = jnp.array(koboldai_vars.badwordsids).squeeze()
@hk.transform
def generate_sample(context, ctx_length):
# Give the initial context to the transformer
@@ -990,7 +1013,13 @@ def read_neox_checkpoint(state, path, config, checkpoint_shards=2):
}
tqdm_length = len(static_mapping) + config["layers"]*len(layer_mapping)
bar = tqdm(total=tqdm_length, desc="Loading from NeoX checkpoint")
if socketio is None:
bar = tqdm(total=tqdm_length, desc="Loading from NeoX checkpoint")
else:
bar = tqdm(total=tqdm_length, desc="Loading from NeoX checkpoint", file=Send_to_socketio())
koboldai_vars.status_message = "Loading TPU"
koboldai_vars.total_layers = tqdm_length
koboldai_vars.loaded_layers = 0
for checkpoint_layer in range(config["layers"] + 5):
if checkpoint_layer in (1, config["layers"] + 2):
@@ -1041,6 +1070,7 @@ def read_neox_checkpoint(state, path, config, checkpoint_shards=2):
np.zeros(config["cores_per_replica"]),
)
bar.update(1)
koboldai_vars.loaded_layers+=1
for mk, mv in state["params"].items():
for pk, pv in mv.items():
if isinstance(pv, PlaceholderTensor):
@@ -1048,8 +1078,9 @@ def read_neox_checkpoint(state, path, config, checkpoint_shards=2):
print("\n\nERROR: " + error, file=sys.stderr)
raise RuntimeError(error)
koboldai_vars.status_message = ""
def load_model(path: str, driver_version="tpu_driver0.1_dev20210607", hf_checkpoint=False, **kwargs) -> None:
def load_model(path: str, driver_version="tpu_driver0.1_dev20210607", hf_checkpoint=False, socketio_queue=None, initial_load=False, logger=None, **kwargs) -> None:
global thread_resources_env, seq, tokenizer, network, params, pad_token_id
if "pad_token_id" in kwargs:
@@ -1057,8 +1088,8 @@ def load_model(path: str, driver_version="tpu_driver0.1_dev20210607", hf_checkpo
elif "eos_token_id" in kwargs:
pad_token_id = kwargs["eos_token_id"]
if not hasattr(vars, "sampler_order") or not vars.sampler_order:
vars.sampler_order = utils.default_sampler_order.copy()
if not hasattr(koboldai_vars, "sampler_order") or not koboldai_vars.sampler_order:
koboldai_vars.sampler_order = utils.default_sampler_order.copy()
default_params = {
"compat": "j",
@@ -1077,7 +1108,7 @@ def load_model(path: str, driver_version="tpu_driver0.1_dev20210607", hf_checkpo
}
params = kwargs
if vars.model == "TPUMeshTransformerGPTNeoX":
if koboldai_vars.model == "TPUMeshTransformerGPTNeoX":
default_params = {
"compat": "neox",
"layers": 44,
@@ -1096,9 +1127,9 @@ def load_model(path: str, driver_version="tpu_driver0.1_dev20210607", hf_checkpo
# Try to convert HF config.json to MTJ config
if hf_checkpoint:
spec_path = os.path.join("maps", vars.model_type + ".json")
spec_path = os.path.join("maps", koboldai_vars.model_type + ".json")
if not os.path.isfile(spec_path):
raise NotImplementedError(f"Unsupported model type {repr(vars.model_type)}")
raise NotImplementedError(f"Unsupported model type {repr(koboldai_vars.model_type)}")
with open(spec_path) as f:
lazy_load_spec = json.load(f)
@@ -1153,7 +1184,7 @@ def load_model(path: str, driver_version="tpu_driver0.1_dev20210607", hf_checkpo
params["transposed_linear"] = True
# Load tokenizer
if vars.model == "TPUMeshTransformerGPTNeoX":
if koboldai_vars.model == "TPUMeshTransformerGPTNeoX":
tokenizer = Tokenizer.from_file(os.path.join(path, "20B_tokenizer.json"))
def new_encode(old_encode):
def encode(s, *args, **kwargs):
@@ -1172,8 +1203,8 @@ def load_model(path: str, driver_version="tpu_driver0.1_dev20210607", hf_checkpo
jax.host_id = jax.process_index
print("Connecting to your Colab instance's TPU", flush=True)
spinner = multiprocessing.Process(target=show_spinner, args=())
spinner.start()
old_ai_busy = koboldai_vars.aibusy
koboldai_vars.status_message = "Connecting to TPU"
if os.environ.get('COLAB_TPU_ADDR', '') != '':
tpu_address = os.environ['COLAB_TPU_ADDR'] # Colab
else:
@@ -1181,19 +1212,49 @@ def load_model(path: str, driver_version="tpu_driver0.1_dev20210607", hf_checkpo
tpu_address = tpu_address.replace("grpc://", "")
tpu_address_without_port = tpu_address.split(':', 1)[0]
url = f'http://{tpu_address_without_port}:8475/requestversion/{driver_version}'
requests.post(url)
def check_status(url, queue):
requests.post(url)
queue.put("Done")
queue = multiprocessing.Queue()
spinner = multiprocessing.Process(target=check_status, args=(url, queue))
spinner.start()
i = 0
bar = progressbar.ProgressBar(max_value=progressbar.UnknownLength, widgets=[progressbar.Timer(), ' ', progressbar.BouncingBar(left='[', right=']', marker='')])
while True:
if not queue.empty():
queue.get()
break
if i % 20 == 0:
# socketio.emit("from_server", {'cmd': 'model_load_status', 'data': "Connecting to TPU..." }, broadcast=True, room="UI_1")
socketio_queue.put(["from_server", {'cmd': 'model_load_status', 'data': "Connecting to TPU..." }, {"broadcast":True, "room":"UI_1"}])
elif i % 10 == 0:
# socketio.emit("from_server", {'cmd': 'model_load_status', 'data': "Connecting to TPU...." }, broadcast=True, room="UI_1")
socketio_queue.put(["from_server", {'cmd': 'model_load_status', 'data': "Connecting to TPU...." }, {"broadcast":True, "room":"UI_1"}])
bar.update(i)
time.sleep(0.1)
i += 1
config.FLAGS.jax_xla_backend = "tpu_driver"
config.FLAGS.jax_backend_target = "grpc://" + tpu_address
spinner.terminate()
koboldai_vars.aibusy = old_ai_busy
print()
start_time = time.time()
cores_per_replica = params["cores_per_replica"]
seq = params["seq"]
params["optimizer"] = _DummyOptimizer()
print("to line 1246 {}s".format(time.time()-start_time))
start_time = time.time()
mesh_shape = (1, cores_per_replica)
devices = np.array(jax.devices()[:cores_per_replica]).reshape(mesh_shape)
devices = jax.devices()
devices = np.array(devices[:cores_per_replica]).reshape(mesh_shape)
thread_resources_env = maps.ResourceEnv(maps.Mesh(devices, ('dp', 'mp')), ())
maps.thread_resources.env = thread_resources_env
if initial_load:
logger.message(f"KoboldAI has finished loading and is available at the following link for UI 1: {koboldai_vars.cloudflare_link}")
logger.message(f"KoboldAI has finished loading and is available at the following link for UI 2: {koboldai_vars.cloudflare_link}/new_ui")
global shard_xmap, batch_xmap
shard_xmap = __shard_xmap()
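
The connection rework above swaps a blocking requests.post for a child process that signals completion through a multiprocessing.Queue, letting the parent keep the spinner and UI status updating while the TPU driver request is in flight. A trimmed sketch of the pattern; fetch_and_signal and the URL are illustrative:

import multiprocessing
import time
import progressbar
import requests

def fetch_and_signal(url, queue):
    requests.post(url)   # the long-running request (e.g. driver version)
    queue.put("Done")    # tell the parent we are finished

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    worker = multiprocessing.Process(target=fetch_and_signal,
                                     args=("http://tpu.example:8475/requestversion/x", queue))
    worker.start()
    bar = progressbar.ProgressBar(max_value=progressbar.UnknownLength,
                                  widgets=[progressbar.Timer(), ' ',
                                           progressbar.BouncingBar(left='[', right=']')])
    i = 0
    while queue.empty():  # poll; the parent never blocks on the POST itself
        bar.update(i)
        time.sleep(0.1)
        i += 1
    worker.join()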
@@ -1201,19 +1262,19 @@ def load_model(path: str, driver_version="tpu_driver0.1_dev20210607", hf_checkpo
global badwords
# These are the tokens that we don't want the AI to ever write
badwords = jnp.array(vars.badwordsids).squeeze()
badwords = jnp.array(koboldai_vars.badwordsids).squeeze()
if not path.endswith("/"):
path += "/"
network = PenalizingCausalTransformer(params, dematerialized=True)
if not hf_checkpoint and vars.model != "TPUMeshTransformerGPTNeoX":
if not hf_checkpoint and koboldai_vars.model != "TPUMeshTransformerGPTNeoX":
network.state = read_ckpt_lowmem(network.state, path, devices.shape[1])
#network.state = network.move_xmap(network.state, np.zeros(cores_per_replica))
return
if vars.model == "TPUMeshTransformerGPTNeoX":
if koboldai_vars.model == "TPUMeshTransformerGPTNeoX":
print("\n\n\nThis model has ", f"{hk.data_structures.tree_size(network.state['params']):,d}".replace(",", " "), " parameters.\n")
read_neox_checkpoint(network.state, path, params)
return
@@ -1244,6 +1305,7 @@ def load_model(path: str, driver_version="tpu_driver0.1_dev20210607", hf_checkpo
from tqdm.auto import tqdm
import functools
def callback(model_dict, f, **_):
if callback.nested:
return
@@ -1251,6 +1313,7 @@ def load_model(path: str, driver_version="tpu_driver0.1_dev20210607", hf_checkpo
with zipfile.ZipFile(f, "r") as z:
try:
last_storage_key = None
zipfolder = os.path.basename(os.path.normpath(f)).split('.')[0]
f = None
current_offset = 0
if utils.current_shard == 0:
@@ -1261,7 +1324,13 @@ def load_model(path: str, driver_version="tpu_driver0.1_dev20210607", hf_checkpo
num_tensors = len(utils.get_sharded_checkpoint_num_tensors(utils.from_pretrained_model_name, utils.from_pretrained_index_filename, **utils.from_pretrained_kwargs))
else:
num_tensors = len(model_dict)
utils.bar = tqdm(total=num_tensors, desc="Loading model tensors")
if socketio is None:
utils.bar = tqdm(total=num_tensors, desc="Loading model tensors")
else:
utils.bar = tqdm(total=num_tensors, desc="Loading model tensors", file=Send_to_socketio())
koboldai_vars.status_message = "Loading model"
koboldai_vars.loaded_layers = 0
koboldai_vars.total_layers = num_tensors
if utils.num_shards is not None:
utils.current_shard += 1
@@ -1276,6 +1345,7 @@ def load_model(path: str, driver_version="tpu_driver0.1_dev20210607", hf_checkpo
if model_spec_key is None:
model_dict[key] = torch.empty(model_dict[key].shape, dtype=model_dict[key].dtype, device="meta")
utils.bar.update(1)
koboldai_vars.loaded_layers += 1
continue
storage_key = model_dict[key].key
@@ -1283,7 +1353,10 @@ def load_model(path: str, driver_version="tpu_driver0.1_dev20210607", hf_checkpo
last_storage_key = storage_key
if isinstance(f, zipfile.ZipExtFile):
f.close()
f = z.open(f"archive/data/{storage_key}")
try:
f = z.open(f"archive/data/{storage_key}")
except:
f = z.open(f"{zipfolder}/data/{storage_key}")
current_offset = 0
if current_offset != model_dict[key].seek_offset:
f.read(model_dict[key].seek_offset - current_offset)
@@ -1328,7 +1401,12 @@ def load_model(path: str, driver_version="tpu_driver0.1_dev20210607", hf_checkpo
)).copy(),
np.empty(params["cores_per_replica"]),
)
koboldai_vars.loaded_layers += 1
try:
time.sleep(0.01)
except:
pass
utils.bar.update(1)
if utils.num_shards is not None and utils.current_shard < utils.num_shards:
@@ -1355,60 +1433,61 @@ def load_model(path: str, driver_version="tpu_driver0.1_dev20210607", hf_checkpo
if utils.num_shards is None or utils.current_shard >= utils.num_shards:
utils.bar.close()
utils.bar = None
koboldai_vars.status_message = ""
callback.nested = False
if isinstance(f, zipfile.ZipExtFile):
f.close()
callback.nested = False
if os.path.isdir(vars.model.replace('/', '_')):
if os.path.isdir(koboldai_vars.model.replace('/', '_')):
import shutil
shutil.move(vars.model.replace('/', '_'), "models/{}".format(vars.model.replace('/', '_')))
shutil.move(koboldai_vars.model.replace('/', '_'), "models/{}".format(koboldai_vars.model.replace('/', '_')))
print("\n", flush=True)
with torch_lazy_loader.use_lazy_torch_load(callback=callback, dematerialized_modules=True):
if(os.path.isdir(vars.custmodpth)):
if(os.path.isdir(koboldai_vars.custmodpth)):
try:
tokenizer = AutoTokenizer.from_pretrained(vars.custmodpth, revision=vars.revision, cache_dir="cache", use_fast=False)
tokenizer = AutoTokenizer.from_pretrained(koboldai_vars.custmodpth, revision=koboldai_vars.revision, cache_dir="cache", use_fast=False)
except Exception as e:
try:
tokenizer = AutoTokenizer.from_pretrained(vars.custmodpth, revision=vars.revision, cache_dir="cache")
tokenizer = AutoTokenizer.from_pretrained(koboldai_vars.custmodpth, revision=koboldai_vars.revision, cache_dir="cache")
except Exception as e:
try:
tokenizer = GPT2Tokenizer.from_pretrained(vars.custmodpth, revision=vars.revision, cache_dir="cache")
tokenizer = GPT2Tokenizer.from_pretrained(koboldai_vars.custmodpth, revision=koboldai_vars.revision, cache_dir="cache")
except Exception as e:
tokenizer = GPT2Tokenizer.from_pretrained("gpt2", revision=vars.revision, cache_dir="cache")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2", revision=koboldai_vars.revision, cache_dir="cache")
try:
model = AutoModelForCausalLM.from_pretrained(vars.custmodpth, revision=vars.revision, cache_dir="cache")
model = AutoModelForCausalLM.from_pretrained(koboldai_vars.custmodpth, revision=koboldai_vars.revision, cache_dir="cache")
except Exception as e:
model = GPTNeoForCausalLM.from_pretrained(vars.custmodpth, revision=vars.revision, cache_dir="cache")
elif(os.path.isdir("models/{}".format(vars.model.replace('/', '_')))):
model = GPTNeoForCausalLM.from_pretrained(koboldai_vars.custmodpth, revision=koboldai_vars.revision, cache_dir="cache")
elif(os.path.isdir("models/{}".format(koboldai_vars.model.replace('/', '_')))):
try:
tokenizer = AutoTokenizer.from_pretrained("models/{}".format(vars.model.replace('/', '_')), revision=vars.revision, cache_dir="cache", use_fast=False)
tokenizer = AutoTokenizer.from_pretrained("models/{}".format(koboldai_vars.model.replace('/', '_')), revision=koboldai_vars.revision, cache_dir="cache", use_fast=False)
except Exception as e:
try:
tokenizer = AutoTokenizer.from_pretrained("models/{}".format(vars.model.replace('/', '_')), revision=vars.revision, cache_dir="cache")
tokenizer = AutoTokenizer.from_pretrained("models/{}".format(koboldai_vars.model.replace('/', '_')), revision=koboldai_vars.revision, cache_dir="cache")
except Exception as e:
try:
tokenizer = GPT2Tokenizer.from_pretrained("models/{}".format(vars.model.replace('/', '_')), revision=vars.revision, cache_dir="cache")
tokenizer = GPT2Tokenizer.from_pretrained("models/{}".format(koboldai_vars.model.replace('/', '_')), revision=koboldai_vars.revision, cache_dir="cache")
except Exception as e:
tokenizer = GPT2Tokenizer.from_pretrained("gpt2", revision=vars.revision, cache_dir="cache")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2", revision=koboldai_vars.revision, cache_dir="cache")
try:
model = AutoModelForCausalLM.from_pretrained("models/{}".format(vars.model.replace('/', '_')), revision=vars.revision, cache_dir="cache")
model = AutoModelForCausalLM.from_pretrained("models/{}".format(koboldai_vars.model.replace('/', '_')), revision=koboldai_vars.revision, cache_dir="cache")
except Exception as e:
model = GPTNeoForCausalLM.from_pretrained("models/{}".format(vars.model.replace('/', '_')), revision=vars.revision, cache_dir="cache")
model = GPTNeoForCausalLM.from_pretrained("models/{}".format(koboldai_vars.model.replace('/', '_')), revision=koboldai_vars.revision, cache_dir="cache")
else:
try:
tokenizer = AutoTokenizer.from_pretrained(vars.model, revision=vars.revision, cache_dir="cache", use_fast=False)
tokenizer = AutoTokenizer.from_pretrained(koboldai_vars.model, revision=koboldai_vars.revision, cache_dir="cache", use_fast=False)
except Exception as e:
try:
tokenizer = AutoTokenizer.from_pretrained(vars.model, revision=vars.revision, cache_dir="cache")
tokenizer = AutoTokenizer.from_pretrained(koboldai_vars.model, revision=koboldai_vars.revision, cache_dir="cache")
except Exception as e:
try:
tokenizer = GPT2Tokenizer.from_pretrained(vars.model, revision=vars.revision, cache_dir="cache")
tokenizer = GPT2Tokenizer.from_pretrained(koboldai_vars.model, revision=koboldai_vars.revision, cache_dir="cache")
except Exception as e:
tokenizer = GPT2Tokenizer.from_pretrained("gpt2", revision=vars.revision, cache_dir="cache")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2", revision=koboldai_vars.revision, cache_dir="cache")
try:
model = AutoModelForCausalLM.from_pretrained(vars.model, revision=vars.revision, cache_dir="cache")
model = AutoModelForCausalLM.from_pretrained(koboldai_vars.model, revision=koboldai_vars.revision, cache_dir="cache")
except Exception as e:
model = GPTNeoForCausalLM.from_pretrained(vars.model, revision=vars.revision, cache_dir="cache")
model = GPTNeoForCausalLM.from_pretrained(koboldai_vars.model, revision=koboldai_vars.revision, cache_dir="cache")
#network.state = network.move_xmap(network.state, np.zeros(cores_per_replica))
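
The repeated try/except ladders above implement a graduated fallback: the model's own slow tokenizer first, then an auto-resolved one, then GPT2Tokenizer, and finally the stock gpt2 vocabulary. The same chain reads more compactly as a loop; a sketch, with load_tokenizer_with_fallbacks as an illustrative name:

from transformers import AutoTokenizer, GPT2Tokenizer

def load_tokenizer_with_fallbacks(path, revision=None, cache_dir="cache"):
    attempts = [
        lambda: AutoTokenizer.from_pretrained(path, revision=revision,
                                              cache_dir=cache_dir, use_fast=False),
        lambda: AutoTokenizer.from_pretrained(path, revision=revision,
                                              cache_dir=cache_dir),
        lambda: GPT2Tokenizer.from_pretrained(path, revision=revision,
                                              cache_dir=cache_dir),
        # Last resort: the stock gpt2 vocabulary.
        lambda: GPT2Tokenizer.from_pretrained("gpt2", revision=revision,
                                              cache_dir=cache_dir),
    ]
    last_error = None
    for attempt in attempts:
        try:
            return attempt()
        except Exception as e:
            last_error = e
    raise last_error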

View File

@@ -51,6 +51,7 @@ git remote add origin %origin%
git fetch --all
git checkout %branch% -f
git reset --hard origin/%branch%
git submodule update --init --recursive
IF %M%==1 umamba.exe install --no-shortcuts -r K:\python\ -n base -f "%~dp0\environments\huggingface.yml" -y --always-copy
IF %M%==2 umamba.exe install --no-shortcuts -r miniconda3 -n base -f environments\huggingface.yml -y --always-copy
IF %M%==3 umamba.exe install --no-shortcuts -r B:\python\ -n base -f "%~dp0\environments\huggingface.yml" -y --always-copy

View File

@@ -49,7 +49,7 @@ local example_config = [[# Phrase bias
# case-sensitive, with a leading space, to appear in the output, with the
# bias increasing as each consecutive token in that phrase appears):
# 7, 25.4, 1309, 262, 3809, 286, 1842, 1011, 345, 2440
#
# #
]]
-- If config file is empty, write example config

View File

@@ -26,7 +26,7 @@ try:
except ImportError:
HAS_ACCELERATE = False
vars = None
koboldai_vars = None
args = None
num_shards: Optional[int] = None
current_shard = 0
@@ -95,14 +95,13 @@ def trimincompletesentence(txt):
#
#==================================================================#
def replaceblanklines(txt):
txt = txt.replace("\n\n", "\n")
return txt
return txt.replace("\n\n", "\n")
#==================================================================#
#
#==================================================================#
def removespecialchars(txt, vars=None):
if vars is None or vars.actionmode == 0:
def removespecialchars(txt, koboldai_vars=None):
if koboldai_vars is None or koboldai_vars.actionmode == 0:
txt = re.sub(r"[#/@%<>{}+=~|\^]", "", txt)
else:
txt = re.sub(r"[#/@%{}+=~|\^]", "", txt)
@@ -111,38 +110,38 @@ def removespecialchars(txt, vars=None):
#==================================================================#
# If the next action follows a sentence closure, add a space
#==================================================================#
def addsentencespacing(txt, vars):
def addsentencespacing(txt, koboldai_vars):
# Don't add sentence spacing if submission is empty or starts with whitespace
if(len(txt) == 0 or len(txt) != len(txt.lstrip())):
return txt
# Get last character of last action
if(len(vars.actions) > 0):
if(len(vars.actions[vars.actions.get_last_key()]) > 0):
action = vars.actions[vars.actions.get_last_key()]
if(len(koboldai_vars.actions) > 0):
if(len(koboldai_vars.actions[koboldai_vars.actions.get_last_key()]) > 0):
action = koboldai_vars.actions[koboldai_vars.actions.get_last_key()]
lastchar = action[-1] if len(action) else ""
else:
# Last action is blank, this should never happen, but
# since it did let's bail out.
return txt
else:
action = vars.prompt
action = koboldai_vars.prompt
lastchar = action[-1] if len(action) else ""
if(lastchar != " "):
txt = " " + txt
return txt
def singlelineprocessing(txt, vars):
txt = vars.regex_sl.sub('', txt)
if(len(vars.actions) > 0):
if(len(vars.actions[vars.actions.get_last_key()]) > 0):
action = vars.actions[vars.actions.get_last_key()]
def singlelineprocessing(txt, koboldai_vars):
txt = koboldai_vars.regex_sl.sub('', txt)
if(len(koboldai_vars.actions) > 0):
if(len(koboldai_vars.actions[-1]) > 0):
action = koboldai_vars.actions[-1]
lastchar = action[-1] if len(action) else ""
else:
# Last action is blank, this should never happen, but
# since it did let's bail out.
return txt
else:
action = vars.prompt
action = koboldai_vars.prompt
lastchar = action[-1] if len(action) else ""
if(lastchar != "\n"):
txt = txt + "\n"
@@ -160,14 +159,14 @@ def cleanfilename(filename):
# Newline substitution for fairseq models
#==================================================================#
def encodenewlines(txt):
if(vars.newlinemode == "s"):
if(koboldai_vars.newlinemode == "s"):
return txt.replace('\n', "</s>")
return txt
def decodenewlines(txt):
if(vars.newlinemode == "s"):
if(koboldai_vars.newlinemode == "s"):
return txt.replace("</s>", '\n')
if(vars.newlinemode == "ns"):
if(koboldai_vars.newlinemode == "ns"):
return txt.replace("</s>", '')
return txt
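
The newline helpers substitute the </s> sentinel for literal newlines when newlinemode is "s", and strip the sentinel outright in "ns" mode. A tiny round-trip illustration, with the mode passed explicitly rather than read from koboldai_vars:

def encode_newlines(txt, newlinemode):
    return txt.replace('\n', "</s>") if newlinemode == "s" else txt

def decode_newlines(txt, newlinemode):
    if newlinemode == "s":
        return txt.replace("</s>", '\n')
    if newlinemode == "ns":
        return txt.replace("</s>", '')
    return txt

assert decode_newlines(encode_newlines("a\nb", "s"), "s") == "a\nb"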
@@ -187,11 +186,11 @@ def _download_with_aria2(aria2_config: str, total_length: int, directory: str =
def write(self, bar):
bar = bar.replace("\r", "").replace("\n", "")
if bar != "":
if bar != "" and [ord(num) for num in bar] != [27, 91, 65]: #No idea why we're getting the 27, 1, 65 character set, just killing to so we can move on
try:
print('\r' + bar, end='')
try:
emit('from_server', {'cmd': 'model_load_status', 'data': bar.replace(" ", "&nbsp;")}, broadcast=True)
socketio.emit('from_server', {'cmd': 'model_load_status', 'data': bar.replace(" ", "&nbsp;")}, broadcast=True, room="UI_1")
except:
pass
eventlet.sleep(seconds=0)
@@ -201,8 +200,9 @@ def _download_with_aria2(aria2_config: str, total_length: int, directory: str =
pass
import transformers
aria2_port = 6799 if vars is None else vars.aria2_port
aria2_port = 6799 if koboldai_vars is None else koboldai_vars.aria2_port
lengths = {}
path = None
s = requests.Session()
s.mount("http://", requests.adapters.HTTPAdapter(max_retries=requests.adapters.Retry(total=120, backoff_factor=1)))
bar = None
@@ -220,11 +220,10 @@ def _download_with_aria2(aria2_config: str, total_length: int, directory: str =
if bar is not None:
bar.n = bar.total
bar.close()
koboldai_vars.downloaded_chunks = bar.total
p.terminate()
done = True
break
if bar is None:
bar = tqdm(total=total_length, desc=f"[aria2] Downloading model", unit="B", unit_scale=True, unit_divisor=1000, file=Send_to_socketio())
visited = set()
for x in r:
filename = x["files"][0]["path"]
@@ -233,16 +232,24 @@ def _download_with_aria2(aria2_config: str, total_length: int, directory: str =
for k, v in lengths.items():
if k not in visited:
lengths[k] = (v[1], v[1])
bar.n = sum(v[0] for v in lengths.values())
if bar is None:
bar = tqdm(total=total_length, desc=f"[aria2] Downloading model", unit="B", unit_scale=True, unit_divisor=1000, file=Send_to_socketio())
koboldai_vars.status_message = "Download Model"
koboldai_vars.total_download_chunks = sum(v[1] for v in lengths.values())
koboldai_vars.downloaded_chunks = sum(v[0] for v in lengths.values())
bar.n = koboldai_vars.downloaded_chunks
bar.update()
time.sleep(0.1)
koboldai_vars.status_message = ""
path = f.name
except Exception as e:
p.terminate()
raise e
finally:
try:
os.remove(path)
if path is not None:
if os.path.exists(path):
os.remove(path)
except OSError:
pass
code = p.wait()
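
The final hunk hardens the cleanup: path starts as None and is only assigned once the aria2 config file exists, so removal must be guarded on both counts. The idiom in isolation:

import os

def remove_if_exists(path):
    # path may still be None if the download failed before the file was made.
    try:
        if path is not None and os.path.exists(path):
            os.remove(path)
    except OSError:
        pass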