Merge pull request #28 from henk717/united

Merge united
This commit is contained in:
Llama
2023-04-26 10:10:09 -07:00
committed by GitHub
14 changed files with 203 additions and 132 deletions

View File

@@ -48,7 +48,7 @@ If you would like to play KoboldAI online for free on a powerful computer you ca
Each edition features different models and requires different hardware to run; this means that if you are unable to obtain a TPU or a GPU you might still be able to use the other version. The models you can use are listed underneath each edition. To open a Colab, click the big link featuring the edition's name.
## [TPU Edition Model Descriptions](https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/TPU.ipynb)
## [Models the TPU can run:](https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/TPU.ipynb)
| Model | Style | Description |
| --- | --- | --- |
@@ -64,21 +64,21 @@ Each edition features different models and requires different hardware to run, t
| [Fairseq Dense](https://huggingface.co/KoboldAI/fairseq-dense-13B) | Generic | Trained by Facebook researchers, this model stems from the MoE research project within Fairseq. This particular version has been converted by us for use in KoboldAI. It is known to be on par with the larger 20B model from EleutherAI and considered better for pop culture and language tasks. Because the model has never seen a new line (enter) it may perform worse on formatting and paragraphing. Compared to other models the dataset focuses primarily on literature and contains little else. |
| [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B) by EleutherAI | Generic | This model serves as the basis for most other 6B models (some being based on Fairseq Dense instead). Being trained on the Pile and not biased towards anything in particular, it is suitable for a variety of tasks such as writing, Q&A and coding tasks. You will likely get better results with larger generic models or finetuned models. |
## [GPU Edition Model Descriptions](https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/GPU.ipynb)
## [Models the Colab GPU can run:](https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/GPU.ipynb)
| Model | Style | Description |
| --- | --- | --- |
| [Nerys](https://huggingface.co/KoboldAI/fairseq-dense-2.7B-Nerys) by Mr Seeker | Novel/Adventure | Nerys is a hybrid model based on Pike (a newer Janeway); on top of the Pike dataset you also get some light novels, Adventure mode support and a little bit of Shinen thrown into the mix. The end result is a very diverse model that is heavily biased towards SFW novel writing, but one that can go beyond its novel training and make for an excellent adventure model too. Adventure mode is best played from a second person perspective, but can be played in first or third person as well. Novel writing is best done from the first or third person. |
| [Erebus](https://huggingface.co/KoboldAI/OPT-2.7B-Erebus) by Mr Seeker | NSFW | Erebus is our community's flagship NSFW model: a combination of multiple large datasets that include Literotica, Shinen and erotic novels from Nerys. Featuring thorough tagging support, it covers the vast majority of erotic writing styles. This model is capable of replacing both the Lit and Shinen models in terms of content and style and has been well received as (one of) the best NSFW models out there. If you wish to use this model for commercial or non-research usage we recommend choosing the 20B version, as that one is not subject to the restrictive OPT license. |
| [Janeway](https://huggingface.co/KoboldAI/GPT-Neo-2.7B-Janeway) by Mr Seeker | Novel | Janeway is a model created from Picard's dataset combined with a brand new collection of ebooks. This model is trained on 20% more content than Picard and has been trained on literature from various genres. Although the model is mainly focused on SFW, romantic scenes might involve a degree of nudity. |
| [Picard](https://huggingface.co/KoboldAI/GPT-Neo-2.7B-Picard) by Mr Seeker | Novel | Picard is a model trained for SFW Novels based on Neo 2.7B. It is focused on Novel style writing without the NSFW bias. While the name suggests a sci-fi model, this model is designed for Novels of a variety of genres. It is meant to be used in KoboldAI's regular mode. |
| [AID](https://huggingface.co/KoboldAI/GPT-Neo-2.7B-AID) by melastacho | Adventure | Also known as Adventure 2.7B, this is a clone of the AI Dungeon Classic model and is best known for the epic wacky adventures that AI Dungeon Classic players love. |
| [Horni LN](https://huggingface.co/KoboldAI/GPT-Neo-2.7B-Horni-LN) by finetune | Novel | This model is based on Horni 2.7B and retains its NSFW knowledge, but was then further biased towards SFW novel stories. If you seek a balance between an SFW Novel model and an NSFW model this model should be a good choice. |
| [Horni](https://huggingface.co/KoboldAI/GPT-Neo-2.7B-Horni) by finetune | NSFW | This model is tuned on Literotica to produce a Novel style model biased towards NSFW content. Can still be used for SFW stories but will have a bias towards NSFW content. It is meant to be used in KoboldAI's regular mode. |
| [Shinen](https://huggingface.co/KoboldAI/GPT-Neo-2.7B-Shinen) by Mr Seeker | NSFW | Shinen is an alternative to the Horni model designed to be more explicit. If Horni is too tame for you, Shinen might produce better results. While it is a Novel model it is unsuitable for SFW stories due to its heavy NSFW bias. Shinen will not hold back. It is meant to be used in KoboldAI's regular mode. |
| [OPT](https://huggingface.co/facebook/opt-2.7b) by Metaseq | Generic | OPT is considered one of the best base models as far as content goes; its behavior has the strengths of both GPT-Neo and Fairseq Dense. Compared to Neo, duplicate and unnecessary content has been left out, while additional literature was added in, similar to the Fairseq Dense model. The Fairseq Dense model, however, lacks the broader data that OPT does have. The biggest downfall of OPT is its license, which prohibits any commercial usage, or usage beyond research purposes. |
| [Fairseq Dense](https://huggingface.co/KoboldAI/fairseq-dense-2.7B) | Generic | Trained by Facebook researchers, this model stems from the MoE research project within Fairseq. This particular version has been converted by us for use in KoboldAI. It is known to be on par with the larger models from EleutherAI and considered better for pop culture and language tasks. Because the model has never seen a new line (enter) it may perform worse on formatting and paragraphing. Compared to other models the dataset focuses primarily on literature and contains little else. |
| [Neo](https://huggingface.co/EleutherAI/gpt-neo-2.7B) by EleutherAI | Generic | This is the base model for all the other 2.7B models; it is best used when you have a use case that we have no other models available for, such as writing blog articles or programming. It can also be a good basis for the experience of some of the softprompts if your softprompt is not about a subject the other models cover. |
| [Pygmalion-6b](https://huggingface.co/PygmalionAI/pygmalion-6b) by Pygmalion AI | NSFW/Chat | Pygmalion 6B is a proof-of-concept dialogue model based on EleutherAI's GPT-J-6B. Warning: this model is NOT suitable for use by minors. It will output X-rated content under certain circumstances. The fine-tuning dataset consisted of 56MB of dialogue data gathered from multiple sources, which includes both real and partially machine-generated conversations. |
| [Nerys-6b](https://huggingface.co/KoboldAI/OPT-6B-nerys-v2) by Mr Seeker | Novel/Adventure | Nerys is a hybrid model based on Pike (a newer Janeway); on top of the Pike dataset you also get some light novels, Adventure mode support and a little bit of Shinen thrown into the mix. The end result is a very diverse model that is heavily biased towards SFW novel writing, but one that can go beyond its novel training and make for an excellent adventure model too. Adventure mode is best played from a second person perspective, but can be played in first or third person as well. Novel writing is best done from the first or third person. |
| [Erebus-6.7b](https://huggingface.co/KoboldAI/OPT-6.7B-Erebus) by Mr Seeker | NSFW | Erebus is our community's flagship NSFW model: a combination of multiple large datasets that include Literotica, Shinen and erotic novels from Nerys. Featuring thorough tagging support, it covers the vast majority of erotic writing styles. This model is capable of replacing both the Lit and Shinen models in terms of content and style and has been well received as (one of) the best NSFW models out there. If you wish to use this model for commercial or non-research usage we recommend choosing the 20B version, as that one is not subject to the restrictive OPT license. |
| [Skein-6b](https://huggingface.co/KoboldAI/GPT-J-6B-Skein) by Mr Seeker | Adventure | This model is designed for creative story generation. It can understand both free-form text and text written in interactive fiction style with actions starting with "> You". Trained with light novels and assorted interactive fiction. |
| [Adventure 6b](https://huggingface.co/KoboldAI/GPT-J-6B-Adventure) by Mr Seeker | Adventure | This is a clone of the AI Dungeon Classic model and is best known for the epic wacky adventures that AI Dungeon Classic players love. |
| [PPO-Pygway-6b](https://huggingface.co/KoboldAI/PPO_Pygway-6b-Mix) by TeH_Venom | Instruct-tuned Chat | PPO-Pygway is a model that merges together KoboldAI/GPT-J-6B-Janeway, reciprocate/ppo_hh_gpt-j, and Pygmalion/Pygmalion-6b; all three models were blended in a two-step process using a simple weighted parameter method. This model may generate NSFW content. |
| [Janeway-6b](https://huggingface.co/KoboldAI/GPT-J-6B-Janeway) by Mr Seeker | Novel | GPT-J 6B-Janeway is a finetune created using EleutherAI's GPT-J 6B model. The training data contains around 2210 ebooks, mostly in the sci-fi and fantasy genres. The dataset is based on the same dataset used by GPT-Neo-2.7B-Picard, with 20% more data in various genres.|
| [Lit-6b](https://huggingface.co/hakurei/lit-6B) by Hakurei | Novel | Lit-6B is a GPT-J 6B model fine-tuned on 2GB of a diverse range of light novels, erotica, and annotated literature for the purpose of generating novel-like fictional text. The model used for fine-tuning is GPT-J, which is a 6 billion parameter auto-regressive language model trained on The Pile. |
| [Nerybus-6.7b](https://huggingface.co/KoboldAI/OPT-6.7B-Nerybus-Mix) by Concedo | Novel/NSFW | This model is based on OPT-6.7B-Erebus merged with Nerys; it retains Erebus's NSFW knowledge but was further biased towards SFW novel stories. If you seek a balance between an SFW Novel model and an NSFW model this model should be a good choice. |
| [Shinen-6b](https://huggingface.co/KoboldAI/GPT-J-6B-Shinen) by Mr Seeker | Novel/NSFW | Shinen is an alternative to the OPT-license based Erebus model. While it is a Novel model it is unsuitable for SFW stories due to its heavy NSFW bias. Shinen will not hold back. It is meant to be used in KoboldAI's regular mode. |
| Various 2.7B models by various | Various | Various smaller models can also be loaded in the GPU Colab. |
### Styles

View File

@@ -73,6 +73,8 @@ import torch
from transformers import StoppingCriteria, GPT2Tokenizer, GPT2LMHeadModel, GPTNeoForCausalLM, GPTNeoModel, AutoModelForCausalLM, AutoModelForSeq2SeqLM, AutoTokenizer, PreTrainedModel, modeling_utils, AutoModelForTokenClassification
from transformers import __version__ as transformers_version
import transformers
import ipaddress
from functools import wraps
try:
from transformers.models.opt.modeling_opt import OPTDecoder
except:
@@ -86,6 +88,9 @@ from PIL import Image
from io import BytesIO
global tpu_mtj_backend
global allowed_ips
allowed_ips = set() # empty set
enable_whitelist = False
if lupa.LUA_VERSION[:2] != (5, 4):
@@ -103,15 +108,14 @@ def new_init(self, *args, **kwargs):
self.ncols = 99
tqdm.__init__ = new_init
# Fix some issues with the OPT tokenizer
# Add _koboldai_header support for some optional tokenizer fixes
# This used to be an OPT tokenizer fix; it has been moved. Search for "# These are model specific overrides if a model has bad defaults" for the new section
from transformers import PreTrainedTokenizerBase
old_pretrainedtokenizerbase_from_pretrained = PreTrainedTokenizerBase.from_pretrained.__func__
@classmethod
def new_pretrainedtokenizerbase_from_pretrained(cls, *args, **kwargs):
tokenizer = old_pretrainedtokenizerbase_from_pretrained(cls, *args, **kwargs)
tokenizer._koboldai_header = tokenizer.encode("")
tokenizer.add_bos_token = False
tokenizer.add_prefix_space = False
tokenizer._koboldai_header = []
return tokenizer
PreTrainedTokenizerBase.from_pretrained = new_pretrainedtokenizerbase_from_pretrained
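For reference, this is the wrap-and-extend pattern used above, reduced to a minimal sketch: the hypothetical `Loader` class stands in for `PreTrainedTokenizerBase`, and `extra_attr` is an illustrative stand-in for `_koboldai_header`.

```python
class Loader:
    @classmethod
    def from_pretrained(cls, name):
        return cls()

# Keep a reference to the raw function behind the classmethod
_old_from_pretrained = Loader.from_pretrained.__func__

@classmethod
def _new_from_pretrained(cls, name):
    obj = _old_from_pretrained(cls, name)
    obj.extra_attr = []  # illustrative; mirrors tokenizer._koboldai_header above
    return obj

# Swap the wrapped version in; all later callers pick it up transparently
Loader.from_pretrained = _new_from_pretrained
print(Loader.from_pretrained("example").extra_attr)  # []
```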
@@ -1123,7 +1127,7 @@ def move_model_to_devices(model):
for key, value in model.state_dict().items():
target_dtype = torch.float32 if breakmodel.primary_device == "cpu" else torch.float16
if(value.dtype is not target_dtype):
accelerate.utils.set_module_tensor_to_device(model, key, target_dtype)
accelerate.utils.set_module_tensor_to_device(model, key, torch.device(value.device), value, target_dtype)
disk_blocks = breakmodel.disk_blocks
gpu_blocks = breakmodel.gpu_blocks
ram_blocks = len(utils.layers_module_names) - sum(gpu_blocks)
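The hunk above now passes the tensor's device and value through to accelerate's helper while casting to the breakmodel target dtype. As a rough illustration of the dtype rule alone, here is a pure-PyTorch sketch (the function name is ours, not KoboldAI's):

```python
import torch
import torch.nn as nn

def cast_params_(model: nn.Module, primary_device: str) -> None:
    # fp32 when the primary device is the CPU, fp16 otherwise (the breakmodel rule)
    target_dtype = torch.float32 if primary_device == "cpu" else torch.float16
    for param in model.parameters():
        if param.dtype is not target_dtype:
            param.data = param.data.to(dtype=target_dtype)

cast_params_(nn.Linear(4, 4), "cpu")  # no-op here; would cast to fp16 for a GPU primary
```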
@@ -1469,13 +1473,15 @@ def spRequest(filename):
#==================================================================#
def general_startup(override_args=None):
global args
global enable_whitelist
global allowed_ips
# Parsing Parameters
parser = argparse.ArgumentParser(description="KoboldAI Server")
parser.add_argument("--remote", action='store_true', help="Optimizes KoboldAI for Remote Play")
parser.add_argument("--noaimenu", action='store_true', help="Disables the ability to select the AI")
parser.add_argument("--ngrok", action='store_true', help="Optimizes KoboldAI for Remote Play using Ngrok")
parser.add_argument("--localtunnel", action='store_true', help="Optimizes KoboldAI for Remote Play using Localtunnel")
parser.add_argument("--host", action='store_true', help="Optimizes KoboldAI for Remote Play without using a proxy service")
parser.add_argument("--host", type=str, default="Disabled", nargs="?", const="", help="Optimizes KoboldAI for LAN Remote Play without using a proxy service. --host opens to all LAN. Enable IP whitelisting by using a comma separated IP list. Supports individual IPs, ranges, and subnets --host 127.0.0.1,127.0.0.2,127.0.0.3,192.168.1.0-192.168.1.255,10.0.0.0/24,etc")
parser.add_argument("--port", type=int, help="Specify the port on which the application will be joinable")
parser.add_argument("--aria2_port", type=int, help="Specify the port on which aria2's RPC interface will be open if aria2 is installed (defaults to 6799)")
parser.add_argument("--model", help="Specify the Model Type to skip the Menu")
@@ -1525,6 +1531,10 @@ def general_startup(override_args=None):
utils.args = args
#load system and user settings
for setting in ['user_settings', 'system_settings']:
@@ -1602,9 +1612,30 @@ def general_startup(override_args=None):
if args.localtunnel:
koboldai_vars.host = True;
if args.host:
koboldai_vars.host = True;
args.unblock = True;
if args.host != "Disabled":
# The --host option was supplied on the command line
# Open the server to all LAN IPs (0.0.0.0/0) by default
koboldai_vars.host = True
args.unblock = True
if args.host != "":
# Check if --host option was submitted with an argument
# Parse the supplied IP(s) and add them to the allowed IPs list
enable_whitelist = True
for ip_str in args.host.split(","):
if "/" in ip_str:
allowed_ips |= set(str(ip) for ip in ipaddress.IPv4Network(ip_str, strict=False).hosts())
elif "-" in ip_str:
start_ip, end_ip = ip_str.split("-")
start_ip_int = int(ipaddress.IPv4Address(start_ip))
end_ip_int = int(ipaddress.IPv4Address(end_ip))
allowed_ips |= set(str(ipaddress.IPv4Address(ip)) for ip in range(start_ip_int, end_ip_int + 1))
else:
allowed_ips.add(ip_str.strip())
# Sort and print the allowed IPs list
allowed_ips = sorted(allowed_ips, key=lambda ip: int(''.join([i.zfill(3) for i in ip.split('.')])))
print(f"Allowed IPs: {allowed_ips}")
if args.cpu:
koboldai_vars.use_colab_tpu = False
@@ -2917,7 +2948,7 @@ def load_model(use_gpu=True, gpu_layers=None, disk_layers=None, initial_load=Fal
koboldai_vars.status_message = "Loading model"
koboldai_vars.total_layers = num_tensors
koboldai_vars.loaded_layers = 0
utils.bar = tqdm(total=num_tensors, desc="Loading model tensors", file=Send_to_socketio())
utils.bar = tqdm(total=num_tensors, desc="Loading model tensors", file=Send_to_socketio(), position=1)
with zipfile.ZipFile(f, "r") as z:
try:
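The added `position=1` keeps the tensor-loading bar on its own line when something else shares the output stream; a small sketch of the effect with plain tqdm (bar names are illustrative):

```python
import time
from tqdm import tqdm

outer = tqdm(total=3, desc="Shards", position=0)
inner = tqdm(total=5, desc="Loading model tensors", position=1)
for _ in range(5):
    time.sleep(0.01)
    inner.update(1)  # updates the second line without clobbering the first
outer.update(3)
inner.close()
outer.close()
```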
@@ -3219,10 +3250,14 @@ def load_model(use_gpu=True, gpu_layers=None, disk_layers=None, initial_load=Fal
# koboldai_vars.badwordsids.append([vocab[key]])
# These are model specific overrides if a model has bad defaults
tokenizer._koboldai_header = []
if koboldai_vars.model_type == "llama":
tokenizer.decode_with_prefix_space = True
tokenizer.add_bos_token = False
if koboldai_vars.model_type == "opt":
tokenizer._koboldai_header = tokenizer.encode("")
tokenizer.add_bos_token = False
tokenizer.add_prefix_space = False
logger.info(f"Pipeline created: {koboldai_vars.model}")
else:
@@ -3450,22 +3485,53 @@ def load_model(use_gpu=True, gpu_layers=None, disk_layers=None, initial_load=Fal
print(format(colors.GREEN) + "KoboldAI has finished loading and is available at the following link for UI 1: " + koboldai_vars.cloudflare_link + format(colors.END))
print(format(colors.GREEN) + "KoboldAI has finished loading and is available at the following link for UI 2: " + koboldai_vars.cloudflare_link + "/new_ui" + format(colors.END))
# Setup IP Whitelisting
# Define a function to check if IP is allowed
def is_allowed_ip():
global allowed_ips
client_ip = request.remote_addr
if request.path != '/genre_data.json':
print("Connection Attempt: " + request.remote_addr)
if allowed_ips:
print("Allowed?: ", request.remote_addr in allowed_ips)
return client_ip in allowed_ips
# Define a decorator to enforce IP whitelisting
def require_allowed_ip(func):
@wraps(func)
def decorated(*args, **kwargs):
if enable_whitelist and not is_allowed_ip():
return abort(403)
return func(*args, **kwargs)
return decorated
# Set up Flask routes
@app.route('/')
@app.route('/index')
@require_allowed_ip
def index():
if args.no_ui:
return redirect('/api/latest')
else:
return render_template('index.html', hide_ai_menu=args.noaimenu)
@app.route('/api', strict_slashes=False)
@require_allowed_ip
def api():
return redirect('/api/latest')
@app.route('/favicon.ico')
def favicon():
return send_from_directory(app.root_path,
'koboldai.ico', mimetype='image/vnd.microsoft.icon')
@app.route('/download')
@require_allowed_ip
def download():
if args.no_ui:
raise NotFound()
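Taken together, the pieces above amount to the following minimal, runnable sketch of the whitelist decorator guarding a Flask route (the app, IP set, and route are illustrative):

```python
from functools import wraps
from flask import Flask, abort, request

app = Flask(__name__)
enable_whitelist = True
allowed_ips = {"127.0.0.1"}

def require_allowed_ip(func):
    @wraps(func)
    def decorated(*args, **kwargs):
        # Reject the request outright when the client IP is not whitelisted
        if enable_whitelist and request.remote_addr not in allowed_ips:
            return abort(403)
        return func(*args, **kwargs)
    return decorated

@app.route("/")
@require_allowed_ip
def index():
    return "allowed"

# app.run(host="0.0.0.0")  # non-whitelisted clients receive HTTP 403
```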
@@ -4126,6 +4192,9 @@ def execute_outmod():
#==================================================================#
@socketio.on('connect')
def do_connect():
print("Connection Attempt: " + request.remote_addr)
if allowed_ips:
print("Allowed?: ", request.remote_addr in allowed_ips)
if request.args.get("rely") == "true":
return
logger.info("Client connected! UI_{}".format(request.args.get('ui')))
@@ -7388,13 +7457,13 @@ def loadRequest(loadpath, filename=None):
if not loadpath:
return
#Original UI only sends the story name and assumes it's always a .json file... here we check to see if it's a directory to load that way
if not os.path.exists(loadpath):
if not isinstance(loadpath, dict) and not os.path.exists(loadpath):
if os.path.exists(loadpath.replace(".json", "")):
loadpath = loadpath.replace(".json", "")
if os.path.isdir(loadpath):
if not isinstance(loadpath, dict) and os.path.isdir(loadpath):
if not valid_v3_story(loadpath):
raise RuntimeError(f"Tried to load {loadpath}, a non-save directory.")
koboldai_vars.update_story_path_structure(loadpath)
@@ -8021,6 +8090,7 @@ def show_folder_usersripts(data):
# UI V2 CODE
#==================================================================#
@app.route('/new_ui')
@require_allowed_ip
@logger.catch
def new_ui_index():
if args.no_ui:
@@ -8048,6 +8118,7 @@ def ui2_connect():
# UI V2 CODE Themes
#==================================================================#
@app.route('/themes/<path:path>')
#@require_allowed_ip
@logger.catch
def ui2_serve_themes(path):
return send_from_directory('themes', path)
@@ -8086,6 +8157,7 @@ def upload_file(data):
get_files_folders(session['current_folder'])
@app.route("/upload_kai_story/<string:file_name>", methods=["POST"])
@require_allowed_ip
@logger.catch
def UI_2_upload_kai_story(file_name: str):
@@ -8549,6 +8621,7 @@ def directory_to_zip_data(directory: str, overrides: Optional[dict]) -> bytes:
# Save story to json
#==================================================================#
@app.route("/story_download")
@require_allowed_ip
@logger.catch
def UI_2_download_story():
if args.no_ui:
@@ -9011,6 +9084,7 @@ def UI_2_delete_wi_folder(folder):
# Event triggered when user exports world info folder
#==================================================================#
@app.route('/export_world_info_folder')
@require_allowed_ip
@logger.catch
def UI_2_export_world_info_folder():
if 'folder' in request.args:
@@ -9038,6 +9112,7 @@ def UI_2_upload_world_info_folder(data):
koboldai_vars.calc_ai_text()
@app.route("/upload_wi", methods=["POST"])
@require_allowed_ip
@logger.catch
def UI_2_import_world_info():
wi_data = request.get_json()
@@ -9115,6 +9190,7 @@ def UI_2_update_wi_keys(data):
socketio.emit("world_info_entry", koboldai_vars.worldinfo_v2.world_info[uid], broadcast=True, room="UI_2")
@app.route("/set_wi_image/<int(signed=True):uid>", methods=["POST"])
@require_allowed_ip
@logger.catch
def UI_2_set_wi_image(uid):
if uid < 0:
@@ -9146,6 +9222,7 @@ def UI_2_set_wi_image(uid):
return ":)"
@app.route("/get_wi_image/<int(signed=True):uid>", methods=["GET"])
@require_allowed_ip
@logger.catch
def UI_2_get_wi_image(uid):
if args.no_ui:
@@ -9157,6 +9234,7 @@ def UI_2_get_wi_image(uid):
return ":( Couldn't find image", 204
@app.route("/set_commentator_picture/<int(signed=True):commentator_id>", methods=["POST"])
@require_allowed_ip
@logger.catch
def UI_2_set_commentator_image(commentator_id):
data = request.get_data()
@@ -9165,6 +9243,7 @@ def UI_2_set_commentator_image(commentator_id):
return ":)"
@app.route("/image_db.json", methods=["GET"])
@require_allowed_ip
@logger.catch
def UI_2_get_image_db():
if args.no_ui:
@@ -9175,6 +9254,7 @@ def UI_2_get_image_db():
return jsonify([])
@app.route("/action_composition.json", methods=["GET"])
@require_allowed_ip
@logger.catch
def UI_2_get_action_composition():
if args.no_ui:
@@ -9200,6 +9280,7 @@ def UI_2_get_action_composition():
return jsonify(ret)
@app.route("/generated_images/<path:path>")
@require_allowed_ip
def UI_2_send_generated_images(path):
return send_from_directory(koboldai_vars.save_paths.generated_images, path)
@@ -9509,6 +9590,7 @@ def UI_2_generate_wi(data):
socketio.emit("generated_wi", {"uid": uid, "field": field, "out": out_text}, room="UI_2")
@app.route("/generate_raw", methods=["GET"])
@require_allowed_ip
def UI_2_generate_raw():
prompt = request.args.get("prompt")
@@ -10210,6 +10292,7 @@ def UI_2_privacy_mode(data):
# Genres
#==================================================================#
@app.route("/genre_data.json", methods=["GET"])
@require_allowed_ip
def UI_2_get_applicable_genres():
with open("data/genres.json", "r") as file:
genre_list = json.load(file)
@@ -10275,12 +10358,14 @@ def UI_2_get_log(data):
emit("log_message", web_log_history)
@app.route("/get_log")
@require_allowed_ip
def UI_2_get_log_get():
if args.no_ui:
return redirect('/api/latest')
return {'aiserver_log': web_log_history}
@app.route("/test_match")
@require_allowed_ip
@logger.catch
def UI_2_test_match():
koboldai_vars.assign_world_info_to_actions()
@@ -10290,6 +10375,7 @@ def UI_2_test_match():
# Download of the audio file
#==================================================================#
@app.route("/audio")
@require_allowed_ip
@logger.catch
def UI_2_audio():
if args.no_ui:
@@ -10316,6 +10402,7 @@ def UI_2_audio():
# Download of the image for an action
#==================================================================#
@app.route("/action_image")
@require_allowed_ip
@logger.catch
def UI_2_action_image():
if args.no_ui:
@@ -10375,6 +10462,7 @@ def model_info():
return {"Model Type": "Read Only", "Model Size": "0", "Model Name": koboldai_vars.model.replace("_", "/")}
@app.route("/vars")
@require_allowed_ip
@logger.catch
def show_vars():
if args.no_ui:

View File

@@ -39,8 +39,6 @@
"\n",
"For more information about KoboldAI check our our Github readme : https://github.com/KoboldAI/KoboldAI-Client/blob/main/readme.md\n",
"\n",
"For the larger AI models (That are typically more coherent) check out our **[TPU edition](https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/TPU.ipynb)**!\n",
"\n",
"---\n",
"## How to load KoboldAI: Everything you need to know\n",
"1. On a phone? First put your browser in desktop mode because of a Google Colab bug. Otherwise nothing will happen when you click the play button. Then tap the play button next to \"<-- Tap This if you play on Mobile\", you will see an audio player. Keep the audio player playing so Colab does not get shut down in the background.\n",
@@ -82,7 +80,7 @@
"#@title <b><-- Select your model below and then click this to start KoboldAI</b>\n",
"#@markdown You can find a description of the models below along with instructions on how to start KoboldAI.\n",
"\n",
"Model = \"Nerys V2 6B\" #@param [\"Nerys V2 6B\", \"Erebus 6B\", \"Skein 6B\", \"Janeway 6B\", \"Adventure 6B\", \"Pygmalion 6B\", \"Pygmalion 6B Dev\", \"Lit V2 6B\", \"Lit 6B\", \"Shinen 6B\", \"Nerys 2.7B\", \"AID 2.7B\", \"Erebus 2.7B\", \"Janeway 2.7B\", \"Picard 2.7B\", \"Horni LN 2.7B\", \"Horni 2.7B\", \"Shinen 2.7B\", \"OPT 2.7B\", \"Fairseq Dense 2.7B\", \"Neo 2.7B\"] {allow-input: true}\n",
"Model = \"Nerys V2 6B\" #@param [\"Nerys V2 6B\", \"Nerybus 6B\", \"Erebus 6B\", \"Skein 6B\", \"Janeway 6B\", \"Adventure 6B\", \"PPO_Pygway 6B\", \"Lit V2 6B\", \"Lit 6B\", \"Shinen 6B\", \"Nerys 2.7B\", \"AID 2.7B\", \"Erebus 2.7B\", \"Janeway 2.7B\", \"Picard 2.7B\", \"Horni LN 2.7B\", \"Horni 2.7B\", \"Shinen 2.7B\", \"OPT 2.7B\", \"Fairseq Dense 2.7B\", \"Neo 2.7B\"] {allow-input: true}\n",
"Version = \"Official\" #@param [\"Official\", \"United\"] {allow-input: true}\n",
"Provider = \"Cloudflare\" #@param [\"Localtunnel\", \"Cloudflare\"]\n",
"use_google_drive = True #@param {type:\"boolean\"}\n",
@@ -104,6 +102,10 @@
" Model = \"KoboldAI/OPT-6B-nerys-v2\"\n",
" path = \"\"\n",
" download = \"\"\n",
"elif Model == \"Nerybus 6B\":\n",
" Model = \"KoboldAI/OPT-6.7B-Nerybus-Mix\"\n",
" path = \"\"\n",
" download = \"\"\n",
"elif Model == \"Erebus 6B\":\n",
" Model = \"KoboldAI/OPT-6.7B-Erebus\"\n",
" path = \"\"\n",
@@ -120,21 +122,11 @@
" Model = \"KoboldAI/GPT-J-6B-Adventure\"\n",
" path = \"\"\n",
" download = \"\"\n",
"elif Model == \"Pygmalion 6B\":\n",
" Model = \"PygmalionAI/pygmalion-6b\"\n",
"elif Model == \"PPO_Pygway 6B\":\n",
" Model = \"KoboldAI/PPO_Pygway-6b-Mix\"\n",
" path = \"\"\n",
" download = \"\"\n",
" Version = \"United\"\n",
"elif Model == \"Pygmalion 6B Dev\":\n",
" Model = \"PygmalionAI/pygmalion-6b\"\n",
" Revision = \"--revision dev\"\n",
" path = \"\"\n",
" Version = \"United\"\n",
" download = \"\"\n",
"elif Model == \"Lit V2 6B\":\n",
" Model = \"hakurei/litv2-6B-rev3\"\n",
" path = \"\"\n",
" download = \"\"\n",
"elif Model == \"Lit 6B\":\n",
" Model = \"hakurei/lit-6B\"\n",
" path = \"\"\n",
@@ -209,7 +201,6 @@
"| [Janeway](https://huggingface.co/KoboldAI/GPT-Neo-2.7B-Janeway) by Mr Seeker | Novel | Janeway is a model created from Picard's dataset combined with a brand new collection of ebooks. This model is trained on 20% more content than Picard and has been trained on literature from various genres. Although the model is mainly focussed on SFW, romantic scenes might involve a degree of nudity. |\n",
"| [Picard](https://huggingface.co/KoboldAI/GPT-Neo-2.7B-Picard) by Mr Seeker | Novel | Picard is a model trained for SFW Novels based on Neo 2.7B. It is focused on Novel style writing without the NSFW bias. While the name suggests a sci-fi model this model is designed for Novels of a variety of genre's. It is meant to be used in KoboldAI's regular mode. |\n",
"| [AID](https://huggingface.co/KoboldAI/GPT-Neo-2.7B-AID) by melastacho | Adventure | Also know as Adventure 2.7B this is a clone of the AI Dungeon Classic model and is best known for the epic wackey adventures that AI Dungeon Classic players love. |\n",
"| [Pygmalion](https://huggingface.co/KoboldAI/GPT-J-6B-Adventure) by PygmalionAI | Chatbot | Pygmalion is a chat model that has been based on a few models that came before it. First the model originates from LitV2, it was then trained by Haru on a chat dataset to create ConvoGPT. ConvoGPT was then trained by PygmalionAI on chat data that contains longer responses and emotions. Making for a higher quality chat experience than you can get from other models such as Erebus that are not directly trained on chatting. |\n",
"| [Horni LN](https://huggingface.co/KoboldAI/GPT-Neo-2.7B-Horni-LN) by finetune | Novel | This model is based on Horni 2.7B and retains its NSFW knowledge, but was then further biased towards SFW novel stories. If you seek a balance between a SFW Novel model and a NSFW model this model should be a good choice. |\n",
"| [Horni](https://huggingface.co/KoboldAI/GPT-Neo-2.7B-Horni) by finetune | NSFW | This model is tuned on Literotica to produce a Novel style model biased towards NSFW content. Can still be used for SFW stories but will have a bias towards NSFW content. It is meant to be used in KoboldAI's regular mode. |\n",
"| [Shinen](https://huggingface.co/KoboldAI/GPT-Neo-2.7B-Shinen) by Mr Seeker | NSFW | Shinen is an alternative to the Horni model designed to be more explicit. If Horni is to tame for you Shinen might produce better results. While it is a Novel model it is unsuitable for SFW stories due to its heavy NSFW bias. Shinen will not hold back. It is meant to be used in KoboldAI's regular mode. |\n",
@@ -217,23 +208,6 @@
"| [Fairseq Dense](https://huggingface.co/KoboldAI/fairseq-dense-2.7B) | Generic | Trained by Facebook Researchers this model stems from the MOE research project within Fairseq. This particular version has been converted by us for use in KoboldAI. It is known to be on par with the larger models from EleutherAI and considered as better for pop culture and language tasks. Because the model has never seen a new line (enter) it may perform worse on formatting and paragraphing. Compared to other models the dataset focuses primarily on literature and contains little else. |\n",
"| [Neo](https://huggingface.co/EleutherAI/gpt-neo-2.7B) by EleutherAI | Generic | This is the base model for all the other 2.7B models, it is best used when you have a use case that we have no other models available for, such as writing blog articles or programming. It can also be a good basis for the experience of some of the softprompts if your softprompt is not about a subject the other models cover. |\n",
"\n",
"# [TPU Edition Model Descriptions](https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/TPU.ipynb)\n",
"\n",
"| Model | Style | Description |\n",
"| --- | --- | --- |\n",
"| [Nerys](https://huggingface.co/KoboldAI/fairseq-dense-13B-Nerys) by Mr Seeker | Novel/Adventure | Nerys is a hybrid model based on Pike (A newer Janeway), on top of the Pike dataset you also get some Light Novels, Adventure mode support and a little bit of Shinen thrown in the mix. The end result is a very diverse model that is heavily biased towards SFW novel writing, but one that can go beyond its novel training and make for an excellent adventure model to. Adventure mode is best played from a second person perspective, but can be played in first or third person as well. Novel writing can be done best from the first or third person. |\n",
"| [Erebus](https://huggingface.co/KoboldAI/OPT-13B-Erebus) by Mr Seeker | NSFW | Erebus is our community's flagship NSFW model, being a combination of multiple large datasets that include Literotica, Shinen and erotic novels from Nerys and featuring thourough tagging support it covers the vast majority of erotic writing styles. This model is capable of replacing both the Lit and Shinen models in terms of content and style and has been well received as (one of) the best NSFW models out there. If you wish to use this model for commercial or non research usage we recommend choosing the 20B version as that one is not subject to the restrictive OPT license. |\n",
"| [Janeway](https://huggingface.co/KoboldAI/fairseq-dense-13B-Janeway) by Mr Seeker | Novel | Janeway is a model created from Picard's dataset combined with a brand new collection of ebooks. This model is trained on 20% more content than Picard and has been trained on literature from various genres. Although the model is mainly focussed on SFW, romantic scenes might involve a degree of nudity. |\n",
"| [Shinen](https://huggingface.co/KoboldAI/fairseq-dense-13B-Shinen) by Mr Seeker | NSFW | Shinen is an NSFW model trained on a variety of stories from the website Sexstories it contains many different kinks. It has been merged into the larger (and better) Erebus model. |\n",
"| [Skein](https://huggingface.co/KoboldAI/GPT-J-6B-Skein) by VE\\_FORBRYDERNE | Adventure | Skein is best used with Adventure mode enabled, it consists of a 4 times larger adventure dataset than the Adventure model making it excellent for text adventure gaming. On top of that it also consists of light novel training further expanding its knowledge and writing capabilities. It can be used with the You filter bias if you wish to write Novels with it, but dedicated Novel models can perform better for this task. |\n",
"| [Adventure](https://huggingface.co/KoboldAI/GPT-J-6B-Adventure) by VE\\_FORBRYDERNE | Adventure | Adventure is a 6B model designed to mimick the behavior of AI Dungeon. It is exclusively for Adventure Mode and can take you on the epic and wackey adventures that AI Dungeon players love. It also features the many tropes of AI Dungeon as it has been trained on very similar data. It must be used in second person (You). |\n",
"| [Pygmalion](https://huggingface.co/KoboldAI/GPT-J-6B-Adventure) by PygmalionAI | Chatbot | Pygmalion is a chat model that has been based on a few models that came before it. First the model originates from LitV2, it was then trained by Haru on a chat dataset to create ConvoGPT. ConvoGPT was then trained by PygmalionAI on chat data that contains longer responses and emotions. Making for a higher quality chat experience than you can get from other models such as Erebus that are not directly trained on chatting. |\n",
"| [Lit](https://huggingface.co/hakurei/lit-6B) ([V2](https://huggingface.co/hakurei/litv2-6B-rev3)) by Haru | NSFW | Lit is a great NSFW model trained by Haru on both a large set of Literotica stories and high quality novels along with tagging support. Creating a high quality model for your NSFW stories. This model is exclusively a novel model and is best used in third person. |\n",
"| [OPT](https://huggingface.co/facebook/opt-13b) by Metaseq | Generic | OPT is considered one of the best base models as far as content goes, its behavior has the strengths of both GPT-Neo and Fairseq Dense. Compared to Neo duplicate and unnecessary content has been left out, while additional literature was added in similar to the Fairseq Dense model. The Fairseq Dense model however lacks the broader data that OPT does have. The biggest downfall of OPT is its license, which prohibits any commercial usage, or usage beyond research purposes. |\n",
"| [Neo(X)](https://huggingface.co/EleutherAI/gpt-neox-20b) by EleutherAI | Generic | NeoX is the largest EleutherAI model currently available, being a generic model it is not particularly trained towards anything and can do a variety of writing, Q&A and coding tasks. 20B's performance is closely compared to the 13B models and it is worth trying both especially if you have a task that does not involve english writing. Its behavior will be similar to the GPT-J-6B model since they are trained on the same dataset but with more sensitivity towards repetition penalty and with more knowledge. |\n",
"| [Fairseq Dense](https://huggingface.co/KoboldAI/fairseq-dense-13B) | Generic | Trained by Facebook Researchers this model stems from the MOE research project within Fairseq. This particular version has been converted by us for use in KoboldAI. It is known to be on par with the larger 20B model from EleutherAI and considered as better for pop culture and language tasks. Because the model has never seen a new line (enter) it may perform worse on formatting and paragraphing. Compared to other models the dataset focuses primarily on literature and contains little else. |\n",
"| [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B) by EleutherAI | Generic | This model serves as the basis for most other 6B models (Some being based on Fairseq Dense instead). Being trained on the Pile and not biased towards anything in particular it is suitable for a variety of tasks such as writing, Q&A and coding tasks. You will likely get better result with larger generic models or finetuned models. |\n",
"\n",
"| Style | Description |\n",
"| --------- | ------------------------------------------------------------ |\n",
"| Novel | For regular story writing, not compatible with Adventure mode or other specialty modes. |\n",

View File

@@ -2,30 +2,7 @@
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "view-in-github",
"colab_type": "text"
},
"source": [
"<a href=\"https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/TPU.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
]
},
{
"cell_type": "markdown",
"source": [
"# GOOGLE HAS BROKEN SUPPORT FOR MESH TRANSFORMERS JAX IN THEIR DRIVER, THIS MESSAGE WILL BE REMOVED ONCE THE NOTEBOOK WORKS AGAIN\n",
"---\n",
"Are you a developer familiar with Jax? We could use some help.\n",
"Our Mesh Transformers Jax fork resides here (but unfortunately VE-Forbryderne has gone missing): https://github.com/VE-FORBRYDERNE/mesh-transformer-jax\n",
"\n",
"This is combined with the TPU backend code you can find here: https://github.com/KoboldAI/KoboldAI-Client/blob/main/tpu_mtj_backend.py\n",
"\n",
"So far we know the driver initialization issues can be resolved when a newer version of Jax is used combined with a slight edit to tpu_mtj_backend.py to use Jax's built in code for driver intialization (Which is not present in the older version we currently use, hence that part being in our file).\n",
"\n",
"As far as we understand the issue is that xmap was broken in newer versions, and MTJ makes use of it. If someone can port this part of the code to be compatible with the newer Jax versions you can save this notebook!\n",
"\n",
"(Or Google, if you are reading this. Please reintroduce the old 0.1 compatibility since you broke an entire ecosystem of Mesh Transformers Jax users, which has historic value because it is the original implementation of GPT-J. There also do not seem to be alternatives that have the same amount of performance and model compatibility).\n",
"\n",
"# Welcome to KoboldAI on Google Colab, TPU Edition!\n",
"KoboldAI is a powerful and easy way to use a variety of AI based text generation experiences. You can use it to write stories, blog posts, play a text adventure game, use it like a chatbot and more! In some cases it might even help you with an assignment or programming task (But always make sure the information the AI mentions is correct, it loves to make stuff up).\n",
"\n",
@@ -79,7 +56,7 @@
"#@title <b><-- Select your model below and then click this to start KoboldAI</b>\n",
"#@markdown You can find a description of the models below along with instructions on how to start KoboldAI.\n",
"\n",
"Model = \"Nerys 13B V2\" #@param [\"Nerys 13B V2\", \"Nerybus 13B\", \"Erebus 13B\", \"Janeway 13B\", \"Shinen 13B\", \"Skein 20B\", \"Erebus 20B\", \"Skein 6B\", \"Janeway 6B\", \"Adventure 6B\", \"Shinen 6B\", \"Pygmalion 6B\", \"Pygmalion 6B Dev\", \"Lit V2 6B\", \"Lit 6B\", \"NeoX 20B\", \"OPT 13B\", \"Fairseq Dense 13B\", \"GPT-J-6B\"] {allow-input: true}\n",
"Model = \"Nerys 13B V2\" #@param [\"Nerys 13B V2\", \"Nerybus 13B\", \"Erebus 13B\", \"Janeway 13B\", \"Shinen 13B\", \"Skein 20B\", \"Erebus 20B\", \"Skein 6B\", \"Janeway 6B\", \"Adventure 6B\", \"Shinen 6B\", \"Lit V2 6B\", \"Lit 6B\", \"NeoX 20B\", \"OPT 13B\", \"Fairseq Dense 13B\", \"GPT-J-6B\"] {allow-input: true}\n",
"Version = \"Official\" #@param [\"Official\", \"United\"] {allow-input: true}\n",
"Provider = \"Cloudflare\" #@param [\"Localtunnel\", \"Cloudflare\"]\n",
"use_google_drive = True #@param {type:\"boolean\"}\n",
@@ -148,17 +125,6 @@
" Model = \"KoboldAI/GPT-J-6B-Adventure\"\n",
" path = \"\"\n",
" download = \"\"\n",
"elif Model == \"Pygmalion 6B\":\n",
" Model = \"PygmalionAI/pygmalion-6b\"\n",
" path = \"\"\n",
" download = \"\"\n",
" Version = \"United\"\n",
"elif Model == \"Pygmalion 6B Dev\":\n",
" Model = \"PygmalionAI/pygmalion-6b\"\n",
" Revision = \"--revision dev\"\n",
" path = \"\"\n",
" Version = \"United\"\n",
" download = \"\"\n",
"elif Model == \"Lit V2 6B\":\n",
" Model = \"hakurei/litv2-6B-rev3\"\n",
" path = \"\"\n",
@@ -208,7 +174,6 @@
"| [Shinen](https://huggingface.co/KoboldAI/fairseq-dense-13B-Shinen) by Mr Seeker | NSFW | Shinen is an NSFW model trained on a variety of stories from the website Sexstories it contains many different kinks. It has been merged into the larger (and better) Erebus model. |\n",
"| [Skein](https://huggingface.co/KoboldAI/GPT-J-6B-Skein) by VE\\_FORBRYDERNE | Adventure | Skein is best used with Adventure mode enabled, it consists of a 4 times larger adventure dataset than the Adventure model making it excellent for text adventure gaming. On top of that it also consists of light novel training further expanding its knowledge and writing capabilities. It can be used with the You filter bias if you wish to write Novels with it, but dedicated Novel models can perform better for this task. |\n",
"| [Adventure](https://huggingface.co/KoboldAI/GPT-J-6B-Adventure) by VE\\_FORBRYDERNE | Adventure | Adventure is a 6B model designed to mimick the behavior of AI Dungeon. It is exclusively for Adventure Mode and can take you on the epic and wackey adventures that AI Dungeon players love. It also features the many tropes of AI Dungeon as it has been trained on very similar data. It must be used in second person (You). |\n",
"| [Pygmalion](https://huggingface.co/KoboldAI/GPT-J-6B-Adventure) by PygmalionAI | Chatbot | Pygmalion is a chat model that has been based on a few models that came before it. First the model originates from LitV2, it was then trained by Haru on a chat dataset to create ConvoGPT. ConvoGPT was then trained by PygmalionAI on chat data that contains longer responses and emotions. Making for a higher quality chat experience than you can get from other models such as Erebus that are not directly trained on chatting. |\n",
"| [Lit](https://huggingface.co/hakurei/lit-6B) ([V2](https://huggingface.co/hakurei/litv2-6B-rev3)) by Haru | NSFW | Lit is a great NSFW model trained by Haru on both a large set of Literotica stories and high quality novels along with tagging support. Creating a high quality model for your NSFW stories. This model is exclusively a novel model and is best used in third person. |\n",
"| [OPT](https://huggingface.co/facebook/opt-13b) by Metaseq | Generic | OPT is considered one of the best base models as far as content goes, its behavior has the strengths of both GPT-Neo and Fairseq Dense. Compared to Neo duplicate and unnecessary content has been left out, while additional literature was added in similar to the Fairseq Dense model. The Fairseq Dense model however lacks the broader data that OPT does have. The biggest downfall of OPT is its license, which prohibits any commercial usage, or usage beyond research purposes. |\n",
"| [Neo(X)](https://huggingface.co/EleutherAI/gpt-neox-20b) by EleutherAI | Generic | NeoX is the largest EleutherAI model currently available, being a generic model it is not particularly trained towards anything and can do a variety of writing, Q&A and coding tasks. 20B's performance is closely compared to the 13B models and it is worth trying both especially if you have a task that does not involve english writing. Its behavior will be similar to the GPT-J-6B model since they are trained on the same dataset but with more sensitivity towards repetition penalty and with more knowledge. |\n",
@@ -270,8 +235,7 @@
"colab": {
"name": "ColabKobold TPU",
"provenance": [],
"private_outputs": true,
"include_colab_link": true
"private_outputs": true
},
"kernelspec": {
"display_name": "Python 3",

View File

@@ -30,10 +30,10 @@ dependencies:
- flask-ngrok
- flask-cors
- lupa==1.10
- transformers==4.25.1
- transformers==4.28.0
- huggingface_hub==0.12.1
- safetensors
- accelerate
- accelerate==0.18.0
- git+https://github.com/VE-FORBRYDERNE/mkultra
- flask-session
- python-socketio[client]

View File

@@ -29,10 +29,10 @@ dependencies:
- flask-ngrok
- flask-cors
- lupa==1.10
- transformers==4.25.1
- transformers==4.28.0
- huggingface_hub==0.12.1
- safetensors
- accelerate
- accelerate==0.18.0
- git+https://github.com/VE-FORBRYDERNE/mkultra
- ansi2html
- flask_compress

View File

@@ -816,6 +816,40 @@ gensettingstf = [
"ui_level": 2
},
{
"UI_V2_Only": True,
"uitype": "text",
"unit": "text",
"label": "Horde Worker Name",
"id": "horde_worker_name",
"min": 0,
"max": 1,
"step": 1,
"default": 0,
"tooltip": "This is the name of your worker that shows up on the Horde",
"menu_path": "Settings",
"sub_path": "Other",
"classname": "user",
"name": "horde_worker_name",
"ui_level": 2
},
{
"UI_V2_Only": True,
"uitype": "password",
"unit": "text",
"label": "Horde API Key",
"id": "horde_api_key",
"min": 0,
"max": 1,
"step": 1,
"default": 0,
"tooltip": "You can paste your API key here for faster generations and to earn Kudo's as a worker",
"menu_path": "Settings",
"sub_path": "Other",
"classname": "user",
"name": "horde_api_key",
"ui_level": 2
},
{
"UI_V2_Only": True,
"uitype": "toggle",
"unit": "bool",

View File

@@ -1,6 +1,10 @@
@echo off
cd /D %~dp0
:Isolation
SET CONDA_SHLVL=
SET PYTHONNOUSERSITE=1
SET PYTHONPATH=
TITLE KoboldAI - Git Transformers Installer
ECHO This script will replace the Transformers version with the latest Git Transformers, which may contain breaking changes.

View File

@@ -8,7 +8,11 @@ echo.
Reg add "HKLM\SYSTEM\CurrentControlSet\Control\FileSystem" /v "LongPathsEnabled" /t REG_DWORD /d "1" /f 2>nul
cd /D %~dp0
:Isolation
SET CONDA_SHLVL=
SET PYTHONNOUSERSITE=1
SET PYTHONPATH=
if exist miniconda3\ (
echo Delete existing installation?

View File

@@ -1,13 +1,13 @@
#!/bin/bash
git submodule update --init --recursive
if [[ $1 = "cuda" ]]; then
if [[ $1 = "cuda" || $1 = "CUDA" ]]; then
wget -qO- https://micromamba.snakepit.net/api/micromamba/linux-64/latest | tar -xvj bin/micromamba
bin/micromamba create -f environments/huggingface.yml -r runtime -n koboldai -y
# A weird micromamba bug causes it to fail the first time, so we run it twice just to be safe; the second run is much faster
bin/micromamba create -f environments/huggingface.yml -r runtime -n koboldai -y
exit
fi
if [[ $1 = "rocm" ]]; then
if [[ $1 = "rocm" || $1 = "ROCM" ]]; then
wget -qO- https://micromamba.snakepit.net/api/micromamba/linux-64/latest | tar -xvj bin/micromamba
bin/micromamba create -f environments/rocm.yml -r runtime -n koboldai-rocm -y
# A weird micromamba bug causes it to fail the first time, so we run it twice just to be safe; the second run is much faster

View File

@@ -19,6 +19,8 @@ import inspect
serverstarted = False
queue = None
multi_story = False
global enable_whitelist
enable_whitelist = False
if importlib.util.find_spec("tortoise") is not None:
from tortoise import api
@@ -654,6 +656,7 @@ class model_settings(settings):
default_settings = {"rep_pen" : 1.1, "rep_pen_slope": 0.7, "rep_pen_range": 1024, "temp": 0.5, "top_p": 0.9, "top_k": 0, "top_a": 0.0, "tfs": 1.0, "typical": 1.0,
"sampler_order": [6,0,1,2,3,4,5]}
def __init__(self, socketio, koboldai_vars):
self.enable_whitelist = False
self._socketio = socketio
self.reset_for_model_load()
self.model = "" # Model ID string chosen at startup
@@ -1200,14 +1203,14 @@ class undefined_settings(settings):
class system_settings(settings):
local_only_variables = ['lua_state', 'lua_logname', 'lua_koboldbridge', 'lua_kobold',
'lua_koboldcore', 'regex_sl', 'acregex_ai', 'acregex_ui', 'comregex_ai',
'comregex_ui', 'sp', '_horde_pid', 'inference_config', 'image_pipeline',
'summarizer', 'summary_tokenizer', 'tts_model', 'rng_states']
'lua_koboldcore', 'regex_sl', 'acregex_ai', 'acregex_ui', 'comregex_ai', 'comregex_ui',
'sp', '_horde_pid', 'inference_config', 'image_pipeline',
'summarizer', 'summary_tokenizer', 'tts_model', 'rng_states']
no_save_variables = ['lua_state', 'lua_logname', 'lua_koboldbridge', 'lua_kobold',
'lua_koboldcore', 'sp', 'sp_length', '_horde_pid', 'horde_share', 'aibusy',
'serverstarted', 'inference_config', 'image_pipeline', 'summarizer',
'summary_tokenizer', 'use_colab_tpu', 'noai', 'disable_set_aibusy', 'cloudflare_link', 'tts_model',
'generating_image', 'bit_8_available', 'host', 'hascuda', 'usegpu', 'rng_states']
'generating_image', 'bit_8_available', 'host', 'hascuda', 'usegpu', 'rng_states', 'comregex_ai', 'comregex_ui']
settings_name = "system"
def __init__(self, socketio, koboldai_var):
self._socketio = socketio
@@ -1249,8 +1252,8 @@ class system_settings(settings):
self.regex_sl = re.compile(r'\n*(?<=.) *\n(.|\n)*') # Pattern for limiting the output to a single line
self.acregex_ai = re.compile(r'\n* *>(.|\n)*') # Pattern for matching adventure actions from the AI so we can remove them
self.acregex_ui = re.compile(r'^ *(&gt;.*)$', re.MULTILINE) # Pattern for matching actions in the HTML-escaped story so we can apply colouring, etc (make sure to encase part to format in parentheses)
self.comregex_ai = re.compile(r'(?:\n<\|(?:.|\n)*?\|>(?=\n|$))|(?:<\|(?:.|\n)*?\|>\n?)') # Pattern for matching comments to remove them before sending them to the AI
self.comregex_ui = re.compile(r'(&lt;\|(?:.|\n)*?\|&gt;)') # Pattern for matching comments in the editor
self.comregex_ai = re.compile(r'(?:\n\[<\|(?:.|\n)*?\|>\](?=\n|$))|(?:\[<\|(?:.|\n)*?\|>\]\n?)') # Pattern for matching comments to remove them before sending them to the AI
self.comregex_ui = re.compile(r'(\[&lt;\|(?:.|\n)*?\|&gt;\])') # Pattern for matching comments in the editor
self.host = False
self.flaskwebgui = False
self.quiet = False # If set will suppress any story text from being printed to the console (will only be seen on the client web page)
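The comment markers changed here from `<|...|>` to `[<|...|>]`. A quick sketch of the updated AI-side pattern stripping a comment before the story text reaches the model:

```python
import re

# The updated pattern: in-story comments are now written as [<|comment text|>]
comregex_ai = re.compile(r'(?:\n\[<\|(?:.|\n)*?\|>\](?=\n|$))|(?:\[<\|(?:.|\n)*?\|>\]\n?)')

story = "The knight rode on.\n[<|remember: she lost her sword|>]\nDawn broke."
print(comregex_ai.sub("", story))  # -> "The knight rode on.\nDawn broke."
```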
@@ -2745,4 +2748,4 @@ default_preset = {
]
}
badwordsids_default = [[6880], [50256], [42496], [4613], [17414], [22039], [16410], [27], [29], [38430], [37922], [15913], [24618], [28725], [58], [47175], [36937], [26700], [12878], [16471], [37981], [5218], [29795], [13412], [45160], [3693], [49778], [4211], [20598], [36475], [33409], [44167], [32406], [29847], [29342], [42669], [685], [25787], [7359], [3784], [5320], [33994], [33490], [34516], [43734], [17635], [24293], [9959], [23785], [21737], [28401], [18161], [26358], [32509], [1279], [38155], [18189], [26894], [6927], [14610], [23834], [11037], [14631], [26933], [46904], [22330], [25915], [47934], [38214], [1875], [14692], [41832], [13163], [25970], [29565], [44926], [19841], [37250], [49029], [9609], [44438], [16791], [17816], [30109], [41888], [47527], [42924], [23984], [49074], [33717], [31161], [49082], [30138], [31175], [12240], [14804], [7131], [26076], [33250], [3556], [38381], [36338], [32756], [46581], [17912], [49146]] # Tokenized array of badwords used to prevent AI artifacting
badwordsids_neox = [[0], [1], [44162], [9502], [12520], [31841], [36320], [49824], [34417], [6038], [34494], [24815], [26635], [24345], [3455], [28905], [44270], [17278], [32666], [46880], [7086], [43189], [37322], [17778], [20879], [49821], [3138], [14490], [4681], [21391], [26786], [43134], [9336], [683], [48074], [41256], [19181], [29650], [28532], [36487], [45114], [46275], [16445], [15104], [11337], [1168], [5647], [29], [27482], [44965], [43782], [31011], [42944], [47389], [6334], [17548], [38329], [32044], [35487], [2239], [34761], [7444], [1084], [12399], [18990], [17636], [39083], [1184], [35830], [28365], [16731], [43467], [47744], [1138], [16079], [40116], [45564], [18297], [42368], [5456], [18022], [42696], [34476], [23505], [23741], [39334], [37944], [45382], [38709], [33440], [26077], [43600], [34418], [36033], [6660], [48167], [48471], [15775], [19884], [41533], [1008], [31053], [36692], [46576], [20095], [20629], [31759], [46410], [41000], [13488], [30952], [39258], [16160], [27655], [22367], [42767], [43736], [49694], [13811], [12004], [46768], [6257], [37471], [5264], [44153], [33805], [20977], [21083], [25416], [14277], [31096], [42041], [18331], [33376], [22372], [46294], [28379], [38475], [1656], [5204], [27075], [50001], [16616], [11396], [7748], [48744], [35402], [28120], [41512], [4207], [43144], [14767], [15640], [16595], [41305], [44479], [38958], [18474], [22734], [30522], [46267], [60], [13976], [31830], [48701], [39822], [9014], [21966], [31422], [28052], [34607], [2479], [3851], [32214], [44082], [45507], [3001], [34368], [34758], [13380], [38363], [4299], [46802], [30996], [12630], [49236], [7082], [8795], [5218], [44740], [9686], [9983], [45301], [27114], [40125], [1570], [26997], [544], [5290], [49193], [23781], [14193], [40000], [2947], [43781], [9102], [48064], [42274], [18772], [49384], [9884], [45635], [43521], [31258], [32056], [47686], [21760], [13143], [10148], [26119], [44308], [31379], [36399], [23983], [46694], [36134], [8562], [12977], [35117], [28591], [49021], [47093], [28653], [29013], [46468], [8605], [7254], [25896], [5032], [8168], [36893], [38270], [20499], [27501], [34419], [29547], [28571], [36586], [20871], [30537], [26842], [21375], [31148], [27618], [33094], [3291], [31789], [28391], [870], [9793], [41361], [47916], [27468], [43856], [8850], [35237], [15707], [47552], [2730], [41449], [45488], [3073], [49806], [21938], [24430], [22747], [20924], [46145], [20481], [20197], [8239], [28231], [17987], [42804], [47269], [29972], [49884], [21382], [46295], [36676], [34616], [3921], [26991], [27720], [46265], [654], [9855], [40354], [5291], [34904], [44342], [2470], [14598], [880], [19282], [2498], [24237], [21431], [16369], [8994], [44524], [45662], [13663], [37077], [1447], [37786], [30863], [42854], [1019], [20322], [4398], [12159], [44072], [48664], [31547], [18736], [9259], [31], [16354], [21810], [4357], [37982], [5064], [2033], [32871], [47446], [62], [22158], [37387], [8743], [47007], [17981], [11049], [4622], [37916], [36786], [35138], [29925], [14157], [18095], [27829], [1181], [22226], [5709], [4725], [30189], [37014], [1254], [11380], [42989], [696], [24576], [39487], [30119], [1092], [8088], [2194], [9899], [14412], [21828], [3725], [13544], [5180], [44679], [34398], [3891], [28739], [14219], [37594], [49550], [11326], [6904], [17266], [5749], [10174], [23405], [9955], [38271], [41018], [13011], [48392], [36784], [24254], [21687], [23734], [5413], [41447], [45472], [10122], [17555], [15830], [47384], [12084], [31350], [47940], 
[11661], [27988], [45443], [905], [49651], [16614], [34993], [6781], [30803], [35869], [8001], [41604], [28118], [46462], [46762], [16262], [17281], [5774], [10943], [5013], [18257], [6750], [4713], [3951], [11899], [38791], [16943], [37596], [9318], [18413], [40473], [13208], [16375]]
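Lists shaped like `badwordsids_default` (one inner token-ID list per banned sequence) match what Hugging Face's `generate()` accepts as `bad_words_ids`; a hedged sketch of how such a list is typically applied (the model choice and prompt are illustrative, not how KoboldAI wires it up):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125m")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125m")

badwordsids = [[6880], [50256]]  # first two entries of badwordsids_default
ids = tok("Once upon a time", return_tensors="pt").input_ids
# Each inner list is a token sequence the sampler is forbidden to emit
out = model.generate(ids, max_new_tokens=20, bad_words_ids=badwordsids)
print(tok.decode(out[0]))
```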

View File

@@ -1,4 +1,4 @@
transformers==4.25.1
transformers==4.28.0
huggingface_hub==0.12.1
Flask==2.2.3
Flask-SocketIO==5.3.2
@@ -15,7 +15,7 @@ markdown
bleach==4.1.0
sentencepiece
protobuf
accelerate
accelerate==0.18.0
flask-session==0.4.0
marshmallow>=3.13
apispec-webframeworks

View File

@@ -2,10 +2,11 @@ torch >= 1.9, < 1.13
numpy
tqdm
requests
dm-haiku == 0.0.5
jax == 0.2.21
jaxlib >= 0.1.69, <= 0.3.7
transformers == 4.25.1
dm-haiku==0.0.9
jax==0.3.25
jaxlib==0.3.25
transformers == 4.28.0
chex == 0.1.5
huggingface_hub==0.12.1
progressbar2
git+https://github.com/VE-FORBRYDERNE/mesh-transformer-jax@ck

View File

@@ -1095,7 +1095,7 @@ def read_neox_checkpoint(state, path, config, checkpoint_shards=2):
koboldai_vars.status_message = ""
def load_model(path: str, driver_version="tpu_driver0.1_dev20210607", hf_checkpoint=False, socketio_queue=None, initial_load=False, logger=None, **kwargs) -> None:
def load_model(path: str, driver_version="tpu_driver_20221109", hf_checkpoint=False, socketio_queue=None, initial_load=False, logger=None, **kwargs) -> None:
global thread_resources_env, seq, tokenizer, network, params, pad_token_id
if "pad_token_id" in kwargs:
@@ -1270,11 +1270,6 @@ def load_model(path: str, driver_version="tpu_driver0.1_dev20210607", hf_checkpo
logger.message(f"KoboldAI has finished loading and is available at the following link for UI 1: {koboldai_vars.cloudflare_link}")
logger.message(f"KoboldAI has finished loading and is available at the following link for UI 2: {koboldai_vars.cloudflare_link}/new_ui")
global shard_xmap, batch_xmap
shard_xmap = __shard_xmap()
batch_xmap = __batch_xmap(shard_dim=cores_per_replica)
global badwords
# These are the tokens that we don't want the AI to ever write
badwords = jnp.array(koboldai_vars.badwordsids).squeeze()
@@ -1401,19 +1396,20 @@ def load_model(path: str, driver_version="tpu_driver0.1_dev20210607", hf_checkpo
#if "no_transpose" not in transforms and tensor.ndim == 2:
# tensor = tensor.T
tensor.unsqueeze_(0)
if tensor.dtype is torch.float16 or tensor.dtype is torch.float32:
tensor = tensor.bfloat16()
# Shard the tensor so that parts of the tensor can be used
# on different TPU cores
tensor = reshard_reverse(
tensor,
params["cores_per_replica"],
network.state["params"][spec["module"]][spec["param"]].shape,
)
tensor = jnp.array(tensor.detach())
if tensor.dtype is torch.float16 or tensor.dtype is torch.float32:
tensor = tensor.bfloat16()
network.state["params"][spec["module"]][spec["param"]] = move_xmap(
jax.dlpack.from_dlpack(torch.utils.dlpack.to_dlpack(
reshard_reverse(
tensor,
params["cores_per_replica"],
network.state["params"][spec["module"]][spec["param"]].shape,
)
)).copy(),
tensor,
np.empty(params["cores_per_replica"]),
)
@@ -1506,3 +1502,6 @@ def load_model(path: str, driver_version="tpu_driver0.1_dev20210607", hf_checkpo
model = GPTNeoForCausalLM.from_pretrained(koboldai_vars.model, revision=koboldai_vars.revision, cache_dir="cache")
#network.state = network.move_xmap(network.state, np.zeros(cores_per_replica))
global shard_xmap, batch_xmap
shard_xmap = __shard_xmap()
batch_xmap = __batch_xmap(shard_dim=cores_per_replica)