Merge branch 'main' into united
@@ -39,8 +39,6 @@
 "\n",
 "For more information about KoboldAI check our our Github readme : https://github.com/KoboldAI/KoboldAI-Client/blob/main/readme.md\n",
 "\n",
-"For the larger AI models (That are typically more coherent) check out our **[TPU edition](https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/TPU.ipynb)**!\n",
-"\n",
 "---\n",
 "## How to load KoboldAI: Everything you need to know\n",
 "1. On a phone? First put your browser in desktop mode because of a Google Colab bug. Otherwise nothing will happen when you click the play button. Then tap the play button next to \"<-- Tap This if you play on Mobile\", you will see an audio player. Keep the audio player playing so Colab does not get shut down in the background.\n",
@@ -210,22 +208,6 @@
 "| [Fairseq Dense](https://huggingface.co/KoboldAI/fairseq-dense-2.7B) | Generic | Trained by Facebook Researchers this model stems from the MOE research project within Fairseq. This particular version has been converted by us for use in KoboldAI. It is known to be on par with the larger models from EleutherAI and considered as better for pop culture and language tasks. Because the model has never seen a new line (enter) it may perform worse on formatting and paragraphing. Compared to other models the dataset focuses primarily on literature and contains little else. |\n",
 "| [Neo](https://huggingface.co/EleutherAI/gpt-neo-2.7B) by EleutherAI | Generic | This is the base model for all the other 2.7B models, it is best used when you have a use case that we have no other models available for, such as writing blog articles or programming. It can also be a good basis for the experience of some of the softprompts if your softprompt is not about a subject the other models cover. |\n",
 "\n",
-"# [TPU Edition Model Descriptions](https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/TPU.ipynb)\n",
-"\n",
-"| Model | Style | Description |\n",
-"| --- | --- | --- |\n",
-"| [Nerys](https://huggingface.co/KoboldAI/fairseq-dense-13B-Nerys) by Mr Seeker | Novel/Adventure | Nerys is a hybrid model based on Pike (A newer Janeway), on top of the Pike dataset you also get some Light Novels, Adventure mode support and a little bit of Shinen thrown in the mix. The end result is a very diverse model that is heavily biased towards SFW novel writing, but one that can go beyond its novel training and make for an excellent adventure model to. Adventure mode is best played from a second person perspective, but can be played in first or third person as well. Novel writing can be done best from the first or third person. |\n",
-"| [Erebus](https://huggingface.co/KoboldAI/OPT-13B-Erebus) by Mr Seeker | NSFW | Erebus is our community's flagship NSFW model, being a combination of multiple large datasets that include Literotica, Shinen and erotic novels from Nerys and featuring thourough tagging support it covers the vast majority of erotic writing styles. This model is capable of replacing both the Lit and Shinen models in terms of content and style and has been well received as (one of) the best NSFW models out there. If you wish to use this model for commercial or non research usage we recommend choosing the 20B version as that one is not subject to the restrictive OPT license. |\n",
-"| [Janeway](https://huggingface.co/KoboldAI/fairseq-dense-13B-Janeway) by Mr Seeker | Novel | Janeway is a model created from Picard's dataset combined with a brand new collection of ebooks. This model is trained on 20% more content than Picard and has been trained on literature from various genres. Although the model is mainly focussed on SFW, romantic scenes might involve a degree of nudity. |\n",
-"| [Shinen](https://huggingface.co/KoboldAI/fairseq-dense-13B-Shinen) by Mr Seeker | NSFW | Shinen is an NSFW model trained on a variety of stories from the website Sexstories it contains many different kinks. It has been merged into the larger (and better) Erebus model. |\n",
-"| [Skein](https://huggingface.co/KoboldAI/GPT-J-6B-Skein) by VE\_FORBRYDERNE | Adventure | Skein is best used with Adventure mode enabled, it consists of a 4 times larger adventure dataset than the Adventure model making it excellent for text adventure gaming. On top of that it also consists of light novel training further expanding its knowledge and writing capabilities. It can be used with the You filter bias if you wish to write Novels with it, but dedicated Novel models can perform better for this task. |\n",
-"| [Adventure](https://huggingface.co/KoboldAI/GPT-J-6B-Adventure) by VE\_FORBRYDERNE | Adventure | Adventure is a 6B model designed to mimick the behavior of AI Dungeon. It is exclusively for Adventure Mode and can take you on the epic and wackey adventures that AI Dungeon players love. It also features the many tropes of AI Dungeon as it has been trained on very similar data. It must be used in second person (You). |\n",
-"| [Lit](https://huggingface.co/hakurei/lit-6B) by Haru | NSFW | Lit is a great NSFW model trained by Haru on both a large set of Literotica stories and high quality novels along with tagging support. Creating a high quality model for your NSFW stories. This model is exclusively a novel model and is best used in third person. |\n",
-"| [OPT](https://huggingface.co/facebook/opt-13b) by Metaseq | Generic | OPT is considered one of the best base models as far as content goes, its behavior has the strengths of both GPT-Neo and Fairseq Dense. Compared to Neo duplicate and unnecessary content has been left out, while additional literature was added in similar to the Fairseq Dense model. The Fairseq Dense model however lacks the broader data that OPT does have. The biggest downfall of OPT is its license, which prohibits any commercial usage, or usage beyond research purposes. |\n",
-"| [Neo(X)](https://huggingface.co/EleutherAI/gpt-neox-20b) by EleutherAI | Generic | NeoX is the largest EleutherAI model currently available, being a generic model it is not particularly trained towards anything and can do a variety of writing, Q&A and coding tasks. 20B's performance is closely compared to the 13B models and it is worth trying both especially if you have a task that does not involve english writing. Its behavior will be similar to the GPT-J-6B model since they are trained on the same dataset but with more sensitivity towards repetition penalty and with more knowledge. |\n",
-"| [Fairseq Dense](https://huggingface.co/KoboldAI/fairseq-dense-13B) | Generic | Trained by Facebook Researchers this model stems from the MOE research project within Fairseq. This particular version has been converted by us for use in KoboldAI. It is known to be on par with the larger 20B model from EleutherAI and considered as better for pop culture and language tasks. Because the model has never seen a new line (enter) it may perform worse on formatting and paragraphing. Compared to other models the dataset focuses primarily on literature and contains little else. |\n",
-"| [GPT-J-6B](https://huggingface.co/EleutherAI/gpt-j-6B) by EleutherAI | Generic | This model serves as the basis for most other 6B models (Some being based on Fairseq Dense instead). Being trained on the Pile and not biased towards anything in particular it is suitable for a variety of tasks such as writing, Q&A and coding tasks. You will likely get better result with larger generic models or finetuned models. |\n",
-"\n",
 "| Style | Description |\n",
 "| --------- | ------------------------------------------------------------ |\n",
 "| Novel | For regular story writing, not compatible with Adventure mode or other specialty modes. |\n",
@@ -2,30 +2,7 @@
 "cells": [
 {
 "cell_type": "markdown",
-"metadata": {
-"id": "view-in-github",
-"colab_type": "text"
-},
 "source": [
-"<a href=\"https://colab.research.google.com/github/KoboldAI/KoboldAI-Client/blob/main/colab/TPU.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
-]
-},
-{
-"cell_type": "markdown",
-"source": [
-"# GOOGLE HAS BROKEN SUPPORT FOR MESH TRANSFORMERS JAX IN THEIR DRIVER, THIS MESSAGE WILL BE REMOVED ONCE THE NOTEBOOK WORKS AGAIN\n",
-"---\n",
-"Are you a developer familair with Jax? We could use some help.\n",
-"Our Mesh Transformers Jax fork resides here (but unfortunately VE-Forbryderne has gone missing) : https://github.com/VE-FORBRYDERNE/mesh-transformer-jax\n",
-"\n",
-"This is combined with the TPU backend code you can find here: https://github.com/KoboldAI/KoboldAI-Client/blob/main/tpu_mtj_backend.py\n",
-"\n",
-"So far we know the driver initialization issues can be resolved when a newer version of Jax is used combined with a slight edit to tpu_mtj_backend.py to use Jax's built in code for driver intialization (Which is not present in the older version we currently use, hence that part being in our file).\n",
-"\n",
-"As far as we understand the issue is that xmap was broken in newer versions, and MTJ makes use of it. If someone can port this part of the code to be compatible with the newer Jax versions you can save this notebook!\n",
-"\n",
-"(Or Google, if you are reading this. Please reintroduce the old 0.1 compatibility since you broke an entire ecosystem of Mesh Transformers Jax users, which has historic value because it is the original implementation of GPT-J. There also do not seem to be alternatives that have the same amount of performance and model compatibility).\n",
-"\n",
 "# Welcome to KoboldAI on Google Colab, TPU Edition!\n",
 "KoboldAI is a powerful and easy way to use a variety of AI based text generation experiences. You can use it to write stories, blog posts, play a text adventure game, use it like a chatbot and more! In some cases it might even help you with an assignment or programming task (But always make sure the information the AI mentions is correct, it loves to make stuff up).\n",
 "\n",
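The notice removed above refers to xmap, the named-axis mapping API that mesh-transformer-jax is built on and that newer JAX releases broke. For context, here is a minimal sketch of that API — not code from this commit or from MTJ — assuming an older JAX release where `jax.experimental.maps.xmap` still exists; the names `layer` and `sharded_layer` are hypothetical, purely for illustration:

```python
# Hypothetical sketch of the deprecated named-axis xmap API that
# mesh-transformer-jax depends on. Assumes an older JAX release
# (roughly the 0.2.x era) where jax.experimental.maps.xmap is still
# available; later JAX versions broke and eventually removed it,
# which is why the TPU notebook stopped working.
import jax.numpy as jnp
from jax.experimental.maps import xmap

def layer(x, w):
    # Plain per-shard computation; xmap wraps it in a named "shard" axis.
    return jnp.dot(x, w)

# Map the leading dimension of x over a named "shard" axis; w is
# replicated ([...] means all of its axes stay positional).
sharded_layer = xmap(
    layer,
    in_axes=(["shard", ...], [...]),
    out_axes=["shard", ...],
)

x = jnp.ones((8, 4))  # 8 shards, each a length-4 vector
w = jnp.ones((4, 2))
print(sharded_layer(x, w).shape)  # (8, 2)
```

Without an `axis_resources` mapping, xmap vectorizes the named axis much like vmap; MTJ additionally binds such named axes to the TPU device mesh, which is the part that has no drop-in equivalent in current JAX.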
@@ -258,8 +235,7 @@
 "colab": {
 "name": "ColabKobold TPU",
 "provenance": [],
-"private_outputs": true,
-"include_colab_link": true
+"private_outputs": true
 },
 "kernelspec": {
 "display_name": "Python 3",
@@ -1,13 +1,13 @@
 #!/bin/bash
 git submodule update --init --recursive
-if [[ $1 = "cuda" ]]; then
+if [[ $1 = "cuda" || $1 = "CUDA" ]]; then
 wget -qO- https://micromamba.snakepit.net/api/micromamba/linux-64/latest | tar -xvj bin/micromamba
 bin/micromamba create -f environments/huggingface.yml -r runtime -n koboldai -y
 # Weird micromamba bug causes it to fail the first time, running it twice just to be safe, the second time is much faster
 bin/micromamba create -f environments/huggingface.yml -r runtime -n koboldai -y
 exit
 fi
-if [[ $1 = "rocm" ]]; then
+if [[ $1 = "rocm" || $1 = "ROCM" ]]; then
 wget -qO- https://micromamba.snakepit.net/api/micromamba/linux-64/latest | tar -xvj bin/micromamba
 bin/micromamba create -f environments/rocm.yml -r runtime -n koboldai-rocm -y
 # Weird micromamba bug causes it to fail the first time, running it twice just to be safe, the second time is much faster