diff --git a/aiserver.py b/aiserver.py
index daa9b21d..ccc765e1 100644
--- a/aiserver.py
+++ b/aiserver.py
@@ -46,11 +46,11 @@ class colors:
 modellist = [
     ["Custom Neo (GPT-Neo / Converted GPT-J)", "NeoCustom", ""],
     ["Custom GPT-2 (eg CloverEdition)", "GPT2Custom", ""],
-    ["GPT Neo 1.3B", "EleutherAI/gpt-neo-1.3B", "4GB"],
-    ["GPT Neo 2.7B", "EleutherAI/gpt-neo-2.7B", "8GB"],
-    ["GPT-2", "gpt2", "600MB"],
-    ["GPT-2 Med", "gpt2-medium", "1GB"],
-    ["GPT-2 Large", "gpt2-large", "8GB"],
+    ["GPT Neo 1.3B", "EleutherAI/gpt-neo-1.3B", "8GB"],
+    ["GPT Neo 2.7B", "EleutherAI/gpt-neo-2.7B", "16GB"],
+    ["GPT-2", "gpt2", "1GB"],
+    ["GPT-2 Med", "gpt2-medium", "2GB"],
+    ["GPT-2 Large", "gpt2-large", "4GB"],
     ["GPT-2 XL", "gpt2-xl", "8GB"],
     ["InferKit API (requires API key)", "InferKit", ""],
     ["Google Colab", "Colab", ""],
diff --git a/readme.md b/readme.md
index 5f19c123..cfbdd7b8 100644
--- a/readme.md
+++ b/readme.md
@@ -124,11 +124,11 @@ The models listed in the KoboldAI menu are generic models meant to easily get yo
 | [gpt-neo-2.7B-picard](https://storage.henk.tech/KoboldAI/gpt-neo-2.7B-picard.7z) | Novel / 2.7B / Neo Custom | 8GB | 2.0 | Picard is another Novel model, this time exclusively focused on SFW content of various genres. Unlike the name suggests this goes far beyond Star Trek stories and is not exclusively sci-fi. |
 | [gpt-neo-2.7B-shinen](https://storage.henk.tech/KoboldAI/gpt-neo-2.7B-shinen.7z) | Novel / 2.7B / Neo Custom | 8GB | 2.0 | The most NSFW of them all, Shinen WILL make things sexual. This model will assume that whatever you are doing is meant to be a sex story and will sexualize constantly. It is designed for people who find Horni to tame. It was trained on SexStories instead of Literotica and was trained on tags making it easier to guide the AI to the right context. |
 | [GPT-J-6B (Converted)](https://storage.henk.tech/KoboldAI/gpt-j-6b.7z) | Generic / 6B / Neo Custom | 16GB | 1.1 | This is the basis for all the other GPT-J-6B models, it has been trained on The Pile and is an open alternative for GPT Curie. Because it is a generic model it is not particularly good at anything and needs a long introduction to understand what you want to do. It is however the most flexible because it has no bias. If you want to do something that has no specific model available, such as writing a webpage article or coding this can be a good one to try. This specific version was converted by our community to be able to run as a GPT-Neo model on your GPU. |
-| [AID-16Bit](https://storage.henk.tech/KoboldAI/aid-16bit.zip) | Adventure / GPT-2 Custom | 8GB | 2.0 | The original AI Dungeon Classic model converted to Pytorch and then converted to a 16-bit Model making it half the size. |
-| [model_v5_pytorch](https://storage.henk.tech/KoboldAI/model_v5_pytorch.zip) (AI Dungeon's Original Model) | Adventure / GPT-2 Custom | 16GB | 2.0 | This is the original AI Dungeon Classic model converted to the Pytorch format compatible with AI Dungeon Clover and KoboldAI. We consider this model inferior to the GPT-Neo version because it has more artifacting due to its conversion. This is however the most authentic you can get to AI Dungeon Classic. If you have this much VRAM we strongly recommend using Adventure 6B instead for much better results it has better tuning and more general knowledge than this model. |
+| [AID-16Bit](https://storage.henk.tech/KoboldAI/aid-16bit.zip) | Adventure / 1.5B / GPT-2 Custom | 4GB | 2.0 | The original AI Dungeon Classic model converted to Pytorch and then converted to a 16-bit Model making it half the size. |
+| [model_v5_pytorch](https://storage.henk.tech/KoboldAI/model_v5_pytorch.zip) (AI Dungeon's Original Model) | Adventure / 1.5B / GPT-2 Custom | 8GB | 2.0 | This is the original AI Dungeon Classic model converted to the Pytorch format compatible with AI Dungeon Clover and KoboldAI. We consider this model inferior to the GPT-Neo version because it has more artifacting due to its conversion. This is however the most authentic you can get to AI Dungeon Classic. |
 | [Novel 774M](https://storage.henk.tech/KoboldAI/Novel%20model%20774M.rar) | Novel / 774M / GPT-2 Custom | 4GB | 2.0 | Novel 774M is made by the AI Dungeon Clover community, because of its small size and novel bias it is more suitable for CPU players that want to play with speed over substance or players who want to test a GPU with a low amount of VRAM. These performance savings are at the cost of story quality and you should not expect the kind of in depth story capabilities that the larger models offer. It was trained for SFW stories. |
 | [Smut 774M](https://storage.henk.tech/KoboldAI/Smut%20model%20774M%2030K.rar) | Novel / 774M / GPT-2 Custom | 4GB | 2.0 | The NSFW version of the above, its a smaller GPT-2 based model made by the AI Dungeon Clover community. Gives decent speed on a CPU at the cost of story quality like the other 774M models. |
-| [Mia](https://storage.henk.tech/KoboldAI/Mia.7z) | Adventure / 125M / Neo Custom | 2GB | 2.0 | Mia is the smallest Adventure model, it runs at very fast speeds on the CPU which makes it a good testing model for developers who do not have GPU access. Because of its small size it will constantly attempt to do actions on behalf of the player and it will not produce high quality stories. If you just need a small model for a quick test, or if you want to take the challenge of trying to run KoboldAI entirely on your phone this would be an easy model to use due to its small RAM requirements and fast (loading) speeds. |
+| [Mia](https://storage.henk.tech/KoboldAI/Mia.7z) | Adventure / 125M / Neo Custom | 1GB | 2.0 | Mia is the smallest Adventure model, it runs at very fast speeds on the CPU which makes it a good testing model for developers who do not have GPU access. Because of its small size it will constantly attempt to do actions on behalf of the player and it will not produce high quality stories. If you just need a small model for a quick test, or if you want to take the challenge of trying to run KoboldAI entirely on your phone this would be an easy model to use due to its small RAM requirements and fast (loading) speeds. |