
Thanks for checking out the KoboldAI Client! Get support and updates on the subreddit:
https://www.reddit.com/r/KoboldAI/

[ABOUT]

This is a browser-based front-end for AI-assisted writing with multiple local & remote AI models. 
It offers the standard array of tools, including Memory, Author's Note, World Info, Save & Load, 
adjustable AI settings, formatting options, and the ability to import existing AI Dungeon adventures.
Current UI Snapshot: https://imgur.com/mjk5Yre

For local generation, KoboldAI uses Transformers (https://huggingface.co/transformers/) to interact 
with the AI models. This can be done either on CPU, or GPU with sufficient hardware. If you have a 
high-end GPU with sufficient VRAM to run your model of choice, see 
(https://www.tensorflow.org/install/gpu) for instructions on enabling GPU support.

Transformers/Tensorflow can still be used on CPU if you do not have high-end hardware, but generation
times will be much longer. Alternatively, KoboldAI also supports utilizing remotely-hosted models. 
The currently supported remote APIs are InferKit and Google Colab, see the dedicated sections below 
for more info on these.

[SETUP]

1. Install a 64-bit version of Python.
	(Development was done on 3.7; I have not tested newer versions.)
	Windows download link: https://www.python.org/ftp/python/3.7.9/python-3.7.9-amd64.exe
2. When installing Python, make sure "Add Python to PATH" is selected.
	(If pip isn't working, run the installer again and choose Modify to select Optional features.)
3. Run install_requirements.bat.
	(This will install the necessary Python packages via pip.)
4. Run play.bat.
5. Select a model from the list. Flask will start and give you a message that it's ready to connect.
6. Open a web browser and enter http://127.0.0.1:5000/
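
Once Flask reports it is ready, you can optionally sanity-check the connection from Python before
opening a browser. This is a minimal sketch, assuming the default address from step 6:

```python
import urllib.request

def server_ready(url="http://127.0.0.1:5000/"):
    """Return True if the KoboldAI web server answers at the given URL."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.getcode() == 200
    except OSError:  # covers connection refused, DNS failure, and timeouts
        return False

print(server_ready())
```

If this prints False, the server is not up yet (or is listening on a different port).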

[ENABLE COLORS IN WINDOWS 10 COMMAND LINE]

If you see strange numeric tags in the console output, your console of choice does not have
color support enabled. On Windows 10, you can enable color support by launching the registry editor,
navigating to Computer\HKEY_CURRENT_USER\Console, and adding a REG_DWORD value named
VirtualTerminalLevel set to 1.
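
The same registry tweak can be applied from Python with the standard-library winreg module. This
is a hedged sketch: it only takes effect on Windows 10 and is a no-op elsewhere.

```python
import sys

def enable_vt_colors():
    """Create VirtualTerminalLevel=1 under HKEY_CURRENT_USER\\Console.

    Only meaningful on Windows 10; on other platforms this does nothing
    and returns False.
    """
    if sys.platform != "win32":
        return False
    import winreg  # Windows-only standard-library module
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, "Console") as key:
        winreg.SetValueEx(key, "VirtualTerminalLevel", 0, winreg.REG_DWORD, 1)
    return True
```

Restart the console after running this so the change takes effect.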

[ENABLE GPU FOR SUPPORTED VIDEO CARDS]

1. Install the NVIDIA CUDA Toolkit from https://developer.nvidia.com/cuda-10.2-download-archive
2. Visit PyTorch's website (https://pytorch.org/get-started/locally/) and select Pip under "Package" 
and your version of CUDA under "Compute Platform" (the link above is for 10.2) to get the pip3 command.
3. Copy and paste the pip3 command into a command prompt to install torch with GPU support.

Be aware that inference will be MUCH faster in GPU mode, but if your GPU doesn't have enough 
VRAM to load the model, the application will crash.
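
After installing torch, you can verify that it actually sees your GPU with a short check. This is
a hedged sketch: it returns None when torch is not installed at all, False when no CUDA device is
visible, and the device name otherwise.

```python
def cuda_status():
    """Report GPU availability for torch.

    Returns None if torch isn't installed, False if no CUDA device is
    visible, otherwise the name of the first CUDA device.
    """
    try:
        import torch
    except ImportError:
        return None
    if not torch.cuda.is_available():
        return False
    return torch.cuda.get_device_name(0)

print(cuda_status())
```

If this prints False after following the steps above, torch was likely installed without CUDA
support; rerun the pip3 command generated by the PyTorch website.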

[IMPORT AI DUNGEON GAMES]

To import your games from AI Dungeon, first grab CuriousNekomimi's AI Dungeon Content Archive Toolkit:
https://github.com/CuriousNekomimi/AIDCAT
1. Follow the video instructions for getting your access_token, and run aidcat.py in a command prompt.
2. Choose option [1] Download your saved content.
3. Choose option [2] Download your adventures.
4. Save the JSON file to your computer using the prompt.
5. Run KoboldAI, and after connecting to the web GUI, press the Import button at the top.
6. Navigate to the JSON file exported from AIDCAT and select it. A prompt will appear in the GUI 
presenting you with all Adventures scraped from your AI Dungeon account.
7. Select an Adventure and click the Accept button.
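
The exported file is plain JSON, so you can inspect it yourself before importing. The snippet
below is only an illustrative sketch: it assumes a JSON list of objects each carrying a "title"
field, which is a hypothetical layout, not AIDCAT's documented schema.

```python
import json

def list_adventures(path):
    """List adventure titles from an exported JSON file.

    Assumes (hypothetically) a JSON array of objects with a "title"
    field; entries without one are shown as "(untitled)".
    """
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    return [entry.get("title", "(untitled)") for entry in data]
```

Open the export in a text editor to see the actual field names your file uses.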

[HOST GPT-NEO ON GOOGLE COLAB]

If your computer does not have an 8GB GPU to run GPT-Neo locally, you can now run a Google Colab
notebook hosting a GPT-Neo-2.7B model remotely and connect to it using the KoboldAI client.
See the instructions on the Colab at the link below:
https://colab.research.google.com/drive/1uGe9f4ruIQog3RLxfUsoThakvLpHjIkX?usp=sharing

[FOR INFERKIT INTEGRATION]

If you would like to use InferKit's Megatron-11b model, sign up for a free account on their website.
https://inferkit.com/
After verifying your email address, sign in and click on your profile picture in the top right.
In the drop-down menu, click "API Key".
On the API Key page, click "Reveal API Key" and copy it. When starting KoboldAI and selecting the
InferKit API model, you will be asked to paste your API key into the terminal. After entering,
the API key will be stored in the client.settings file for future use.
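The storage step can be pictured with a short sketch: the settings file is JSON, and the key is
merged in without clobbering other saved settings. The "apikey" field name here is an assumption
for illustration; check your own client.settings for the exact field KoboldAI writes.

```python
import json
import os

def save_api_key(key, path="client.settings"):
    """Merge an API key into a JSON settings file.

    The "apikey" field name is a hypothetical example, not necessarily
    the exact field KoboldAI uses.
    """
    settings = {}
    if os.path.exists(path):
        with open(path, encoding="utf-8") as f:
            settings = json.load(f)
    settings["apikey"] = key
    with open(path, "w", encoding="utf-8") as f:
        json.dump(settings, f, indent=4)
    return settings
```

Because the file holds other settings too, merging (rather than overwriting) preserves them.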
You can see your remaining budget for generated characters on their website under "Billing & Usage".