diff --git a/README.md b/README.md
index 86661df3..f9be9660 100644
--- a/README.md
+++ b/README.md
@@ -20,9 +20,11 @@ Next step, (Windows) subfolder mode or B: option doesn't matter choose either
 [if on Linux] run `commandline.sh`
 
-`commandline.bat/commandline.sh` will put you in KoboldAI's virtual environment (as shown by (base) affixed to the prompt).
-
+`commandline.bat/commandline.sh` will put you in KoboldAI's virtual environment.
+On Windows, this will show (base) at the start of the prompt line.
+If it shows (base) on Linux, you most likely have a base conda environment that you need to deactivate (`conda deactivate`).
+
 Then run
 `cd repos`
 `cd gptq`
@@ -42,7 +44,7 @@ Then move your model folder to KoboldAI/models, and rename the .pt or .safetenso
 So - your .pt's model folder should look like this: "4bit.pt, config.json, generation_config.json, pytorch_model.bin.index.json, special_tokens_map.json, tokenizer.model, tokenizer_config.json"
 Note: the 4bit.pt file can be in the same folder as the regular HF .bin files it was quantized from, so long as the 4-bit toggle switch is on, it'll load the quantized model (4-bit switch explained below).
-If you haven't done so already, exit the command prompt/leave KAI's (base) venv
+If you haven't done so already, exit the command prompt/leave KAI's conda env.
 (Close the commandline window on Windows, run `exit` on Linux)
 Run `play.bat` [windows] or `play.sh` [linux]
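The model-folder layout described in the second hunk can be sanity-checked with a small shell snippet. This is a hypothetical helper, not part of KoboldAI; the expected file list is taken verbatim from the README text, and `check_model_dir` is an assumed name:

```shell
# Hypothetical helper: verify a 4-bit .pt model folder contains the
# files the README lists alongside 4bit.pt.
check_model_dir() {
  dir="$1"
  missing=0
  for f in 4bit.pt config.json generation_config.json \
           pytorch_model.bin.index.json special_tokens_map.json \
           tokenizer.model tokenizer_config.json; do
    # Report each expected file that is absent from the folder.
    [ -f "$dir/$f" ] || { echo "missing: $f"; missing=1; }
  done
  return $missing
}
```

Run it against your folder under KoboldAI/models before launching `play.bat`/`play.sh`; a nonzero exit status means at least one expected file is absent.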