mirror of
https://github.com/KoboldAI/KoboldAI-Client.git
synced 2025-01-10 15:22:59 +01:00
a47e93cee7
In 1.16 we had significantly faster loading speeds because we did less memory conservation; it's time to give users the choice. If you want the original, faster behavior and have the memory, run KoboldAI as usual. Otherwise, run play-lowmem.bat or aiserver.py with --lowmem. On Colab, low-memory loading remains the default to avoid breaking models that would otherwise load fine.
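As a rough illustration of the choice described above, the sketch below shows how a `--lowmem` toggle could be wired into an argparse-based entry point like aiserver.py. This is a hypothetical sketch, not KoboldAI's actual code: the `load_model` helper and its return values are invented for illustration.

```python
import argparse

# Hypothetical sketch: wiring a --lowmem flag into an argparse entry point.
parser = argparse.ArgumentParser(description="Model server (illustrative)")
parser.add_argument(
    "--lowmem",
    action="store_true",
    help="Trade loading speed for lower peak memory use",
)

def load_model(lowmem: bool) -> str:
    # In low-memory mode, weights would be loaded piecewise and freed
    # aggressively; otherwise everything is read into RAM at once for speed.
    return "conservative loader" if lowmem else "fast loader"

# Simulate being launched via play-lowmem.bat, which passes --lowmem.
args = parser.parse_args(["--lowmem"])
print(load_model(args.lowmem))
```

Launching without the flag would select the fast loader, matching the "run KoboldAI as usual" path.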
1 line
16 B
Batchfile
play --lowmem %*