Commit Graph

379 Commits

Author SHA1 Message Date
henk717 d7edd9d04b
Merge pull request #33 from VE-FORBRYDERNE/loader
Fix a typo in requirements_mtj.txt
2021-11-21 04:09:12 +01:00
Gnome Ann 68e4b66fc5 Fix a typo in requirements_mtj.txt 2021-11-19 22:28:34 -05:00
henk717 409be6645a Finetune version of rocm
Separate file so people can easily go back to the legacy implementation based on finetune (recommended until Hugging Face's compatibility is improved). You can install and use both.
2021-11-20 03:14:18 +01:00
henk717 50defbaa04
Merge pull request #32 from VE-FORBRYDERNE/loader
Move the TPU backend code into this repository
2021-11-20 01:01:18 +01:00
Gnome Ann 286ed51534 Add a requirements.txt for TPU backend 2021-11-19 18:20:02 -05:00
Gnome Ann a65c4de840 Integrate TPU backend
This commit puts the TPU backend code directly into the KoboldAI code
to make it easier to modify.
2021-11-19 18:06:57 -05:00
henk717 b926170fb0
Merge branch 'KoboldAI:main' into united 2021-11-19 00:05:21 +01:00
henk717 4e791b2f2d
Merge pull request #82 from VE-FORBRYDERNE/editor
Fix some editor issues in Firefox and possibly mobile browsers
2021-11-19 00:04:31 +01:00
Gnome Ann bb51198f40 Fix some editor issues in Firefox and possibly mobile browsers
Firefox 93.0 broke the ability to edit text across multiple chunks or
across multiple paragraphs; if you tried, nothing would happen.

Also, we are no longer using Mutation Observers to detect when a chunk
is modified. We are now using the beforeinput event.
2021-11-18 13:18:18 -05:00
henk717 4a678deaa5
Merge branch 'KoboldAI:main' into united 2021-11-18 06:51:44 +01:00
henk717 9b73d6a913
Merge pull request #81 from VE-FORBRYDERNE/patch
Replace slashes in model name with underscores
2021-11-18 06:51:21 +01:00
henk717 b25c54cf91 Polishing and Optimizations
Multiple things have changed. For now, models default to half mode even on the official transformers to make sure it's as efficient on the GPU as finetune's. GPU selection is streamlined, and cache files are now stored inside the KoboldAI folder (for the most part). A new command line parameter to force the models to run at their full size still needs to be added for the few users who would want a quality bump at the cost of RAM.
2021-11-18 00:06:57 +01:00
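A minimal sketch of the behaviour described above, assuming the Hugging Face transformers and PyTorch APIs; the model id, cache path, and the idea of a full-precision flag are placeholders, not KoboldAI's actual code:

```python
import os
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder model id
# Keep the download cache inside the KoboldAI folder rather than the user's home.
CACHE_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "cache")

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, cache_dir=CACHE_DIR)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, cache_dir=CACHE_DIR)

if torch.cuda.is_available():
    # Default to half precision on the GPU so VRAM use stays close to finetune's
    # fork; a hypothetical full-precision flag could skip the .half() call.
    model = model.half().to("cuda")
```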
henk717 27ee45b9cc
Merge pull request #31 from VE-FORBRYDERNE/cpu
Fix gen_in device logic in generate()
2021-11-17 22:42:31 +01:00
Gnome Ann 2f0b673b28 Fix gen_in device logic in generate() 2021-11-17 16:37:37 -05:00
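The log only gives the title here; as a hedged sketch of the kind of device check a fix like this involves (the function and flag names are illustrative, not KoboldAI's):

```python
import torch

def place_gen_in(gen_in: torch.Tensor, use_gpu: bool, use_breakmodel: bool) -> torch.Tensor:
    """Move the generation input onto the device the model will actually run on."""
    if (use_gpu or use_breakmodel) and torch.cuda.is_available():
        return gen_in.to("cuda")
    return gen_in.to("cpu")
```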
henk717 e71271933a
Merge pull request #29 from VE-FORBRYDERNE/hidden-size
Fix hidden size detection for GPTJForCausalLM
2021-11-17 22:30:24 +01:00
henk717 26eb2cb6ce
Merge pull request #30 from VE-FORBRYDERNE/dynamic-scan
Support for multiple gens per action with dynamic scan
2021-11-17 22:30:12 +01:00
Gnome Ann a1bc10246c Support for multiple gens per action with dynamic scan 2021-11-17 16:17:59 -05:00
henk717 485034b6bb ROCm Conda
Allows anyone to easily create a ROCm-compatible conda environment. It is currently set to the newer transformers; you can edit the GitHub link if you want the finetune one.
2021-11-17 22:15:01 +01:00
Gnome Ann 98a72e34a4 Replace slashes in model name with underscores 2021-11-17 15:36:36 -05:00
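A one-line illustration of what this commit title describes; the example model id is an assumption:

```python
# A Hugging Face model id contains a slash, which is awkward in a directory name,
# so it is replaced before building cache/save paths.
model_name = "EleutherAI/gpt-neo-2.7B"    # example id only
safe_name = model_name.replace("/", "_")  # -> "EleutherAI_gpt-neo-2.7B"
```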
Gnome Ann ab1a65f13a Fix hidden size detection for GPTJForCausalLM 2021-11-15 11:56:02 -05:00
henk717 ffdc5fc276
Merge pull request #28 from VE-FORBRYDERNE/gpu
Use the old GPU generation mode when all layers are on one GPU
2021-11-15 07:33:48 +01:00
Gnome Ann 17d07b280a Correct `gpu_layers` to `gpu_blocks` 2021-11-14 21:08:49 -05:00
Gnome Ann 805cb0c8b9 Make sure device_config() still works with all layers on CPU 2021-11-14 18:46:00 -05:00
Gnome Ann 80aee07816 Use old GPU-only generation if all layers are on the same GPU
Apparently, this mode uses less RAM than breakmodel does.
2021-11-14 18:42:18 -05:00
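A hedged sketch of the decision described in this commit, assuming a per-GPU list of layer counts like breakmodel's; the helper name and data layout are illustrative:

```python
def use_gpu_only_mode(gpu_blocks, total_layers):
    """True when a single GPU holds every layer, so the old GPU-only path can be used."""
    gpus_in_use = [i for i, count in enumerate(gpu_blocks) if count > 0]
    return len(gpus_in_use) == 1 and sum(gpu_blocks) == total_layers
```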
Gnome Ann b0ab30cec4 Re-enable GPU-only generation option 2021-11-14 18:24:51 -05:00
henk717 3e38b462c6 Hidden Size fix for GPT2 Custom
Replaced the JS Hidden Size load with the newer function to fix these models
2021-11-14 16:40:04 +01:00
henk717 f227a876c0
Merge pull request #27 from VE-FORBRYDERNE/united
Merge branch 'main' into united
2021-11-14 03:59:26 +01:00
Gnome Ann 21b19b81dd Merge branch 'main' into united 2021-11-13 21:58:27 -05:00
henk717 7b47a8457a
Merge pull request #80 from VE-FORBRYDERNE/main
Improved Unix Support
2021-11-14 03:56:56 +01:00
henk717 ecea169553 Improved Unix Support
Changes the line-endings to the Unix format and sets KoboldAI to launch with Python3 if executed directly.

(cherry picked from commit 5b0977ceb6807c0f80ce6717891ef5e23c8eeb77)
2021-11-13 21:54:32 -05:00
henk717 1596a238f7 Breakmodel automation
The only change is a small addition to the breakmodel section: GPU 0 is automatically chosen if the CLI options are used without specifying breakmodel. Line endings have been changed to Linux formatting for compatibility reasons.
2021-11-14 03:13:52 +01:00
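A rough sketch of the CLI behaviour described above; the flag names are assumptions rather than KoboldAI's real options:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--model", type=str)
parser.add_argument("--breakmodel_layers", type=str, default=None)  # illustrative flag name
args = parser.parse_args()

if args.model and args.breakmodel_layers is None:
    gpu_choice = 0  # no layer split given on the CLI, so default everything to GPU 0
```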
henk717 8a916116e3
Remove device=0 because of incompatibility
Device=0 breaks some of the PyTorch implementations, so it was removed to restore hardware compatibility to 0.16 levels.
2021-11-14 02:33:27 +01:00
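The commit does not say which call carried device=0; purely as a hedged illustration with the transformers pipeline API, dropping the argument leaves device placement to the library's default:

```python
from transformers import pipeline

# No device=0 here: placement is left to the library default, which keeps the
# code working on machines where forcing device 0 caused problems.
generator = pipeline("text-generation", model="gpt2")
```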
henk717 4bcffc614e
Allow directly running KoboldAI from CLI in Linux
It's made for Python 3, so we assume python3 is installed in its usual location. If it isn't, you can always run it yourself with whatever command you used prior to this change.
2021-11-14 01:57:43 +01:00
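A short illustration of what running the script directly implies; whether the shebang points at /usr/bin/python3 or goes through env is an assumption here:

```python
#!/usr/bin/env python3
# With a Python 3 shebang at the top of the entry script and the execute bit set
# (chmod +x aiserver.py), the server can be started as ./aiserver.py on Linux.
```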
henk717 21ae45e9ab
Merge branch 'KoboldAI:main' into united 2021-11-11 17:05:39 +01:00
henk717 8ad3863854
Merge pull request #26 from VE-FORBRYDERNE/sp-patch
More softprompting bug fixes
2021-11-11 17:05:32 +01:00
henk717 4ebece0a6f
Merge pull request #79 from VE-FORBRYDERNE/broadcast-patch
Don't broadcast emit calls inside do_connect()
2021-11-11 17:05:13 +01:00
Gnome Ann 1fadcbe1e3 Send allowsp command on connect instead of on startup 2021-11-11 00:18:46 -05:00
Gnome Ann 2fe815e092 Don't broadcast emit calls inside do_connect()
This prevents the "thinking" animation from appearing on top of the
submit button under certain circumstances:

* When someone connects to the KoboldAI server while the model is
  generating (occurs after generation finishes)
* Occasionally, the browser may suddenly disconnect from and reconnect to
  Flask-SocketIO during generation, which causes the same problem
2021-11-11 00:14:12 -05:00
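A hedged sketch of the distinction this commit relies on, using Flask-SocketIO; the event name and payload are illustrative, not necessarily KoboldAI's:

```python
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

@socketio.on("connect")
def do_connect():
    # A plain emit() inside a connect handler answers only the connecting client.
    # broadcast=True would push the message to every client, which is what let the
    # "thinking" animation appear for users who were not generating.
    emit("from_server", {"cmd": "connected"})
```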
Gnome Ann 11b0291bc4 Use model.transformer.embed_dim if model.transformer.hidden_size doesn't exist 2021-11-10 17:47:14 -05:00
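A small sketch of the fallback this commit describes; the helper name is made up, and in KoboldAI the result presumably feeds vars.modeldim (see the next entry):

```python
def get_model_dim(model):
    """Prefer hidden_size and fall back to embed_dim (illustrative helper)."""
    transformer = model.transformer
    if hasattr(transformer, "hidden_size"):
        return transformer.hidden_size
    return transformer.embed_dim
```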
Gnome Ann 752e19a2bb Fix vars.modeldim not always being set 2021-11-10 17:38:30 -05:00
henk717 e6599db78f
Merge pull request #25 from VE-FORBRYDERNE/united
Merge branch 'main' into united
2021-11-10 03:37:43 +01:00
Gnome Ann 2679df9664 Merge branch 'main' into united 2021-11-09 21:33:14 -05:00
henk717 c2371cf801
Merge pull request #23 from VE-FORBRYDERNE/scan-test
Dynamic world info scan
2021-11-10 03:31:42 +01:00
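The pull request title is all the log records here; as a loose sketch of what a dynamic world info scan generally means, checking freshly generated text for world info keys so matching entries can be injected before generation continues, with entirely illustrative names and data layout:

```python
def find_triggered_entries(generated_text, world_info):
    """Return world info entries whose keys appear in the newly generated text."""
    text = generated_text.lower()
    return [entry for entry in world_info
            if any(key.lower() in text for key in entry["keys"])]
```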
henk717 d5a26e8c20
Fixed root permissions
The Docker setup no longer runs these commands as root; root permissions were added for the relevant commands to fix the Docker.
2021-11-10 03:17:12 +01:00
henk717 4af0d9dabd
Merge pull request #78 from VE-FORBRYDERNE/patch
Allow remote mode to load from client-side story files
2021-11-06 16:58:05 +01:00
Gnome Ann 02a56945de Version bump 2021-11-06 11:50:56 -04:00
henk717 bc0f9c8032 Allow remote mode to load from client-side story files
(cherry picked from commit a1345263df)
2021-11-06 11:48:20 -04:00
Gnome Ann 7ea6f58b1a Resolve merge conflict 2021-11-05 11:03:50 -04:00
henk717 a1345263df
Merge pull request #22 from VE-FORBRYDERNE/filereader
Allow remote mode to load from client-side story files
2021-11-05 02:30:20 +01:00
Gnome Ann 7a0b0b0d2d Remove debug logging from application.js 2021-11-04 19:36:45 -04:00