Commit Graph

3668 Commits

Author SHA1 Message Date
somebody
56bcd32f6d Model: Fix ReadOnly
whoops
2023-02-26 12:40:20 -06:00
somebody
a73804ca1e Accelerate: Remove HAS_ACCELERATE
Accelerate has been a dependency for a while, and as such we probably
shouldn't be lugging around code that assumes it isn't present.
2023-02-26 12:18:06 -06:00
somebody
5e3b0062ee Model: Tokenizer fix 2023-02-26 12:17:49 -06:00
somebody
8d49b5cce1 Model: More TPU stuff 2023-02-26 12:09:26 -06:00
somebody
d53d2bcc45 Model: Fix crash on full GPU load 2023-02-25 18:19:46 -06:00
somebody
465e22fa5c Model: Fix bugs and introduce hack for visualization
Hopefully I remove that atrocity before the PR
2023-02-25 18:12:49 -06:00
somebody
ffe4f25349 Model: Work on stoppers and stuff 2023-02-25 17:12:16 -06:00
somebody
6b4905de30 Model: Port rest of models over
Generation's still broken, but it's a start
2023-02-25 16:05:56 -06:00
somebody
f8c4158ebc Model: Successful load implementation
The goal of this series of commits is to have an implementation-agnostic
interface for models, thus being less reliant on HF Transformers for model
support. A model object will have a method for generation, a list of callbacks
to be run on every token generation, a list of samplers that will modify
probabilities, etc. Basically anything HF can do should be easily
implementable with the new interface :^)

Currently I've tested the loading of pre-downloaded models with
breakmodel between GPUs, and that works, though essentially no testing
has been done in the larger scheme of things. For now this is about
the only supported configuration, and generation isn't very functional.
2023-02-24 21:41:44 -06:00
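The commit above describes the intended interface: a model object with a generation method, per-token callbacks, and a chain of samplers that modify probabilities. A minimal sketch of that shape is below; all names (`InferenceModel`, the callback and sampler signatures, the dummy backend) are assumptions for illustration, not KoboldAI's actual API.

```python
from typing import Callable, List


class InferenceModel:
    """Backend-agnostic model wrapper: a generate method, callbacks run on
    every token, and pluggable samplers, as the commit message outlines."""

    def __init__(self) -> None:
        # Callbacks run on every generated token; returning True stops
        # generation (a "stopper").
        self.token_callbacks: List[Callable[[int], bool]] = []
        # Samplers transform the probability distribution before a token
        # is picked.
        self.samplers: List[Callable[[List[float]], List[float]]] = []

    def generate(self, prompt_ids: List[int], max_tokens: int) -> List[int]:
        out = list(prompt_ids)
        for _ in range(max_tokens):
            probs = self._raw_probs(out)
            for sampler in self.samplers:
                probs = sampler(probs)
            token = max(range(len(probs)), key=probs.__getitem__)  # greedy pick
            out.append(token)
            if any(cb(token) for cb in self.token_callbacks):
                break
        return out

    def _raw_probs(self, ids: List[int]) -> List[float]:
        # Dummy backend; a real implementation would call HF Transformers,
        # MTJ/TPU code, etc. behind this same interface.
        return [0.1, 0.2, 0.7]
```

Anything a concrete backend can do (stopping criteria, probability warping) plugs in through the two lists rather than through backend-specific hooks.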
henk717
0e7b2c1ba1 Merge pull request #293 from jojorne/jojorne-patch-save-load-story-with-UTF-8-encoding
Save/Load Story with UTF-8 encoding.
2023-02-22 19:59:33 +01:00
jojorne
d3bedfcbda Include koboldai_vars.save_story(). 2023-02-22 15:42:56 -03:00
jojorne
d6c9f5f1f5 Save/Load Story with UTF-8 encoding. 2023-02-22 14:40:42 -03:00
henk717
72135f7156 Merge pull request #292 from YellowRoseCx/united
updates PyTorch to allow AMD gpu splitting
2023-02-21 01:52:46 +01:00
YellowRoseCx
83bd1f0db6 updates PyTorch to allow AMD gpu splitting
Feature update for AMD GPUs:
Changes the installed PyTorch version from 1.12.1+rocm5.1.1 to the newer stable 1.13.1+rocm5.2, allowing AMD GPUs to utilize VRAM splitting across multiple GPUs
2023-02-20 18:40:21 -06:00
Henk
b3083034ea UI2 Botname 2023-02-19 17:12:14 +01:00
Henk
9e6a5db745 UI1 Botname 2023-02-19 16:22:26 +01:00
henk717
93f313d6e3 Merge pull request #291 from pi6am/fix/save-as
Fix an exception using Save As from the classic UI
2023-02-19 13:21:10 +01:00
Llama
117f0659c3 Fix exception using Save As from the classic UI
The `saveas` method was modified to take a data dict, but one of the
else blocks still referred to the previous `name` parameter. Assign
to `name` to fix the `NameError: name 'name' is not defined` exception.
2023-02-18 23:41:17 -08:00
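The bug pattern described above (a signature change from a `name` parameter to a `data` dict leaving one branch referencing the old local) can be reconstructed in miniature; the real `saveas` in KoboldAI differs, so this simplified version is an assumption for illustration only.

```python
def saveas(data: dict) -> str:
    # After the signature changed from saveas(name) to saveas(data),
    # `name` must be assigned from the dict before any branch uses it;
    # without this line, the else branch below raised
    # NameError: name 'name' is not defined.
    name = data.get("name", "")
    if name.endswith(".json"):
        path = name
    else:
        path = name + ".json"  # this branch previously read the old parameter
    return path
```

The fix is simply to bind `name` unconditionally at the top of the function so every branch sees the same local.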
Henk
cd566caf20 Revision Fixes (Removes the workaround) 2023-02-19 00:51:50 +01:00
henk717
886153f0b7 Merge pull request #286 from YellowRoseCx/united
Added tooltip to WI noun section - fixed file location
2023-02-18 19:05:41 +01:00
henk717
3358d1dc53 Merge pull request #289 from jojorne/jojorne-patch-keep-genseqs-visible-on-submit-memory
Don't hide genseqs on submit memory
2023-02-18 19:01:33 +01:00
Henk
aa6ce9088b Fix vscode artifacting 2023-02-18 18:47:15 +01:00
Henk
a9a724e38c Merge branch 'main' into united 2023-02-18 18:14:03 +01:00
YellowRoseCx
7daa927fbb Merge branch 'henk717:united' into united 2023-02-17 15:57:15 -06:00
jojorne
150d0ea695 Don't hide genseqs on submit memory. 2023-02-17 05:49:07 -03:00
Llama
54dd4abeb8 Merge pull request #19 from henk717/united
Merge united
2023-02-17 00:27:44 -08:00
Henk
e905f2db2d More API stuff to User 2023-02-16 02:50:52 +01:00
Henk
f50f5b530a Move OAI to User Settings 2023-02-16 02:48:20 +01:00
YellowRoseCx
00a8806e0d Added tooltip to WI noun section 2023-02-13 20:36:41 -06:00
henk717
3752ad1c2e Merge pull request #282 from ebolam/UI2
Fix for UI1 upload bug
2023-02-13 21:04:29 +01:00
ebolam
c8059ae07e Fix for UI1 upload bug 2023-02-13 14:35:55 -05:00
Henk
ec3ed9b4d9 Don't save TQDM 2023-02-12 04:29:36 +01:00
henk717
d7cce6473a Merge pull request #279 from ebolam/UI2
Chat mode stopper fix, Class Split Adjustments
2023-02-11 23:58:54 +01:00
henk717
5f108bdf98 Merge pull request #280 from SillyLossy/booleanforunited
Fix boolean API schemas name clash
2023-02-11 16:06:50 +01:00
--global
a680d2baa5 Fix boolean API schemas name clash 2023-02-11 15:11:35 +02:00
Henk
cc01ad730a Don't install safetensors for MTJ 2023-02-11 11:20:21 +01:00
Henk
86ab741481 Don't install safetensors for MTJ 2023-02-11 11:17:15 +01:00
ebolam
89e98191a0 Class Split Fix 2023-02-10 16:30:45 -05:00
ebolam
426041f1cf Class Fix 2023-02-10 16:22:37 -05:00
ebolam
606137f901 Class split fix for status bar 2023-02-10 16:17:03 -05:00
ebolam
d2ff58c2dd Chat mode stopper modified to only stop, not trim output (using Henky's regex for that) 2023-02-10 16:10:23 -05:00
Henk
b58daa1ba1 Pin Flask-cloudflared 2023-02-10 19:11:13 +01:00
Henk
84452d1b11 Pin Flaskcloudflared 2023-02-10 19:01:07 +01:00
Henk
503e7e780e Cleanup bridge on Colab (To prevent future bans) 2023-02-09 23:49:13 +01:00
Henk
cd63882299 Fix an invalid crash in the horde bridge 2023-02-09 23:40:09 +01:00
ebolam
4439e1d637 Chat Mode Stopper Test 2023-02-09 11:40:00 -05:00
ebolam
0487b271b8 Merge branch 'UI2' of https://github.com/ebolam/KoboldAI into UI2 2023-02-09 10:54:48 -05:00
ebolam
b29d4cef29 Add basic authentication option for webUI image generation 2023-02-09 10:54:38 -05:00
ebolam
709bd634b3 Merge pull request #365 from ebolam/chatmodestoppertest
Fix for chat mode stopper
2023-02-09 10:52:34 -05:00
Llama
2567a061ca Merge pull request #18 from henk717/united
Merge united: Unblock themes
2023-02-05 00:06:46 -08:00