ebolam
a1ee6849dc
Fixed custom paths from the menu structure
2023-05-19 18:28:47 -04:00
ebolam
6df5fe4ad0
Partial model loading from a custom path in the menu
2023-05-19 18:24:06 -04:00
ebolam
9df1f03b12
Fix for custom huggingface model menu entry
2023-05-19 14:28:36 -04:00
ebolam
36e679b366
Merge branch 'Model_Plugins' of https://github.com/ebolam/KoboldAI into Model_Plugins
2023-05-19 09:11:22 -04:00
ebolam
99cffd4755
Colab GPU edition fixes
2023-05-19 09:11:08 -04:00
ebolam
3db231562f
Merge pull request #382 from henk717/united
Update to united
2023-05-19 06:05:25 -04:00
ebolam
56d2705f4b
removed breakmodel command line arguments (except nobreakmodel)
2023-05-18 20:19:33 -04:00
ebolam
06f59a7b7b
Moved model backends to separate folders
added some model backend settings save/load
2023-05-18 20:14:33 -04:00
ebolam
4040538d34
Model Backends now defined in the menu
2023-05-18 18:34:00 -04:00
ebolam
182ecff202
Added model backend to the command line arguments
2023-05-18 16:01:17 -04:00
ebolam
f027d8b6e5
Improved valid-model detection and named model backends for the UI
2023-05-17 21:15:31 -04:00
Henk
59c96b5b7a
Unban fix
2023-05-15 22:38:12 +02:00
Henk
c5100b4eab
Unban Tensor
2023-05-15 22:21:22 +02:00
Henk
56443bc7ea
Unban torch._tensor._rebuild_tensor_v2
2023-05-15 21:44:01 +02:00
Henk
205c64f1ea
More universal pytorch folder detection
2023-05-13 20:26:55 +02:00
ebolam
c6b17889d0
Updated to latest united
2023-05-12 07:53:27 -04:00
ebolam
aaa9133899
Disk Cache working
UI valid marker broken for disk cache
2023-05-11 21:22:33 -04:00
ebolam
a6f0e97ba0
Working(?) breakmodel
2023-05-11 20:40:05 -04:00
ebolam
69d942c00c
Kind of working breakmodel
2023-05-11 20:22:30 -04:00
somebody
3065c1b40e
Ignore missing keys in get_original_key
2023-05-11 17:10:43 -05:00
somebody
c16336f646
Add traceback to debug log on fallback
2023-05-11 17:10:19 -05:00
ebolam
a9c785d0f0
Fix for Horde
2023-05-11 14:20:14 -04:00
ebolam
e9c845dc2a
Fix for badwordIDs
2023-05-11 14:14:52 -04:00
ebolam
4605d10c37
Next iteration. Model Loading is broken completely now :)
2023-05-11 12:08:35 -04:00
Henk
edd9c7d782
Warning polish
2023-05-11 15:13:59 +02:00
ebolam
77dd5aa725
Minor update
2023-05-11 09:09:09 -04:00
Henk
e932364a1e
RWKV support
2023-05-11 14:56:12 +02:00
ebolam
71aee4dbd8
First concept of model plugins with a conceptual UI.
Completely breaks UI2 model loading.
2023-05-10 16:30:46 -04:00
Bogdan Drema
d53726bed6
fix: tpu tokenizers errors
2023-05-08 18:24:34 +01:00
Henk
bb206f598e
Don't load peft when unused
2023-05-06 18:55:26 +02:00
somebody
b7db709c47
PEFT: Change directory structure to be inside model
2023-05-06 11:16:09 -05:00
somebody
f02ddab7c7
Merge branch 'united' of https://github.com/henk717/KoboldAI into peft
2023-05-06 10:47:14 -05:00
Henk
33969b5845
Basic HF code execution support
2023-05-05 17:23:01 +02:00
somebody
35b56117e6
Basic PEFT support
2023-05-03 18:51:01 -05:00
Henk
a87d5d6f23
Remove HF's llama workaround
2023-05-03 20:18:40 +02:00
Llama
35d344b951
Remove torch dependency and more generic dim0 workaround
Remove torch dependency from hf.py
Make workaround for dimension zero values of token_ids
more generic to handle every token, not just newlines.
2023-05-03 09:48:16 -07:00
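A rough sketch of the dim0 workaround described in the commit above, with illustrative names rather than the actual hf.py code: a tokenizer can hand back a bare scalar id for a single-token string, and normalizing that in plain Python avoids pulling in torch just to reshape it.

    def encode_to_ids(tokenizer, text):
        ids = tokenizer.encode(text)
        # A "dimension zero" result is a single scalar id; wrap it so every
        # token is handled the same way, not just newlines.
        if isinstance(ids, int):
            return [ids]
        return list(ids)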
Llama
3768848548
Fix tokenization and whitespace issues with llama-derived models
Work around the 'soft' prefix space behavior of sentencepiece.
Override encode to restore the deleted HF support for decode_with_prefix_space.
Override decode to skip the soft space and return true decoded tokens.
Allow submitting chat messages with embedded newlines.
Split sentences between punctuation and whitespace, rather than after whitespace.
Also include trailing quotes and brackets after sentence stoppers.
This avoids splitting ." and .) into two tokens, for instance.
Insert whitespace at the beginning of the author's note, since sentences are
split with leading whitespace.
Remove spurious newlines at the end of chat responses.
2023-05-03 01:27:11 -07:00
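A rough sketch of the "soft" prefix-space workaround described above, assuming a Hugging Face style sentencepiece tokenizer; the body is illustrative rather than KoboldAI's actual override.

    def decode_with_prefix_space(tokenizer, token_ids):
        # SentencePiece silently drops a leading space on decode, so decode
        # with a throwaway prefix token and slice its text off; whatever
        # whitespace the real tokens start with then survives.
        prefix_id = tokenizer.encode("x", add_special_tokens=False)[0]
        decoded = tokenizer.decode([prefix_id] + list(token_ids))
        return decoded[len(tokenizer.decode([prefix_id])):]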
henk717
724ba43dc1
Merge pull request #342 from one-some/model-structure-and-maybe-rwkv
Move overrides to better places
2023-05-03 03:34:17 +02:00
somebody
a0f4ab5c6a
Move bad token grabber until after newlinemode has been deduced
2023-05-02 20:23:36 -05:00
somebody
efe268df60
Move overrides to better places
2023-05-02 20:18:33 -05:00
Henk
de7b760048
Typo Fix
2023-05-03 01:02:50 +02:00
somebody
111028642e
Fix tokenizer fallback for llama
2023-05-01 19:42:52 -05:00
somebody
f6b5548131
Support safetensors in get_sharded_checkpoint_num_tensors
2023-05-01 19:15:27 -05:00
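In the spirit of the safetensors support above, the tensors in a set of sharded safetensors files can be counted from their headers alone; the names here are illustrative, not the actual get_sharded_checkpoint_num_tensors implementation.

    from safetensors import safe_open

    def count_shard_tensors(shard_paths):
        total = 0
        for path in shard_paths:
            # safe_open reads only the header, so no weights are loaded.
            with safe_open(path, framework="pt") as f:
                total += len(list(f.keys()))
        return total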
somebody
97e84928ba
Download all shards correctly on aria2 and raise on bad load key
2023-05-01 18:53:36 -05:00
somebody
933dbd634a
HFInferenceModel: Make badwordsids not unique to torch
2023-05-01 17:13:33 -05:00
somebody
ce3d465972
Remove some debug
2023-05-01 17:03:34 -05:00
one-some
455b8257a9
Implement softprompt hack
2023-04-28 10:26:59 -05:00
somebody
ace4364339
Two more times
2023-04-27 21:13:26 -05:00
somebody
446f38ee9d
One more time
2023-04-27 21:07:34 -05:00
somebody
2eee535540
Actually fix decoding with soft prompts
it really wants a tensor
2023-04-27 21:01:12 -05:00