ebolam
128c77e0fd
Default the model backend to huggingface when none is specified while loading a model through the command line
2023-05-19 19:01:11 -04:00
ebolam
a1ee6849dc
Fixed custom paths in the menu structure
2023-05-19 18:28:47 -04:00
ebolam
6df5fe4ad0
Partial implementation of loading a model from a custom path in the menu
2023-05-19 18:24:06 -04:00
ebolam
309f1c432a
Added the ability to disable model backends in the model backend code.
2023-05-19 17:43:13 -04:00
ebolam
b21884fc31
Better error reporting
2023-05-19 17:34:15 -04:00
ebolam
db30402c3b
Move RWKV to use Huggingface model backend
2023-05-19 17:30:36 -04:00
ebolam
756a33c63e
Added a try block around each model backend so loading continues with the other models.
2023-05-19 17:28:39 -04:00
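A loose sketch of the pattern the commit above describes; the package path and backend names are assumptions for illustration, not the actual KoboldAI layout.

```python
# Hypothetical sketch of the "continue on failure" backend loading described above.
# The module path and backend names are assumed, not the real KoboldAI structure.
import importlib

model_backends = {}
for name in ("huggingface", "rwkv", "readonly"):  # assumed backend folder names
    try:
        module = importlib.import_module(f"modeling.inference_models.{name}")
        model_backends[name] = module
    except Exception as exc:
        # A failing backend is reported and skipped so the rest still load.
        print(f"Skipping model backend '{name}': {exc}")
```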
ebolam
9df1f03b12
Fix for custom huggingface model menu entry
2023-05-19 14:28:36 -04:00
ebolam
7e0778c871
Remove extra debug stuff
2023-05-19 09:14:37 -04:00
ebolam
99cffd4755
Colab GPU edition fixes
2023-05-19 09:11:08 -04:00
ebolam
56d2705f4b
Removed breakmodel command line arguments (except nobreakmodel)
2023-05-18 20:19:33 -04:00
ebolam
06f59a7b7b
Moved model backends to separate folders
...
Added save/load for some model backend settings
2023-05-18 20:14:33 -04:00
ebolam
4040538d34
Model Backends now defined in the menu
2023-05-18 18:34:00 -04:00
ebolam
182ecff202
Added the model backend to the command line arguments
2023-05-18 16:01:17 -04:00
ebolam
f027d8b6e5
Better-working valid detection and named model backends for the UI
2023-05-17 21:15:31 -04:00
ebolam
c6b17889d0
Updated to latest united
2023-05-12 07:53:27 -04:00
Henk
67df9b917f
Reintroduce 4.29 Transformers
2023-05-12 09:08:07 +02:00
ebolam
aaa9133899
Disk Cache working
...
UI valid marker broken for disk cache
2023-05-11 21:22:33 -04:00
ebolam
69d942c00c
Kind of working breakmodel
2023-05-11 20:22:30 -04:00
Henk
20b54eb9ff
Revert 4.29 due to unforeseen consequences
2023-05-11 19:06:39 +02:00
ebolam
4605d10c37
Next iteration. Model Loading is broken completely now :)
2023-05-11 12:08:35 -04:00
ebolam
77dd5aa725
Minor update
2023-05-11 09:09:09 -04:00
Henk
e932364a1e
RWKV support
2023-05-11 14:56:12 +02:00
ebolam
71aee4dbd8
First concept of model plugins with a conceptual UI.
...
Completely breaks UI2 model loading.
2023-05-10 16:30:46 -04:00
somebody
a9e342ca64
Fix TPU API errors
2023-05-08 17:34:59 -05:00
somebody
f02ddab7c7
Merge branch 'united' of https://github.com/henk717/KoboldAI into peft
2023-05-06 10:47:14 -05:00
Henk
2730879c61
Better warning until something more robust is in place
2023-05-05 21:28:06 +02:00
Henk
d508b4a319
More max_context_length flexibility
2023-05-05 19:50:56 +02:00
Henk
33969b5845
Basic HF code execution support
2023-05-05 17:23:01 +02:00
somebody
35b56117e6
Basic PEFT support
2023-05-03 18:51:01 -05:00
henk717
7f5242db17
Merge pull request #344 from pi6am/fix/llama-tokens
...
Fix/llama tokens
2023-05-03 19:07:47 +02:00
ebolam
fa3611b994
Update to United
2023-05-03 10:54:17 -04:00
Llama
3768848548
Fix tokenization and whitespace issues with llama-derived models
...
Work around the 'soft' prefix space behavior of sentencepiece.
Override encode to restore the deleted HF support for decode_with_prefix_space.
Override decode to skip the soft space and return true decoded tokens.
Allow submitting chat messages with embedded newlines.
Split sentences between punctuation and whitespace, rather than after whitespace.
Also include trailing quotes and brackets after sentence stoppers.
This avoids splitting ." and .) into two tokens, for instance.
Insert whitespace at the beginning of the author's note, since sentences are split with leading whitespace.
Remove spurious newlines at the end of chat responses.
2023-05-03 01:27:11 -07:00
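A minimal sketch of the decode workaround described in the commit above, assuming a sentencepiece-based tokenizer such as transformers' LlamaTokenizer; the helper name and sentinel choice are illustrative, not the actual KoboldAI override.

```python
# Illustrative only: sentencepiece-style tokenizers drop or insert a "soft"
# leading space when decoding, so a mid-text continuation can lose whitespace
# it actually had. Decoding behind a sentinel token and trimming it off
# recovers the true leading whitespace.
def decode_without_soft_space(tokenizer, token_ids):
    """Return decoded text with only the whitespace that was actually encoded."""
    sentinel_id = tokenizer.encode("x", add_special_tokens=False)[0]  # arbitrary sentinel
    decoded = tokenizer.decode([sentinel_id] + list(token_ids))
    sentinel_text = tokenizer.decode([sentinel_id])
    return decoded[len(sentinel_text):]
```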
Henk
5d1ee39250
Fix loadmodelsettings
2023-05-03 04:21:37 +02:00
henk717
724ba43dc1
Merge pull request #342 from one-some/model-structure-and-maybe-rwkv
...
Move overrides to better places
2023-05-03 03:34:17 +02:00
somebody
4b3b240bce
Move loadmodelsettings
2023-05-02 20:33:37 -05:00
Henk
480919a2a7
Nicer way of serving lite
2023-05-03 01:16:02 +02:00
Henk
03e10bed82
/lite (Not functional yet)
2023-05-03 01:04:51 +02:00
somebody
c95be636a4
Merge branch 'united' of https://github.com/henk717/KoboldAI into model-structure-and-maybe-rwkv
2023-05-01 17:08:20 -05:00
ebolam
5a32159e58
Remove debug prints
2023-05-01 10:53:02 -04:00
Henk
545f79086d
Ban EOS token in N mode
2023-04-30 18:48:22 +02:00
one-some
19817a271b
More colab
2023-04-28 10:16:15 -05:00
one-some
b3614b64b1
Hello Colab
2023-04-28 10:10:26 -05:00
one-some
f9d162c001
Cut out things until it works
2023-04-27 10:10:17 -05:00
henk717
b0bbdc0c29
Merge pull request #333 from ebolam/united
...
New Editor
2023-04-27 15:52:13 +02:00
onesome
9579298df7
Better fallback
2023-04-25 22:28:07 -05:00
onesome
98cd6aa246
Make RWKV experimental
2023-04-25 17:13:06 -05:00
onesome
b8bef641ff
Merge branch 'united' of https://github.com/henk717/KoboldAI into model-structure-and-maybe-rwkv
2023-04-25 16:54:53 -05:00
Henk
9eaa2aba47
Isolate OPT Tokenizer Fix to OPT models
2023-04-25 22:49:56 +02:00
ebolam
d7c46f668c
Fix tab vs. space error
2023-04-21 14:11:51 -04:00