0cc4m | c8d00b7a10 | 2023-04-02 18:36:31 +02:00
Add CPU offloading support for GPT-NeoX, GPT-J and OPT

0cc4m | e742083703 | 2023-04-02 11:17:29 +02:00
Fix multi-gpu-offloading

0cc4m | 2729b77640 | 2023-04-02 10:32:19 +02:00
Add offload.py adapted from llama_inference_offload.py, with multi-gpu support and some improvements. Not yet functional, and still just supports Llama

0cc4m | 110f8229c5 | 2023-04-01 21:33:05 +02:00
Add cudatoolkit-dev for compilation, compatible gcc 9 and update transformers to fix error in gptq

0cc4m | bf0c999412 | 2023-04-01 14:19:51 +02:00
Update GPTQ to support AMD

0cc4m | d3a5ca6505 | 2023-04-01 08:52:08 +00:00
Update gptq submodule to latest

0cc4m | 6eae457479 | 2023-03-31 15:36:03 +02:00
Fix 4bit groupsize param letter
Use g instead of b in the groupsize suffix, for example 4bit-128g.safetensors (see the parsing sketch at the end of this log)

0cc4m | aa2292b3a4 | 2023-03-30 19:40:49 +02:00
Enable multi-gpu support

0cc4m | 61b13604b6 | 2023-03-30 10:57:04 +02:00
Fix bug in 4-bit load fallback

0cc4m | 9d0477f5f7 | 2023-03-29 22:05:44 +00:00
Fix bug where it picks old model despite new one available

0cc4m | 73d5ec0e5d | 2023-03-29 20:07:26 +00:00
Pull latest gptq-changes

0cc4m | a0bc770426 | 2023-03-29 19:49:05 +00:00
Add basic groupsize support
Write groupsize into filename, for example 4bit-128b.safetensors for groupsize 128

0cc4m | f6f7687cc0 | 2023-03-29 14:47:59 +00:00
Add 4bit safetensor support, improve loading code

0cc4m | 8d008b87a6 | 2023-03-29 13:27:11 +00:00
Add OPT support

0cc4m | ef6fe680a9 | 2023-03-28 06:30:02 +00:00
Fix high VRAM usage caused by workaround for scalar type error

0cc4m | 0f1fc46078 | 2023-03-27 21:30:43 +00:00
Fix errors during inference

0cc4m | d1a2005a27 | 2023-03-27 20:45:21 +00:00
Add support for old and new 4-bit format. Old one needs 4bit-old.pt file to launch

0cc4m | 2e7a8a1a66 | 2023-03-27 04:48:21 +00:00
Adapt KoboldAI to latest gptq changes

0cc4m | 9dcba38978 | 2023-03-24 19:07:28 +00:00
Pin transformers to a working Llama-compatible version

0cc4m | 026eb3205e | 2023-03-22 22:12:06 +00:00
Fix 4-bit loading error when not loading in 4-bit

0cc4m | 8941428c66 | 2023-03-22 06:22:34 +00:00
Fix Kobold loading to CPU in 4-bit, causing CUDA ASSERT error

0cc4m | c7edc764b9 | 2023-03-21 21:58:31 +00:00
Fix llama loading

0cc4m | ecd065a881 | 2023-03-21 21:40:59 +00:00
Overhaul 4-bit support to load with a toggle

0cc4m | 4cfc1219d4 | 2023-03-20 19:13:46 +00:00
Add gptq as submodule

0cc4m | 3b7505dc28 | 2023-03-20 19:06:40 +00:00
Merge remote-tracking branch 'united/united' into 4bit

0cc4m | 858657f669 | 2023-03-20 09:16:30 +01:00
Fix zipfile folder identification fix for Windows

0cc4m | 60acf59316 | 2023-03-19 21:20:13 +00:00
Improve 4-bit llama support, add 4-bit gptj and gptneox support

Henk | 90a7eb6153 | 2023-03-17 12:40:08 +01:00
LLama tokenizer settings

Henk | 1235b71bb5 | 2023-03-17 01:48:10 +01:00
Merge branch 'main' into united

Henk | 219b824b9b | 2023-03-17 01:28:59 +01:00
SocketIO Requirements Pin

henk717 | 86c87b23c0 | 2023-03-17 01:17:58 +01:00
Merge branch 'KoboldAI:main' into united

0cc4m | 5d17692c79 | 2023-03-16 05:24:58 +00:00
Remove except Exception so that errors actually show up

YellowRoseCx | b3b454bbe4 | 2023-03-15 00:03:43 -05:00
Update huggingface.yml

YellowRoseCx | bf677a32f6 | 2023-03-14 17:09:06 -05:00
Merge remote-tracking branch 'catboxanon/test/4bit' into yr4bit

YellowRoseCx | 2909910bcc | 2023-03-14 17:04:33 -05:00
Merge branch 'henk717:united' into dev-yr

henk717 | db7b53f52d | 2023-03-14 01:22:14 +01:00
Merge pull request #310 from nkpz/united
Fix out of range error after editing actions

catboxanon | 5f3770bb58 | 2023-03-13 19:34:17 -04:00
Merge branch 'henk717:united' into test/4bit

henk717 | 5249045c35 | 2023-03-13 22:46:00 +01:00
Merge pull request #304 from YellowRoseCx/united-yr
added local rng_states variable and fixed minor typo

henk717 | c96e96f95e | 2023-03-13 21:40:11 +01:00
Merge pull request #307 from jojorne/jojorne-patch-fix-save-loading-with-wi-features
Fix save loading between v1 and v2 to v3 with wi features

Henk | 8da04a98a4 | 2023-03-13 18:41:25 +01:00
Better Runtime Isolation

Nick Perez | 0dce4c700f | 2023-03-13 07:00:51 -04:00
Just reverse the range

Nick Perez | b4b24f1389 | 2023-03-13 06:21:25 -04:00
Fix out of range after deletion in for loop

jojorne | 4b8d4cde7d | 2023-03-12 20:41:34 -03:00
fix spacing

jojorne | 4219e3e8d3 | 2023-03-12 20:38:58 -03:00
Removing the root folder is not supported

jojorne | e5c1b0506a | 2023-03-12 20:05:40 -03:00
Renaming the root folder is not supported

catboxanon | bde31217f1 | 2023-03-11 12:15:58 -05:00
improve model None check

catboxanon | 1808b0d2ec | 2023-03-11 12:13:22 -05:00
Another safety check for if model is not loaded

jojorne | 53f06903c2 | 2023-03-11 13:54:01 -03:00
revert more unrelated code

jojorne | c87ef60db1 | 2023-03-11 13:48:41 -03:00
revert more unrelated code

jojorne | e4ad8547a7 | 2023-03-11 13:39:19 -03:00
revert unrelated code
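
Note on the groupsize filename convention: commit a0bc770426 introduced encoding the GPTQ groupsize in the checkpoint filename and commit 6eae457479 switched the suffix letter, so 4bit-128g.safetensors denotes groupsize 128 (the earlier form was 4bit-128b.safetensors). The sketch below is a minimal, hypothetical illustration of how such a filename could be parsed; parse_groupsize is an invented helper for this note, not code taken from the repository.

    import re
    from pathlib import Path

    def parse_groupsize(filename: str) -> int | None:
        """Return the groupsize encoded in a 4-bit checkpoint filename, or None."""
        stem = Path(filename).stem                    # "4bit-128g.safetensors" -> "4bit-128g"
        match = re.search(r"4bit-(\d+)g$", stem)      # newer "g" suffix (commit 6eae457479)
        if match is None:
            match = re.search(r"4bit-(\d+)b$", stem)  # older "b" suffix (commit a0bc770426)
        return int(match.group(1)) if match else None

    # Usage:
    assert parse_groupsize("4bit-128g.safetensors") == 128
    assert parse_groupsize("4bit-128b.safetensors") == 128   # old naming still recognized
    assert parse_groupsize("4bit.safetensors") is None       # no groupsize encoded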