henk717
ac59e55d62
Smaller optimizations
2022-02-24 01:14:26 +01:00
henk717
8e9d9faa97
Merge pull request #82 from VE-FORBRYDERNE/tpu-config
Allow TPU models to specify settings/config in config.json
2022-02-24 00:53:40 +01:00
Gnome Ann
ad10ac8871
Allow TPU models to specify settings/config in config.json
2022-02-23 18:22:18 -05:00
henk717
7de3311000
Fix sentencepiece model saving
2022-02-23 22:04:41 +01:00
henk717
6151d16df0
Merge pull request #81 from VE-FORBRYDERNE/dematerialized
Use dematerialized loading in TPU backend for lower device memory usage
2022-02-23 07:11:26 +01:00
Gnome Ann
7ec549c726
Use dematerialized loading in TPU backend for lower device memory usage
2022-02-22 19:43:13 -05:00
henk717
fd7ba9f70e
Also check for Config in models/
2022-02-22 19:22:08 +01:00
henk717
306d96a8eb
Separate Drive Disconnect
2022-02-22 18:03:06 +01:00
henk717
a0518edc36
Temporary Transformers Git for XGLM
2022-02-22 02:42:04 +01:00
henk717
74012a24c9
Expose GDrive Models
2022-02-22 02:35:27 +01:00
henk717
9aeae94d0e
Cleanup leakage (Didn't appear in my commit list)
2022-02-22 02:32:02 +01:00
henk717
cb6ccacd64
Dependencies required for newer models
2022-02-21 21:17:12 +01:00
henk717
4ace11f5b8
Merge pull request #80 from VE-FORBRYDERNE/xglm-position-ids
Temporary fix for XGLM positional embedding issues
2022-02-21 00:47:20 +01:00
henk717
300db651de
Open models folder by default
2022-02-21 00:46:18 +01:00
Gnome Ann
da10e2dc1d
Don't crash if `XGLMSinusoidalPositionalEmbedding` doesn't exist
2022-02-20 17:41:00 -05:00
Gnome Ann
5dc4969173
Temporary fix for XGLM positional embedding issues
2022-02-20 14:17:24 -05:00
henk717
7c678820cd
Exclude Models from our Git
2022-02-20 19:36:14 +01:00
henk717
27cf59bb94
Merge pull request #79 from VE-FORBRYDERNE/xglm-eos
Prevent transformers XGLM from stopping generation on `</s>` token
2022-02-20 19:03:51 +01:00
Gnome Ann
a63fa3b067
Prevent transformers XGLM from stopping generation on `</s>` token
2022-02-19 23:15:16 -05:00
henk717
70e0295600
Merge branch 'KoboldAI:main' into united
2022-02-19 23:34:46 +01:00
henk717
acc5804820
Merge pull request #97 from mrseeker/patch-2
Add description of Janeway
2022-02-19 23:27:13 +01:00
henk717
ba7f0de0d5
Merge pull request #98 from AngryBeeSec/main
Update play-cuda.sh
2022-02-19 23:22:04 +01:00
AngryBeeSec
b6d885cf0a
Update play-cuda.sh
Allows the use of newer models
2022-02-19 16:26:20 -05:00
henk717
a47e93cee7
Separate Low Memory Mode
In 1.16 we had significantly faster loading speeds because we did less memory conservation; it's time to give users the choice. If you want the original, faster behavior and have the memory, run KoboldAI as usual. Otherwise, run play-lowmem.bat or start aiserver.py with --lowmem. On Colab, low memory mode remains the default to avoid breaking models that currently load fine there (a sketch of such a toggle follows below).
2022-02-18 16:21:28 +01:00
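The --lowmem flag and the Colab default come from the commit message above; everything else in this minimal sketch (the argparse wiring, the low_cpu_mem_usage mechanism, and the variable names) is an assumption for illustration, not KoboldAI's actual loading code.

```python
# Minimal sketch, not the actual aiserver.py implementation: how a --lowmem
# toggle could gate memory-conserving model loading.
import argparse

parser = argparse.ArgumentParser(description="KoboldAI launcher flags (sketch)")
parser.add_argument("--lowmem", action="store_true",
                    help="Trade loading speed for lower RAM usage")
parser.add_argument("--colab", action="store_true",
                    help="Colab mode; assumed here to keep low memory mode on")
args = parser.parse_args()

# Assumption: Colab keeps the memory-conserving path by default.
use_lowmem = args.lowmem or args.colab

# Assumed mechanism; the real loading path may differ.
load_kwargs = {"low_cpu_mem_usage": True} if use_lowmem else {}
# model = AutoModelForCausalLM.from_pretrained(model_name, **load_kwargs)
print("Low memory mode:", use_lowmem)
```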
henk717
4c84d731db
Merge branch 'KoboldAI:main' into united
2022-02-18 15:02:24 +01:00
Julius ter Pelkwijk
2b133548be
Add description of Janeway
2022-02-18 14:37:28 +01:00
henk717
90be138ac5
Add Janeway to the GPU Colab
2022-02-18 14:26:29 +01:00
henk717
8e03f1c612
Merge branch 'KoboldAI:main' into united
2022-02-18 14:21:34 +01:00
henk717
f06acb59be
Add the Janeway model
New model released by Mr.Seeker
2022-02-18 14:18:41 +01:00
henk717
cba93e29d2
Update aiserver.py
2022-02-18 02:11:08 +01:00
henk717
76a6c124dd
Quiet on Colab
Colab mode now also automatically activates Quiet mode to improve privacy. Thanks to the redo feature, we should no longer need the story output in the Colab console. Need something different for testing? Use --remote instead (a sketch of the assumed effect follows below).
2022-02-18 02:07:40 +01:00
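Only the behavior (Quiet mode keeps story text out of the Colab console) is stated in the commit above; the emit_story helper below is hypothetical and shown purely to illustrate that behavior.

```python
# Sketch of the assumed effect of Quiet mode: keep story text out of the
# console so it is not visible to whoever hosts the Colab session.
# emit_story() is a hypothetical helper, not a real KoboldAI function.
def emit_story(text: str, quiet: bool) -> None:
    """Forward text to the web UI; only echo to the console when not quiet."""
    # send_to_ui(text)  # placeholder for the real UI path
    if not quiet:
        print(text)

# Colab mode is described above as implying Quiet mode.
colab_mode = True
emit_story("The adventurer enters the cave.", quiet=colab_mode)
```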
henk717
02246dfc4d
Remote play improvements
Changes the proposed --share flag to --unblock to make it more apparent what the feature does: it unblocks the port for external access, but does not add remote play support. For remote play support without a proxy service I have added --host (a sketch of both flags follows below).
2022-02-18 01:08:12 +01:00
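The --unblock and --host flag names come from the commit message above; the binding logic in this sketch is an assumption about how such flags might be wired, not KoboldAI's actual code.

```python
# Illustrative sketch only: flag names from the commit message, wiring assumed.
import argparse

parser = argparse.ArgumentParser(description="Remote access flags (sketch)")
parser.add_argument("--unblock", action="store_true",
                    help="Open the port to external access (no remote play support)")
parser.add_argument("--host", action="store_true",
                    help="Enable remote play without a proxy service")
args = parser.parse_args()

# Assumed behavior: either flag listens on all interfaces instead of localhost.
bind_address = "0.0.0.0" if (args.unblock or args.host) else "127.0.0.1"
print("Web server would bind to", bind_address)
```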
henk717
9b72583110
Merge branch 'KoboldAI:main' into united
2022-02-18 00:37:34 +01:00
henk717
e571a17f84
Update readme.md
2022-02-15 20:18:21 +01:00
henk717
a05aef552c
Merge branch 'KoboldAI:main' into united
2022-02-14 18:10:56 +01:00
henk717
ca5b9f968f
Merge pull request #76 from VE-FORBRYDERNE/newline
Fix fairseq newline handling issues
2022-02-14 18:10:25 +01:00
henk717
50a96485a9
Fix dm-haiku
...
dm-haiku made a change that breaks compatibility with our other dependencies, so we pin version 0.0.5 to fix this.
2022-02-14 18:05:50 +01:00
Gnome Ann
ec54bc9d9b
Fix typo in `send_debug()`
2022-02-12 20:11:35 -05:00
Gnome Ann
f682c1229a
Fix fairseq newline handling issues
2022-02-12 13:23:59 -05:00
henk717
c1af8f72c3
Merge pull request #75 from ebolam/united
Fixed retry bug due to redo/pin code
2022-02-11 03:27:51 +01:00
ebolam
633152ee84
Fixed Retry bug due to redo/pin code
2022-02-10 10:01:07 -05:00
ebolam
cd00373cfb
Deleted unused svg
2022-02-10 09:21:07 -05:00
henk717
e1ef4e4fa8
Merge pull request #74 from ebolam/united
Redo, Pinning, and docker enhancements
2022-02-07 01:06:36 +01:00
ebolam
c0bbe9f810
Reverted docker-cuda to mainline version.
2022-02-06 19:04:13 -05:00
ebolam
586b989582
Redo bug fix
2022-02-06 18:53:24 -05:00
ebolam
98609a8abc
Merge branch 'united' of https://github.com/ebolam/KoboldAI into united
2022-02-06 13:48:34 -05:00
ebolam
80ae054cb5
Merge branch 'henk717:united' into united
2022-02-06 13:42:59 -05:00
ebolam
9e17ea9636
Fixed model downloading problem where models were downloaded multiple times
2022-02-06 13:42:46 -05:00
ebolam
8195360fcc
Merge branch 'united' of https://github.com/ebolam/KoboldAI into united
2022-02-06 12:31:45 -05:00
henk717
7695eeb31a
Merge branch 'KoboldAI:main' into united
2022-02-06 18:06:07 +01:00