Henk
f1e4664d56
Dependency improvements
...
Add psutil from conda to avoid the need for a compiler; finetuneanon should no longer be used. People who really want to use it are on their own.
2022-11-11 21:13:51 +01:00
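The dependency change above can be sketched as a conda environment fragment. The file name and the neighboring entries are hypothetical; the point is only that psutil is listed under the conda `dependencies:` section (prebuilt binary, no compiler needed) rather than under `pip:`:

```yaml
# hypothetical environments/huggingface.yml fragment
dependencies:
  - python=3.8
  - psutil        # from conda: ships a prebuilt wheel, no C compiler required
  - pip
```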
Henk
eb52ebd082
Merge branch 'main' into united
2022-11-03 00:22:30 +01:00
henk717
09b5ffc09d
Merge pull request #175 from VE-FORBRYDERNE/gptj-patch
...
Fix GPT-J model loading in TPU Colab when `vocab_size` is not divisible by 8
2022-11-03 00:13:50 +01:00
vfbd
b20d80ca2a
Add vocab padding to embedding bias in gptj.json
2022-11-02 19:02:09 -04:00
henk717
2e3a80b8ea
Merge branch 'KoboldAI:main' into united
2022-10-26 23:11:26 +02:00
henk717
7b5a766b4a
Merge pull request #172 from VE-FORBRYDERNE/accelerate-patch
...
Fix "is on the meta device" error when loading model with disk cache
2022-10-26 22:42:05 +02:00
vfbd
3233e78c56
Fix "is on the meta device" error when loading model with disk cache
2022-10-26 16:00:45 -04:00
Henk
442a9760b8
Hide V2 Saves
2022-10-23 19:03:18 +02:00
henk717
2300fb46ff
Merge branch 'KoboldAI:main' into united
2022-10-23 18:29:28 +02:00
Henk
8ee795055c
Force compatible HF Hub
2022-10-23 18:28:50 +02:00
Henk
ea8b50d31e
Conda fix for update script
2022-10-23 16:00:18 +02:00
Henk
0da404d4f8
Conda conflict fix
2022-10-23 14:10:44 +02:00
Henk
4699ded3ce
Tuner Dependencies
2022-10-22 19:00:06 +02:00
henk717
351fb3c80b
Merge pull request #232 from VE-FORBRYDERNE/mkultra
...
Universal mkultra-based soft prompt tuner
2022-10-22 14:13:42 +02:00
henk717
10a779d8c1
Merge pull request #231 from ebolam/united
...
Add parameter to Colab to use google drive
2022-10-22 14:13:32 +02:00
vfbd
f7b799be56
Apply tokenizer fixes to prompt_tuner.py
2022-10-21 17:06:17 -04:00
ebolam
d588dc0096
Check if dir exists before creating
2022-10-19 11:19:04 -04:00
ebolam
73865ba066
Add parameter to Colab for not using google drive (data would be ephemeral)
2022-10-19 11:05:17 -04:00
henk717
f8be854e09
Merge branch 'KoboldAI:main' into united
2022-10-17 21:06:10 +02:00
henk717
2795ced3a4
Merge pull request #168 from VE-FORBRYDERNE/api-patch
...
Fix regex for the prompt parameter of the POST /story/end endpoint
2022-10-17 20:38:34 +02:00
vfbd
9ff50d81fd
Fix regex for the prompt parameter of the POST /story/end endpoint
2022-10-17 14:36:23 -04:00
henk717
c6ed656a76
Merge pull request #230 from pi6am/fix/lua_kobold_modeltype
...
Fix/lua kobold modeltype
2022-10-14 19:50:19 +02:00
Llama
e5d0cc7b49
Fix exception thrown by kobold.modeltype in Lua
...
Fixes this exception:
File "aiserver.py", line 3389, in lua_get_modeltype
hidden_size = get_hidden_size_from_model(model)
NameError: name 'get_hidden_size_from_model' is not defined
The kobold.modeltype method eventually attempts to call
get_hidden_size_from_model in Python, but this method
was previously defined only within a local scope and so
is not visible from within lua_get_modeltype. Since
get_hidden_size_from_model only accesses its model argument,
there is no reason not to make it a module-level method.
Also change the severity of several more Lua error logs to error.
2022-10-14 09:20:33 -07:00
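The scoping fix described above can be sketched as follows. This is a hypothetical trimmed-down reproduction, not the actual aiserver.py bodies: the helper is promoted to module level, so `lua_get_modeltype` can resolve it instead of raising `NameError`.

```python
def get_hidden_size_from_model(model):
    # Module-level now. Previously this was nested inside a model-loading
    # function, so the name was invisible from lua_get_modeltype.
    # It only touches its model argument, so module scope is safe.
    return model.config.hidden_size


def lua_get_modeltype(model):
    # Hypothetical trimmed-down body: the helper resolves at module scope.
    hidden_size = get_hidden_size_from_model(model)
    return f"unknown model with hidden size {hidden_size}"
```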
Llama
6eb3abbdb8
Merge pull request #2 from henk717/united
...
Merging henk717/united
2022-10-13 20:33:34 -07:00
henk717
fff7837a4a
Merge pull request #229 from pi6am/feature/anote-kwarg
...
Feature/anote kwarg
2022-10-13 23:04:46 +02:00
henk717
be5ffe763c
Merge pull request #228 from VE-FORBRYDERNE/transpose
...
Slightly decrease TPU loading times
2022-10-13 15:35:28 +02:00
Llama
8357c3e485
Merge branch 'united' into feature/anote-kwarg
2022-10-12 23:37:45 -07:00
Llama
05bcd3af11
Merge pull request #1 from henk717/united
...
Version bump
2022-10-12 23:32:25 -07:00
Llama
4a01f345de
Add include_anote kwarg to lua_compute_context.
...
Add an optional keyword argument to lua_compute_context to control
whether the author's note should be included in the context. The
default value is true, so if the include_anote kwarg is not specified
then the author's note will be included, which was the default
behavior prior to this change.
Also update the Lua API documentation to describe this kwarg.
2022-10-12 23:18:19 -07:00
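The kwarg handling described above can be sketched in Python. This is a minimal hypothetical model of the behavior, not the real `lua_compute_context` signature: when `include_anote` is absent the author's note is included, preserving the pre-change default.

```python
def compute_context(prompt, authors_note, kwargs=None):
    # Default True: omitting the kwarg keeps the old behavior of
    # including the author's note in the context.
    include_anote = True
    if kwargs is not None and "include_anote" in kwargs:
        include_anote = bool(kwargs["include_anote"])

    parts = [prompt]
    if include_anote and authors_note:
        parts.append(authors_note)
    return "\n".join(parts)
```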
vfbd
bdc73ef393
Decrease TPU loading times by eliminating a transpose operation
2022-10-12 14:31:18 -04:00
henk717
59e3a40496
Merge pull request #165 from henk717/united
...
1.19.1
2022-10-12 15:35:09 +02:00
Henk
64715b18d6
Version bump
2022-10-12 14:54:11 +02:00
Henk
d5143eeb80
LUA Error as Error
2022-10-12 01:23:00 +02:00
henk717
739cf0aae7
Merge pull request #227 from VE-FORBRYDERNE/pickle
...
Custom unpickler to avoid pickle's arbitrary code execution vulnerability
2022-10-07 02:12:53 +02:00
vfbd
323f593a96
Custom unpickler to avoid pickle's arbitrary code execution vulnerability
2022-10-06 20:08:08 -04:00
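The standard defense behind a change like this is the restricted-unpickler pattern from the `pickle` documentation: subclass `pickle.Unpickler` and override `find_class` so only an allowlist of globals can be resolved, which blocks the `__reduce__`-based arbitrary code execution payloads. A minimal sketch (the allowlist here is illustrative, not KoboldAI's actual one):

```python
import io
import pickle


class RestrictedUnpickler(pickle.Unpickler):
    """Refuse to resolve any global not on an explicit allowlist."""

    ALLOWED = {
        ("collections", "OrderedDict"),
    }

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"global '{module}.{name}' is forbidden"
        )


def restricted_loads(data: bytes):
    # Drop-in replacement for pickle.loads using the restricted unpickler.
    return RestrictedUnpickler(io.BytesIO(data)).load()
```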
henk717
b85d74f22c
Merge branch 'KoboldAI:main' into united
2022-10-05 19:51:29 +02:00
henk717
9f18811ff9
Merge pull request #226 from VE-FORBRYDERNE/api-settings
...
Allow changing and reading sampler seed and sampler order from API
2022-10-04 20:30:25 +02:00
henk717
6af0e842f2
Switch to official
...
Switch to the official branch on KoboldAI now that it is compatible
2022-10-04 17:42:18 +02:00
henk717
cf3aebbd8f
Merge pull request #161 from henk717/united
...
Release 1.19
2022-10-04 15:57:47 +02:00
vfbd
bdfa6d86b7
Seed has to be a 64-bit unsigned int or PyTorch will throw an error
...
tpu_mtj_backend's seed can be an integer of arbitrary size, but we will
limit it to a 64-bit unsigned integer anyway for consistency.
2022-10-02 17:50:32 -04:00
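The constraint above amounts to reducing an arbitrary-size Python integer into the unsigned 64-bit range before handing it to the backend. A minimal sketch (the helper name is hypothetical):

```python
def clamp_seed(seed: int) -> int:
    # Mask the seed down to an unsigned 64-bit value (0 .. 2**64 - 1),
    # since PyTorch rejects seeds outside that range.
    return seed & 0xFFFFFFFFFFFFFFFF
```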
vfbd
dd1c25241d
Allow sampler seed and full determinism to be read/written in /config
2022-10-02 17:43:54 -04:00
vfbd
1a59a4acea
Allow changing sampler seed and sampler order from API
2022-10-02 16:25:51 -04:00
henk717
7bd3125f5a
Merge branch 'KoboldAI:main' into united
2022-10-01 16:59:46 +02:00
henk717
2f45b93119
GPU updates
2022-10-01 16:58:16 +02:00
henk717
e1606afc0d
GPU Descriptions
2022-10-01 15:43:45 +02:00
henk717
8313df8817
Localtunnel Default
2022-10-01 15:42:54 +02:00
henk717
9abad8bee9
Merge pull request #225 from scythe000/united
...
Update aiserver.py - typo fix
2022-09-30 19:30:46 +02:00
scythe000
a482ec16d8
Update aiserver.py - typo fix
...
Changed 'beakmodel' to 'breakmodel' in the example comment.
2022-09-30 10:29:32 -07:00
henk717
276c6f8e9e
Merge pull request #224 from db0/get_cluster_models_fix
...
fix endpoint for get_cluster_models
2022-09-30 00:30:18 +02:00
Divided by Zer0
90022d05c8
fix endpoint for get_cluster_models
2022-09-30 00:26:55 +02:00