8357c3e485
Merge branch 'united' into feature/anote-kwarg
2022-10-12 23:37:45 -07:00
4a01f345de
Add include_anote kwarg to lua_compute_context.
...
Add an optional keyword argument to lua_compute_context that controls
whether the author's note is included in the context. The default
value is true, so if the include_anote kwarg is not specified the
author's note is included, matching the default behavior prior to
this change.
Also update the Lua API documentation to describe this kwarg.
2022-10-12 23:18:19 -07:00
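The backward-compatible default described in this commit follows the usual optional-kwarg pattern: the new parameter defaults to the old behavior, so existing callers are unaffected. A minimal standalone sketch of that pattern (the function body and the placeholder note are illustrative, not KoboldAI's actual implementation):

```python
def compute_context(submission: str, include_anote: bool = True) -> str:
    """Build a context string, appending the author's note unless disabled.

    `include_anote` defaults to True so callers that do not pass the kwarg
    keep the prior behavior (author's note always included).
    """
    anote = "[Author's note: keep the tone grim.]"  # placeholder note
    parts = [submission]
    if include_anote:
        parts.append(anote)
    return "\n".join(parts)

# Omitting the kwarg preserves the pre-change behavior:
with_note = compute_context("The knight drew his sword.")
# Passing include_anote=False leaves the author's note out:
without_note = compute_context("The knight drew his sword.", include_anote=False)
```

Defaulting a new flag to the previous behavior is what keeps this change non-breaking for scripts already calling the function.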
64715b18d6
Version bump
2022-10-12 14:54:11 +02:00
d5143eeb80
Lua Error as Error
2022-10-12 01:23:00 +02:00
bdfa6d86b7
Seed has to be a 64-bit unsigned int or PyTorch will throw an error
...
tpu_mtj_backend's seed can be an integer of arbitrary size, but we
limit it to a 64-bit unsigned integer anyway for consistency.
2022-10-02 17:50:32 -04:00
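One way to reduce an arbitrarily large integer seed to the unsigned 64-bit range that PyTorch accepts is to mask the low 64 bits. A sketch of that idea (illustrative; not necessarily the masking the project uses):

```python
def clamp_seed(seed: int) -> int:
    """Reduce an integer of arbitrary size to an unsigned 64-bit value.

    Masking with 2**64 - 1 keeps the low 64 bits, wrapping any larger
    value into the range [0, 2**64) that a uint64 seed requires.
    """
    return seed & (2**64 - 1)

clamp_seed(2**70 + 5)  # wraps into [0, 2**64), yielding 5
clamp_seed(42)         # values already in range pass through unchanged
```

Masking is preferable to rejecting oversized seeds outright: the same input seed always maps to the same in-range value, so behavior stays deterministic across backends.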
dd1c25241d
Allow sampler seed and full determinism to be read/written in /config
2022-10-02 17:43:54 -04:00
1a59a4acea
Allow changing sampler seed and sampler order from API
2022-10-02 16:25:51 -04:00
a482ec16d8
Update aiserver.py - typo fix
...
Changed 'beakmodel' to 'breakmodel' in the example comment.
2022-09-30 10:29:32 -07:00
90022d05c8
fix endpoint for get_cluster_models
2022-09-30 00:26:55 +02:00
e7973e13ac
Fix for GPT models downloading even when present in model folder
2022-09-28 12:47:50 -04:00
72fc68c6e4
Fix for lazy loading on models after a non-lazy load model
2022-09-27 19:52:35 -04:00
4aa842eada
Merge commit 'refs/pull/180/head' of https://github.com/ebolam/KoboldAI into united
2022-09-27 19:29:05 -04:00
be719a7e5e
Fix for loading models that don't support breakmodel (GPU/CPU support in UI)
2022-09-27 19:02:37 -04:00
52e120c706
Disable breakmodel if we error on the check
2022-09-28 01:00:06 +02:00
d2ff32be32
Merge pull request #220 from ebolam/united
...
Fix for loading models on CPU only that don't support breakmodel
2022-09-28 00:46:37 +02:00
057ddb4fb2
Better --cpu handling
2022-09-28 00:45:17 +02:00
168ae8083c
Remove debug print
2022-09-27 18:30:20 -04:00
0311cc215e
Fix for loading models on CPU only that don't support breakmodel
2022-09-27 18:29:32 -04:00
edd50fc809
Fix for GPT2 breakmodel in the UI
2022-09-27 17:58:51 -04:00
f1d63f61f3
Syntax fix
2022-09-27 22:43:09 +02:00
908dc8ea60
Fix for older model loading
2022-09-27 15:59:56 -04:00
62921c4896
getmodelname for configname
2022-09-27 21:11:31 +02:00
60d09899ea
Don't use Fast tokenizers when we don't have to
2022-09-27 18:26:13 +02:00
11455697ef
Tokenizer Fixes (Slow first to keep coherence)
2022-09-27 17:57:18 +02:00
07896867b2
Revert Tokenizer Change
2022-09-27 15:36:08 +02:00
82a250aa1b
Revert "Fix tokenizer selection code"
...
This reverts commit 7fba1fd28a.
2022-09-27 15:33:08 +02:00
79ae0f17ec
Merge branch 'main' into merge
2022-09-26 16:10:10 -04:00
7fba1fd28a
Fix tokenizer selection code
2022-09-26 14:37:25 -04:00
c88f88f54d
Merge pull request #215 from ebolam/united
...
Update for horde selection to pull models automatically
2022-09-25 19:50:44 +02:00
1f6861d55c
Update for horde selection to pull models automatically (or on typing, with a 1-second delay)
2022-09-25 13:49:02 -04:00
d7ed577bf7
Don't stream in chatmode
2022-09-25 17:26:52 +02:00
465c1fd64d
1.19 version bump and polish
2022-09-25 17:00:33 +02:00
ba85ae4527
WI Improvements
2022-09-25 11:13:57 +02:00
d4b7705095
Merge branch 'main' into united
2022-09-24 22:14:32 +02:00
819bd8a78d
Fix saving of sharded HF models on non-Windows operating systems
2022-09-24 15:55:53 -04:00
c66657ef1b
Flaskwebgui removal 2
2022-09-23 14:46:02 +02:00
557f7fc0fc
Remove flaskwebgui (Conflicts with our threading)
2022-09-23 14:45:47 +02:00
2ec7e1c5da
Continue if webbrowser fails to open
2022-09-23 14:24:57 +02:00
e68f284006
show_budget fix
2022-09-21 17:53:16 +02:00
d0664207e8
Sync and save show budget
2022-09-21 17:39:56 +02:00
d1d23b5383
New Models
2022-09-20 18:12:54 +02:00
802758cf44
New models
2022-09-20 16:58:59 +02:00
ac0bc1c020
New models
2022-09-19 01:55:12 +02:00
4362ca4b34
fix previously saved settings overwriting new API key
2022-09-16 16:23:04 +02:00
9582722c2e
fix model loading format bleeding into gui
2022-09-16 00:08:59 +02:00
e8e0ad85be
Merge branch 'united' into dependency-fix
2022-09-15 17:41:34 -04:00
c288c39de7
Remove type hints from http_get
2022-09-15 17:39:32 -04:00
943614b5e6
Merge branch 'main' into dependency-fix
2022-09-15 17:33:48 -04:00
463bf86bcc
aria2_hook now uses new cache format if you have transformers 4.22
2022-09-15 16:50:43 -04:00
a75351668f
switched model retrieval and sendtocluster to loguru
2022-09-12 17:22:27 +02:00