885 Commits

Author SHA1 Message Date
somebody
e6656d68a1 Move probability visualization to after logitwarpers 2022-12-09 13:47:38 -06:00
Henk
2603f1fd5d Version bump 2022-11-20 16:22:33 +01:00
Henk
3084552c05 Sampler Order Fix for Models 2022-11-14 17:15:39 +01:00
Henk
13dff68de8 Sampler Order Loading Fix 2022-11-14 16:59:53 +01:00
Henk
a66e1443fd New Models 2022-11-12 16:54:40 +01:00
henk717
2e3a80b8ea Merge branch 'KoboldAI:main' into united 2022-10-26 23:11:26 +02:00
vfbd
3233e78c56 Fix "is on the meta device" error when loading model with disk cache 2022-10-26 16:00:45 -04:00
henk717
351fb3c80b Merge pull request #232 from VE-FORBRYDERNE/mkultra
Universal mkultra-based soft prompt tuner
2022-10-22 14:13:42 +02:00
henk717
f8be854e09 Merge branch 'KoboldAI:main' into united 2022-10-17 21:06:10 +02:00
vfbd
9ff50d81fd Fix regex for the prompt parameter of the POST /story/end endpoint 2022-10-17 14:36:23 -04:00
Llama
e5d0cc7b49 Fix exception thrown by kobold.modeltype in Lua
Fixes this exception:
  File "aiserver.py", line 3389, in lua_get_modeltype
    hidden_size = get_hidden_size_from_model(model)
NameError: name 'get_hidden_size_from_model' is not defined

The kobold.modeltype method eventually attempts to call
get_hidden_size_from_model in Python, but this method
was previously defined only within a local scope and so
is not visible from within lua_get_modeltype.  Since
get_hidden_size_from_model only accesses its model argument,
there is no reason not to make it a module-level method.

Also change the severity of several more Lua error logs to error.
2022-10-14 09:20:33 -07:00
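A minimal sketch of the scoping fix described above; the function body here is illustrative, not the exact aiserver.py code:

    # Before: nested inside a loader function, so invisible to lua_get_modeltype.
    # def load_model(...):
    #     def get_hidden_size_from_model(model):
    #         return model.get_input_embeddings().embedding_dim

    # After: module level. It only accesses its model argument, so nothing
    # else needs to move with it, and lua_get_modeltype can now call it.
    def get_hidden_size_from_model(model):
        return model.get_input_embeddings().embedding_dim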
Llama
8357c3e485 Merge branch 'united' into feature/anote-kwarg 2022-10-12 23:37:45 -07:00
Llama
4a01f345de Add include_anote kwarg to lua_compute_context.
Add an optional keyword argument to lua_compute_context to control
whether the author's note should be included in the context.  The
default value is true, so if the include_anote kwarg is not specified
then the author's note will be included, which was the default
behavior prior to this change.

Also update the Lua API documentation to describe this kwarg.
2022-10-12 23:18:19 -07:00
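A hedged sketch of the default handling this commit describes; the surrounding context-building code is elided and the signature is assumed:

    def lua_compute_context(submission, entries, folders, kwargs):
        # include_anote defaults to True so callers that omit the kwarg
        # keep the old behavior (author's note included in the context).
        include_anote = True
        if kwargs is not None and kwargs.get("include_anote") is not None:
            include_anote = bool(kwargs["include_anote"])
        # ... build the context, appending the author's note only when
        # include_anote is True ...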
Henk
64715b18d6 Version bump 2022-10-12 14:54:11 +02:00
Henk
d5143eeb80 LUA Error as Error 2022-10-12 01:23:00 +02:00
vfbd
bdfa6d86b7 Seed has to be a 64-bit unsigned int or PyTorch will throw an error
tpu_mtj_backend's seed can be an integer of arbitrary size, but we limit
it to a 64-bit unsigned integer anyway for consistency.
2022-10-02 17:50:32 -04:00
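A minimal illustration, with a hypothetical helper name: reducing an arbitrary-size integer modulo 2**64 keeps it inside the unsigned 64-bit range PyTorch accepts:

    import torch

    def apply_sampler_seed(seed: int) -> None:
        # torch.manual_seed raises on integers wider than 64 bits,
        # so clamp the seed into the unsigned 64-bit range first.
        torch.manual_seed(seed % (2 ** 64))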
vfbd
dd1c25241d Allow sampler seed and full determinism to be read/written in /config 2022-10-02 17:43:54 -04:00
vfbd
1a59a4acea Allow changing sampler seed and sampler order from API 2022-10-02 16:25:51 -04:00
scythe000
a482ec16d8 Update aiserver.py - typo fix
Changed 'beakmodel' to 'breakmodel' in the example comment.
2022-09-30 10:29:32 -07:00
Divided by Zer0
90022d05c8 fix endpoint for get_cluster_models 2022-09-30 00:26:55 +02:00
vfbd
6758d5b538 Merge branch 'united' into mkultra 2022-09-28 14:30:34 -04:00
ebolam
e7973e13ac Fix for GPT models downloading even when present in model folder 2022-09-28 12:47:50 -04:00
ebolam
72fc68c6e4 Fix for lazy loading on models after a non-lazy load model 2022-09-27 19:52:35 -04:00
ebolam
4aa842eada Merge commit 'refs/pull/180/head' of https://github.com/ebolam/KoboldAI into united 2022-09-27 19:29:05 -04:00
ebolam
be719a7e5e Fix for loading models that don't support breakmodel (GPU/CPU support in UI) 2022-09-27 19:02:37 -04:00
Henk
52e120c706 Disable breakmodel if we error on the check 2022-09-28 01:00:06 +02:00
henk717
d2ff32be32 Merge pull request #220 from ebolam/united
Fix for loading models on CPU only that don't support breakmodel
2022-09-28 00:46:37 +02:00
Henk
057ddb4fb2 Better --cpu handling 2022-09-28 00:45:17 +02:00
ebolam
168ae8083c Remove debug print 2022-09-27 18:30:20 -04:00
ebolam
0311cc215e Fix for loading models on CPU only that don't support breakmodel 2022-09-27 18:29:32 -04:00
ebolam
edd50fc809 Fix for GPT2 breakmodel in the UI 2022-09-27 17:58:51 -04:00
Henk
f1d63f61f3 Syntax fix 2022-09-27 22:43:09 +02:00
ebolam
908dc8ea60 Fix for older model loading 2022-09-27 15:59:56 -04:00
Henk
62921c4896 getmodelname for configname 2022-09-27 21:11:31 +02:00
Henk
60d09899ea Don't use Fast tokenizers when we don't have to 2022-09-27 18:26:13 +02:00
Henk
11455697ef Tokenizer Fixes (Slow first to keep coherency) 2022-09-27 17:57:18 +02:00
Henk
07896867b2 Revert Tokenizer Change 2022-09-27 15:36:08 +02:00
Henk
82a250aa1b Revert "Fix tokenizer selection code"
This reverts commit 7fba1fd28af0c50e7cea38ea0ee12ab48a3bebf7.
2022-09-27 15:33:08 +02:00
vfbd
79ae0f17ec Merge branch 'main' into merge 2022-09-26 16:10:10 -04:00
vfbd
7fba1fd28a Fix tokenizer selection code 2022-09-26 14:37:25 -04:00
henk717
c88f88f54d Merge pull request #215 from ebolam/united
Update for horde selection to pull models automatically
2022-09-25 19:50:44 +02:00
ebolam
1f6861d55c Update for horde selection to pull models automatically (or on typing, with a 1 second delay) 2022-09-25 13:49:02 -04:00
Henk
d7ed577bf7 Don't stream in chatmode 2022-09-25 17:26:52 +02:00
Henk
465c1fd64d 1.19 version bump and polish 2022-09-25 17:00:33 +02:00
Henk
ba85ae4527 WI Improvements 2022-09-25 11:13:57 +02:00
Henk
d4b7705095 Merge branch 'main' into united 2022-09-24 22:14:32 +02:00
vfbd
819bd8a78d Fix saving of sharded HF models on non-Windows operating systems 2022-09-24 15:55:53 -04:00
Henk
c66657ef1b Flaskwebgui removal 2 2022-09-23 14:46:02 +02:00
Henk
557f7fc0fc Remove flaskwebgui (Conflicts with our threading) 2022-09-23 14:45:47 +02:00
Henk
2ec7e1c5da Continue if webbrowser fails to open 2022-09-23 14:24:57 +02:00
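A hedged sketch of the failure-tolerant launch the last commit describes; the URL and message are placeholders:

    import webbrowser

    try:
        webbrowser.open_new("http://localhost:5000")
    except Exception:
        # Best-effort: if no browser can be opened, keep the server
        # running and let the user navigate to the URL manually.
        print("Could not open a web browser; open the URL above manually.")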