ebolam
f3b55fdfed
Fix for V1 story loads
2022-09-30 16:32:37 -04:00
ebolam
2f35264153
Added flask-compress to speed up initial page load time
Added check for tokenizer not being loaded early enough and crashing javascript (not needed until user interaction)
Removed unneeded CSS and JS files from web page load
2022-09-30 15:20:34 -04:00
ebolam
bf58542ca5
Fix for story loading from V1 only pulling in first action
Added accessibility labels for better support with screen readers.
2022-09-30 14:05:28 -04:00
scythe000
a482ec16d8
Update aiserver.py - typo fix
Changed 'beakmodel' to 'breakmodel' in the example comment.
2022-09-30 10:29:32 -07:00
ebolam
efff4fb1a8
Fix for large UI1 story loads
2022-09-30 13:07:52 -04:00
ebolam
1e9be02770
Additional speed improvements
2022-09-30 08:34:37 -04:00
ebolam
6235be272e
More optimizations
2022-09-29 20:31:40 -04:00
Divided by Zer0
90022d05c8
fix endpoint for get_cluster_models
2022-09-30 00:26:55 +02:00
ebolam
096b61962c
Fix for load time.
World info loading needs testing as there may be a regression
2022-09-29 14:41:28 -04:00
ebolam
51b8b1d223
Partial fix for text context
2022-09-29 12:12:46 -04:00
ebolam
475d4bc48c
Debugging large story import
2022-09-29 08:12:15 -04:00
ebolam
c28ad27c1d
Merge commit 'refs/pull/181/head' of https://github.com/ebolam/KoboldAI into UI2
2022-09-28 12:59:53 -04:00
ebolam
e7973e13ac
Fix for GPT models downloading even when present in model folder
2022-09-28 12:47:50 -04:00
ebolam
72fc68c6e4
Fix for lazy loading on models after a non-lazy load model
2022-09-27 19:52:35 -04:00
ebolam
4aa842eada
Merge commit 'refs/pull/180/head' of https://github.com/ebolam/KoboldAI into united
2022-09-27 19:29:05 -04:00
ebolam
be719a7e5e
Fix for loading models that don't support breakmodel (GPU/CPU support in UI)
2022-09-27 19:02:37 -04:00
Henk
52e120c706
Disable breakmodel if we error on the check
2022-09-28 01:00:06 +02:00
henk717
d2ff32be32
Merge pull request #220 from ebolam/united
Fix for loading models on CPU only that don't support breakmodel
2022-09-28 00:46:37 +02:00
Henk
057ddb4fb2
Better --cpu handling
2022-09-28 00:45:17 +02:00
ebolam
168ae8083c
Remove debug print
2022-09-27 18:30:20 -04:00
ebolam
0311cc215e
Fix for loading models on CPU only that don't support breakmodel
2022-09-27 18:29:32 -04:00
ebolam
edd50fc809
Fix for GPT2 breakmodel in the UI
2022-09-27 17:58:51 -04:00
Henk
f1d63f61f3
Syntax fix
2022-09-27 22:43:09 +02:00
ebolam
908dc8ea60
Fix for older model loading
2022-09-27 15:59:56 -04:00
ebolam
2500233f20
Removed constant transmission of log to UI. Now it pulls last 100 lines when clicked.
2022-09-27 15:40:46 -04:00
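The change above (pulling the last 100 log lines on click instead of streaming the log to the UI constantly) can be sketched with a bounded buffer. This is a minimal illustration, not the repository's actual code; the names `log_line` and `tail_log` and the 100-line cap from the commit message are the only assumptions here:

```python
from collections import deque

# Keep only the most recent log lines in memory; once the buffer is full,
# appending a new line silently drops the oldest one.
LOG_BUFFER = deque(maxlen=100)

def log_line(line: str) -> None:
    LOG_BUFFER.append(line)

def tail_log() -> list:
    # Called when the user opens the log view: return a snapshot of the
    # last (up to) 100 lines rather than pushing every line as it arrives.
    return list(LOG_BUFFER)
```

A `deque` with `maxlen` keeps the memory bound constant regardless of how long the process runs, which is the point of dropping the constant transmission.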
Henk
62921c4896
getmodelname for configname
2022-09-27 21:11:31 +02:00
Henk
60d09899ea
Don't use Fast tokenizers when we don't have to
2022-09-27 18:26:13 +02:00
Henk
11455697ef
Tokenizer Fixes (Slow first to keep coherency)
2022-09-27 17:57:18 +02:00
Henk
07896867b2
Revert Tokenizer Change
2022-09-27 15:36:08 +02:00
Henk
82a250aa1b
Revert "Fix tokenizer selection code"
This reverts commit 7fba1fd28a.
2022-09-27 15:33:08 +02:00
ebolam
3afd617cb4
Merge commit 'refs/pull/179/head' of https://github.com/ebolam/KoboldAI into UI2
2022-09-27 08:21:05 -04:00
ebolam
b840381aef
Logger update
2022-09-27 07:44:07 -04:00
vfbd
79ae0f17ec
Merge branch 'main' into merge
2022-09-26 16:10:10 -04:00
ebolam
2d4b4a4046
Fix for printouts of model loading and downloading
2022-09-26 15:30:20 -04:00
vfbd
7fba1fd28a
Fix tokenizer selection code
2022-09-26 14:37:25 -04:00
ebolam
a07ebad9cb
Speed fix and summarizing for stable diffusion down to 75 tokens
2022-09-26 13:14:15 -04:00
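The 75-token cap mentioned above (trimming the summarized prompt passed to stable diffusion) can be sketched as follows. Whitespace splitting stands in for the real tokenizer here, and the limit of 75 is taken from the commit message; a CLIP-style text encoder has a 77-token window, so 75 content tokens leaves room for the start/end markers:

```python
def truncate_prompt(prompt: str, limit: int = 75) -> str:
    # Whitespace tokens are a stand-in for real tokenizer output; actual
    # code would count model tokens, not words.
    tokens = prompt.split()
    return " ".join(tokens[:limit])
```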
ebolam
68bf3cc7f0
WI fix
2022-09-26 12:55:19 -04:00
ebolam
98ab13b56a
Fix
2022-09-26 12:48:20 -04:00
ebolam
0adf6e3298
First stab at tab preservation in world info
2022-09-26 11:37:58 -04:00
ebolam
ce1829b8ce
Merge pull request #169 from one-some/ui2-inference-scratchpad
Inference Scratchpad / Finder Modes
2022-09-26 07:41:08 -04:00
somebody
2af4d94ea7
Add more scratchpad stuff
2022-09-25 22:08:38 -05:00
somebody
9651f3b327
Fix string generation in raw_generate
2022-09-25 22:08:13 -05:00
somebody
836ae9fda7
Fix generation bug with prompt shaving
Messed up any generations not coming from core_generate
2022-09-25 22:07:38 -05:00
ebolam
7f8ceddde5
Merge branch 'UI2' of https://github.com/ebolam/KoboldAI into UI2
2022-09-25 20:30:21 -04:00
ebolam
c9e3ea6300
disabled tpool again as it causes issues with colab tpus
2022-09-25 20:30:11 -04:00
ebolam
523376c206
Merge pull request #165 from one-some/ui2-speeeeeeeed
Add time info to generations in log
2022-09-25 20:13:45 -04:00
somebody
bd8658404b
Add time info to generations
2022-09-25 19:04:09 -05:00
ebolam
74faad94ac
Re-enabling tpool for tpu loading
2022-09-25 19:59:02 -04:00
ebolam
3db5a1f76a
Fix for TPU loading
2022-09-25 19:54:22 -04:00
ebolam
abe77b2189
Add log message
2022-09-25 19:27:12 -04:00