Commit Graph

4619 Commits

Author SHA1 Message Date
somebody
418f341560 Fix a/n depth being visually apart from a/n 2023-07-21 18:13:57 -05:00
somebody
560fb3bd2d Fix occasional action highlight issue 2023-07-21 18:08:21 -05:00
henk717
83e5c29260 Merge pull request #413 from one-some/bug-hunt
Fix WI comment editing
2023-07-22 00:34:46 +02:00
somebody
e68972a270 Fix WI comments 2023-07-21 16:14:13 -05:00
somebody
6e7b0794ea Context Menu: Fix for elements with a context-menu attribute but...
...without an entry in `context_menu_items`.
2023-07-21 15:40:07 -05:00
somebody
e5d0a597a1 Generation Mode: UNTIL_EOS
This mode enables the EOS token and will generate infinitely until
hitting it.
2023-07-21 15:36:32 -05:00
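A minimal sketch of how such a mode could decide when to halt, assuming a hypothetical `should_stop` check and made-up token ids (`EOS_TOKEN = 0`); KoboldAI's real logic lives in its inference backends:

```python
from enum import Enum, auto

class GenerationMode(Enum):
    STANDARD = auto()
    UNTIL_EOS = auto()

EOS_TOKEN = 0  # hypothetical EOS token id

def should_stop(tokens, mode, max_length=50):
    """Decide whether generation should halt after emitting `tokens`."""
    if tokens and tokens[-1] == EOS_TOKEN:
        return True  # hitting EOS always ends generation
    if mode is GenerationMode.UNTIL_EOS:
        return False  # ignore the length cap; run until EOS appears
    return len(tokens) >= max_length
```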
somebody
c78401bd12 Fix gen mode on first generation 2023-07-21 15:22:14 -05:00
somebody
8d5ae38b45 Context Menu: Show if gen mode is supported
- adds callback support to `enabledOn` in context menu items
- adds `supported_gen_modes` variable for frontend to check if a gen
  mode is supported
- adds `get_supported_gen_modes` to `InferenceModel` to get supported
  gen modes
- takes advantage of cool enum features for less enum-handling code
2023-07-21 14:29:41 -05:00
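The `get_supported_gen_modes` pattern described above can be sketched as follows. `InferenceModel` and the method name come from the commit message; the `StoppingCapableModel` subclass and the mode names other than `UNTIL_EOS` are assumptions for illustration:

```python
from enum import Enum, auto

class GenerationMode(Enum):
    STANDARD = auto()
    FOREVER = auto()
    UNTIL_EOS = auto()

class InferenceModel:
    def get_supported_gen_modes(self):
        """Modes every backend can honor."""
        return [GenerationMode.STANDARD]

class StoppingCapableModel(InferenceModel):
    """Stand-in for a backend that supports custom stopping criteria."""
    def get_supported_gen_modes(self):
        # Extend the base list rather than replacing it.
        return super().get_supported_gen_modes() + [
            GenerationMode.FOREVER,
            GenerationMode.UNTIL_EOS,
        ]
```

The frontend's `supported_gen_modes` check then reduces to a membership test against this list.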
somebody
b8671cce09 Context Menu: Change positioning algorithm for y-axis 2023-07-21 13:48:23 -05:00
somebody
1c4157a41b Maybe another time
too many ideas at once
2023-07-21 13:33:38 -05:00
somebody
3a43b254b8 Add basic support for some of the quick stoppers 2023-07-21 13:27:30 -05:00
Henk
a17d7aae60 Easier English 2023-07-21 19:42:49 +02:00
Henk
da9b54ec1c Don't show API link during load 2023-07-21 19:31:38 +02:00
somebody
fa0a099943 Update comment 2023-07-21 10:38:17 -05:00
Henk
432cdc9a08 Fix models with good pad tokens 2023-07-21 16:39:58 +02:00
Henk
ec745d8b80 Don't accidentally block pad tokens 2023-07-21 16:25:32 +02:00
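One plausible reading of these two pad-token fixes, sketched in Python (the function name and exact criteria are assumptions, not the project's actual code): suppress the pad token during sampling only when it is clearly a placeholder, e.g. missing or aliased to EOS, so models with a genuine ("good") pad token keep it usable:

```python
def pad_tokens_to_ban(pad_token_id, eos_token_id):
    """Return the token ids to suppress during sampling.

    Ban the pad token only when it is a meaningless stand-in
    (aliased to EOS); a model with a distinct pad token keeps it.
    """
    if pad_token_id is None:
        return []
    if pad_token_id == eos_token_id:
        return [pad_token_id]
    return []
```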
onesome
6cf63f781a YEAAAAAAAAAA 2023-07-21 01:58:57 -05:00
onesome
46c377b0c3 Context Menu: Add stubs for new temporary stoppingcriteria idea
I think this would be cool!

Ideas:
    - disable/grey when model doesn't support stopping criteria
    - shortcuts (maybe, this would def be a power user thing)
    - option to generate until EOS token
    - option to generate forever until user manually stops
    - (not super related but pixels away) make retry while generation is
        ongoing cancel generation and retry. same with undo.
2023-07-21 00:53:48 -05:00
onesome
4921040fb4 Context Menu: Make things a little less bloaty
5px was a bit excessive
TODO: I studied the context menu in my browser for a bit and noticed that
if it was going to be too close to the bottom, the browser changes the
vertical direction the context menu opens in. sounds neat!
2023-07-21 00:52:12 -05:00
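The browser behavior noted in the TODO above (flipping the menu upward when it would overflow the bottom edge) comes down to a small piece of geometry. This is an illustrative Python version of that logic, not the project's JavaScript:

```python
def position_menu_y(click_y, menu_height, viewport_height):
    """Pick the menu's top edge, flipping upward near the bottom edge,
    the way native browser context menus do."""
    if click_y + menu_height > viewport_height:
        # Not enough room below the cursor: open upward instead,
        # clamped so the menu never goes above the viewport.
        return max(0, click_y - menu_height)
    return click_y
```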
onesome
34a98d2962 Context Menu: Small visual fixes
woohooooo back to css
- fixes margins to look better
- moves contents of context menu items 1px down
- fixes context menus near edge wrapping their inner text (ew)
2023-07-21 00:48:02 -05:00
somebody
4335d1f46a API: Fix /world_info 2023-07-19 13:18:45 -05:00
somebody
2d80f2ebb5 API: Fix getstorynums 2023-07-19 13:08:57 -05:00
somebody
9726d12ede API: Fix /story/end (POST) 2023-07-19 13:05:35 -05:00
somebody
6da7a9629a API: Fix /story/load 2023-07-19 13:01:07 -05:00
somebody
b9b3cd3aba API: Fix /story 2023-07-19 12:02:53 -05:00
somebody
813e210127 Bump tiny API version
As we're adding a new (though optional) parameter to the load endpoint
2023-07-19 11:52:49 -05:00
somebody
fef42a6273 API: Fix loading 2023-07-19 11:52:39 -05:00
henk717
dc4404f29c Merge pull request #409 from nkpz/bnb8bit
Configurable quantization level, fix for broken toggles in model settings
2023-07-19 14:22:44 +02:00
Nick Perez
9581e51476 feature(load model): select control for quantization level 2023-07-19 07:58:12 -04:00
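A hedged sketch of how a quantization-level selector might map to Hugging Face loading options. `load_in_8bit` and `load_in_4bit` are real `transformers` `from_pretrained` flags; the `quantization_kwargs` helper and the level strings are hypothetical:

```python
def quantization_kwargs(level):
    """Translate a UI quantization choice into model-loading kwargs."""
    if level == "4bit":
        return {"load_in_4bit": True}
    if level == "8bit":
        return {"load_in_8bit": True}
    return {}  # full precision: no quantization flags
```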
0cc4m
58908ab846 Revert aiserver.py changes 2023-07-19 07:14:03 +02:00
0cc4m
19f511dc9f Load GPTQ module from GPTQ repo docs 2023-07-19 07:12:37 +02:00
0cc4m
1c5da2bbf3 Move pip docs from KoboldAI into GPTQ repo 2023-07-19 07:08:39 +02:00
0cc4m
7516ecf00d Merge upstream changes, fix conflict 2023-07-19 07:02:29 +02:00
0cc4m
c84d063be8 Revert settings changes 2023-07-19 07:01:11 +02:00
0cc4m
9aa6c5fbbf Merge upstream changes, fix conflict, adapt backends to changes 2023-07-19 06:56:09 +02:00
Nick Perez
0142913060 8 bit toggle, fix for broken toggle values 2023-07-18 23:29:38 -04:00
Henk
22e7baec52 Permit CPU layers on 4-bit (Worse than GGML) 2023-07-18 21:44:34 +02:00
henk717
5f2600d338 Merge pull request #406 from ebolam/Model_Plugins
Clarified message on what's required for model backend parameters
2023-07-18 02:42:23 +02:00
ebolam
66192efdb7 Clarified message on what's required for model backend parameters in the command line 2023-07-17 20:30:41 -04:00
Henk
5bbcdc47da 4-bit on Colab 2023-07-18 01:48:01 +02:00
henk717
da9226fba5 Merge pull request #401 from ebolam/Model_Plugins
Save the 4-bit flag to the model settings.
2023-07-18 01:19:43 +02:00
henk717
fee79928c8 Merge pull request #404 from one-some/united
Delete basic 4bit
2023-07-18 01:19:14 +02:00
somebody
1637760fa1 Delete basic 4bit
And add code to handle dangling __pycache__s
2023-07-17 18:16:03 -05:00
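Cleaning up dangling `__pycache__` directories after deleting a module can be done with a short walk over the tree; this is a generic sketch, not the commit's actual code:

```python
import shutil
from pathlib import Path

def remove_dangling_pycache(root):
    """Delete __pycache__ directories left behind after removing a module."""
    for cache_dir in Path(root).rglob("__pycache__"):
        shutil.rmtree(cache_dir, ignore_errors=True)
```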
henk717
5c3a8e295a Merge pull request #402 from one-some/united
Patches: Make lazyload work with quantization
2023-07-17 23:53:14 +02:00
somebody
23b95343bd Patches: Make lazyload work on quantized
i wanna watch youtube while my model is loading without locking up my
system >:(
2023-07-17 16:47:31 -05:00
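The idea behind making lazy loading work with quantization can be illustrated with a placeholder object that defers reading weight data until the (possibly quantizing) backend asks for it, so only one full-precision tensor needs to be resident at a time. Everything here (`LazyTensor`, the loader callable) is a simplified stand-in, not KoboldAI's implementation:

```python
class LazyTensor:
    """Defers reading weight data from disk until a backend
    (including a quantized one) actually materializes it."""

    def __init__(self, loader):
        self._loader = loader  # callable that reads the real data
        self._data = None

    def materialize(self):
        # Read from disk on first access only; cache thereafter.
        if self._data is None:
            self._data = self._loader()
        return self._data
```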
ebolam
4acf9235db Merge branch 'Model_Plugins' of https://github.com/ebolam/KoboldAI into Model_Plugins 2023-07-17 09:52:10 -04:00
ebolam
b9ee6e336a Save the 4-bit flag to the model settings. 2023-07-17 09:50:03 -04:00
ebolam
66377fc09e Save the 4-bit flag to the model settings. 2023-07-17 09:48:01 -04:00
henk717
e8d84bb787 Merge pull request #400 from ebolam/Model_Plugins
missed the elif
2023-07-17 15:16:34 +02:00
ebolam
eafb699bbf missed the elif 2023-07-17 09:12:45 -04:00