Henk
11280a6e66
LocalTunnel Linux Fix
2022-04-19 14:41:21 +02:00
Henk
b8e79afe5e
LocalTunnel support
2022-04-19 13:47:44 +02:00
Gnome Ann
c7b03398f6
Merge 'nolialsea/patch-1' into settings without Colab changes
2022-04-17 12:15:36 -04:00
henk717
372eb4c981
Merge pull request #119 from VE-FORBRYDERNE/scripting-sp
...
Allow userscripts to change the soft prompt
2022-04-14 21:33:20 +02:00
henk717
78d6ee491d
Merge pull request #117 from mrseeker/patch-7
...
Shinen FSD 13B (NSFW)
2022-04-14 21:33:08 +02:00
henk717
e180db88aa
Merge pull request #118 from VE-FORBRYDERNE/lazy-loader
...
Fix lazy loader in aiserver.py
2022-04-14 21:33:00 +02:00
Gnome Ann
bd6f7798b9
Fix lazy loader in aiserver.py
2022-04-14 14:33:10 -04:00
Julius ter Pelkwijk
ad94f6c01c
Shinen FSD 13B (NSFW)
2022-04-14 08:23:50 +02:00
Julius ter Pelkwijk
945c34e822
Shinen FSD 6.7B (NSFW)
2022-04-13 14:47:22 +02:00
Henk
eeff126df4
Memory Sizes
2022-04-13 12:41:21 +02:00
Gnome Ann
a3a52dc9c3
Add support for changing soft prompt from userscripts
2022-04-12 15:59:05 -04:00
Henk
26909e6cf3
Model Categories
2022-04-10 20:53:15 +02:00
Julius ter Pelkwijk
6fcb0af488
Adding Janeway 13B
2022-04-10 15:03:39 +02:00
Gnome Ann
359a0a1c99
Copy Python 3.6 compatible lazy loader to aiserver.py
2022-04-08 19:40:12 -04:00
Julius ter Pelkwijk
1974761f70
Releasing Janeway 6.7B
2022-04-08 08:13:36 +02:00
Wes Brown
09fee52abd
Add `num_seqs` support to GooseAI/OpenAI client handler.
2022-04-07 14:50:23 -04:00
Henky!!
5feda462fb
OAI - Fixes last commit
2022-04-07 02:39:37 +02:00
Henky!!
34b6c907f0
OAI Max Token Slider
2022-04-07 02:26:15 +02:00
Henky!!
b568e31381
OAI Path Support
2022-04-06 05:15:25 +02:00
Henky!!
699b3fc10b
OAI Redo Fixes 2
2022-04-06 04:54:27 +02:00
Henky!!
b5a633e69b
OAI Redo Fix
2022-04-06 04:45:01 +02:00
henk717
ee682702ee
Merge branch 'KoboldAI:main' into united
2022-04-05 01:35:22 +02:00
Henky!!
8153f21d5c
Convo 6B
2022-04-05 01:33:51 +02:00
Henky!!
e644963564
OpenAI Fixes
2022-03-28 02:02:37 +02:00
Gnome Ann
20e48b11d7
Typical sampling
2022-03-27 16:25:50 -04:00
Noli
aa8de64aa4
fix default port
2022-03-25 23:26:27 +01:00
Noli
3e003d3b42
add port to the command options
2022-03-25 22:18:28 +01:00
Gnome Ann
0348970b19
Make sure AI is not busy when using retry to regenerate random story
2022-03-23 22:09:35 -04:00
Gnome Ann
4832dd6f37
Allow regenerating random story using Retry button
...
Commit b55e5a8e0b
removed this feature, so
this commit adds it back.
2022-03-23 13:39:46 -04:00
henk717
cf99f02ca5
Merge branch 'main' into united
2022-03-20 19:22:53 +01:00
henk717
20eab085dd
Fix AutoSave Toggle
2022-03-20 19:12:11 +01:00
henk717
5c795609e4
KML Fix
2022-03-20 13:10:56 +01:00
Gnome Ann
b1125a6705
Add EOS and padding token to default NeoX badwords
2022-03-19 01:30:02 -04:00
Gnome Ann
85a4959efa
Merge branch 'united' into neox
2022-03-18 11:19:03 -04:00
henk717
a3e5e052b3
Newer umamba + slope tweak
2022-03-16 18:34:02 +01:00
Gnome Ann
95c4251db9
Print two newlines before loading HF models
2022-03-15 13:58:53 -04:00
Gnome Ann
9dc48b15f0
Add custom badwords and pad token ID for GPT-NeoX
2022-03-14 23:31:49 -04:00
Gnome Ann
88f247d535
GPT-NeoX-20B support in Colab TPU instances
2022-03-14 23:14:20 -04:00
henk717
4892556059
Model saving for colab mode
2022-03-13 11:22:44 +01:00
Gnome Ann
2b8c46338e
Change current working directory to KoboldAI folder
2022-03-13 01:22:11 -05:00
ebolam
8ae0a4a3e7
Online Services Working now (without a way to test as I don't have accounts)
2022-03-12 14:21:11 -05:00
ebolam
b55e5a8e0b
Retry Bug Fix
2022-03-12 10:32:27 -05:00
ebolam
ae854bab3d
Fix for retry causing issues for future redo actions
2022-03-11 11:40:55 -05:00
ebolam
772ae2eb80
Added model info to show model load progress in UI
2022-03-11 11:31:41 -05:00
henk717
b02d5e8696
Allows missing model_config again
2022-03-10 19:59:10 +01:00
henk717
172a548fa1
Fallback to generic GPT2 Tokenizer
2022-03-10 19:52:15 +01:00
henk717
9dee9b5c6d
Ignore incorrect problems
2022-03-09 12:03:37 +01:00
henk717
a28e553412
Remove unused gettokenids
2022-03-09 11:59:33 +01:00
ebolam
0943926f6a
Fix for lazy loading
2022-03-07 19:52:44 -05:00
ebolam
bfc07073e3
layer count fix
2022-03-07 19:33:24 -05:00
ebolam
d8ab58892d
saved layer value fix
2022-03-07 19:21:55 -05:00
ebolam
da53d7edb3
Custom Path Load fix
2022-03-07 18:54:11 -05:00
ebolam
d1a64e25da
Custom Model Load Fix
2022-03-07 18:44:37 -05:00
ebolam
70f1c2da9c
Added stub for model name feedback
2022-03-07 14:20:25 -05:00
ebolam
d0553779ab
Bug Fix
2022-03-07 12:33:35 -05:00
ebolam
c50fe77a7d
Load Fix
2022-03-07 11:57:33 -05:00
ebolam
49fc854e55
Added saving of breakmodel values so that it defaults to it on next load
2022-03-07 11:49:34 -05:00
ebolam
2cf6b6e650
Merge branch 'henk717:united' into united
2022-03-07 11:31:14 -05:00
ebolam
123cd45b0e
Breakmodel working now with the web UI
2022-03-07 11:27:23 -05:00
henk717
7434c9221b
Expand OAI Setting Compatibility
2022-03-07 08:56:47 +01:00
ebolam
5e00f7daf0
Next evolution of web UI model selection. Custom paths not working quite right.
2022-03-06 20:55:11 -05:00
ebolam
2ddf45141b
Initial UI-based model loading. Includes all parameters except breakmodel chunks, engine # for OAI, and the ngrok URL for Google Colab
2022-03-06 19:51:35 -05:00
ebolam
f6c95f18fa
Fix for Redo ( #94 )
...
* Corrected redo to skip blank steps (blank from "deleting" the chunk with the edit function)
* Removed debug code
2022-03-06 23:18:14 +01:00
henk717
f857696224
OAI ConfigName Bugfix
2022-03-06 20:18:42 +01:00
henk717
3ddc9647eb
Basic GooseAI Support
2022-03-06 20:10:30 +01:00
henk717
daea4b8d15
Fix Breakmodel RAM Regression
2022-03-06 08:26:50 +01:00
henk717
105d3831b5
Lazy Load Float32 for CPU
2022-03-06 07:56:04 +01:00
Gnome Ann
373f7b9bd5
Don't convert tensors to float16 if using CPU-only mode
2022-03-05 14:30:26 -05:00
Gnome Ann
579e85820c
Resolve merge conflict
2022-03-05 14:13:56 -05:00
Gnome Ann
2e19ea1bb6
Auto detect if we're in a Colab TPU instance
2022-03-05 14:07:23 -05:00
ebolam
4a8d7f5e0b
Merge branch 'henk717:united' into united
2022-03-05 13:25:10 -05:00
Gnome Ann
0a258a6282
Support for loading HF models on TPU with `--colab_tpu`
2022-03-05 12:33:33 -05:00
Gnome Ann
86ac562b0c
Lazy loader should convert model tensors to float16 before moving them
2022-03-05 11:31:34 -05:00
ebolam
4dd119c38d
Redo no longer goes through formatting function (thereby getting changed)
2022-03-05 11:15:33 -05:00
ebolam
353817b4da
Remove debug print statements
2022-03-05 10:35:06 -05:00
ebolam
221f264fa7
Redo fix. Fix for actions structure so it does not error out when asking for next_id while the actions list is empty.
2022-03-05 10:31:28 -05:00
Gnome Ann
a00dede610
Put the XGLM embedding patch behind a version check
2022-03-04 19:10:15 -05:00
Gnome Ann
5674516f0c
Merge branch 'united' into lazy-loader
2022-03-04 18:27:51 -05:00
ebolam
5f92cbc231
Merge branch 'united' of https://github.com/ebolam/KoboldAI into united
2022-03-04 15:37:34 -05:00
ebolam
321f45ccad
Fix debug to never crash (it would on some initialization steps)
2022-03-04 15:36:13 -05:00
ebolam
ee883fc4da
Merge branch 'henk717:united' into united
2022-03-04 14:15:16 -05:00
ebolam
26b9268391
Redo bug fix
2022-03-04 14:14:44 -05:00
henk717
eb247d69c3
Merge branch 'KoboldAI:main' into united
2022-03-04 18:24:56 +01:00
Gnome Ann
a1fedca2c8
Use lazy loading automatically if a config file exists for the model
2022-03-04 11:11:33 -05:00
MrReplikant
ae143e896c
Fixed unnecessary spacing in chatmode
...
This makes it go from "john :" to "John:", as it's supposed to be. As simple as it is, it can easily throw a chatbot model for a loop.
2022-03-04 08:46:00 -06:00
Gnome Ann
f0629958b1
Merge branch 'united' into lazy-loader
2022-03-04 00:37:25 -05:00
Gnome Ann
58a2c18821
Add lazy torch loading support to transformers backend
2022-03-04 00:33:10 -05:00
henk717
e033b04f87
Restore United
2022-03-02 11:40:50 +01:00
henk717
f9ac23ba4e
Add Janeway and Shinen
2022-03-02 09:51:25 +01:00
ebolam
3f73f84b69
bug fix
2022-02-28 19:04:12 -05:00
ebolam
6003b2369b
Debug and load story fix for actions_metadata variable
2022-02-28 10:39:36 -05:00
ebolam
47d102635e
Merge branch 'united' into united
2022-02-28 08:37:45 -05:00
ebolam
7803fbb137
Fixed error in redo action when editing previous entries and/or editing right after a redo
2022-02-28 08:31:26 -05:00
henk717
13fe472264
Menu Polish
2022-02-28 02:47:15 +01:00
henk717
f628929401
Merge pull request #85 from VE-FORBRYDERNE/sp
...
Fix a bug with soft prompts when using transformers XGLM
2022-02-28 02:33:18 +01:00
henk717
4849a30d88
Merge pull request #84 from mrseeker/patch-3
...
Added KoboldAI/fairseq-dense-2.7B-Janeway
2022-02-28 02:33:07 +01:00
henk717
a466e13c00
Model List Support
2022-02-26 12:34:07 +01:00
Gnome Ann
a22d59e191
Fix a bug with soft prompts when using transformers XGLM
2022-02-25 12:35:23 -05:00
Julius ter Pelkwijk
0a7376a711
Added KoboldAI/fairseq-dense-2.7B-Janeway
...
With pleasure I am introducing KoboldAI/fairseq-dense-2.7B-Janeway.
2022-02-24 09:00:56 +01:00
Gnome Ann
072ca87977
Load soft prompt at the end instead of inside `loadsettings()`
2022-02-23 21:15:08 -05:00
Gnome Ann
8120e4dfa2
Need to set `vars.allowsp` to True before calling `loadsettings()`
2022-02-23 21:09:31 -05:00
Gnome Ann
c45ba497c9
Load settings earlier to avoid TPU badwords issues
2022-02-23 20:39:11 -05:00
henk717
ac59e55d62
Smaller optimizations
2022-02-24 01:14:26 +01:00
henk717
8e9d9faa97
Merge pull request #82 from VE-FORBRYDERNE/tpu-config
...
Allow TPU models to specify settings/config in config.json
2022-02-24 00:53:40 +01:00
Gnome Ann
ad10ac8871
Allow TPU models to specify settings/config in config.json
2022-02-23 18:22:18 -05:00
henk717
7de3311000
Fix sentencepiece model saving
2022-02-23 22:04:41 +01:00
henk717
fd7ba9f70e
Also check for Config in models/
2022-02-22 19:22:08 +01:00
henk717
4ace11f5b8
Merge pull request #80 from VE-FORBRYDERNE/xglm-position-ids
...
Temporary fix for XGLM positional embedding issues
2022-02-21 00:47:20 +01:00
henk717
300db651de
Open models folder by default
2022-02-21 00:46:18 +01:00
Gnome Ann
da10e2dc1d
Don't crash if `XGLMSinusoidalPositionalEmbedding` doesn't exist
2022-02-20 17:41:00 -05:00
Gnome Ann
5dc4969173
Temporary fix for XGLM positional embedding issues
2022-02-20 14:17:24 -05:00
Gnome Ann
a63fa3b067
Prevent transformers XGLM from stopping generation on `</s>` token
2022-02-19 23:15:16 -05:00
henk717
a47e93cee7
Separate Low Memory Mode
...
In 1.16 we had significantly faster loading speeds because we did not do as much memory conservation; it's time to give users the choice. If you want the original, faster behavior and have the memory, run KoboldAI as usual. Otherwise, run play-lowmem.bat or aiserver.py with --lowmem. For Colab this is still the default behavior, to avoid breaking models that would otherwise load fine.
2022-02-18 16:21:28 +01:00
henk717
8e03f1c612
Merge branch 'KoboldAI:main' into united
2022-02-18 14:21:34 +01:00
henk717
f06acb59be
Add the Janeway model
...
New model released by Mr.Seeker
2022-02-18 14:18:41 +01:00
henk717
cba93e29d2
Update aiserver.py
2022-02-18 02:11:08 +01:00
henk717
76a6c124dd
Quiet on Colab
...
Makes the Colab mode also automatically activate the Quiet mode to improve privacy. We should no longer need this in the Colab console thanks to the redo feature. Need something different for testing? Use --remote instead.
2022-02-18 02:07:40 +01:00
henk717
02246dfc4d
Remote play improvements
...
Change the proposed --share to --unblock to make it more apparent what this feature does. The feature unblocks the port from external access, but does not add remote play support. For remote play support without a proxy service, I have added --host.
2022-02-18 01:08:12 +01:00
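A minimal sketch of how these two flags might be wired up with argparse; the flag names come from the commit above, but the surrounding logic is illustrative, not the actual aiserver.py code:

```python
import argparse

parser = argparse.ArgumentParser(description="illustrative KoboldAI launch flags")
# --unblock only unblocks the port for external access; --host is meant
# for real remote play without a proxy service.
parser.add_argument("--unblock", action="store_true",
                    help="unblock the port so other machines can reach it")
parser.add_argument("--host", action="store_true",
                    help="bind publicly for remote play without a proxy")
args = parser.parse_args()

# Bind to localhost by default; open up only when explicitly requested.
bind_addr = "0.0.0.0" if (args.unblock or args.host) else "127.0.0.1"
print(f"Serving on {bind_addr}")
```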
Gnome Ann
ec54bc9d9b
Fix typo in `send_debug()`
2022-02-12 20:11:35 -05:00
Gnome Ann
f682c1229a
Fix fairseq newline handling issues
2022-02-12 13:23:59 -05:00
ebolam
633152ee84
Fixed Retry bug due to redo/pin code
2022-02-10 10:01:07 -05:00
ebolam
586b989582
Redo bug fix
2022-02-06 18:53:24 -05:00
ebolam
98609a8abc
Merge branch 'united' of https://github.com/ebolam/KoboldAI into united
2022-02-06 13:48:34 -05:00
ebolam
80ae054cb5
Merge branch 'henk717:united' into united
2022-02-06 13:42:59 -05:00
ebolam
9e17ea9636
Fixed model downloading problem where models were downloaded multiple times
2022-02-06 13:42:46 -05:00
henk717
c38108d818
Merge pull request #73 from VE-FORBRYDERNE/xglm-breakmodel
...
Breakmodel support for the fairseq models
2022-02-06 18:05:59 +01:00
ebolam
02c7ca3e84
Merge branch 'henk717:united' into united
2022-02-03 08:11:06 -05:00
ebolam
0684a221cd
Changed pin icon for re-dos to be a circular arrow that is not clickable to make it clear it is a redo action and cannot be cleared.
2022-02-03 08:08:43 -05:00
henk717
3ee63b28c5
Defaults and Downloads
...
Default values for the new repetition penalty settings (better suggestions are very welcome, since broader community testing has not been done).
Updated the Readme with the link to the offline installer.
2022-02-03 13:13:26 +01:00
Gnome Ann
4904af6adc
Fix a mistake in the previous commit
2022-02-02 23:04:59 -05:00
Gnome Ann
78f52063c7
Fix XGLM soft prompts
2022-02-02 22:45:16 -05:00
Ben Fox
e2d2ebcae6
upstream merge
2022-02-02 15:04:59 -05:00
Gnome Ann
d847d04605
Fix some typos in XGLM breakmodel
2022-02-01 16:00:46 -05:00
Gnome Ann
8e1169ea61
Enable `vars.bmsupported` when using XGLM
2022-02-01 15:31:59 -05:00
Gnome Ann
e7f65cee09
XGLM breakmodel
2022-02-01 13:04:35 -05:00
henk717
c14e6fe5d2
Revert parallelism
...
Testing is done; it seems to cause issues with the order in which things happen in the interface.
2022-02-01 18:58:48 +01:00
henk717
d68a91ecd3
Save model values
...
Without saving these values they get lost after someone saves, so saving them is more important than the model being able to override them after the fact.
2022-02-01 18:37:52 +01:00
henk717
b8e08cdd63
Enable Tokenizer Parallelism
...
Has proven to be safe in my internal testing and does help with the interface lag at boot.
Enabling this so it can get wider testing.
2022-02-01 12:00:53 +01:00
henk717
ecd7b328ec
Further Polishing
...
Multiple smaller changes to get 1.17 in shape for its release.
2022-02-01 11:15:44 +01:00
henk717
36b6dcb641
Increase newlinemode compatibility
...
Ran into issues with other modes like chat mode and adventure, so I moved it further down the pipeline and now convert </s> back to \n before processing additional formatting.
There is still an issue with the HTML formatting not working, but at least the AI works now.
2022-01-31 19:39:32 +01:00
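A rough sketch of the conversion described above, assuming a newline mode setting of "s" for </s>-style models (function and variable names are illustrative):

```python
def encode_newlines(text: str, newlinemode: str) -> str:
    # Fairseq/XGLM models treat </s> as the line separator, so replace
    # \n before the text reaches the tokenizer.
    return text.replace("\n", "</s>") if newlinemode == "s" else text

def decode_newlines(text: str, newlinemode: str) -> str:
    # Convert </s> back to \n early, so chat mode, adventure mode, and
    # the other formatting passes see normal newlines.
    return text.replace("</s>", "\n") if newlinemode == "s" else text
```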
henk717
90fd67fd16
Update aiserver.py
2022-01-31 19:06:02 +01:00
henk717
b69e3f86e1
Update aiserver.py
...
Removes a debug line
2022-01-31 18:57:47 +01:00
henk717
8466068267
Don't save newlinemode
...
On second thought, it is probably better to not save this. Advanced users can add this themselves and that way newer versions of the model can override it if redownloaded.
2022-01-31 18:41:23 +01:00
henk717
729be62821
</s> new line mode
...
Needed for Fairseq and XGLM models that do not understand the regular \n.
2022-01-31 18:39:34 +01:00
henk717
03433810f1
KML improvements
...
Don't parse > since that has a different meaning for us; also whitelist a few more Markdown tags so lists work.
2022-01-30 20:07:47 +01:00
henk717
a484244392
Welcome Message API
...
Allows model creators to customize the welcome message using Markdown and limited HTML.
Existing United users need to run install_requirements.bat again; you can leave the existing dependencies intact.
2022-01-30 19:47:30 +01:00
henk717
ddfa21e6dd
Breakmodel Fixes
...
Fixes multiple old references and one mistake in my last commit
2022-01-30 17:40:43 +01:00
henk717
57344935f6
--model without breakmodel disables bmsupported
...
In the last commit it only gave a warning; now it will turn bmsupported off so that the GPU routine is used.
2022-01-30 17:16:35 +01:00
henk717
f0c0a990ea
NoBreakmodel variable
...
Adds a NoBreakmodel var that allows breakmodel to be turned off. This can be done through the command line or a model config (in case Neo is used by the model's config without it being a true Neo model that is compatible with breakmodel).
In addition, I removed the args.colab check for breakmodel support and instead make args.colab activate nobreakmodel. I have also added a new check so that breakmodel is not even attempted if you do not specify the layers but do launch a model from the command line.
2022-01-30 17:06:15 +01:00
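A sketch of the control flow this describes; the argument names, config field, and the helper itself are assumptions (only --breakmodel_gpublocks appears elsewhere in this log), not the actual aiserver.py code:

```python
def breakmodel_supported(args, model_config: dict, arch_supported: bool) -> bool:
    # --colab now activates nobreakmodel; a model config may also set it,
    # e.g. a Neo-style config reused by a model that isn't truly
    # breakmodel-compatible.
    nobreakmodel = args.nobreakmodel or args.colab or model_config.get("nobreakmodel", False)
    # Launching a model from the command line without specifying layers
    # skips breakmodel entirely and falls back to the plain GPU routine.
    if args.model and not args.breakmodel_gpublocks:
        nobreakmodel = True
    return arch_supported and not nobreakmodel
```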
henk717
5b5a479f29
Threading + Memory Sizes
...
Polish effort to suppress a warning and list more accurate VRAM requirements, as tested with the full 2048 max tokens.
2022-01-30 13:56:25 +01:00
henk717
fca7f8659f
Badwords unification
...
TPUs no longer use hardcoded badwords but instead use the var
2022-01-29 18:09:53 +01:00
henk717
f9f25c01e4
HTML escape the last commit
...
</s> didn't work; it needed to be HTML escaped (thanks for the tip, VE!)
2022-01-28 19:21:05 +01:00
henk717
be0e57185f
Improved Model Support
...
Changed the model VRAM requirements to what you'd need to comfortably run the model rather than barely (like with the manual). Will probably revise this in a later commit.
More importantly, it now supports models that use </s>, which will be required to support XGLM and Fairseq models.
2022-01-28 18:03:30 +01:00
ebolam
1470b1666d
Fixed single gen redo
2022-01-27 20:17:13 -05:00
ebolam
2278b7c103
Changed behavior of redo if there is only 1 option to just select it
2022-01-26 21:07:55 -05:00
ebolam
06bbe429d9
Bug fix for redo/pinning persisting over new game requests
2022-01-26 21:02:36 -05:00
ebolam
b0f1bdf2fd
Merge branch 'henk717:united' into united
2022-01-26 11:27:12 -05:00
henk717
987e78f980
More loading fixes
...
My last attempt at fixing this caused GPT2 to break. Since the other fix is an edge case, we assume that the GPT2 method should be used, and if that fails we try the other one to catch rare errors with bad model configs.
2022-01-25 06:39:23 +01:00
Gnome Ann
3f18888eec
Repetition penalty slope and range
2022-01-24 15:30:38 -05:00
ebolam
bd0732fbd6
Fix for redo with options.
...
Added debug menu
2022-01-24 12:54:44 -05:00
ebolam
47ec22873d
Bug fix for when the settings directory is a symlink.
2022-01-22 21:43:32 -05:00
ebolam
f54f46b068
bugfix for metadata saving
2022-01-22 20:30:14 -05:00
henk717
0846d57368
0.17 polish
2022-01-23 01:05:09 +01:00
ebolam
bdd358f40f
Merge branch 'united' of https://github.com/henk717/KoboldAI into henk717-united
2022-01-22 17:57:33 -05:00
henk717
c9999b6388
Merge pull request #70 from VE-FORBRYDERNE/patch
...
Don't throw an error in `update_story_chunk` if you try to edit a nonexistent chunk
2022-01-22 23:24:34 +01:00
henk717
4e7440804c
Merge pull request #69 from VE-FORBRYDERNE/lua
...
Lua compatibility enhancements
2022-01-22 23:23:47 +01:00
henk717
f79db7059a
Fall back to old json load
...
Turns out model_config does not work on models that have no model_type defined. In case this happens, we now fall back to the old .json loading method. This will not work in --colab mode if it's not already a local model, but since almost all modern models define a model type (and to my knowledge all models on Hugging Face do), that should not be an issue. If it is, we can always ask the model creator to either update it, distribute the model differently, or load that model with --remote instead of --colab.
2022-01-22 23:21:19 +01:00
ebolam
9df758c1f4
Added quiet option to suppress any story text from showing in the console (reduces logs when running in a Docker container)
2022-01-22 15:30:56 -05:00
ebolam
12e7b6d10b
Added --share command line parameter so we can set host=0.0.0.0 on local instances without editing code
...
Moved save location of downloaded models to models/XXXXXX so we can more easily set this as a volume in Docker
2022-01-22 14:47:28 -05:00
Gnome Ann
bf2b02d366
Don't error in `update_story_chunk` if chunk index doesn't exist
2022-01-21 21:19:32 -05:00
ebolam
2010e7b9bc
Added saveas option for saving without metadata information
...
Fixed redo on an empty story erroring
Fixed redo when you're at the current end of a chain causing an error
2022-01-21 19:02:56 -05:00
Gnome Ann
fab0913270
Call `setgamesaved(False)` in `update_story_chunk` and `remove_story_chunk`
2022-01-21 16:39:51 -05:00
ebolam
d31fb278ce
Working redo and pin options
2022-01-21 15:30:37 -05:00
ebolam
fcaacf636d
Merge branch 'henk717:united' into united
2022-01-21 07:40:25 -05:00
Ben Fox
03d54364f4
Initial commit of the actions metadata variable population
2022-01-20 15:18:43 -05:00
Gnome Ann
72a7aac2c7
Sync memory properly after random game request
2022-01-20 15:14:55 -05:00
ebolam
dffd00265b
Added autosave feature. When an action is submitted, it will save if the save setting is on and the filename is set.
2022-01-20 07:46:34 -05:00
henk717
9532b56cb8
Universal Model Settings
...
No longer depends on a local config file, enabling the configuration to work in --colab mode.
2022-01-20 10:11:11 +01:00
Gnome Ann
c703729f0b
Set eventlet threadpool size back to 1
2022-01-20 02:10:57 -05:00
Gnome Ann
f0c39c004a
Deleting world info entries should call `setgamesaved(False)`
2022-01-18 19:36:20 -05:00
henk717
4ca06ebcf3
Merge pull request #65 from VE-FORBRYDERNE/sp
...
Show author and SP length in soft prompt menu
2022-01-18 23:51:02 +01:00
henk717
1e0f9ada08
Add adventure 2.7B
...
It's on Hugging Face now, so let's add it to the menu!
2022-01-18 23:50:21 +01:00
Gnome Ann
3018322963
Detect and show properly when story is unsaved
2022-01-18 17:20:45 -05:00
Gnome Ann
1951ccd2ce
Show author and SP length in soft prompt menu
2022-01-18 16:30:09 -05:00
Gnome Ann
4da1a2d247
Prevent tokenizer from taking extra time the first time it's used
2022-01-17 22:55:25 -05:00
Gnome Ann
703c092577
Fix settings callback, and `genout.shape[-1]` in `tpumtjgenerate()`
2022-01-17 14:52:29 -05:00
Gnome Ann
3ba0e3f9d9
Dynamic TPU backend should support dynamic warpers and abort button
2022-01-17 14:10:32 -05:00
Gnome Ann
6502af086f
Use `vars._actions` in `tpumtjgenerate` and its callbacks
2022-01-17 13:24:11 -05:00
Gnome Ann
45bfde8d5d
`generated_cols` needs to be set properly by TPU static backend
2022-01-17 13:19:57 -05:00
Gnome Ann
9594b2db1c
Fix soft prompt length calculation in `calcsubmitbudget()`
...
In TPU instances, `vars.sp.shape[0]` is not always the actual number of
tokens in the soft prompt. We have to use `vars.sp_length` to get an
accurate token count.
2022-01-17 13:17:20 -05:00
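A simplified sketch of the corrected budget math; the signature and names are illustrative stand-ins for the real calcsubmitbudget():

```python
def calc_submit_budget(max_length: int, genamt: int, sp, sp_length: int) -> int:
    # On TPU instances the soft prompt tensor is padded, so sp.shape[0]
    # may overcount its tokens; vars.sp_length holds the accurate count.
    sp_tokens = sp_length if sp is not None else 0
    # Token budget left for story text after reserving room for the soft
    # prompt and the requested generation length.
    return max_length - sp_tokens - genamt
```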
Gnome Ann
74f79081d1
Use `vars.model_type` to check for GPT-2 models
2022-01-17 13:13:54 -05:00
Gnome Ann
54a587d6a3
Show confirmation dialog when navigating away from UI window
2022-01-17 12:11:06 -05:00
Gnome Ann
1627afa8c5
Merge branch 'united' into patch
2022-01-17 10:44:34 -05:00
Gnome Ann
33f9f2dc82
Show message when TPU backend is compiling
2022-01-16 21:09:10 -05:00
Gnome Ann
03b16ed920
Merge branch 'united' into patch
2022-01-16 00:36:55 -05:00
Gnome Ann
4f0c8b6552
Merge branch 'united' into xmap
2022-01-15 23:32:12 -05:00
Gnome Ann
f4eb896a69
Use original TPU backend if possible
2022-01-15 23:31:07 -05:00
henk717
9802d041aa
Colab Optimizations
...
Breakmodel is useless on Colab, so for the sake of efficiency, if --colab is present we will always assume a model is incompatible. The same applies to the conversion: Colabs are discarded, so converting the model to a .bin file only wastes time since the HDD isn't fast. Finally, we automatically set all the useful variables for Colab, so that in the future this can be removed from ckds and other scripts.
Lastly, ckds has been adapted not to copy the examples folder and to add the new --colab parameter.
Local players are much better off running the old --remote command.
2022-01-16 00:56:03 +01:00
henk717
9bcc24c07e
Merge pull request #58 from VE-FORBRYDERNE/xmap
...
Dynamic TPU backend xmaps
2022-01-15 16:20:58 +01:00
Gnome Ann
877fa39b8a
Change TPU regeneration indicator message
2022-01-14 23:21:27 -05:00
Gnome Ann
bdfde33e8a
Add an indicator for when dynamic WI scan is triggered in TPU Colabs
2022-01-14 23:13:55 -05:00
Gnome Ann
e0fdce2cc6
Fix TPU generation modifier
2022-01-14 23:00:06 -05:00
Gnome Ann
932c393d6a
Add TPU support for dynamic WI scan and generation modifiers
2022-01-14 21:39:02 -05:00
Gnome Ann
cf9a4b7e6b
Fix typos in error messages
2022-01-13 22:33:55 -05:00
henk717
53b91c6406
Small changes
2022-01-14 02:03:46 +01:00
Gnome Ann
a3d6dc93e8
xmaps for moving things onto TPU
2022-01-12 21:45:30 -05:00
Gnome Ann
f0b5cc137f
Merge branch 'united' into patch
2022-01-12 19:50:01 -05:00
henk717
49e2bcab1a
Allow unique chatnames in multiplayer
...
No longer update the chatname outside of the config. This will not affect the singleplayer tab at all, but it will allow people in multiplayer to chat with their own names.
2022-01-11 21:31:44 +01:00
henk717
3f88b4f840
Server clarification
...
To prevent confusion for users who have not used KoboldAI for a while, or who are following old tutorials, I have added a disclaimer that informs people that most Colab links should not be used with this feature and should instead be opened in the browser.
2022-01-11 00:35:20 +01:00
henk717
d2947bd1cc
Small model description update
2022-01-11 00:29:35 +01:00
Gnome Ann
43586c8f60
Fix some of the logic for generation aborting
2022-01-10 17:09:47 -05:00
Gnome Ann
902b6b0cee
Always save all world info entries
2022-01-10 16:36:36 -05:00
Gnome Ann
f718fdf65e
Allow pressing send button again to stop generation
2022-01-10 16:36:15 -05:00
Gnome Ann
c84d864021
Fix a bug with dynamic WI scan when using a soft prompt
...
The problem was that when a soft prompt is being used, the dynamic
scanning criteria searches a different set of tokens for world info
keys than the `_generate()` function, which results in generation loops
when a world info key appears in the former set of tokens but not the
latter.
2022-01-10 15:52:49 -05:00
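A sketch of the kind of mismatch being fixed (the helper name and slicing are illustrative assumptions): the dynamic scanning criteria and `_generate()` must search the same token view, otherwise a key can match in one view but never in the other, and generation loops:

```python
def tokens_for_wi_scan(sequence: list[int], sp_length: int) -> list[int]:
    # With a soft prompt active, its placeholder tokens sit at the front
    # of the sequence. Both the dynamic WI stopping criteria and
    # _generate() should scan the same slice; scanning different slices
    # is what caused the generation loops.
    return sequence[sp_length:]
```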
Gnome Ann
fbc3a73c0f
Compile TPU backend in background
2022-01-07 13:47:21 -05:00
Gnome Ann
01479c29ea
Fix the type hint for `bridged_kwarg` decorator
2022-01-04 20:48:34 -05:00
Gnome Ann
fc6caa0df0
Easier method of adding kwargs to bridged in aiserver.py
2022-01-04 19:36:21 -05:00
Gnome Ann
fbf5062074
Add option to `compute_context()` to not scan story
2022-01-04 19:26:59 -05:00
Gnome Ann
6edc6387f4
Accept command line arguments in `KOBOLDAI_ARGS` environment var
...
So that you can use gunicorn or whatever with command-line arguments by
passing the arguments in an environment variable.
2022-01-04 17:11:14 -05:00
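A minimal sketch of the described mechanism, assuming the usual os/shlex approach (the exact parsing in aiserver.py may differ):

```python
import os
import shlex
import sys

# If KOBOLDAI_ARGS is set (e.g. by a gunicorn wrapper that cannot forward
# argv), take the command-line arguments from it instead of sys.argv.
env_args = os.environ.get("KOBOLDAI_ARGS")
argv = shlex.split(env_args) if env_args else sys.argv[1:]
# args = parser.parse_args(argv)  # with the parser defined elsewhere
```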
Gnome Ann
aa86c6001c
`--breakmodel_gpublocks` should handle -1 properly now
2022-01-04 14:43:37 -05:00
Gnome Ann
e20452ddd8
Retrying random story generation now also remembers memory
2022-01-04 14:40:10 -05:00
Gnome Ann
f46ebd2359
Always pass 1.1 as repetition penalty to generator
...
The `dynamic_processor_wrap` makes it so that the repetition penalty is
read directly from `vars`, but this only works if the initial repetition
penalty sent to `generator` is not equal to 1. So we are now forcing the
initial repetition penalty to be something other than 1.
2022-01-04 14:18:58 -05:00
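A sketch of the workaround; the call shape is illustrative, not the actual aiserver.py invocation:

```python
def run_generator(generator, input_ids, max_length: int):
    # transformers skips installing the repetition penalty processor when
    # the configured penalty equals 1.0, which would defeat the
    # dynamic_processor_wrap that reads the live value from vars.
    # Passing 1.1 keeps the processor active as a placeholder.
    return generator(
        input_ids,
        do_sample=True,
        repetition_penalty=1.1,  # placeholder != 1; real value read from vars
        max_length=max_length,
    )
```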
Gnome Ann
63bb76b073
Make sure `vars.wifolders_u` is set up properly on loading a save
2022-01-04 14:13:36 -05:00
Gnome Ann
b88d49e359
Make all WI commands use UIDs instead of nums
2021-12-31 21:22:51 -05:00
Gnome Ann
ccfafe4f0a
Lua API fixes for deleting/editing story chunks
2021-12-31 18:28:03 -05:00
Gnome Ann
7241188408
Make sure tokenizer is initialized when used in read-only mode
2021-12-31 17:13:11 -05:00
henk717
796c71b7f7
ANTemplate in Model Configs
...
This commit exposes antemplates to the model config. This lets authors specify what kind of author's note template they would like to use for their model. Users can still change it if they desire.
2021-12-31 00:11:18 +01:00
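A sketch of how a loader might read such a template; the config key and default value are assumptions for illustration, since the commit only says the template is exposed via the model config:

```python
def get_authors_note_template(model_config: dict) -> str:
    # A model's config can ship its preferred author's note template;
    # the UI still lets users override it. The key name is an assumption.
    return model_config.get("antemplate", "[Author's note: <|>]")
```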
henk717
455dbd503b
Merge pull request #52 from VE-FORBRYDERNE/memory
...
Random story persist, author's note template and changes to behaviour when not connected to server
2021-12-30 23:57:55 +01:00
henk717
557d062381
Finetuneanon Models
...
Uploaded with permission, so now Finetuneanon's models can be added to the main menu
2021-12-30 14:16:04 +01:00
Gnome Ann
4d06ebb45a
Consistent capitalization of "Author's note"
2021-12-30 01:48:25 -05:00
Gnome Ann
de8a5046df
Make sure we don't keep the trimmed memory in `randomGameRequest()`
2021-12-30 01:45:27 -05:00
Gnome Ann
276f24029e
Author's Note Template
2021-12-29 23:43:36 -05:00
Gnome Ann
7573f64bf2
Add Memory box to Random Story dialog and "Random Story Persist"
2021-12-29 23:15:59 -05:00
Gnome Ann
8e2e3baed5
Fix AI output text flash showing up on wrong chunk
2021-12-29 14:23:22 -05:00
henk717
7a4834b8d0
Chatname Fix
...
Sends the chatname to the client
2021-12-27 18:52:06 +01:00
henk717
88f6e8ca38
Chatmode improvements
...
Blank lines appear often in chat mode, so it is best played with blank line removal turned on; this is now forced. It's not compatible with Adventure mode, so they now turn each other off.
2021-12-27 13:32:25 +01:00
henk717
bbd68020a5
Merge pull request #50 from VE-FORBRYDERNE/potluck
...
Chat mode GUI, and Lua and random story generator bug fixes
2021-12-27 11:20:20 +01:00
Gnome Ann
a4087b93e9
Fix random story retry, for real this time
2021-12-26 22:51:07 -05:00
Gnome Ann
1189781eac
Show a text box for chat name when Chat Mode is enabled
2021-12-26 22:21:58 -05:00
henk717
5b22b0f344
Update aiserver.py
2021-12-27 02:44:36 +01:00
henk717
4d7f222758
Update aiserver.py
2021-12-27 02:32:59 +01:00
henk717
6d1bf76ef1
Path Fixes
...
Fixes the tokenizer cache being hit when we already have a local model
2021-12-27 01:56:59 +01:00
Gnome Ann
9288a3de2f
Allow retry button to regenerate random story
2021-12-26 19:52:56 -05:00
Gnome Ann
1ff563ebda
Fix random story generator when No Prompt Gen is enabled
2021-12-26 19:40:20 -05:00
henk717
1a64f8bdc4
More Models
...
Added more models to the menu; all the popular community models are now easily accessible. I also re-ordered the menu from large to small so it makes a bit more sense.
2021-12-27 01:02:05 +01:00
Gnome Ann
8742453f95
Add safeguards for token budget and text formatting
...
* Error messages are now shown when memory, author's note, etc. exceeds
budget by itself
* Formatting options no longer break if there are empty chunks in the
story (although there shouldn't be any in the first place)
* Number of generated tokens is now kept track of from Python
2021-12-26 18:29:54 -05:00
henk717
261e8b67dc
Windows Color Workaround
...
Different approach that activates this on Windows; hopefully it is now Linux compatible.
2021-12-26 22:46:15 +01:00
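One common way to achieve this (a sketch of the general technique, not necessarily what this commit did) is to enable the console's virtual terminal processing flag, which only needs doing on Windows:

```python
import sys

def enable_ansi_colors() -> None:
    # ANSI escape codes work natively on Linux/macOS; on Windows the
    # console must first have virtual terminal processing switched on.
    if sys.platform == "win32":
        import ctypes
        kernel32 = ctypes.windll.kernel32
        handle = kernel32.GetStdHandle(-11)  # STD_OUTPUT_HANDLE
        mode = ctypes.c_uint32()
        if kernel32.GetConsoleMode(handle, ctypes.byref(mode)):
            ENABLE_VIRTUAL_TERMINAL_PROCESSING = 0x0004
            kernel32.SetConsoleMode(handle,
                                    mode.value | ENABLE_VIRTUAL_TERMINAL_PROCESSING)
```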
henk717
1107a1386d
Enable Terminal Colors
...
Enable Windows Terminal color support based on feedback from Jojorne. If this causes issues on Linux, we can move it to play.bat instead.
2021-12-26 22:22:24 +01:00
Gnome Ann
32a0d7c453
More Lua API fixes
...
* Removed `vars.model_orig`
* `requirex()` in bridge.lua now maintains a separate module cache for each
userscript instead of using the same cache for all userscripts
* `vars.lua_deleted` and `vars.lua_edited` are now erased right before running
the input modifiers instead of right before each time the generation modifiers
are run
2021-12-26 12:49:28 -05:00
henk717
b9729749ba
Update aiserver.py
2021-12-26 02:01:57 +01:00