344 Commits

Author SHA1 Message Date
henk717
9bcc24c07e
Merge pull request #58 from VE-FORBRYDERNE/xmap
Dynamic TPU backend xmaps
2022-01-15 16:20:58 +01:00
Gnome Ann
877fa39b8a Change TPU regeneration indicator message 2022-01-14 23:21:27 -05:00
Gnome Ann
bdfde33e8a Add an indicator for when dynamic WI scan is triggered in TPU Colabs 2022-01-14 23:13:55 -05:00
Gnome Ann
e0fdce2cc6 Fix TPU generation modifier 2022-01-14 23:00:06 -05:00
Gnome Ann
932c393d6a Add TPU support for dynamic WI scan and generation modifiers 2022-01-14 21:39:02 -05:00
henk717
53b91c6406 Small changes 2022-01-14 02:03:46 +01:00
Gnome Ann
a3d6dc93e8 xmaps for moving things onto TPU 2022-01-12 21:45:30 -05:00
henk717
49e2bcab1a Allow unique chatnames in multiplayer
No longer update the chatname outside of the config. This will not affect the singleplayer tab at all, but it will allow people in multiplayer to chat with their own names.
2022-01-11 21:31:44 +01:00
henk717
3f88b4f840 Server clarification
To prevent confusion among users who have not used KoboldAI for a while, or who are following old tutorials, I have added a disclaimer informing people that most Colab links should not be used with this feature and should instead be opened in the browser.
2022-01-11 00:35:20 +01:00
henk717
d2947bd1cc Small model description update 2022-01-11 00:29:35 +01:00
Gnome Ann
fbc3a73c0f Compile TPU backend in background 2022-01-07 13:47:21 -05:00
Gnome Ann
01479c29ea Fix the type hint for bridged_kwarg decorator 2022-01-04 20:48:34 -05:00
Gnome Ann
fc6caa0df0 Easier method of adding kwargs to bridged in aiserver.py 2022-01-04 19:36:21 -05:00
Gnome Ann
fbf5062074 Add option to compute_context() to not scan story 2022-01-04 19:26:59 -05:00
Gnome Ann
6edc6387f4 Accept command line arguments in KOBOLDAI_ARGS environment var
So that you can use gunicorn (or similar) with command-line arguments by
passing the arguments in an environment variable.
2022-01-04 17:11:14 -05:00
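The mechanism this commit describes can be sketched as follows. The function name `collect_args` and the precedence rule (environment variable consulted only when no regular CLI arguments are given) are assumptions for illustration, not the exact logic in aiserver.py:

```python
import os
import shlex
import sys

def collect_args(argv=None):
    """Merge KOBOLDAI_ARGS into argument parsing (hypothetical sketch).

    The env var is split with shell quoting rules and used when no
    regular command-line arguments were supplied.
    """
    argv = list(sys.argv[1:] if argv is None else argv)
    env = os.environ.get("KOBOLDAI_ARGS")
    if env and not argv:
        # shlex.split honors quoting, e.g. KOBOLDAI_ARGS='--model "GPT-J 6B"'
        argv = shlex.split(env)
    return argv
```

Under gunicorn, which does not forward arbitrary flags to the app, you would export KOBOLDAI_ARGS before starting the server.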
Gnome Ann
aa86c6001c --breakmodel_gpublocks should handle -1 properly now 2022-01-04 14:43:37 -05:00
Gnome Ann
e20452ddd8 Retrying random story generation now also remembers memory 2022-01-04 14:40:10 -05:00
Gnome Ann
f46ebd2359 Always pass 1.1 as repetition penalty to generator
The `dynamic_processor_wrap` makes it so that the repetition penalty is
read directly from `vars`, but this only works if the initial repetition
penalty sent to `generator` is not equal to 1. So we now force the
initial repetition penalty to be something other than 1.
2022-01-04 14:18:58 -05:00
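A toy reconstruction of why the 1.1 matters. The names `build_processors`, the wrapper's signature, and the simplified `apply` semantics are all illustrative assumptions; the real transformers processor penalizes only tokens that already appeared in the sequence:

```python
class RepetitionPenaltyProcessor:
    """Toy stand-in for transformers' RepetitionPenaltyLogitsProcessor."""
    def __init__(self, penalty):
        self.penalty = penalty

    def apply(self, scores):
        # Simplified: the real processor only penalizes tokens
        # that have already been generated
        return [s / self.penalty for s in scores]

def build_processors(initial_penalty):
    # Like generate(): a penalty of exactly 1 means "no-op",
    # so no processor object is created at all.
    if initial_penalty == 1:
        return []
    return [RepetitionPenaltyProcessor(initial_penalty)]

def dynamic_processor_wrap(processor, source, attr):
    """Refresh the penalty from a live settings object on every call,
    so later changes to the setting take effect without rebuilding."""
    inner = processor.apply
    def wrapped(scores):
        processor.penalty = getattr(source, attr)
        return inner(scores)
    processor.apply = wrapped
```

Passing 1 would leave nothing for the wrapper to patch, which is exactly the failure mode this commit works around by always sending 1.1.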
Gnome Ann
63bb76b073 Make sure vars.wifolders_u is set up properly on loading a save 2022-01-04 14:13:36 -05:00
Gnome Ann
b88d49e359 Make all WI commands use UIDs instead of nums 2021-12-31 21:22:51 -05:00
Gnome Ann
ccfafe4f0a Lua API fixes for deleting/editing story chunks 2021-12-31 18:28:03 -05:00
Gnome Ann
7241188408 Make sure tokenizer is initialized when used in read-only mode 2021-12-31 17:13:11 -05:00
henk717
796c71b7f7 ANTemplate in Model Configs
This commit exposes antemplates to the model config, letting authors specify what kind of author's note template they would like to use for their model. Users can still change it if they desire.
2021-12-31 00:11:18 +01:00
henk717
455dbd503b
Merge pull request #52 from VE-FORBRYDERNE/memory
Random story persist, author's note template and changes to behaviour when not connected to server
2021-12-30 23:57:55 +01:00
henk717
557d062381 Finetuneanon Models
Uploaded with permission, so Finetuneanon's models can now be added to the main menu.
2021-12-30 14:16:04 +01:00
Gnome Ann
4d06ebb45a Consistent capitalization of "Author's note" 2021-12-30 01:48:25 -05:00
Gnome Ann
de8a5046df Make sure we don't keep the trimmed memory in randomGameRequest() 2021-12-30 01:45:27 -05:00
Gnome Ann
276f24029e Author's Note Template 2021-12-29 23:43:36 -05:00
Gnome Ann
7573f64bf2 Add Memory box to Random Story dialog and "Random Story Persist" 2021-12-29 23:15:59 -05:00
Gnome Ann
8e2e3baed5 Fix AI output text flash showing up on wrong chunk 2021-12-29 14:23:22 -05:00
henk717
7a4834b8d0 Chatname Fix
Sends the chatname to the client
2021-12-27 18:52:06 +01:00
henk717
88f6e8ca38 Chatmode improvements
Blank lines appear often in chat mode, so it is best played with blank-line removal turned on; this is now forced. It's not compatible with Adventure mode, so they now turn each other off.
2021-12-27 13:32:25 +01:00
henk717
bbd68020a5
Merge pull request #50 from VE-FORBRYDERNE/potluck
Chat mode GUI, and Lua and random story generator bug fixes
2021-12-27 11:20:20 +01:00
Gnome Ann
a4087b93e9 Fix random story retry, for real this time 2021-12-26 22:51:07 -05:00
Gnome Ann
1189781eac Show a text box for chat name when Chat Mode is enabled 2021-12-26 22:21:58 -05:00
henk717
5b22b0f344 Update aiserver.py 2021-12-27 02:44:36 +01:00
henk717
4d7f222758 Update aiserver.py 2021-12-27 02:32:59 +01:00
henk717
6d1bf76ef1 Path Fixes
Fixes the tokenizer cache being hit when we already have a local model
2021-12-27 01:56:59 +01:00
Gnome Ann
9288a3de2f Allow retry button to regenerate random story 2021-12-26 19:52:56 -05:00
Gnome Ann
1ff563ebda Fix random story generator when No Prompt Gen is enabled 2021-12-26 19:40:20 -05:00
henk717
1a64f8bdc4 More Models
Added more models to the menu; all the popular community models are now easily accessible. I also re-ordered the menu from large to small so it makes a bit more sense.
2021-12-27 01:02:05 +01:00
Gnome Ann
8742453f95 Add safeguards for token budget and text formatting
* Error messages are now shown when memory, author's note, etc. exceeds
  budget by itself
* Formatting options no longer break if there are empty chunks in the
  story (although there shouldn't be any in the first place)
* Number of generated tokens is now kept track of from Python
2021-12-26 18:29:54 -05:00
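The first safeguard in the list above might look something like this sketch; the function name and message wording are assumptions, not the actual aiserver.py code:

```python
def assert_within_budget(name, token_count, budget):
    """Raise a user-visible error when a single component (memory,
    author's note, ...) exceeds the token budget all by itself."""
    if token_count > budget:
        raise ValueError(
            f"{name} is {token_count} tokens, exceeding the "
            f"{budget}-token budget on its own"
        )
```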
henk717
261e8b67dc Windows Color Workaround
A different approach that activates this on Windows; hopefully it is now Linux-compatible.
2021-12-26 22:46:15 +01:00
henk717
1107a1386d Enable Terminal Colors
Enable Windows Terminal color support based on feedback from Jojorne. If this causes issues on Linux, we can move it to play.bat instead.
2021-12-26 22:22:24 +01:00
Gnome Ann
32a0d7c453 More Lua API fixes
* Removed `vars.model_orig`
* `requirex()` in bridge.lua now maintains a separate module cache for each
  userscript instead of using the same cache for all userscripts
* `vars.lua_deleted` and `vars.lua_edited` are now erased right before running
  the input modifiers instead of right before each time the generation modifiers
  are run
2021-12-26 12:49:28 -05:00
henk717
b9729749ba Update aiserver.py 2021-12-26 02:01:57 +01:00
henk717
ddd9bded30 Store chatname
Also save the chatname in the settings for later re-use by the user.
2021-12-26 01:55:27 +01:00
henk717
d234f67a90 Chat Mode
The initial commit for Chat Mode. The nickname part of the UI is missing; other than that it should be fully functional. To use Chat Mode effectively, first input a small dialogue (around 6 lines: 3 of your own inputs and 3 from the character) formatted as `Name :`; it will then automate the actions needed to chat properly. During this mode, Single Line mode is forced on and Trim Incomplete Sentences is forced off.
2021-12-26 01:51:32 +01:00
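A hypothetical sketch of the automation the commit describes: wrap the user's raw input as a `Name :` dialogue line, and enforce single-line output by cutting the model's reply at the first newline. The function names and exact formatting are assumptions:

```python
def format_chat_input(chatname, text):
    # Wrap the raw input as one dialogue line, matching the
    # "Name : message" convention of the primer dialogue
    return f"\n{chatname} : {text}\n"

def trim_single_line(generated):
    # Single Line mode forced on: keep only the first line of output
    return generated.split("\n", 1)[0]
```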
henk717
14e5fcd355 AutoTokenizer 2021-12-25 00:48:12 +01:00
henk717
e1cd34268b AutoTokenizer
Future-proofing for upcoming tokenizers; for now this is not needed since everything uses GPT-2, but when that changes we want to be prepared. Not all models have a proper tokenizer config, so if we can't find one we fall back to GPT-2.
2021-12-25 00:44:26 +01:00
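The fallback pattern can be sketched with injected loader callables; in practice these would be transformers' `AutoTokenizer.from_pretrained` and `GPT2Tokenizer.from_pretrained`. Catching `OSError` reflects how transformers typically reports a missing config, but the exact exception handling here is an assumption:

```python
def load_tokenizer(model_path, auto_loader, fallback_loader):
    """Try the model's own tokenizer config first; fall back to GPT-2
    when none can be found (sketch of the behavior this commit adds)."""
    try:
        return auto_loader(model_path)
    except OSError:
        # No usable tokenizer config shipped with the model; all
        # current menu models are GPT-2 compatible, so default to it
        return fallback_loader("gpt2")
```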