Commit Graph

582 Commits

Author SHA1 Message Date
Gnome Ann
35539a8785 Allow require() in userscripts to import built-in modules 2021-12-29 13:47:03 -05:00
henk717
38bad263e1
Allow Repetition Penalty up to 3
A user reported positive results when trying a repetition penalty higher than 2 on some models, so let's give people the freedom to do so. If there is a demonstrable benefit to running higher than 3, I am open to raising the limit again.
2021-12-28 03:22:14 +01:00
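The commit above raises the repetition-penalty ceiling from 2 to 3. A minimal sketch of how such a limit might be enforced on a user-supplied value; the function name and bounds handling are illustrative, not KoboldAI's actual code:

```python
# Hypothetical sketch of clamping a user-supplied repetition penalty.
# The 3.0 ceiling mirrors the new limit described above.
def clamp_rep_pen(value, lo=1.0, hi=3.0):
    """Clamp a repetition penalty into the allowed [lo, hi] range."""
    return max(lo, min(hi, float(value)))
```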
henk717
7a4834b8d0 Chatname Fix
Sends the chatname to the client
2021-12-27 18:52:06 +01:00
henk717
88f6e8ca38 Chatmode improvements
Blank lines appear often in chat mode, so it is best played with blank-line removal turned on; this is now forced. Chat mode is not compatible with Adventure mode, so the two now turn each other off.
2021-12-27 13:32:25 +01:00
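The "turn each other off" behavior described above can be sketched as a pair of mutually exclusive toggles. This is purely illustrative, not KoboldAI's actual state handling:

```python
# Illustrative sketch of two modes that disable each other,
# as described for Chat Mode and Adventure Mode.
class ModeState:
    def __init__(self):
        self.chatmode = False
        self.adventure = False

    def set_chatmode(self, on):
        self.chatmode = on
        if on:
            self.adventure = False  # modes are mutually exclusive

    def set_adventure(self, on):
        self.adventure = on
        if on:
            self.chatmode = False
```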
henk717
bbd68020a5
Merge pull request #50 from VE-FORBRYDERNE/potluck
Chat mode GUI, and Lua and random story generator bug fixes
2021-12-27 11:20:20 +01:00
Gnome Ann
a4087b93e9 Fix random story retry, for real this time 2021-12-26 22:51:07 -05:00
Gnome Ann
1189781eac Show a text box for chat name when Chat Mode is enabled 2021-12-26 22:21:58 -05:00
henk717
5b22b0f344 Update aiserver.py 2021-12-27 02:44:36 +01:00
henk717
4d7f222758 Update aiserver.py 2021-12-27 02:32:59 +01:00
henk717
6d1bf76ef1 Path Fixes
Fixes the tokenizer cache being hit when we already have a local model
2021-12-27 01:56:59 +01:00
Gnome Ann
9288a3de2f Allow retry button to regenerate random story 2021-12-26 19:52:56 -05:00
Gnome Ann
1ff563ebda Fix random story generator when No Prompt Gen is enabled 2021-12-26 19:40:20 -05:00
Gnome Ann
6aa2f50045 Add chatmode and chatname fields in bridge.lua 2021-12-26 19:07:44 -05:00
henk717
1a64f8bdc4 More Models
Added more models to the menu; all the popular community models are now easily accessible. I also reordered the menu from large to small so that it makes a bit more sense.
2021-12-27 01:02:05 +01:00
Gnome Ann
8742453f95 Add safeguards for token budget and text formatting
* Error messages are now shown when memory, author's note, etc. exceeds
  budget by itself
* Formatting options no longer break if there are empty chunks in the
  story (although there shouldn't be any in the first place)
* Number of generated tokens is now kept track of from Python
2021-12-26 18:29:54 -05:00
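The first safeguard above, "error when memory, author's note, etc. exceeds budget by itself", amounts to checking each context component against the total token budget before assembling the context. A hedged sketch with illustrative names, where token counts stand in for a real tokenizer:

```python
# Sketch of the "exceeds budget by itself" safeguard: no single
# component (memory, author's note, ...) may be larger than the
# whole token budget. Names and structure are illustrative.
def check_budget(components, budget):
    """Raise if any single component exceeds the total token budget."""
    for name, tokens in components.items():
        if tokens > budget:
            raise ValueError(
                f"{name} alone exceeds the token budget ({tokens} > {budget})"
            )
```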
henk717
261e8b67dc Windows Color Workaround
A different approach that activates this on Windows; hopefully it is now Linux compatible.
2021-12-26 22:46:15 +01:00
henk717
1107a1386d Enable Terminal Colors
Enable Windows Terminal color support based on feedback from Jojorne. If this causes issues on Linux, we can move it to play.bat instead.
2021-12-26 22:22:24 +01:00
Gnome Ann
6183ecd669 Fix io-related security issues in Lua sandbox
* `io.lines` with a string as first argument is now disallowed because
  it reads a file given a filename
* `io.input` and `io.output` no longer permit having a string as first
  argument because that would allow access to local files
2021-12-26 14:23:56 -05:00
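The actual fix lives in the Lua sandbox in bridge.lua, but the guard it describes, rejecting plain string (filename) arguments so sandboxed code cannot reach local files, can be sketched analogously in Python. The wrapper and names are hypothetical:

```python
# Analogous guard sketched in Python (the real fix is in bridge.lua):
# wrap a file-opening function and reject plain string arguments,
# which would otherwise grant access to local files by filename.
def sandboxed(fn):
    def wrapper(arg=None, *rest):
        if isinstance(arg, str):
            raise PermissionError(
                "filename arguments are disallowed in the sandbox"
            )
        return fn(arg, *rest)
    return wrapper
```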
Gnome Ann
32a0d7c453 More Lua API fixes
* Removed `vars.model_orig`
* `requirex()` in bridge.lua now maintains a separate module cache for each
  userscript instead of using the same cache for all userscripts
* `vars.lua_deleted` and `vars.lua_edited` are now erased right before running
  the input modifiers instead of right before each time the generation modifiers
  are run
2021-12-26 12:49:28 -05:00
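The per-userscript module cache described for `requirex()` can be sketched as one cache dict per script, so a module cached by one userscript never leaks into another. The loader callable and identifiers are illustrative stand-ins, not the bridge.lua implementation:

```python
# Sketch of a per-script module cache, as described for requirex():
# each userscript id gets its own cache dict.
_caches = {}

def requirex(script_id, modname, loader):
    cache = _caches.setdefault(script_id, {})
    if modname not in cache:
        cache[modname] = loader(modname)  # load only on first request
    return cache[modname]
```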
henk717
b9729749ba Update aiserver.py 2021-12-26 02:01:57 +01:00
henk717
ddd9bded30 Store chatname
Also save the chatname in the settings for later re-use by the user.
2021-12-26 01:55:27 +01:00
henk717
d234f67a90 Chat Mode
The initial commit for Chat Mode. The nickname part of the UI is missing; other than that, it should be fully functional. To use Chat Mode effectively, first input a small dialogue (around 6 lines: 3 of your own inputs and 3 of the character's) formatted as `Name :`; it will then automate the actions needed to chat properly. During this mode, single-line mode is forced on and Trim Incomplete Sentences is forced off.
2021-12-26 01:51:32 +01:00
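The `Name :` formatting convention described above can be sketched as a one-line helper. Purely illustrative, not KoboldAI's implementation:

```python
# Minimal sketch of the "Name :" chat-line formatting described above.
def format_chat_line(chatname, text):
    """Prefix a chat message with the speaker's name, 'Name : text'."""
    return f"{chatname} : {text.strip()}"
```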
henk717
14e5fcd355 AutoTokenizer 2021-12-25 00:48:12 +01:00
henk717
e1cd34268b AutoTokenizer
Future-proofing for upcoming tokenizers; for now this is not needed, since everything uses GPT2, but when that changes we want to be prepared. Not all models have a proper tokenizer config, so if we can't find one we fall back to GPT2.
2021-12-25 00:44:26 +01:00
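The fallback described above, try the model's own tokenizer first and drop back to GPT2 if none is found, follows a generic try/except pattern. Sketched here with loader callables standing in for e.g. `AutoTokenizer.from_pretrained` and a GPT2 tokenizer loader; the helper name is hypothetical:

```python
# Generic sketch of the tokenizer fallback described above: try the
# model's own tokenizer, fall back to a GPT2 tokenizer on failure.
# The callables stand in for real transformers loaders.
def load_tokenizer(primary, fallback):
    try:
        return primary()
    except Exception:
        # No usable tokenizer config for this model; use GPT2 instead.
        return fallback()
```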
henk717
00a0cea077 Update aiserver.py 2021-12-24 23:15:01 +01:00
henk717
1d3370c995 Community model
First batch; more will follow. We will also need to update the other VRAM displays with the changes that have happened, which will happen depending on how the 8-bit work goes.
2021-12-24 23:12:14 +01:00
henk717
6952938e88 Update aiserver.py
Crash fix
2021-12-24 19:57:08 +01:00
henk717
726b42889b
Merge pull request #49 from VE-FORBRYDERNE/gui-and-scripting
Scripting and GUI improvements
2021-12-24 06:21:12 +01:00
Gnome Ann
b2def30d9d Update Lua modeltype and model API 2021-12-23 16:35:52 -05:00
Gnome Ann
c305076cf3 Fix the behaviour of the softprompt setting 2021-12-23 16:14:09 -05:00
Gnome Ann
4a852d7f95 Fix restorePrompt() in application.js
When the prompt is deleted by the user, the topmost remaining chunk of
the story that has at most one non-whitespace character is now made the
new prompt chunk.

Also fixed issues in Chromium-based browsers (desktop and Android) where selecting all text in the story, typing some new text to replace the entire story, and then defocusing caused the editor to break.
2021-12-23 15:43:32 -05:00
Gnome Ann
00f611b207 Save softprompt filename in settings 2021-12-23 13:02:11 -05:00
Gnome Ann
924c48a6d7 Merge branch 'united' into gui-and-scripting 2021-12-23 13:02:01 -05:00
henk717
cae0f279e2 Restore Lowmem
Accidentally got replaced in one of my test runs
2021-12-23 18:50:01 +01:00
henk717
25a6e489c1 Remove Replace from Huggingface
This accidentally ended up in the wrong section; for downloads we do not replace anything, only afterwards.
2021-12-23 17:27:09 +01:00
henk717
e7aa92cd86 Update aiserver.py 2021-12-23 17:12:42 +01:00
henk717
9d4113955f Replace NeoCustom
NeoCustom is now obsolete beyond the file selection and the CLI, so after the CLI we adapt the input to a generic model and then use the improved generic routine to handle it. This saves the duplicate effort of maintaining an almost identical routine, now that models are handled by their type and not their name.
2021-12-23 17:10:02 +01:00
henk717
be351e384d Path loading improvements
This fixes a few scenarios from my commit yesterday: models whose name contains a / are now first loaded from the corrected directory if it exists, before we fall back to the original name, to make sure the config is loaded from the correct location. It also includes cache-dir fixes and an improved routine for path-loaded models that mimics the NeoCustom option, fixing models that have no model_type specified. Because GPT2 doesn't work well with this option and should be used exclusively with GPT2Custom, and because GPT-J models should have a model_type, we assume a model is a Neo model when no model_type is specified.
2021-12-23 14:40:35 +01:00
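The assumption rule in the commit above, use the config's model_type when present, otherwise treat the model as a Neo model, reduces to a small lookup with a default. A hedged sketch; the function name and the `"gpt_neo"` default string are illustrative, not the actual loader code:

```python
# Sketch of the model_type assumption described above: prefer the
# model_type declared in the config; when it is missing, assume a
# Neo model (GPT2 models are expected to use GPT2Custom explicitly).
def resolve_model_type(config):
    return config.get("model_type") or "gpt_neo"
```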
Gnome Ann
8452940597 Merge branch 'united' into gui-and-scripting 2021-12-23 00:18:11 -05:00
Gnome Ann
2a4d7448be Make dynamic_processor_wrap execute warper conditionally
The top-k warper doesn't work properly with an argument of 0, so there is now the ability to skip the warper when a condition is not met.
2021-12-22 23:46:25 -05:00
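The conditional execution described above, run a logit warper only when its condition holds, so a top-k warper is never invoked with k == 0, can be sketched as a small wrapper. Names are illustrative, not the actual `dynamic_processor_wrap` code:

```python
# Sketch of executing a warper conditionally, as described above:
# the warper runs only when its condition is met; otherwise the
# scores pass through unchanged.
def apply_warper(scores, warper, condition):
    """Apply `warper` to `scores` only if `condition` is true."""
    return warper(scores) if condition else scores
```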
Gnome Ann
7e06c25011 Display icons for active userscripts and softprompts
Also fixes the userscript menu so that the active userscripts preserve
the previously selected order as was originally intended.
2021-12-22 23:33:27 -05:00
henk717
a2d8347939 Replace model path differently
The path correction was applied too soon and broke online loading; the replace is now applied where it is relevant instead.
2021-12-23 03:05:53 +01:00
henk717
4ff1a6e940 Model Type support
Automatically detect or assume the model type so we do not have to hardcode all the different models people might use. This makes the behavior of --model almost identical to the NeoCustom behavior as far as the CLI is concerned, but only if the model_type is defined in the model's config file.
2021-12-23 02:50:06 +01:00
henk717
2d7a00525e Path fix
My last commit didn't compensate for the file location properly; this is now fixed.
2021-12-23 01:47:05 +01:00
henk717
81120a0524 Compatibility Fixes
Rather than coding a vars.custmodpath-or-vars.model distinction into all the other parts of the code, I opted to just set vars.custmodpath, making the behavior more consistent now that it always loads from the same location.
2021-12-23 00:36:08 +01:00
henk717
f93d489971 Update install_requirements.bat 2021-12-22 23:57:21 +01:00
Gnome Ann
c549ea04a9 Always use all logit warpers
Now that the logit warper parameters can be changed mid-generation by
generation modifiers, the logit warpers have to be always on.
2021-12-22 17:29:07 -05:00
henk717
9f86ca5be5 Force Temp Location
Conda breaks when it tries to use the temp directory if the username contains spaces; added a workaround that forces our own directory to be used as temp for Kobold.
2021-12-22 21:56:57 +01:00
Gnome Ann
1e1b45d47a Add support for multiple library paths in bridge.lua 2021-12-22 14:24:31 -05:00
Gnome Ann
20a7b6a260 Correct "LuLPEG" to "LuLPeg" 2021-12-22 13:58:01 -05:00