Commit Graph

209 Commits

Author SHA1 Message Date
Gnome Ann de3664e73c Add an assertion for the value of already_generated 2021-11-03 12:01:45 -04:00
Gnome Ann ec8ec55256 Dynamic world info scan 2021-11-03 11:54:48 -04:00
henk717 aa998ba5e9
Merge pull request #20 from VE-FORBRYDERNE/sp
Soft prompt support for PyTorch models
2021-10-30 00:35:44 +02:00
Gnome Ann 206c01008e Fix budget calculation when using soft prompt 2021-10-29 11:44:51 -04:00
henk717 c9c370aa17
Merge branch 'KoboldAI:main' into united 2021-10-28 23:29:29 +02:00
Gnome Ann bf4e7742ac Patch GPTJForCausalLM, if it exists, to support soft prompting 2021-10-28 17:18:28 -04:00
Gnome Ann 40b4631f6c Clamp input_ids in place
Apparently transformers maintains an internal reference to input_ids
(to use for repetition penalty) so we have to clamp the internal
version, too, because otherwise transformers will throw an out-of-bounds
error upon attempting to access token IDs that are not in the
vocabulary.
2021-10-28 16:52:39 -04:00
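The clamping fix above hinges on in-place mutation: because transformers keeps its own reference to the `input_ids` buffer, an out-of-place clamp would leave that internal copy holding out-of-vocabulary soft-prompt IDs. A minimal sketch of the aliasing problem, using NumPy in place of torch (the in-place/out-of-place distinction is the same):

```python
import numpy as np

vocab_size = 50257  # GPT-Neo-style vocabulary; soft-prompt IDs live above it

# Two names for the same buffer, standing in for the reference transformers
# keeps internally for repetition penalty.
input_ids = np.array([50256, 50300, 123], dtype=np.int64)
internal_ref = input_ids  # aliases the same memory

# Out-of-place clamping returns a fixed copy but leaves the alias untouched:
clamped_copy = np.clip(input_ids, 0, vocab_size - 1)

# In-place clamping (torch equivalent: input_ids.clamp_(max=vocab_size - 1))
# fixes every alias at once:
np.clip(input_ids, 0, vocab_size - 1, out=input_ids)

print(internal_ref.tolist())  # → [50256, 50256, 123]
```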
Gnome Ann 24d5d63c9f Use the correct generation min and max when using soft prompt 2021-10-28 16:39:59 -04:00
Gnome Ann 511817132a Don't change the shape of transformer.wte 2021-10-28 15:39:59 -04:00
Gnome Ann a1ae11630a Make sure to cast vars.sp to the correct dtype 2021-10-28 13:22:07 -04:00
Gnome Ann 1556bd32a5 Use torch.where to inject the soft prompt instead of torch.cat 2021-10-28 13:20:14 -04:00
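The `torch.where` approach above selects per-row between the soft-prompt matrix and the ordinary token embeddings instead of concatenating two tensors, which keeps the sequence length fixed. A hedged sketch with NumPy standing in for torch (the real code operates on the model's `wte` output; names here are illustrative):

```python
import numpy as np

embed_dim, vocab_size = 4, 50257
rng = np.random.default_rng(0)

# IDs at or above vocab_size mark soft-prompt slots in the input sequence.
token_ids = np.array([50257, 50258, 10, 20])        # first two are soft-prompt slots
token_embeds = rng.standard_normal((4, embed_dim))  # what the embedding layer produced
soft_prompt = rng.standard_normal((2, embed_dim))   # learned soft-prompt matrix

# Build a full-length array of soft-prompt rows, then select row-by-row,
# as torch.where does, instead of concatenating tensors of different lengths.
is_soft = token_ids >= vocab_size
soft_rows = np.zeros_like(token_embeds)
soft_rows[is_soft] = soft_prompt[token_ids[is_soft] - vocab_size]

inputs_embeds = np.where(is_soft[:, None], soft_rows, token_embeds)
```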
Gnome Ann 248e0bd24b Fix soft prompt loading code 2021-10-28 00:29:42 -04:00
Gnome Ann 4e3cc93020 Merge branch 'united' into sp 2021-10-23 11:45:03 -04:00
henk717 7b73d7cfdd Single Line Mode
Adds Single Line mode, optimized for things like chatbot testing and other cases where you want to have control over what happens after a paragraph.

This can also be used as a foundation for a chatbot optimized interface mode.
2021-10-23 17:30:48 +02:00
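At its core, Single Line mode means truncating a generation at the first line break so the user regains control after each paragraph. A minimal sketch (function name is illustrative, not the actual aiserver.py implementation):

```python
def apply_single_line_mode(genout: str) -> str:
    """Keep only the first line of a generation, as Single Line mode would,
    handing control back to the user after each paragraph."""
    return genout.split("\n", 1)[0]

print(apply_single_line_mode("Hello there!\nSecond paragraph we don't want."))
# → Hello there!
```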
Gnome Ann 1f449a9dda Soft prompt support (6B Colabs not supported yet) 2021-10-22 14:18:10 -04:00
Gnome Ann 3501f03153 Create settings directory if it doesn't exist when using InferKit/OAI 2021-10-21 23:33:32 -04:00
henk717 fa0f8af1d6
Merge branch 'KoboldAI:main' into united 2021-10-15 08:23:06 +02:00
henk717 9513240dfb
Version bump
Since VE fixed important things in the editor, I want users to be able to notice this more easily.
2021-10-15 08:22:32 +02:00
henk717 c854a62549 Clarified GPU Layers
Having both breakmodel_layers and layers is confusing, so the new method is named breakmodel_gpulayers. The old flag should no longer be used, but since it works in reverse we leave it in so existing scripts don't break.
2021-10-06 18:55:01 +02:00
henk717 bd063f7590
Merge pull request #19 from VE-FORBRYDERNE/multi-gpu
Multiple GPU support
2021-10-06 18:50:58 +02:00
henk717 82c7eaffb5
Merge branch 'KoboldAI:main' into united 2021-10-06 00:26:08 +02:00
henk717 8893916fef
Don't always submit prompt by default
Feedback from users is that it's better not to always submit the prompt; this is consistent with the randomly generated stories. You can always toggle it on if you need it for coherency. This change does not override existing user settings.
2021-10-06 00:25:05 +02:00
Gnome Ann aa59f8b4b2 Fix CPU layers not displaying correctly when using --layers 2021-10-05 11:29:47 -04:00
Gnome Ann 91352ea9f1 Change the command line flags for breakmodel 2021-10-05 11:22:09 -04:00
Gnome Ann a1e4405aa6 Automatically use breakmodel instead of GPU-only where supported
There's really no reason to use GPU-only mode if breakmodel is supported
because breakmodel can run in GPU-only mode too.
2021-10-05 10:36:51 -04:00
Gnome Ann fb90a7ed17 Change the help text for breakmodel to be more helpful 2021-10-05 10:31:28 -04:00
Gnome Ann 231621e7c2 Use AutoModelForCausalLM for custom models with a model_type 2021-10-05 09:45:12 -04:00
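The dispatch logic above relies on the `model_type` key in a custom model's `config.json`: when present, `AutoModelForCausalLM` can pick the right architecture; when absent, the loader falls back to GPT-Neo. A hedged sketch of that decision (the function and return values are illustrative, not the actual aiserver.py code):

```python
import json
import os
import tempfile

def backend_for_custom_model(model_dir: str) -> str:
    """Pick a loader class name: if config.json declares a model_type,
    AutoModelForCausalLM can dispatch on it; otherwise fall back to GPT-Neo."""
    with open(os.path.join(model_dir, "config.json")) as f:
        config = json.load(f)
    if "model_type" in config:
        return "AutoModelForCausalLM"  # transformers dispatches on model_type
    return "GPTNeoForCausalLM"         # legacy fallback for configs without one

# Demo with a throwaway model directory:
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "config.json"), "w") as f:
        json.dump({"model_type": "gptj"}, f)
    print(backend_for_custom_model(d))  # → AutoModelForCausalLM
```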
Gnome Ann a283d34b27 Multiple GPU support 2021-10-05 09:38:57 -04:00
Gnome Ann a42b580027 Merge branch 'united' into multi-gpu 2021-10-02 11:44:26 -04:00
henk717 dab58d8393
Merge branch 'KoboldAI:main' into united 2021-09-29 17:05:06 +02:00
Gnome Ann a179bb2820 Bump version number to 1.16.2 2021-09-28 21:50:33 -04:00
Gnome Ann e6cd28243e Scroll to the bottom of the gamescreen after retrying 2021-09-28 21:34:36 -04:00
Gnome Ann bb323152d7 Disable vars.recentedit again 2021-09-28 21:24:08 -04:00
Gnome Ann 2b89bcb16e Fix random story generator 2021-09-28 21:04:26 -04:00
Gnome Ann af93c96c0f Submit Action mode action in Story mode if action is empty 2021-09-28 19:50:00 -04:00
Gnome Ann 9ab1d182ac Guard against empty prompts 2021-09-28 19:48:43 -04:00
henk717 da55ed3b49
Merge branch 'KoboldAI:main' into united 2021-09-28 10:41:01 +02:00
Gnome Ann 03c1a3ebf9 Put vars.recentedit = True in deleterequest() for consistency 2021-09-28 01:10:20 -04:00
Gnome Ann 97e1760af5 Prevent retry from popping chunks after edit/delete 2021-09-28 01:07:11 -04:00
Gnome Ann 231290608d Do a better job of preventing editing of text when required 2021-09-28 00:48:37 -04:00
Gnome Ann 13b81c7523 Prevent the user from deleting the prompt 2021-09-27 22:21:14 -04:00
henk717 01b30b315f
Merge branch 'KoboldAI:main' into united 2021-09-28 02:31:20 +02:00
Gnome Ann e29e7b11ec Bump version number to 1.16.1 2021-09-27 18:12:15 -04:00
Gnome Ann a327eed2c3 Fix editor scrolling issues 2021-09-27 17:44:22 -04:00
henk717 01339f0b87
Merge branch 'KoboldAI:main' into united 2021-09-25 17:44:51 +02:00
Gnome Ann 5893e495b6 Change AutoModel to AutoModelForCausalLM
This fixes breakmodel mode for the official models from the model
selection menu.
2021-09-25 11:41:15 -04:00
henk717 c9290d02dc Update aiserver.py
Better way of checking for the model type
2021-09-25 16:50:24 +02:00
henk717 7d35f825c6 Huggingface GPT-J Support
Finetune's fork has unofficial GPT-J support, which we supported, but it is not compatible with models designed for the official version. In this update we let models decide which transformers backend to use, falling back to Neo if they don't choose one. We also add the 6B to the menu and, for the time being, switch to the GitHub version of transformers to get ahead of the waiting time. (Hopefully we can switch back to the conda version before merging upstream.)
2021-09-25 16:26:17 +02:00
Gnome Ann 4d9eab3785 K80 test 2021-09-23 20:57:18 -04:00
henk717 6520cac75d
Support models that are formatted with CRLF
A new model was released that uses different formatting for its line endings, which caused too many blank lines in the UI. This change fixes the issue so that when this happens the UI still displays the content as you would expect, removing the formatting burden from model developers.
2021-09-22 00:34:05 +02:00
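The CRLF fix above amounts to normalizing line endings before the text reaches the UI, so Windows-style `\r\n` (and stray `\r`) don't render as doubled line breaks. A minimal sketch (function name is illustrative):

```python
def normalize_newlines(text: str) -> str:
    """Collapse Windows (CRLF) and bare-CR line endings to LF so the UI
    doesn't render doubled line breaks for models trained on CRLF text."""
    return text.replace("\r\n", "\n").replace("\r", "\n")

print(repr(normalize_newlines("You see a door.\r\nYou open it.")))
# → 'You see a door.\nYou open it.'
```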
henk717 30a7e945a1
Merge pull request #18 from VE-FORBRYDERNE/doc
Correct misindicated model VRAM requirements
2021-09-21 18:54:19 +02:00
henk717 dd1c3ab67e Allow models to set formatting defaults
Originally omitted when model settings were forced. Now that models can only define the defaults for KoboldAI, it's a good idea to give model authors control over what formatting they think works best for their models.
2021-09-21 15:46:54 +02:00
Gnome Ann bbf2bd4026 Correct misindicated model VRAM requirements 2021-09-20 18:49:17 -04:00
Gnome Ann 8df2ccae5b Update client-side story name when saving
If you save a story as a different name than it was loaded with, and
then try to download it as JSON/plaintext, the downloaded file's name
will now match the new story name.
2021-09-19 23:40:52 -04:00
Gnome Ann 99d2ce6887 Don't broadcast getanote and requestwiitem
This prevents duplicate submissions when multiple people are connected
to the same server and one person submits changes to memory, author's
note or world info, by pressing Submit (for author's note or memory) or
Accept (for world info).
2021-09-19 17:00:14 -04:00
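The fix above is about which clients a Socket.IO event reaches: story updates go to every connected player, but replies to getanote/requestwiitem should go only to the requester, or other players' clients echo a duplicate submission. A toy stand-in for the broadcast-vs-targeted distinction (the real code uses Flask-SocketIO's `emit`; this class is purely illustrative):

```python
class ToyServer:
    """Minimal stand-in for a SocketIO server with several connected clients."""
    def __init__(self):
        self.clients = {}  # sid -> list of received events

    def connect(self, sid):
        self.clients[sid] = []

    def emit(self, event, data, to=None, broadcast=False):
        # Roughly Flask-SocketIO semantics: broadcast=True reaches everyone,
        # otherwise only the requesting client ("to") gets the event.
        targets = self.clients if broadcast else {to: self.clients[to]}
        for inbox in targets.values():
            inbox.append((event, data))

server = ToyServer()
for sid in ("alice", "bob"):
    server.connect(sid)

# Story updates should go to everyone...
server.emit("updatescreen", "<chunk>", broadcast=True)
# ...but a getanote reply goes only to the requester, so other players'
# clients don't submit a duplicate copy of the author's note.
server.emit("getanote", "author's note", to="alice", broadcast=False)

print(len(server.clients["alice"]), len(server.clients["bob"]))  # → 2 1
```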
Gnome Ann da03360e92 Fix filename/memory/AN not syncing when downloading in some cases 2021-09-19 14:46:30 -04:00
Gnome Ann b5883148a5 Download Story as JSON/Plaintext no longer requires server 2021-09-19 11:41:37 -04:00
henk717 b264823fed More polishing
Improved the default settings and made a better distinction between client and server. The Python parts have been renamed to the server and the browser to the client, to conform to what you'd expect from a client and a server. The model name will also be shown now instead of NeoCustom.
2021-09-18 21:50:23 +02:00
henk717 1df051a420 Settings per Model
Models can no longer override client settings; instead, settings are now saved on a per-model basis, with the settings provided by the model being the default. Users can also specify the desired configuration name as a command line parameter to avoid conflicting file names (such as all Colabs having Colab.settings by default).
2021-09-18 21:18:58 +02:00
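Per-model settings come down to deriving a settings filename from the model name, with a command line override to avoid collisions like every Colab writing Colab.settings. A hedged sketch (function name and the `.settings` derivation are illustrative of the scheme, not the exact aiserver.py code):

```python
def settings_filename(model_name, override=None):
    """Derive a per-model settings file name; slashes in Hugging Face style
    model IDs are flattened, and an optional command line override wins."""
    base = override if override else model_name.replace("/", "_")
    return base + ".settings"

print(settings_filename("KoboldAI/GPT-Neo-2.7B-Horni"))
# → KoboldAI_GPT-Neo-2.7B-Horni.settings
print(settings_filename("GPT-Neo-2.7B", override="Colab"))
# → Colab.settings
```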
henk717 fbd07d82d7 Allow models to override some settings
Many models have that one setting that just works best, like repetition penalty 2 or 1.2, while being incompatible with existing settings. The same applies to Adventure mode on or off. With this change models are allowed to override user preferences, but only for the categories we deem relevant (we don't want them to mess with things like tokens, length, etc.). Users who do not want this behavior can turn it off by changing msoverride to false in client.settings.

Model creators can specify these settings in their config.json with the allowed settings being identical to their client.settings counterparts.
2021-09-18 18:08:50 +02:00
henk717 a651400870 Readme improvements, badwords replacement
A bit of a workaround for now, but the `[` badwords search routine has been replaced with a hardcoded list used by the Colabs. This is far more effective at filtering out artifacts when running models locally. We can get away with this because all known models use the same vocab.json; in the future we will probably want to load this from badwords.json if present, so model creators can bundle it with the model.
2021-09-18 02:16:17 +02:00
henk717 753177a87e Further Readme Progress
More model descriptions and the beginning of the downloadable-model section. Lacks download links for now.
2021-09-17 17:59:17 +02:00
henk717 6668bada47 New documentation
Replaces the placeholder readme with a proper one. The menu is also updated and reorganized to encourage users to use custom models and to better reflect real-world VRAM requirements.
2021-09-02 14:04:25 +02:00
Gnome Ann 24d57a7ac3 Clip off ".json" from story name when downloading 2021-09-01 14:07:56 -04:00
Gnome Ann 7e1b1add11 Don't import breakmodel until it's actually needed
breakmodel imports torch which takes a long time to import.
We should delay the importing of torch as long as possible.
2021-09-01 14:04:37 -04:00
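The lazy-import trick above is just moving the `import` from module level into the function that needs it, so startup never pays the cost of loading torch unless breakmodel is actually used. A runnable sketch, with the stdlib `wave` module standing in for the heavy dependency:

```python
import sys

sys.modules.pop("wave", None)  # ensure a clean slate for the demo

def load_heavy_module():
    """Defer the expensive import (breakmodel -> torch in aiserver.py) until
    the feature is actually used; `wave` stands in for torch here."""
    import wave  # pays the import cost only on the first call
    return wave

assert "wave" not in sys.modules  # startup stays fast: nothing imported yet
load_heavy_module()
assert "wave" in sys.modules      # imported lazily, on demand
```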
Gnome Ann 6bd6415749 Prevent remote-mode-forbidden actions server-side
Since some user interface buttons are disabled while in --remote mode,
they should also be disabled in aiserver.py so a malicious user can't
manually send those commands to the server.
2021-09-01 13:55:25 -04:00
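Disabling buttons in the browser is cosmetic; the server must independently reject the same commands, since a malicious client can send any message it likes. A hedged sketch of such a server-side guard (the decorator, flag, and command names are illustrative, not the actual aiserver.py handlers):

```python
from functools import wraps

REMOTE_MODE = True  # would be set by the --remote command line flag
FORBIDDEN_IN_REMOTE = {"savetofile", "loadfromfile", "import"}

def guard_remote(cmd_name):
    """Reject remote-mode-forbidden commands on the server, not just in the
    UI, so hand-crafted client messages can't bypass the disabled buttons."""
    def decorator(handler):
        @wraps(handler)
        def wrapper(*args, **kwargs):
            if REMOTE_MODE and cmd_name in FORBIDDEN_IN_REMOTE:
                return {"error": f"{cmd_name} is disabled in remote mode"}
            return handler(*args, **kwargs)
        return wrapper
    return decorator

@guard_remote("savetofile")
def savetofile():
    return {"ok": True}

print(savetofile())  # → {'error': 'savetofile is disabled in remote mode'}
```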
Gnome Ann 8ae9304cda Clean up code for saving story as plaintext 2021-09-01 13:49:04 -04:00
Gnome Ann 543acf9ba4 Also allow downloading stories as plaintext 2021-09-01 13:46:37 -04:00
Gnome Ann fab51b64a3 Don't leave memory mode when downloading 2021-09-01 13:36:05 -04:00
Gnome Ann b5d9aaf785 Remember to actually import "Response" from flask 2021-09-01 13:31:05 -04:00
Gnome Ann 4e9b371564 Merge branch 'united' into story-manager 2021-09-01 13:25:28 -04:00
Gnome Ann 16184ceee8 Catch and display errors from "Save As" 2021-09-01 12:58:01 -04:00
henk717 4151fd1b6a Save story in plain text along the save
Not just saving in .json but also in plain text; this should help story writers get their stories out more easily, especially since they can technically add some Markdown into their stories manually in the interface.
2021-09-01 17:41:18 +02:00
henk717 9b3e298089
Foundation for in browser downloading
This adds /download as a URL to immediately download the file, which will allow HTML changes that initiate a file download.
2021-09-01 15:58:56 +02:00
Gnome Ann c276220a35 Allow deleting and renaming stories in the browser 2021-08-31 18:22:30 -04:00
Gnome Ann 63a4048053 Remove typing.Literal (a Python 3.8+ feature) 2021-08-26 15:38:58 -04:00
Gnome Ann 75c68c2b78 Fix world info depth being ignored 2021-08-26 12:50:17 -04:00
Gnome Ann d7605a717b Merge branch 'united' into big-o
This resolves two merge conflicts that arose because this branch was
a few commits behind.
2021-08-26 01:37:40 -04:00
Gnome Ann 8fd8612cca Adventure mode colouring now controlled by a CSS class
So that we can just toggle the class instead of having aiserver.py send
back the entire story.
2021-08-26 01:06:57 -04:00
Gnome Ann 27c7baab92 Prevent some errors when the prompt is the only chunk 2021-08-25 23:58:12 -04:00
Gnome Ann b0d64985bb Fix Retry and Back buttons popping the wrong chunk 2021-08-25 19:56:57 -04:00
henk717 bbd5bd0cd7
Merge pull request #8 from VE-FORBRYDERNE/misc
General usability fixes
2021-08-26 01:56:42 +02:00
Gnome Ann 796f5ffd05 Make vars.actions a dictionary instead of a list 2021-08-25 19:28:26 -04:00
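Keying story chunks by number in a dictionary, rather than by list position, means a chunk's ID stays stable when an earlier chunk is deleted, so client and server keep referring to the same chunk. A small sketch of the difference:

```python
# With a dict keyed by chunk number, deleting the middle chunk does not
# renumber the rest (a list would shift every later index down by one).
actions = {1: "You enter the cave.", 2: "A grue appears.", 3: "You flee."}

del actions[2]                    # delete the middle chunk...
assert actions[3] == "You flee."  # ...chunk 3 is still chunk 3

# New chunks continue from the highest key ever used:
next_id = max(actions) + 1 if actions else 1
actions[next_id] = "You catch your breath."
print(sorted(actions))  # → [1, 3, 4]
```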
Gnome Ann 6dcd7888c8 Change "recieved" to "received" 2021-08-25 14:55:26 -04:00
Gnome Ann c3528e6221 Retry after Back no longer pops an extra story chunk 2021-08-25 14:54:51 -04:00
Gnome Ann cf677c60fc Stability fixes for back/retry re genseqs/useprompt
* Back and Retry buttons no longer pop a story chunk while in the
  "Select sequence to keep" menu
* No longer freezes if you retry with no story chunks beyond the initial
  prompt chunk
* When "Always Add Prompt" is on, allow Retry even if the prompt is the
  only chunk in the story
* Added error messages for Back and Retry buttons
2021-08-25 14:42:37 -04:00
henk717 d848d03d60
Merge pull request #7 from VE-FORBRYDERNE/wi-constant
Constant world info keys
2021-08-25 20:14:19 +02:00
henk717 3da0c3d24a Remote improvements
Some Colabs use KoboldAI as a subprocess. Rather than making that too complicated for Colab developers, it's better to just dump the Cloudflare link to a log, in addition to showing the message on screen. That way, if KoboldAI itself gets filtered, you can easily cat the link or use the existing link-grabbing methods.
2021-08-25 13:57:38 +02:00
Gnome Ann b1c6aee8d3 Integrate inline chunk editor and Adventure mode with Javalar's branch 2021-08-24 19:02:52 -04:00
Gnome Ann 735fc9431b Still HTML-escape chunks if Adventure is off
(cherry picked from commit 3409d8c12e3fbb1e3232f2df82740b012e8f3604)
2021-08-24 18:46:34 -04:00
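The fix above ensures chunk text is HTML-escaped before insertion into the page even when Adventure mode's colouring pass is skipped, closing an injection hole. A hedged sketch of the idea (function, class name, and the action-wrapping rule are illustrative, not the actual KoboldAI code):

```python
import html

def format_chunk(text: str, adventure: bool) -> str:
    """Escape chunk text before inserting it into the page, even when
    Adventure mode's action colouring is off."""
    escaped = html.escape(text)
    if adventure:
        # Adventure mode additionally wraps "> do thing" actions for colouring.
        escaped = escaped.replace("&gt; ", '<span class="action">&gt; </span>', 1)
    return escaped

# A malicious chunk is rendered inert either way:
print(format_chunk("<script>alert(1)</script>", adventure=False))
# → &lt;script&gt;alert(1)&lt;/script&gt;
```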
Gnome Ann 09030573e5 Broadcast updatechunk and removechunk 2021-08-24 18:40:12 -04:00
Gnome Ann 6d5845ff8d Merge https://github.com/KoboldAI/KoboldAI-Client/pull/45 into big-o 2021-08-24 17:27:50 -04:00
Gnome Ann 2a7c6244cb Constant world info keys 2021-08-24 13:45:20 -04:00
Gnome Ann 90e558cf3f Won't freeze anymore if you delete the prompt 2021-08-24 11:24:29 -04:00
henk717 f0962155b8
Merge pull request #5 from VE-FORBRYDERNE/editable-chunks
Scroll down on submit
2021-08-24 01:22:57 +02:00
Gnome Ann 13ce16b859 Scroll down on submit 2021-08-23 19:19:36 -04:00
henk717 c108e080bf Various Fixes
Various fixes, mostly to make the UI play a little nicer in the new edit mode. Also reverted and optimized some of the settings code.
2021-08-24 01:18:09 +02:00
Gnome Ann 3bf467e63c Added dedicated inline editing commands to aiserver.py
It's a lot faster now.
2021-08-23 18:52:45 -04:00
henk717 a151e1a33a Small fix for Authors Notes in multiplayer
Multiplayer support was causing all players to automatically submit author's notes. This is now fixed: only the person submitting the author's note counts.
2021-08-22 15:54:35 +02:00
henk717 09ec15c91b
Merge pull request #3 from VE-FORBRYDERNE/breakmodel
Low VRAM patch
2021-08-21 21:03:46 +02:00