Commit Graph

701 Commits

SHA1 Message Date
511817132a Don't change the shape of transformer.wte 2021-10-28 15:39:59 -04:00
a1ae11630a Make sure to cast vars.sp to the correct dtype 2021-10-28 13:22:07 -04:00
1556bd32a5 Use torch.where to inject the soft prompt instead of torch.cat 2021-10-28 13:20:14 -04:00
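The actual implementation isn't shown in this log, but as a rough, hypothetical illustration of why torch.where can be preferable to torch.cat here: it overwrites reserved positions in place, so the sequence length (and therefore downstream tensor shapes) stays fixed instead of growing. The function and tensor names below are illustrative only.

```python
import torch

def inject_soft_prompt(inputs_embeds, soft_prompt):
    # inputs_embeds: (batch, seq_len, d_model) token embeddings
    # soft_prompt:   (sp_len, d_model) learned soft-prompt embeddings
    # Assume the first sp_len positions are placeholder tokens reserved
    # for the soft prompt.
    batch, seq_len, d_model = inputs_embeds.shape
    sp_len = soft_prompt.shape[0]

    # Boolean mask marking which positions belong to the soft prompt.
    is_sp = torch.arange(seq_len, device=inputs_embeds.device) < sp_len

    # Broadcast the soft prompt across the batch, padded out to seq_len.
    sp_full = torch.zeros_like(inputs_embeds)
    sp_full[:, :sp_len, :] = soft_prompt.to(inputs_embeds.dtype)

    # torch.where keeps the tensor shape fixed, unlike torch.cat,
    # which would grow the sequence dimension.
    return torch.where(is_sp.view(1, -1, 1), sp_full, inputs_embeds)
```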
248e0bd24b Fix soft prompt loading code 2021-10-28 00:29:42 -04:00
4e3cc93020 Merge branch 'united' into sp 2021-10-23 11:45:03 -04:00
7b73d7cfdd Single Line Mode
Adds Single Line mode, optimized for things like chatbot testing and other cases where you want control over what happens after a paragraph.

This can also serve as a foundation for a chatbot-optimized interface mode; a rough sketch of the idea follows below.
2021-10-23 17:30:48 +02:00
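The details of the mode aren't given in this log; as a hypothetical sketch, single-line behavior can be approximated by truncating the generated continuation at the first line break. The helper name below is made up for illustration.

```python
def enforce_single_line(generated_text: str) -> str:
    """Keep only the text up to (not including) the first newline.

    Hypothetical helper illustrating the general idea of Single Line mode;
    the real implementation may handle trimming and stop sequences differently.
    """
    return generated_text.split("\n", 1)[0].rstrip()

print(enforce_single_line("Hello there.\nAnd then the dragon appeared."))  # "Hello there."
```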
1f449a9dda Soft prompt support (6B Colabs not supported yet) 2021-10-22 14:18:10 -04:00
3501f03153 Create settings directory if it doesn't exist when using InferKit/OAI 2021-10-21 23:33:32 -04:00
fa0f8af1d6 Merge branch 'KoboldAI:main' into united 2021-10-15 08:23:06 +02:00
9513240dfb Version bump
Since VE fixed important things in the editor, I want users to be able to see this more easily.
2021-10-15 08:22:32 +02:00
c854a62549 Clarified GPU Layers
Having both breakmodel_layers and layers was confusing, so the new method is named breakmodel_gpulayers (a rough sketch of the flag follows below). The old flag should no longer be used, but since it works in reverse we keep it so existing scripts don't break.
2021-10-06 18:55:01 +02:00
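As a rough sketch (not the actual argument-parsing code), a per-GPU layer flag like this is typically a comma-separated list giving how many transformer layers to place on each GPU, with any remaining layers staying on the CPU:

```python
import argparse

parser = argparse.ArgumentParser()
# Hypothetical re-creation of the flag: one integer per GPU,
# e.g. --breakmodel_gpulayers 20,8 puts 20 layers on GPU 0 and 8 on GPU 1.
parser.add_argument("--breakmodel_gpulayers", type=str, default="")

args = parser.parse_args(["--breakmodel_gpulayers", "20,8"])
gpu_layers = [int(n) for n in args.breakmodel_gpulayers.split(",") if n.strip()]
print(gpu_layers)  # [20, 8]
```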
bd063f7590 Merge pull request #19 from VE-FORBRYDERNE/multi-gpu
Multiple GPU support
2021-10-06 18:50:58 +02:00
82c7eaffb5 Merge branch 'KoboldAI:main' into united 2021-10-06 00:26:08 +02:00
8893916fef Don't always submit prompt by default
Feedback from users is that it's better not to always submit the prompt; this is consistent with the randomly generated stories. You can always toggle it back on if you need it for coherency. This change does not override existing user settings.
2021-10-06 00:25:05 +02:00
aa59f8b4b2 Fix CPU layers not displaying correctly when using --layers 2021-10-05 11:29:47 -04:00
91352ea9f1 Change the command line flags for breakmodel 2021-10-05 11:22:09 -04:00
a1e4405aa6 Automatically use breakmodel instead of GPU-only where supported
There's really no reason to use GPU-only mode if breakmodel is supported
because breakmodel can run in GPU-only mode too.
2021-10-05 10:36:51 -04:00
fb90a7ed17 Change the help text for breakmodel to be more helpful 2021-10-05 10:31:28 -04:00
231621e7c2 Use AutoModelForCausalLM for custom models with a model_type 2021-10-05 09:45:12 -04:00
a283d34b27 Multiple GPU support 2021-10-05 09:38:57 -04:00
a42b580027 Merge branch 'united' into multi-gpu 2021-10-02 11:44:26 -04:00
dab58d8393 Merge branch 'KoboldAI:main' into united 2021-09-29 17:05:06 +02:00
a179bb2820 Bump version number to 1.16.2 2021-09-28 21:50:33 -04:00
e6cd28243e Scroll to the bottom of the gamescreen after retrying 2021-09-28 21:34:36 -04:00
bb323152d7 Disable vars.recentedit again 2021-09-28 21:24:08 -04:00
2b89bcb16e Fix random story generator 2021-09-28 21:04:26 -04:00
af93c96c0f Submit Action mode action in Story mode if action is empty 2021-09-28 19:50:00 -04:00
9ab1d182ac Guard against empty prompts 2021-09-28 19:48:43 -04:00
da55ed3b49 Merge branch 'KoboldAI:main' into united 2021-09-28 10:41:01 +02:00
03c1a3ebf9 Put vars.recentedit = True in deleterequest() for consistency 2021-09-28 01:10:20 -04:00
97e1760af5 Prevent retry from popping chunks after edit/delete 2021-09-28 01:07:11 -04:00
231290608d Do a better job of preventing editing of text when required 2021-09-28 00:48:37 -04:00
13b81c7523 Prevent the user from deleting the prompt 2021-09-27 22:21:14 -04:00
01b30b315f Merge branch 'KoboldAI:main' into united 2021-09-28 02:31:20 +02:00
e29e7b11ec Bump version number to 1.16.1 2021-09-27 18:12:15 -04:00
a327eed2c3 Fix editor scrolling issues 2021-09-27 17:44:22 -04:00
01339f0b87 Merge branch 'KoboldAI:main' into united 2021-09-25 17:44:51 +02:00
5893e495b6 Change AutoModel to AutoModelForCausalLM
This fixes breakmodel mode for the official models from the model
selection menu.
2021-09-25 11:41:15 -04:00
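For context, AutoModel loads only the base transformer without a language-modelling head, while AutoModelForCausalLM attaches the head needed for generation. A minimal sketch of the kind of call involved (the model name is only an example, not necessarily what the menu loads):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/gpt-neo-2.7B"  # example model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
# AutoModelForCausalLM (rather than AutoModel) returns a model with a
# causal language-modelling head, so .generate() works for text generation.
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("You enter the dark cave and", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```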
c9290d02dc Update aiserver.py
Better way of checking for the model type
2021-09-25 16:50:24 +02:00
7d35f825c6 Huggingface GPT-J Support
Finetune's fork has unofficial support, which we supported, but it is not compatible with models designed for the official version. In this update we let models decide which transformers backend to use and fall back to Neo if they don't specify one (a sketch of the idea follows below). We also add the 6B model to the menu and, for the time being, switch to the GitHub version of transformers to get ahead of the release waiting time. (Hopefully we can switch back to the conda version before merging upstream.)
2021-09-25 16:26:17 +02:00
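A hedged sketch of the backend-selection idea: read model_type from the model's config and fall back to GPT-Neo when a custom model doesn't declare one. Function and variable names are illustrative, not the actual aiserver.py code.

```python
from transformers import AutoConfig, AutoModelForCausalLM, GPTNeoForCausalLM

def load_custom_model(model_path: str):
    try:
        config = AutoConfig.from_pretrained(model_path)
        model_type = getattr(config, "model_type", None)
    except Exception:
        model_type = None

    if model_type:
        # The model declares its architecture (e.g. "gptj", "gpt_neo"),
        # so let AutoModelForCausalLM pick the matching backend.
        return AutoModelForCausalLM.from_pretrained(model_path)

    # No model_type in config.json: assume an older Neo-style custom model.
    return GPTNeoForCausalLM.from_pretrained(model_path)
```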
4d9eab3785 K80 test 2021-09-23 20:57:18 -04:00
6520cac75d Support models that are formatted with CRLF
A new model was released that uses different line endings, which results in too many blank lines in the UI. This change fixes the issue so that when this happens the UI still displays the content as you would expect, removing the formatting burden from model developers. A sketch of the normalization idea follows below.
2021-09-22 00:34:05 +02:00
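A minimal sketch of the normalization step implied here (the real code may do more): convert Windows-style CRLF line endings to plain LF before the text reaches the UI, so duplicate blank lines don't appear.

```python
def normalize_newlines(text: str) -> str:
    # Collapse CRLF (and stray CR) to LF so model output renders
    # with the expected number of line breaks in the UI.
    return text.replace("\r\n", "\n").replace("\r", "\n")

print(repr(normalize_newlines("Line one.\r\nLine two.\r")))  # 'Line one.\nLine two.\n'
```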
30a7e945a1 Merge pull request #18 from VE-FORBRYDERNE/doc
Correct misindicated model VRAM requirements
2021-09-21 18:54:19 +02:00
dd1c3ab67e Allow models to set formatting defaults
Originally omitted when model settings were forced. Now that models can only define the defaults for KoboldAI, it's a good idea to give model authors control over the formatting they think works best for their models.
2021-09-21 15:46:54 +02:00
bbf2bd4026 Correct misindicated model VRAM requirements 2021-09-20 18:49:17 -04:00
8df2ccae5b Update client-side story name when saving
If you save a story as a different name than it was loaded with, and
then try to download it as JSON/plaintext, the downloaded file's name
will now match the new story name.
2021-09-19 23:40:52 -04:00
99d2ce6887 Don't broadcast getanote and requestwiitem
This prevents duplicate submissions when multiple people are connected
to the same server and one person submits changes to memory, author's
note, or world info by pressing Submit (for author's note or memory) or
Accept (for world info). A sketch of the idea follows below.
2021-09-19 17:00:14 -04:00
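KoboldAI's server uses Flask-SocketIO; as an illustrative sketch (the event names and payload here are hypothetical), replying only to the requesting client instead of broadcasting keeps other connected browsers from receiving and re-submitting the same data:

```python
from flask import Flask
from flask_socketio import SocketIO, emit

app = Flask(__name__)
socketio = SocketIO(app)

@socketio.on("getanote")
def handle_getanote(data):
    # emit() without broadcast=True replies only to the client that asked,
    # so other connected users don't receive (and re-submit) the same event.
    emit("from_server", {"cmd": "getanote", "data": "..."})
```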
da03360e92 Fix filename/memory/AN not syncing when downloading in some cases 2021-09-19 14:46:30 -04:00
b5883148a5 Download Story as JSON/Plaintext no longer requires server 2021-09-19 11:41:37 -04:00
b264823fed More polishing
Improved the default settings and made a clearer distinction between client and server. The Python parts have been renamed to the server and the browser side to the client, to conform to what you'd expect from a client and a server. The model name is now shown instead of NeoCustom.
2021-09-18 21:50:23 +02:00