344 Commits

Gnome Ann
3501f03153 Create settings directory if it doesn't exist when using InferKit/OAI 2021-10-21 23:33:32 -04:00
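A minimal sketch of the idea behind this fix, in Python; the function and file names are illustrative rather than the actual aiserver.py code:

    import json
    import os

    def save_settings(filename, settings):
        # Create the parent directory first so saving settings for the
        # InferKit/OAI backends doesn't fail on a fresh install.
        directory = os.path.dirname(filename)
        if directory:
            os.makedirs(directory, exist_ok=True)  # no error if it already exists
        with open(filename, "w") as f:
            json.dump(settings, f, indent=4)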
henk717
fa0f8af1d6
Merge branch 'KoboldAI:main' into united 2021-10-15 08:23:06 +02:00
henk717
9513240dfb
Version bump
Since VE fixed important things in the editor, I want users to be able to see this more easily.
2021-10-15 08:22:32 +02:00
henk717
c854a62549 Clarified GPU Layers
Having both breakmodel_layers and layers is confusing, so the new method was renamed to breakmodel_gpulayers. The old flag should no longer be used, but since it works in reverse we leave it in so existing scripts don't break.
2021-10-06 18:55:01 +02:00
henk717
bd063f7590
Merge pull request #19 from VE-FORBRYDERNE/multi-gpu
Multiple GPU support
2021-10-06 18:50:58 +02:00
henk717
82c7eaffb5
Merge branch 'KoboldAI:main' into united 2021-10-06 00:26:08 +02:00
henk717
8893916fef
Don't always submit prompt by default
Feedback from users is that it's better not to always submit the prompt; this is consistent with the randomly generated stories. You can always toggle it on if you need it for coherency. This change does not override existing user settings.
2021-10-06 00:25:05 +02:00
Gnome Ann
aa59f8b4b2 Fix CPU layers not displaying correctly when using --layers 2021-10-05 11:29:47 -04:00
Gnome Ann
91352ea9f1 Change the command line flags for breakmodel 2021-10-05 11:22:09 -04:00
Gnome Ann
a1e4405aa6 Automatically use breakmodel instead of GPU-only where supported
There's really no reason to use GPU-only mode if breakmodel is supported
because breakmodel can run in GPU-only mode too.
2021-10-05 10:36:51 -04:00
Gnome Ann
fb90a7ed17 Change the help text for breakmodel to be more helpful 2021-10-05 10:31:28 -04:00
Gnome Ann
231621e7c2 Use AutoModelForCausalLM for custom models with a model_type 2021-10-05 09:45:12 -04:00
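Roughly what this amounts to, as a hedged sketch (the helper name is hypothetical and the real loading code in aiserver.py is more involved):

    import json
    import os

    from transformers import AutoModelForCausalLM, GPTNeoForCausalLM

    def load_custom_model(model_dir):
        # If the custom model's config.json declares a model_type, let
        # AutoModelForCausalLM dispatch to the matching architecture;
        # otherwise fall back to loading it as GPT-Neo.
        with open(os.path.join(model_dir, "config.json")) as f:
            model_type = json.load(f).get("model_type")
        if model_type:
            return AutoModelForCausalLM.from_pretrained(model_dir)
        return GPTNeoForCausalLM.from_pretrained(model_dir)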
Gnome Ann
a283d34b27 Multiple GPU support 2021-10-05 09:38:57 -04:00
Gnome Ann
a42b580027 Merge branch 'united' into multi-gpu 2021-10-02 11:44:26 -04:00
henk717
dab58d8393
Merge branch 'KoboldAI:main' into united 2021-09-29 17:05:06 +02:00
Gnome Ann
a179bb2820 Bump version number to 1.16.2 2021-09-28 21:50:33 -04:00
Gnome Ann
e6cd28243e Scroll to the bottom of the gamescreen after retrying 2021-09-28 21:34:36 -04:00
Gnome Ann
bb323152d7 Disable vars.recentedit again 2021-09-28 21:24:08 -04:00
Gnome Ann
2b89bcb16e Fix random story generator 2021-09-28 21:04:26 -04:00
Gnome Ann
af93c96c0f Submit Action mode action in Story mode if action is empty 2021-09-28 19:50:00 -04:00
Gnome Ann
9ab1d182ac Guard against empty prompts 2021-09-28 19:48:43 -04:00
henk717
da55ed3b49
Merge branch 'KoboldAI:main' into united 2021-09-28 10:41:01 +02:00
Gnome Ann
03c1a3ebf9 Put vars.recentedit = True in deleterequest() for consistency 2021-09-28 01:10:20 -04:00
Gnome Ann
97e1760af5 Prevent retry from popping chunks after edit/delete 2021-09-28 01:07:11 -04:00
Gnome Ann
231290608d Do a better job of preventing editing of text when required 2021-09-28 00:48:37 -04:00
Gnome Ann
13b81c7523 Prevent the user from deleting the prompt 2021-09-27 22:21:14 -04:00
henk717
01b30b315f
Merge branch 'KoboldAI:main' into united 2021-09-28 02:31:20 +02:00
Gnome Ann
e29e7b11ec Bump version number to 1.16.1 2021-09-27 18:12:15 -04:00
Gnome Ann
a327eed2c3 Fix editor scrolling issues 2021-09-27 17:44:22 -04:00
henk717
01339f0b87
Merge branch 'KoboldAI:main' into united 2021-09-25 17:44:51 +02:00
Gnome Ann
5893e495b6 Change AutoModel to AutoModelForCausalLM
This fixes breakmodel mode for the official models from the model
selection menu.
2021-09-25 11:41:15 -04:00
henk717
c9290d02dc Update aiserver.py
Better way of checking for the model type
2021-09-25 16:50:24 +02:00
henk717
7d35f825c6 Huggingface GPT-J Support
Finetune's fork has unofficial GPT-J support, which we supported, but it is not compatible with models designed for the official version. In this update we let models decide which transformers backend to use, and fall back to Neo if they don't choose one. We also add the 6B model to the menu and, for the time being, switch to the GitHub version of transformers rather than waiting for an official release. (Hopefully we can switch back to the conda version before merging upstream.)
2021-09-25 16:26:17 +02:00
Gnome Ann
4d9eab3785 K80 test 2021-09-23 20:57:18 -04:00
henk717
6520cac75d
Support models that are formatted with CRLF
A new model was released that uses CRLF line endings, which caused too many blank lines in the UI. This change fixes the issue so the UI still displays the content as you would expect, removing the formatting burden from model developers.
2021-09-22 00:34:05 +02:00
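The gist of the fix, as a small sketch; the function name is hypothetical and the actual normalization may happen elsewhere in the pipeline:

    def normalize_newlines(text):
        # Collapse Windows-style CRLF (and stray CR) line endings to LF so the
        # UI doesn't render an extra blank line for every newline the model emits.
        return text.replace("\r\n", "\n").replace("\r", "\n")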
henk717
30a7e945a1
Merge pull request #18 from VE-FORBRYDERNE/doc
Correct misindicated model VRAM requirements
2021-09-21 18:54:19 +02:00
henk717
dd1c3ab67e Allow models to set formatting defaults
Originally omitted when model settings were forced. Now that models can only define the defaults for KoboldAI, it's a good idea to give model authors control over what formatting they think works best for their models.
2021-09-21 15:46:54 +02:00
Gnome Ann
bbf2bd4026 Correct misindicated model VRAM requirements 2021-09-20 18:49:17 -04:00
Gnome Ann
8df2ccae5b Update client-side story name when saving
If you save a story under a different name than the one it was loaded with and then try to download it as JSON/plaintext, the downloaded file's name will now match the new story name.
2021-09-19 23:40:52 -04:00
Gnome Ann
99d2ce6887 Don't broadcast getanote and requestwiitem
This prevents duplicate submissions when multiple people are connected
to the same server and one person submits changes to memory, author's
note or world info, by pressing Submit (for author's note or memory) or
Accept (for world info).
2021-09-19 17:00:14 -04:00
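KoboldAI's server talks to the browser over Flask-SocketIO, so the change likely comes down to not broadcasting these replies; a hedged sketch (event name and payload keys are illustrative):

    from flask_socketio import emit

    def send_authors_note(note_text):
        # Reply only to the client that requested the author's note; without
        # this, every connected client would receive (and could re-submit)
        # the same data.
        emit("from_server", {"cmd": "setanote", "data": note_text}, broadcast=False)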
Gnome Ann
da03360e92 Fix filename/memory/AN not syncing when downloading in some cases 2021-09-19 14:46:30 -04:00
Gnome Ann
b5883148a5 Download Story as JSON/Plaintext no longer requires server 2021-09-19 11:41:37 -04:00
henk717
b264823fed More polishing
Improved the default settings and made a better distinction between client and server. The Python parts have been renamed to the server and the browser to the client, to conform to what you'd expect from a client and a server. The model name will also be shown now instead of NeoCustom.
2021-09-18 21:50:23 +02:00
henk717
1df051a420 Settings per Model
Models can no longer override client settings; instead, settings are now saved on a per-model basis, with the settings provided by the model used as the defaults. Users can also specify the desired configuration name as a command line parameter to avoid conflicting file names (such as all Colabs using Colab.settings by default).
2021-09-18 21:18:58 +02:00
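A sketch of per-model settings resolution under the behavior described above; the function names and the command line parameter handling are illustrative, not the exact ones used:

    import json
    import os

    def settings_filename(model_name, configname=None):
        # Each model (or an explicitly chosen configuration name) gets its own
        # settings file, so two Colab setups no longer collide on Colab.settings.
        return (configname or model_name) + ".settings"

    def load_model_settings(model_name, model_defaults, configname=None):
        # Start from the defaults shipped with the model, then overlay whatever
        # the user previously saved for this particular model.
        settings = dict(model_defaults)
        path = settings_filename(model_name, configname)
        if os.path.exists(path):
            with open(path) as f:
                settings.update(json.load(f))
        return settings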
henk717
fbd07d82d7 Allow models to override some settings
Many models have that one setting that just works best, like a repetition penalty of 2 or 1.2, while being incompatible with existing settings. The same applies to Adventure mode being on or off. With this change models are allowed to override user preferences, but only for the categories where we deem this relevant (we don't want them to mess with things like tokens, length, etc.). Users who do not want this behavior can turn it off by changing msoverride to false in client.settings.

Model creators can specify these settings in their config.json with the allowed settings being identical to their client.settings counterparts.
2021-09-18 18:08:50 +02:00
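In code, the override rule could look roughly like the sketch below; the whitelist contents and key names are assumptions for illustration, not the exact set KoboldAI uses:

    import json
    import os

    # Only these keys may be overridden by a model's config.json; things like
    # token counts and output length stay under user control. The names here
    # are illustrative.
    OVERRIDABLE_SETTINGS = {"rep_pen", "temp", "top_p", "adventure"}

    def apply_model_overrides(model_dir, user_settings):
        # Respect the user's msoverride switch from client.settings.
        if not user_settings.get("msoverride", True):
            return user_settings
        config_path = os.path.join(model_dir, "config.json")
        if os.path.exists(config_path):
            with open(config_path) as f:
                config = json.load(f)
            for key in OVERRIDABLE_SETTINGS & config.keys():
                user_settings[key] = config[key]
        return user_settings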
henk717
a651400870 Readme improvements, badwords replacement
A bit of a workaround for now, but the routine that searched the vocabulary for [ tokens has been replaced with the hardcoded bad-words list used by the Colabs. This is far more effective at filtering out artifacts when running models locally. We can get away with this because all known models use the same vocab.json; in the future we will probably want to load this from badwords.json if present, so model creators can bundle it with the model.
2021-09-18 02:16:17 +02:00
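The future direction mentioned here might look like the following sketch; the token IDs and helper name are illustrative only:

    import json
    import os

    # Hardcoded token IDs for bracket variants, shared with the Colabs.
    # The IDs below are placeholders, not KoboldAI's actual list.
    DEFAULT_BAD_WORDS_IDS = [[58], [60], [90], [685]]

    def get_bad_words_ids(model_dir):
        # Prefer a badwords.json bundled with the model; otherwise fall back to
        # the hardcoded list, which works because all known models share vocab.json.
        path = os.path.join(model_dir, "badwords.json")
        if os.path.exists(path):
            with open(path) as f:
                return json.load(f)
        return DEFAULT_BAD_WORDS_IDS

    # The resulting list can be passed to model.generate(..., bad_words_ids=...)
    # so the generator never emits the bracket artifacts.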
henk717
753177a87e Further Readme Progress
More model descriptions and the beginning of the downloadable models section. Download links are still missing for now.
2021-09-17 17:59:17 +02:00
henk717
6668bada47 New documentation
Replaces the placeholder readme with a proper one. The menu is also updated and reorganized to encourage users to use custom models and to better reflect real-world VRAM requirements.
2021-09-02 14:04:25 +02:00
Gnome Ann
24d57a7ac3 Clip off ".json" from story name when downloading 2021-09-01 14:07:56 -04:00
Gnome Ann
7e1b1add11 Don't import breakmodel until it's actually needed
breakmodel imports torch, which takes a long time to import, so we should delay importing torch as long as possible.
2021-09-01 14:04:37 -04:00
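The pattern boils down to a function-local import, sketched below with a hypothetical helper (breakmodel here is the repository's own module):

    def load_with_breakmodel(model):
        # Deferred import: breakmodel pulls in torch, which is slow to import,
        # so only the code path that actually uses breakmodel pays that cost.
        import breakmodel
        # ... split the model's layers across the configured devices ...
        return model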