Gnome Ann
c11dab894e
Put placeholder variables into calcsubmitbudget
2021-11-03 18:02:19 -04:00
Gnome Ann
9b18068999
Shallow copy story chunks when generating
2021-11-03 17:53:38 -04:00
Gnome Ann
b8c3d8c12e
Fix generator output having the wrong length
2021-11-03 16:10:12 -04:00
Gnome Ann
5b3ce4510f
Make sure that soft_tokens is on the correct device
2021-11-03 16:07:50 -04:00
Gnome Ann
90fd5a538a
Merge branch 'united' into scan-test
2021-11-03 12:42:18 -04:00
Gnome Ann
fe2987d894
Fix missing break statement in device_config
2021-11-03 12:42:04 -04:00
Gnome Ann
bd76ab333c
Set numseqs to 1 if using dynamic world info scan
2021-11-03 12:28:17 -04:00
Gnome Ann
0a91ea27b3
Make the dynamic world info scan toggleable
2021-11-03 12:18:48 -04:00
Gnome Ann
de3664e73c
Add an assertion for the value of already_generated
2021-11-03 12:01:45 -04:00
Gnome Ann
ec8ec55256
Dynamic world info scan
2021-11-03 11:54:48 -04:00
henk717
aa998ba5e9
Merge pull request #20 from VE-FORBRYDERNE/sp
Soft prompt support for PyTorch models
2021-10-30 00:35:44 +02:00
Gnome Ann
206c01008e
Fix budget calculation when using soft prompt
2021-10-29 11:44:51 -04:00
henk717
c9c370aa17
Merge branch 'KoboldAI:main' into united
2021-10-28 23:29:29 +02:00
Gnome Ann
bf4e7742ac
Patch GPTJForCausalLM, if it exists, to support soft prompting
2021-10-28 17:18:28 -04:00
Gnome Ann
40b4631f6c
Clamp input_ids in place
Apparently transformers maintains an internal reference to input_ids
(used for the repetition penalty), so we have to clamp the internal
version too; otherwise transformers throws an out-of-bounds error when
it tries to access token IDs that are not in the vocabulary.
2021-10-28 16:52:39 -04:00
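The distinction the commit above draws can be sketched in a few lines. This is an illustrative example, not the project's actual code; `vocab_size`, `input_ids`, and `internal_ref` are hypothetical stand-ins for the model's vocabulary size, the token tensor, and the reference transformers keeps to it.

```python
import torch

vocab_size = 50257  # hypothetical vocabulary size

input_ids = torch.tensor([[10, 50399, 42]])  # 50399 is out of vocabulary
internal_ref = input_ids  # stands in for transformers' internal reference

# An out-of-place clamp creates a NEW tensor; the internal reference
# still sees the out-of-bounds token ID:
clamped = torch.clamp(input_ids, max=vocab_size - 1)
assert internal_ref.max().item() == 50399

# An in-place clamp mutates the tensor that both names point to,
# so the internal reference is fixed as well:
input_ids.clamp_(max=vocab_size - 1)
assert internal_ref.max().item() == vocab_size - 1
```

This is why the commit clamps in place: any copy-producing fix would leave the internally held tensor unchanged.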
Gnome Ann
24d5d63c9f
Use the correct generation min and max when using soft prompt
2021-10-28 16:39:59 -04:00
Gnome Ann
511817132a
Don't change the shape of transformer.wte
2021-10-28 15:39:59 -04:00
Gnome Ann
a1ae11630a
Make sure to cast vars.sp to the correct dtype
2021-10-28 13:22:07 -04:00
Gnome Ann
1556bd32a5
Use torch.where to inject the soft prompt instead of torch.cat
2021-10-28 13:20:14 -04:00
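The torch.where approach above can be sketched as follows. This is a minimal illustration of the general technique, not the project's implementation; the shapes, the `is_soft` mask, and all variable names are assumptions. The idea is to substitute soft-prompt embeddings at reserved positions of a fixed-length sequence, instead of concatenating them on and changing the sequence length.

```python
import torch

embed_dim, sp_len, seq_len = 4, 2, 5                 # hypothetical sizes
soft_prompt = torch.randn(sp_len, embed_dim)         # learned soft-prompt embeddings
token_embeds = torch.randn(1, seq_len, embed_dim)    # embeddings of the input tokens

# Mark which positions of the sequence belong to the soft prompt.
is_soft = torch.tensor([True, True, False, False, False]).view(1, seq_len, 1)

# Pad the soft prompt out to seq_len so shapes line up for torch.where.
padded_sp = torch.cat(
    [soft_prompt, torch.zeros(seq_len - sp_len, embed_dim)]
).unsqueeze(0)

# torch.where takes the soft-prompt embedding where the mask is set and
# the ordinary token embedding elsewhere; unlike a torch.cat-based
# injection, the sequence length never changes.
inputs_embeds = torch.where(is_soft, padded_sp, token_embeds)
assert inputs_embeds.shape == token_embeds.shape
```

Keeping the shape fixed avoids having to adjust lengths everywhere downstream, which a concatenation-based injection would require.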
Gnome Ann
248e0bd24b
Fix soft prompt loading code
2021-10-28 00:29:42 -04:00
Gnome Ann
4e3cc93020
Merge branch 'united' into sp
2021-10-23 11:45:03 -04:00
henk717
7b73d7cfdd
Single Line Mode
Adds Single Line mode, optimized for things like chatbot testing and other cases where you want control over what happens after a paragraph.
This can also serve as a foundation for a chatbot-optimized interface mode.
2021-10-23 17:30:48 +02:00
Gnome Ann
1f449a9dda
Soft prompt support (6B Colabs not supported yet)
2021-10-22 14:18:10 -04:00
Gnome Ann
3501f03153
Create settings directory if it doesn't exist when using InferKit/OAI
2021-10-21 23:33:32 -04:00
henk717
fa0f8af1d6
Merge branch 'KoboldAI:main' into united
2021-10-15 08:23:06 +02:00
henk717
9513240dfb
Version bump
Since VE fixed important things in the editor, I want users to be able to see this more easily.
2021-10-15 08:22:32 +02:00
henk717
c854a62549
Clarified GPU Layers
Having both breakmodel_layers and layers is confusing, so the new method is now called breakmodel_gpulayers. The old flag should no longer be used, but since it works in reverse it is left in so existing scripts don't break.
2021-10-06 18:55:01 +02:00
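Keeping a deprecated flag alongside its replacement, as described above, might look like the following argparse sketch. Only the flag names come from the commit message; the value formats, help text, and parsing logic are assumptions for illustration.

```python
import argparse

parser = argparse.ArgumentParser()
# New flag: assumed here to take a comma-separated count of layers per GPU.
parser.add_argument("--breakmodel_gpulayers", type=str,
                    help="comma-separated number of transformer layers to put on each GPU")
# Old flag: kept only so existing scripts don't break (it counts "in reverse",
# i.e. its meaning differs from the new flag; exact semantics are assumed).
parser.add_argument("--breakmodel_layers", type=int,
                    help="(deprecated) use --breakmodel_gpulayers instead")

args = parser.parse_args(["--breakmodel_gpulayers", "20,12"])
gpu_layers = [int(n) for n in args.breakmodel_gpulayers.split(",")]
assert gpu_layers == [20, 12]
```

Leaving the old option registered means scripts that pass it still parse cleanly, while new documentation can point only at the replacement.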
henk717
bd063f7590
Merge pull request #19 from VE-FORBRYDERNE/multi-gpu
Multiple GPU support
2021-10-06 18:50:58 +02:00
henk717
82c7eaffb5
Merge branch 'KoboldAI:main' into united
2021-10-06 00:26:08 +02:00
henk717
8893916fef
Don't always submit prompt by default
Feedback from users is that it's better not to always submit the prompt; this is consistent with the randomly generated stories. You can always toggle it on if you need it for coherency. This change does not override existing user settings.
2021-10-06 00:25:05 +02:00
Gnome Ann
aa59f8b4b2
Fix CPU layers not displaying correctly when using --layers
2021-10-05 11:29:47 -04:00
Gnome Ann
91352ea9f1
Change the command line flags for breakmodel
2021-10-05 11:22:09 -04:00
Gnome Ann
a1e4405aa6
Automatically use breakmodel instead of GPU-only where supported
There's really no reason to use GPU-only mode if breakmodel is supported
because breakmodel can run in GPU-only mode too.
2021-10-05 10:36:51 -04:00
Gnome Ann
fb90a7ed17
Change the help text for breakmodel to be more helpful
2021-10-05 10:31:28 -04:00
Gnome Ann
231621e7c2
Use AutoModelForCausalLM for custom models with a model_type
2021-10-05 09:45:12 -04:00
Gnome Ann
a283d34b27
Multiple GPU support
2021-10-05 09:38:57 -04:00
Gnome Ann
a42b580027
Merge branch 'united' into multi-gpu
2021-10-02 11:44:26 -04:00
henk717
dab58d8393
Merge branch 'KoboldAI:main' into united
2021-09-29 17:05:06 +02:00
Gnome Ann
a179bb2820
Bump version number to 1.16.2
2021-09-28 21:50:33 -04:00
Gnome Ann
e6cd28243e
Scroll to the bottom of the gamescreen after retrying
2021-09-28 21:34:36 -04:00
Gnome Ann
bb323152d7
Disable vars.recentedit again
2021-09-28 21:24:08 -04:00
Gnome Ann
2b89bcb16e
Fix random story generator
2021-09-28 21:04:26 -04:00
Gnome Ann
af93c96c0f
Submit Action mode action in Story mode if action is empty
2021-09-28 19:50:00 -04:00
Gnome Ann
9ab1d182ac
Guard against empty prompts
2021-09-28 19:48:43 -04:00
henk717
da55ed3b49
Merge branch 'KoboldAI:main' into united
2021-09-28 10:41:01 +02:00
Gnome Ann
03c1a3ebf9
Put vars.recentedit = True in deleterequest() for consistency
2021-09-28 01:10:20 -04:00
Gnome Ann
97e1760af5
Prevent retry from popping chunks after edit/delete
2021-09-28 01:07:11 -04:00
Gnome Ann
231290608d
Do a better job of preventing editing of text when required
2021-09-28 00:48:37 -04:00
Gnome Ann
13b81c7523
Prevent the user from deleting the prompt
2021-09-27 22:21:14 -04:00
henk717
01b30b315f
Merge branch 'KoboldAI:main' into united
2021-09-28 02:31:20 +02:00