7803fbb137
Fixed error in redo action when editing previous entries and/or editing right after a redo
2022-02-28 08:31:26 -05:00
13fe472264
Menu Polish
2022-02-28 02:47:15 +01:00
f628929401
Merge pull request #85 from VE-FORBRYDERNE/sp
...
Fix a bug with soft prompts when using transformers XGLM
2022-02-28 02:33:18 +01:00
4849a30d88
Merge pull request #84 from mrseeker/patch-3
...
Added KoboldAI/fairseq-dense-2.7B-Janeway
2022-02-28 02:33:07 +01:00
a466e13c00
Model List Support
2022-02-26 12:34:07 +01:00
a22d59e191
Fix a bug with soft prompts when using transformers XGLM
2022-02-25 12:35:23 -05:00
0a7376a711
Added KoboldAI/fairseq-dense-2.7B-Janeway
...
With pleasure I am introducing KoboldAI/fairseq-dense-2.7B-Janeway.
2022-02-24 09:00:56 +01:00
072ca87977
Load soft prompt at the end instead of inside loadsettings()
2022-02-23 21:15:08 -05:00
8120e4dfa2
Need to set vars.allowsp to True before calling loadsettings()
2022-02-23 21:09:31 -05:00
c45ba497c9
Load settings earlier to avoid TPU badwords issues
2022-02-23 20:39:11 -05:00
ac59e55d62
Smaller optimizations
2022-02-24 01:14:26 +01:00
8e9d9faa97
Merge pull request #82 from VE-FORBRYDERNE/tpu-config
...
Allow TPU models to specify settings/config in config.json
2022-02-24 00:53:40 +01:00
ad10ac8871
Allow TPU models to specify settings/config in config.json
2022-02-23 18:22:18 -05:00
7de3311000
Fix sentencepiece model saving
2022-02-23 22:04:41 +01:00
fd7ba9f70e
Also check for Config in models/
2022-02-22 19:22:08 +01:00
4ace11f5b8
Merge pull request #80 from VE-FORBRYDERNE/xglm-position-ids
...
Temporary fix for XGLM positional embedding issues
2022-02-21 00:47:20 +01:00
300db651de
Open models folder by default
2022-02-21 00:46:18 +01:00
da10e2dc1d
Don't crash if XGLMSinusoidalPositionalEmbedding doesn't exist
2022-02-20 17:41:00 -05:00
5dc4969173
Temporary fix for XGLM positional embedding issues
2022-02-20 14:17:24 -05:00
a63fa3b067
Prevent transformers XGLM from stopping generation on </s> token
2022-02-19 23:15:16 -05:00
a47e93cee7
Separate Low Memory Mode
...
In 1.16 we had significantly faster loading speeds because we did not do as much memory conservation; it's time to give users the choice. If you want the original, faster behavior and have the memory, run KoboldAI as usual. Otherwise run play-lowmem.bat, or aiserver.py with --lowmem. For Colab this is still the default behavior, to avoid breaking models that would otherwise load fine.
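A sketch of the two load paths described above, assuming the aiserver.py entry point named elsewhere in this log; only the --lowmem flag and play-lowmem.bat are taken from this commit, the rest is illustrative:

```shell
# Original 1.16-style faster loading (uses more memory) — run as usual:
python aiserver.py

# Memory-conserving load path (slower, flag introduced in this commit):
python aiserver.py --lowmem

# Windows convenience launcher for the same low-memory mode:
# play-lowmem.bat
```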
2022-02-18 16:21:28 +01:00
8e03f1c612
Merge branch 'KoboldAI:main' into united
2022-02-18 14:21:34 +01:00
f06acb59be
Add the Janeway model
...
New model released by Mr.Seeker
2022-02-18 14:18:41 +01:00
cba93e29d2
Update aiserver.py
2022-02-18 02:11:08 +01:00
76a6c124dd
Quiet on Colab
...
Makes the Colab mode also automatically activate the Quiet mode to improve privacy. We should no longer need this in the Colab console thanks to the redo feature. Need something different for testing? Use --remote instead.
2022-02-18 02:07:40 +01:00
02246dfc4d
Remote play improvements
...
Changed the proposed --share to --unblock to make it more apparent what this feature does: it unblocks the port for external access, but does not add remote play support. For remote play support without a proxy service I have added --host.
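A sketch of the two flags distinguished above; the flag names come from this commit message, and the aiserver.py entry point is assumed from elsewhere in this log:

```shell
# Only unblock the port for external access; does NOT enable remote play:
python aiserver.py --unblock

# Enable remote play without a proxy service:
python aiserver.py --host
```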
2022-02-18 01:08:12 +01:00
ec54bc9d9b
Fix typo in send_debug()
2022-02-12 20:11:35 -05:00
f682c1229a
Fix fairseq newline handling issues
2022-02-12 13:23:59 -05:00
633152ee84
Fixed Retry bug due to redo/pin code
2022-02-10 10:01:07 -05:00
586b989582
Redo bug fix
2022-02-06 18:53:24 -05:00
98609a8abc
Merge branch 'united' of https://github.com/ebolam/KoboldAI into united
2022-02-06 13:48:34 -05:00
80ae054cb5
Merge branch 'henk717:united' into united
2022-02-06 13:42:59 -05:00
9e17ea9636
Fixed model downloading problem where models were downloaded multiple times
2022-02-06 13:42:46 -05:00
c38108d818
Merge pull request #73 from VE-FORBRYDERNE/xglm-breakmodel
...
Breakmodel support for the fairseq models
2022-02-06 18:05:59 +01:00
02c7ca3e84
Merge branch 'henk717:united' into united
2022-02-03 08:11:06 -05:00
0684a221cd
Changed the pin icon for redos to a non-clickable circular arrow, to make it clear it is a redo action and cannot be cleared.
2022-02-03 08:08:43 -05:00
3ee63b28c5
Defaults and Downloads
...
Added defaults for the new repetition penalty settings (better suggestions are very welcome, since broader community testing has not been done).
Updated the Readme with the link to the offline installer.
2022-02-03 13:13:26 +01:00
4904af6adc
Fix a mistake in the previous commit
2022-02-02 23:04:59 -05:00
78f52063c7
Fix XGLM soft prompts
2022-02-02 22:45:16 -05:00
e2d2ebcae6
upstream merge
2022-02-02 15:04:59 -05:00
d847d04605
Fix some typos in XGLM breakmodel
2022-02-01 16:00:46 -05:00
8e1169ea61
Enable vars.bmsupported when using XGLM
2022-02-01 15:31:59 -05:00
e7f65cee09
XGLM breakmodel
2022-02-01 13:04:35 -05:00
c14e6fe5d2
Revert parallelism
...
Testing is done; it seems to cause issues with the order in which things happen in the interface.
2022-02-01 18:58:48 +01:00
d68a91ecd3
Save model values
...
Without saving these values they get lost after someone saves, so saving them is more important than the model being able to override them after the fact.
2022-02-01 18:37:52 +01:00
b8e08cdd63
Enable Tokenizer Parallelism
...
Has proven to be safe in my internal testing and does help with the interface lag at boot.
Enabling this so it can get wider testing.
2022-02-01 12:00:53 +01:00
ecd7b328ec
Further Polishing
...
Multiple smaller changes to get 1.17 in shape for its release.
2022-02-01 11:15:44 +01:00
36b6dcb641
Increase newlinemode compatibility
...
Ran into issues with other modes like chatmode and adventure, so I moved it further down the pipeline and now convert </s> back to \n before processing additional formatting.
There is still an issue with the HTML formatting not working, but at least the AI works now.
2022-01-31 19:39:32 +01:00
90fd67fd16
Update aiserver.py
2022-01-31 19:06:02 +01:00
b69e3f86e1
Update aiserver.py
...
Removes a debug line
2022-01-31 18:57:47 +01:00