57344935f6
--model without breakmodel disables bmsupported
...
The last commit only issued a warning; now it turns bmsupported off so that the GPU routine is used.
2022-01-30 17:16:35 +01:00
f0c0a990ea
NoBreakmodel variable
...
Adds a Nobreakmodel var that allows Breakmodel to be turned off. This can be done through the command line or a model config (in case Neo is used by the model's config without it being a true Neo model that is compatible with breakmodel).
In addition, I removed the args.colab check for breakmodel support and instead made args.colab activate nobreakmodel. I also added a new check so that breakmodel is not even attempted if you launch a model from the command line without specifying the layers (see the sketch below).
2022-01-30 17:06:15 +01:00
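A minimal sketch of the gating logic described above; the flag names, the layer argument, and the Vars stand-in are assumptions modelled on the commit message, not the exact aiserver.py code:

    import argparse

    class Vars:
        bmsupported = True  # stand-in for KoboldAI's global state (assumed name)

    vars_ = Vars()

    parser = argparse.ArgumentParser()
    parser.add_argument("--model", type=str, default=None)
    parser.add_argument("--colab", action="store_true")
    parser.add_argument("--nobreakmodel", action="store_true",
                        help="disable breakmodel even on compatible models")
    parser.add_argument("--breakmodel_layers", type=int, default=None)
    args = parser.parse_args()

    # --colab no longer gets its own breakmodel check; it simply
    # activates nobreakmodel.
    if args.colab:
        args.nobreakmodel = True

    # Never attempt breakmodel when it is disabled, or when a model is
    # launched from the command line without a layer split.
    use_breakmodel = (
        vars_.bmsupported
        and not args.nobreakmodel
        and not (args.model and args.breakmodel_layers is None)
    )
    print("breakmodel:", use_breakmodel)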
5b5a479f29
Threading + Memory Sizes
...
Polish pass to suppress a warning and list more accurate VRAM figures, as tested with the full 2048 max tokens.
2022-01-30 13:56:25 +01:00
fca7f8659f
Badwords unification
...
TPUs no longer use hardcoded badwords; they read the shared var instead (sketched below).
2022-01-29 18:09:53 +01:00
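A small illustration of the shared-list idea; the token ids and the masking helper are made up for the example, and only the concept of one badwords variable feeding every backend comes from the commit:

    import numpy as np

    # One shared list of banned token ids instead of a TPU-only
    # hardcoded copy (values here are illustrative).
    badwordsids = [[13], [198]]

    def apply_badwords(logits: np.ndarray, banned) -> np.ndarray:
        """Mask banned token ids so no backend can sample them."""
        out = logits.copy()
        for (token_id,) in banned:
            out[token_id] = -np.inf
        return out

    logits = np.zeros(256, dtype=np.float32)
    print(np.isneginf(apply_badwords(logits, badwordsids))[13])  # True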
f9f25c01e4
HTML escape the last commit
...
</s> didn't work; it needed to be HTML escaped (thanks for the tip, VE!)
2022-01-28 19:21:05 +01:00
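The fix pattern itself is standard-library Python; the exact call site in the UI code is an assumption:

    import html

    snippet = "it now supports models that use </s>"
    # Unescaped, the browser parses </s> as a tag and swallows it;
    # escaping renders the text literally.
    print(html.escape(snippet))
    # -> it now supports models that use &lt;/s&gt;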
be0e57185f
Improved Model Support
...
Changed the model VRAM requirements to what you'd need to comfortably run the model rather than barely (like with the manual). Will probably revise this in a later commit.
More importantly, it now supports models that use </s>, which will be required to support XGLM and Fairseq models.
2022-01-28 18:03:30 +01:00
1470b1666d
Fixed single gen redo
2022-01-27 20:17:13 -05:00
2278b7c103
Changed the behavior of redo to just select the option when there is only one
2022-01-26 21:07:55 -05:00
06bbe429d9
Bug fix for redo/pinning persisting over new game requests
2022-01-26 21:02:36 -05:00
b0f1bdf2fd
Merge branch 'henk717:united' into united
2022-01-26 11:27:12 -05:00
987e78f980
More loading fixes
...
My last attempt at fixing this broke GPT2. Since the other fix covers an edge case, we assume the GPT2 method should be used, and if that fails we try the other one to catch rare errors with bad model configs (sketched below).
2022-01-25 06:39:23 +01:00
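A hedged sketch of that try-the-common-case-first ordering; which two loaders are actually involved is an assumption based on the commit message:

    from transformers import AutoModelForCausalLM, GPT2LMHeadModel

    def load_model(path: str):
        # Assume the GPT2 method should be used...
        try:
            return GPT2LMHeadModel.from_pretrained(path)
        except Exception:
            # ...and fall back to the generic loader to catch the rare
            # models whose bad configs break the GPT2 path.
            return AutoModelForCausalLM.from_pretrained(path)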
3f18888eec
Repetition penalty slope and range
2022-01-24 15:30:38 -05:00
bd0732fbd6
Fix for redo with options.
...
Added debug menu
2022-01-24 12:54:44 -05:00
47ec22873d
Bug fix for when the settings directory is a symlink.
2022-01-22 21:43:32 -05:00
f54f46b068
Bug fix for metadata saving
2022-01-22 20:30:14 -05:00
0846d57368
0.17 polish
2022-01-23 01:05:09 +01:00
bdd358f40f
Merge branch 'united' of https://github.com/henk717/KoboldAI into henk717-united
2022-01-22 17:57:33 -05:00
c9999b6388
Merge pull request #70 from VE-FORBRYDERNE/patch
...
Don't throw an error in `update_story_chunk` if you try to edit a nonexistent chunk
2022-01-22 23:24:34 +01:00
4e7440804c
Merge pull request #69 from VE-FORBRYDERNE/lua
...
Lua compatibility enhancements
2022-01-22 23:23:47 +01:00
f79db7059a
Fall back to old json load
...
It turns out model_config does not work on models that have no model_type defined, so in that case we now fall back to the old .json loading method (see the sketch below). This will not work in --colab mode unless it's already a local model, but since almost all modern models define a model type (to my knowledge, all models on Hugging Face do), that should not be an issue. If it is, we can always ask the model creator to update it, distribute the model differently, or load that model with --remote instead of --colab.
2022-01-22 23:21:19 +01:00
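A sketch of the described fallback, assuming transformers' AutoConfig for the model_config path and a plain config.json read for the legacy path:

    import json
    import os
    from transformers import AutoConfig

    def read_model_config(model_dir: str):
        try:
            # Preferred: let transformers resolve the config
            # (requires model_type in config.json).
            return AutoConfig.from_pretrained(model_dir)
        except ValueError:
            # Legacy fallback for models without model_type: read the
            # raw JSON ourselves. Local models only, hence the --colab
            # caveat in the commit message.
            with open(os.path.join(model_dir, "config.json")) as f:
                return json.load(f)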
9df758c1f4
Added quiet option to suppress story text from showing in the console (reduces logs when running in a Docker container)
2022-01-22 15:30:56 -05:00
12e7b6d10b
Added --share command line parameter so we can set host=0.0.0.0 on local instances without editing code
...
Moved the save location of downloaded models to models/XXXXXX so we can more easily set this as a volume in Docker
2022-01-22 14:47:28 -05:00
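The --share flag above boils down to switching the bind address; a minimal sketch (KoboldAI's real server setup is more involved):

    import argparse

    parser = argparse.ArgumentParser()
    parser.add_argument("--share", action="store_true",
                        help="bind to 0.0.0.0 so other machines can connect")
    args = parser.parse_args()

    # Local-only by default; --share exposes the UI to the LAN or the
    # Docker network without editing any code.
    host = "0.0.0.0" if args.share else "127.0.0.1"
    # e.g. socketio.run(app, host=host, port=5000)
    print("binding to", host)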
bf2b02d366
Don't error in update_story_chunk if chunk index doesn't exist
2022-01-21 21:19:32 -05:00
2010e7b9bc
Added saveas option for saving without metadata information
...
Fixed redo erroring on an empty story
Fixed redo erroring when you're at the current end of a chain
2022-01-21 19:02:56 -05:00
fab0913270
Call setgamesaved(False) in update_story_chunk and remove_story_chunk
2022-01-21 16:39:51 -05:00
d31fb278ce
Working redo and pin options
2022-01-21 15:30:37 -05:00
fcaacf636d
Merge branch 'henk717:united' into united
2022-01-21 07:40:25 -05:00
03d54364f4
Initial commit of the actions metadata variable population
2022-01-20 15:18:43 -05:00
72a7aac2c7
Sync memory properly after random game request
2022-01-20 15:14:55 -05:00
dffd00265b
Added autosave feature. When an action is submitted, it will save if the autosave setting is on and the filename is set (see the sketch below).
2022-01-20 07:46:34 -05:00
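The gate is simple; a sketch with assumed names (Story, autosave, savedir, and save_story are placeholders, not the real KoboldAI identifiers):

    class Story:
        autosave = True
        savedir = "stories/example.json"  # empty until the story is first saved

    def save_story(path: str) -> None:
        print(f"autosaving to {path}")  # placeholder for the real save routine

    def on_action_submitted(story: Story) -> None:
        # Save only when the setting is on AND a filename already exists.
        if story.autosave and story.savedir:
            save_story(story.savedir)

    on_action_submitted(Story())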
9532b56cb8
Universal Model Settings
...
No longer depends on a local config file, enabling the configuration to work in --colab mode.
2022-01-20 10:11:11 +01:00
c703729f0b
Set eventlet threadpool size back to 1
2022-01-20 02:10:57 -05:00
f0c39c004a
Deleting world info entries should call setgamesaved(False)
2022-01-18 19:36:20 -05:00
4ca06ebcf3
Merge pull request #65 from VE-FORBRYDERNE/sp
...
Show author and SP length in soft prompt menu
2022-01-18 23:51:02 +01:00
1e0f9ada08
Add adventure 2.7B
...
It's on Huggingface now, so let's add it to the menu!
2022-01-18 23:50:21 +01:00
3018322963
Detect and show properly when story is unsaved
2022-01-18 17:20:45 -05:00
1951ccd2ce
Show author and SP length in soft prompt menu
2022-01-18 16:30:09 -05:00
4da1a2d247
Prevent tokenizer from taking extra time the first time it's used
2022-01-17 22:55:25 -05:00
703c092577
Fix settings callback, and genout.shape[-1] in tpumtjgenerate()
2022-01-17 14:52:29 -05:00
3ba0e3f9d9
Dynamic TPU backend should support dynamic warpers and abort button
2022-01-17 14:10:32 -05:00
6502af086f
Use vars._actions in tpumtjgenerate and its callbacks
2022-01-17 13:24:11 -05:00
45bfde8d5d
generated_cols needs to be set properly by TPU static backend
2022-01-17 13:19:57 -05:00
9594b2db1c
Fix soft prompt length calculation in calcsubmitbudget()
...
In TPU instances, `vars.sp.shape[0]` is not always the actual number of
tokens in the soft prompt. We have to use `vars.sp_length` to get an
accurate token count (see the sketch below).
2022-01-17 13:17:20 -05:00
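The distinction matters because the TPU pads the soft prompt tensor; a sketch of the corrected budget arithmetic (only the vars.sp / vars.sp_length distinction comes from the commit, the numbers are illustrative):

    import numpy as np

    max_length = 2048   # full context budget tested in the commits above
    genamt = 80         # tokens reserved for the generated output

    sp = np.zeros((128, 4096), dtype=np.float32)  # padded soft prompt tensor
    sp_length = 100                               # actual soft prompt tokens

    budget_wrong = max_length - sp.shape[0] - genamt  # 1840: over-subtracts
    budget_right = max_length - sp_length - genamt    # 1868: correct budget
    print(budget_wrong, budget_right)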
74f79081d1
Use vars.model_type to check for GPT-2 models
2022-01-17 13:13:54 -05:00
54a587d6a3
Show confirmation dialog when navigating away from UI window
2022-01-17 12:11:06 -05:00
1627afa8c5
Merge branch 'united' into patch
2022-01-17 10:44:34 -05:00
33f9f2dc82
Show message when TPU backend is compiling
2022-01-16 21:09:10 -05:00
03b16ed920
Merge branch 'united' into patch
2022-01-16 00:36:55 -05:00
4f0c8b6552
Merge branch 'united' into xmap
2022-01-15 23:32:12 -05:00
f4eb896a69
Use original TPU backend if possible
2022-01-15 23:31:07 -05:00