Commit Graph

936 Commits

henk717 8466068267 Don't save newlinemode
On second thought, it is probably better not to save this. Advanced users can add this themselves, and that way newer versions of the model can override it if redownloaded.
2022-01-31 18:41:23 +01:00
henk717 729be62821 </s> new line mode
Needed for Fairseq and XGLM models that do not understand the regular \n.
2022-01-31 18:39:34 +01:00
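The substitution itself is mechanical; a minimal sketch of what such a newline mode amounts to (the names here are hypothetical, not the repo's actual identifiers):

```python
# Minimal sketch of an </s> newline mode; names are hypothetical.
NEWLINE_MODE = "s"  # "n" = regular newline, "s" = </s> as line separator

def encode_newlines(text: str) -> str:
    """Swap newlines for </s> before the text reaches the tokenizer."""
    return text.replace("\n", "</s>") if NEWLINE_MODE == "s" else text

def decode_newlines(text: str) -> str:
    """Map </s> back to newlines in the generated output."""
    return text.replace("</s>", "\n") if NEWLINE_MODE == "s" else text
```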
henk717 44d49ea732 Remove Huggingface Repo
Huggingface's repo is further behind than conda-forge, so we will no longer offer it in the installer; the more that is loaded from conda-forge, the better. The same transformers package will still be installed, but a newer build from conda-forge is now guaranteed.
2022-01-31 16:21:10 +01:00
henk717 03433810f1 KML improvements
Don't parse > since that has a different meaning for us; also whitelist a few more Markdown tags so lists work.
2022-01-30 20:07:47 +01:00
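As an illustration of this kind of Markdown whitelist (the markdown and bleach packages and the exact tag set are my assumptions, not the commit's code):

```python
import bleach
import markdown

# Assumed whitelist: basic formatting plus the list tags so Markdown
# lists survive sanitization. "blockquote" is deliberately absent
# because ">" has a different meaning for KoboldAI.
ALLOWED_TAGS = ["p", "em", "strong", "code", "ul", "ol", "li", "br"]

def render_limited_markdown(text: str) -> str:
    html = markdown.markdown(text)
    return bleach.clean(html, tags=ALLOWED_TAGS, strip=True)
```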
henk717 a484244392 Welcome Message API
Allows model creators to customize the welcome message using Markdown and limited HTML.

Existing United users need to run install_requirements.bat again; you can leave the existing dependencies intact.
2022-01-30 19:47:30 +01:00
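Conceptually the feature just reads a string out of the model's config and pushes it through the same limited renderer; a sketch, where the "welcome" key name is a guess rather than the commit's actual field:

```python
import json

def load_welcome_message(config_path: str) -> str:
    """Return the model's custom welcome text, or a default.

    The "welcome" key is illustrative; the commit's actual field
    name may differ."""
    with open(config_path) as f:
        cfg = json.load(f)
    return cfg.get("welcome", "Welcome to KoboldAI!")
```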
henk717 ddfa21e6dd Breakmodel Fixes
Fixes multiple old references and one mistake in my last commit.
2022-01-30 17:40:43 +01:00
henk717 57344935f6
--model without breakmodel disables bmsupported
In the last commit this only produced a warning; now it turns bmsupported off so that the GPU routine is used.
2022-01-30 17:16:35 +01:00
henk717 f0c0a990ea NoBreakmodel variable
Adds a NoBreakmodel var that allows breakmodel to be turned off. This can be done through the command line or a model config (in case Neo is used by the model's config without it being a true Neo model that is compatible with breakmodel).

In addition, I removed the args.colab check for breakmodel support and instead made args.colab activate nobreakmodel. I have also added a new check so that breakmodel is not even attempted if you launch a model from the command line without specifying the layers.
2022-01-30 17:06:15 +01:00
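Taken together with the previous commit, the breakmodel decision now looks roughly like this (a sketch; only the variable names from the commit messages are real):

```python
def should_use_breakmodel(args, bmsupported: bool) -> bool:
    """Sketch of the decision described above, not the repo's exact code."""
    # --colab now simply activates nobreakmodel instead of a special case.
    nobreakmodel = args.nobreakmodel or args.colab
    if nobreakmodel:
        return False
    # Launching a model from the command line without layer counts:
    # don't even attempt breakmodel, so the GPU routine is used.
    if args.model and args.breakmodel_layers is None:
        return False
    return bmsupported
```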
henk717 5b5a479f29 Threading + Memory Sizes
A polish effort to suppress a warning and list more accurate VRAM requirements, as tested with the full 2048 max tokens.
2022-01-30 13:56:25 +01:00
henk717 fca7f8659f Badwords unification
TPUs no longer use hardcoded badwords but instead use the var.
2022-01-29 18:09:53 +01:00
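Schematically, the unification replaces a duplicated hardcoded list with one shared variable that the TPU path consumes as well (placeholder ids, hypothetical names):

```python
# One shared list instead of a hardcoded copy in the TPU backend.
vars_badwordsids = [[6880], [50256]]  # placeholder token ids

def tpu_generate(tokens, badwords=None):
    """Stub: the TPU sampler now receives the shared list as an argument."""
    badwords = vars_badwordsids if badwords is None else badwords
    return tokens  # placeholder for the actual TPU sampling call
```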
henk717 4a4fa4ca29 Update readme.md 2022-01-29 12:45:15 +01:00
henk717 f9f25c01e4 HTML escape the last commit
</s> didn't work; it needed to be HTML-escaped (thanks for the tip, VE!).
2022-01-28 19:21:05 +01:00
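In Python terms the fix is a stdlib one-liner: escape the literal before it goes into the page.

```python
import html

# "</s>" dropped into HTML verbatim is parsed as a (bogus) closing tag;
# escaping turns it into text the browser displays literally.
print(html.escape("</s> new line mode"))  # &lt;/s&gt; new line mode
```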
henk717 be0e57185f Improved Model Support
Changed the listed model VRAM requirements to what you'd need to run the model comfortably rather than barely (like in the manual). Will probably revise this in a later commit.

More importantly, it now supports models that use </s>, which will be required to support XGLM and Fairseq models.
2022-01-28 18:03:30 +01:00
ebolam 1470b1666d Fixed single gen redo 2022-01-27 20:17:13 -05:00
ebolam ab5d3b4255 Dockerfile fix 2022-01-26 21:14:10 -05:00
ebolam 2278b7c103 Changed behavior of redo if there is only 1 option to just select it 2022-01-26 21:07:55 -05:00
ebolam 06bbe429d9 Bug fix for redo/pinning persisting over new game requests 2022-01-26 21:02:36 -05:00
ebolam a27c441cdf Updated the base image again so that only transformers changes between the images 2022-01-26 15:35:36 -05:00
ebolam d2b15e2a6e Updated Dockerfiles to create pre-compiled images for Docker Hub 2022-01-26 11:35:58 -05:00
ebolam b0f1bdf2fd
Merge branch 'henk717:united' into united 2022-01-26 11:27:12 -05:00
henk717 9356573ac9 Merge branch 'united' of https://github.com/henk717/KoboldAI into united 2022-01-25 06:39:54 +01:00
henk717 987e78f980 More loading fixes
My last attempt at fixing this caused GPT2 to break. Since the other fix is an edge case, we assume that the GPT2 method should be used, and if that fails we try the other one to catch rare errors with bad model configs.
2022-01-25 06:39:23 +01:00
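The shape of the fix is a try/except that prefers the common GPT2 path and falls back on failure; a sketch using transformers class names (the commit does not say exactly which loader is swapped):

```python
from transformers import AutoModelForCausalLM, GPT2LMHeadModel

def load_model(path: str):
    """Prefer the GPT2 loading method; fall back for the edge cases
    caused by bad model configs. Sketch, not the repo's exact code."""
    try:
        return GPT2LMHeadModel.from_pretrained(path)
    except Exception:
        return AutoModelForCausalLM.from_pretrained(path)
```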
henk717 2d7f39247d TPU descriptions 2022-01-25 06:22:32 +01:00
henk717 2bb263c65d Reordering Settings
More settings reordering so similar settings share rows now that we have more repetition penalty settings. Amount to Generate is now top left, so some muscle memory for Temp may be lost, but the settings that control AI randomness are on the same row now, and repetition-related settings are next to each other as well.
2022-01-25 06:10:39 +01:00
henk717 392c59d48b
Merge pull request #72 from VE-FORBRYDERNE/rep-pen
Repetition penalty slope and range
2022-01-25 05:39:09 +01:00
Gnome Ann 3f18888eec Repetition penalty slope and range 2022-01-24 15:30:38 -05:00
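The idea behind a ranged, sloped repetition penalty: only the most recent `range` tokens are penalized, and the penalty ramps from roughly 1.0 at the oldest token in that window up to the full value at the newest, with `slope` shaping the curve. The interpolation below is illustrative, not the PR's exact formula:

```python
def sloped_rep_pen(pos_from_end: int, rep_pen: float,
                   rng: int, slope: float) -> float:
    """Illustrative penalty for a token `pos_from_end` places before the
    end of the context (0 = most recent). Not the PR's exact math."""
    if pos_from_end >= rng:
        return 1.0  # outside the range: no penalty at all
    t = 1.0 - pos_from_end / max(rng - 1, 1)  # 1.0 newest -> 0.0 oldest
    return 1.0 + (t ** slope) * (rep_pen - 1.0)
```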
ebolam a0100ff3cc Fixed an error where the redo action, with a list of options on screen, sometimes caused the list to disappear entirely. 2022-01-24 15:15:45 -05:00
ebolam bd0732fbd6 Fix for redo with options.
Added debug menu
2022-01-24 12:54:44 -05:00
henk717 85cb6342e2 Fix C1 2022-01-24 07:08:28 +01:00
ebolam 47ec22873d Bugfix for when the settings directory is a symlink. 2022-01-22 21:43:32 -05:00
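A typical shape for this kind of fix is resolving the symlink before any directory checks (stdlib only; a guess at the mechanism, not the actual diff):

```python
import os

def settings_dir(path: str = "settings") -> str:
    """Resolve symlinks so exists/mkdir checks act on the real directory."""
    real = os.path.realpath(path)
    os.makedirs(real, exist_ok=True)
    return real
```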
ebolam d12a6a5620 Added a Dockerfile for finetune 2022-01-22 20:38:43 -05:00
ebolam f54f46b068 Bugfix for metadata saving 2022-01-22 20:30:14 -05:00
henk717 e69265cb4f Logits Viewer
Logits Viewer by VE
2022-01-23 01:09:41 +01:00
henk717 91077938c8 Update index.html 2022-01-23 01:06:52 +01:00
henk717 0846d57368 0.17 polish 2022-01-23 01:05:09 +01:00
ebolam 9355ae420d Merge branch 'henk717-united' into united 2022-01-22 17:57:51 -05:00
ebolam bdd358f40f Merge branch 'united' of https://github.com/henk717/KoboldAI into henk717-united 2022-01-22 17:57:33 -05:00
henk717 f9a34951cf
Merge pull request #71 from jojorne/patch-1
Display the options text as A.I. writes it
2022-01-22 23:24:45 +01:00
henk717 c9999b6388
Merge pull request #70 from VE-FORBRYDERNE/patch
Don't throw an error in `update_story_chunk` if you try to edit a nonexistent chunk
2022-01-22 23:24:34 +01:00
henk717 4e7440804c
Merge pull request #69 from VE-FORBRYDERNE/lua
Lua compatibility enhancements
2022-01-22 23:23:47 +01:00
henk717 f79db7059a Fall back to old json load
Turns out model_config does not work on models that have no model_type defined. In case this happens, we now fall back to the old .json loading method. This will not work in --colab mode if it's not already a local model, but since almost all modern models define a model type, and to my knowledge all models on Huggingface do, that should not be an issue. If it is, we can always ask the model creator to either update it, distribute the model differently, or load that model with --remote instead of --colab.
2022-01-22 23:21:19 +01:00
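transformers' AutoConfig refuses configs with no model_type, so the fallback plausibly looks like the sketch below (the exception types caught and the surrounding structure are assumptions):

```python
import json
import os
from transformers import AutoConfig

def load_model_config(model_dir: str):
    """Try the modern config loader; fall back to raw JSON when the
    config defines no model_type. Sketch, not the repo's code."""
    try:
        return AutoConfig.from_pretrained(model_dir)
    except (ValueError, KeyError):
        with open(os.path.join(model_dir, "config.json")) as f:
            return json.load(f)
```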
ebolam 9df758c1f4 Added a quiet option to suppress story text in the console (reduces log output when running in a Docker container) 2022-01-22 15:30:56 -05:00
jojorne 47dbddff78
Display the options text as A.I. writes it
When using `Gens Per Action`, display the options text as the A.I. writes it. Whitespace is preserved and lines break as necessary.
2022-01-22 17:10:35 -03:00
ebolam 7d76ecbf83 added models folder 2022-01-22 15:01:01 -05:00
ebolam 54d99490a9 Set the Dockerfile to save the code in the image
Set the transformers version to the Huggingface one
Added the default run command
2022-01-22 14:48:32 -05:00
ebolam 12e7b6d10b Added a --share command line parameter so we can set host=0.0.0.0 on local instances without editing code
Moved the save location of downloaded models to models/XXXXXX so we can more easily set this as a volume in Docker
2022-01-22 14:47:28 -05:00
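The flag amounts to picking the bind address at startup; a minimal argparse sketch (only --share itself comes from the commit message, the rest is assumed):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--share", action="store_true",
                    help="listen on all interfaces instead of localhost")
args = parser.parse_args()

# 0.0.0.0 exposes a local instance on the network without code edits.
host = "0.0.0.0" if args.share else "127.0.0.1"
```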
ebolam 8e2fab8eb0 Whoops, missed a } 2022-01-22 08:48:32 -05:00
Gnome Ann bf2b02d366 Don't error in `update_story_chunk` if chunk index doesn't exist 2022-01-21 21:19:32 -05:00
ebolam 2010e7b9bc Added a saveas option for saving without metadata information
Fixed redo erroring on an empty story
Fixed redo erroring when you're at the current end of a chain
2022-01-21 19:02:56 -05:00
Gnome Ann fab0913270 Call `setgamesaved(False)` in `update_story_chunk` and `remove_story_chunk` 2022-01-21 16:39:51 -05:00