Commit Graph

507 Commits

Author SHA1 Message Date
henk717 b69e3f86e1 Update aiserver.py
Removes a debug line
2022-01-31 18:57:47 +01:00
henk717 8466068267 Don't save newlinemode
On second thought, it is probably better not to save this. Advanced users can add it themselves, and that way newer versions of the model can override it if redownloaded.
2022-01-31 18:41:23 +01:00
henk717 729be62821 </s> new line mode
Needed for Fairseq and XGLM models that do not understand the regular \n.
2022-01-31 18:39:34 +01:00
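A minimal sketch of what such a newline mode amounts to (helper names are hypothetical; the real logic lives in aiserver.py): newlines are swapped for </s> before text reaches the model and swapped back afterwards.

```python
# Sketch of a </s> newline mode. Fairseq and XGLM models treat </s>
# as the line separator instead of the regular "\n".

def encode_newlines(text: str, newlinemode: str) -> str:
    """Replace newlines with </s> before the text is tokenized."""
    if newlinemode == "s":
        return text.replace("\n", "</s>")
    return text

def decode_newlines(text: str, newlinemode: str) -> str:
    """Turn </s> back into newlines for display and storage."""
    if newlinemode == "s":
        return text.replace("</s>", "\n")
    return text
```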
henk717 03433810f1 KML improvements
Don't parse > since that has a different meaning for us; a few more Markdown tags are also whitelisted so lists work.
2022-01-30 20:07:47 +01:00
henk717 a484244392 Welcome Message API
Allows model creators to customize the welcome message using Markdown and limited HTML.

Existing United users need to run install_requirements.bat again; you can leave the existing dependencies intact.
2022-01-30 19:47:30 +01:00
henk717 ddfa21e6dd Breakmodel Fixes
Fixes multiple old references and one mistake in my last commit.
2022-01-30 17:40:43 +01:00
henk717 57344935f6
--model without breakmodel disables bmsupported
The last commit only gave a warning; now bmsupported is turned off so that the GPU routine is used.
2022-01-30 17:16:35 +01:00
henk717 f0c0a990ea NoBreakmodel variable
Adds a Nobreakmodel var that allows Breakmodel to be turned off. This can be done through the command line or a model config (in case Neo is used by the model's config without it being a true Neo model that is compatible with breakmodel).

In addition, I removed the args.colab check for breakmodel support and instead made args.colab activate nobreakmodel. I also added a new check so that breakmodel is not even attempted if you launch a model from the command line without specifying the layers.
2022-01-30 17:06:15 +01:00
henk717 5b5a479f29 Threading + Memory Sizes
A polish effort to suppress a warning and to list more accurate VRAM requirements, as tested with the full 2048 max tokens.
2022-01-30 13:56:25 +01:00
henk717 fca7f8659f Badwords unification
TPUs no longer use hardcoded badwords but instead use the var.
2022-01-29 18:09:53 +01:00
henk717 f9f25c01e4 HTML escape the last commit
</s> didn't work; it needed to be HTML escaped (thanks for the tip, VE!).
2022-01-28 19:21:05 +01:00
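The escaping fix amounts to running the literal </s> through an HTML escaper before the browser renders it; a one-line sketch using Python's standard library:

```python
import html

# A raw "</s>" sent to the browser vanishes, since it parses as a
# (bogus) closing tag. Escaping it makes it display literally.
print(html.escape("</s> new line mode"))  # -> &lt;/s&gt; new line mode
```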
henk717 be0e57185f Improved Model Support
Changed the model VRAM requirements to what you'd need to comfortably run the model rather than barely (like with the manual). Will probably revise this in a later commit.

More importantly, it now supports models that use </s>, which will be required to support XGLM and Fairseq models.
2022-01-28 18:03:30 +01:00
ebolam 1470b1666d Fixed single gen redo 2022-01-27 20:17:13 -05:00
ebolam 2278b7c103 Changed the behavior of redo so that a single option is selected automatically 2022-01-26 21:07:55 -05:00
ebolam 06bbe429d9 Bug fix for redo/pinning persisting over new game requests 2022-01-26 21:02:36 -05:00
ebolam b0f1bdf2fd
Merge branch 'henk717:united' into united 2022-01-26 11:27:12 -05:00
henk717 987e78f980 More loading fixes
My last attempt at fixing this caused GPT2 to break. Since the other fix is an edge case, we assume the GPT2 method should be used, and if that fails we try the other one to catch rare errors with bad model configs.
2022-01-25 06:39:23 +01:00
Gnome Ann 3f18888eec Repetition penalty slope and range 2022-01-24 15:30:38 -05:00
ebolam bd0732fbd6 Fix for redo with options.
Added debug menu
2022-01-24 12:54:44 -05:00
ebolam 47ec22873d Bug fix for when the settings directory is a symlink. 2022-01-22 21:43:32 -05:00
ebolam f54f46b068 Bug fix for metadata saving 2022-01-22 20:30:14 -05:00
henk717 0846d57368 0.17 polish 2022-01-23 01:05:09 +01:00
ebolam bdd358f40f Merge branch 'united' of https://github.com/henk717/KoboldAI into henk717-united 2022-01-22 17:57:33 -05:00
henk717 c9999b6388
Merge pull request #70 from VE-FORBRYDERNE/patch
Don't throw an error in `update_story_chunk` if you try to edit a nonexistent chunk
2022-01-22 23:24:34 +01:00
henk717 4e7440804c
Merge pull request #69 from VE-FORBRYDERNE/lua
Lua compatibility enhancements
2022-01-22 23:23:47 +01:00
henk717 f79db7059a Fall back to old json load
Turns out model_config does not work on models that have no model_type defined. In case this happens, we now fall back to the old .json loading method. This will not work in --colab mode if it's not already a local model, but since almost all modern models define a model type (and to my knowledge all models on Hugging Face do) that should not be an issue. If it is, we can always ask the model creator to update it, distribute the model differently, or load the model with --remote instead of --colab.
2022-01-22 23:21:19 +01:00
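A sketch of the fallback described above, assuming transformers' AutoConfig raises ValueError when model_type is missing (the function name is hypothetical):

```python
import json
import os
from transformers import AutoConfig

def load_model_config(model_path: str):
    """Prefer AutoConfig; fall back to plain .json loading for
    models that define no model_type."""
    try:
        return AutoConfig.from_pretrained(model_path)
    except ValueError:
        # Old-style loading: only works if the model is already local.
        with open(os.path.join(model_path, "config.json")) as f:
            return json.load(f)
```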
ebolam 9df758c1f4 Added a quiet option to suppress any story text from showing in the console (reduces logs when running in a Docker container) 2022-01-22 15:30:56 -05:00
ebolam 12e7b6d10b Added a --share command line parameter so we can set host=0.0.0.0 on local instances without editing code
Moved the save location of downloaded models to models/XXXXXX so we can more easily set this as a volume in Docker
2022-01-22 14:47:28 -05:00
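A sketch of the flag, assuming an argparse setup like aiserver.py's:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--share", action="store_true",
                    help="bind to 0.0.0.0 instead of localhost")
args = parser.parse_args()

# Local instances stay on loopback unless --share is given.
host = "0.0.0.0" if args.share else "127.0.0.1"
# socketio.run(app, host=host, port=5000)
```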
Gnome Ann bf2b02d366 Don't error in `update_story_chunk` if chunk index doesn't exist 2022-01-21 21:19:32 -05:00
ebolam 2010e7b9bc Added a saveas option for saving without metadata information
Fixed redo erroring on an empty story
Fixed redo erroring when you're at the current end of a chain
2022-01-21 19:02:56 -05:00
Gnome Ann fab0913270 Call `setgamesaved(False)` in `update_story_chunk` and `remove_story_chunk` 2022-01-21 16:39:51 -05:00
ebolam d31fb278ce Working redo and pin options 2022-01-21 15:30:37 -05:00
ebolam fcaacf636d
Merge branch 'henk717:united' into united 2022-01-21 07:40:25 -05:00
Ben Fox 03d54364f4 Initial commit of the actions metadata variable population 2022-01-20 15:18:43 -05:00
Gnome Ann 72a7aac2c7 Sync memory properly after random game request 2022-01-20 15:14:55 -05:00
ebolam dffd00265b Added autosave feature. When action is submitted it will save if the save setting is on and if the filename is set. 2022-01-20 07:46:34 -05:00
henk717 9532b56cb8 Universal Model Settings
No longer depends on a local config file, enabling the configuration to work in --colab mode.
2022-01-20 10:11:11 +01:00
Gnome Ann c703729f0b Set eventlet threadpool size back to 1 2022-01-20 02:10:57 -05:00
Gnome Ann f0c39c004a Deleting world info entries should call `setgamesaved(False)` 2022-01-18 19:36:20 -05:00
henk717 4ca06ebcf3
Merge pull request #65 from VE-FORBRYDERNE/sp
Show author and SP length in soft prompt menu
2022-01-18 23:51:02 +01:00
henk717 1e0f9ada08
Add adventure 2.7B
It's on Hugging Face now, so let's add it to the menu!
2022-01-18 23:50:21 +01:00
Gnome Ann 3018322963 Detect and show properly when story is unsaved 2022-01-18 17:20:45 -05:00
Gnome Ann 1951ccd2ce Show author and SP length in soft prompt menu 2022-01-18 16:30:09 -05:00
Gnome Ann 4da1a2d247 Prevent tokenizer from taking extra time the first time it's used 2022-01-17 22:55:25 -05:00
Gnome Ann 703c092577 Fix settings callback, and `genout.shape[-1]` in `tpumtjgenerate()` 2022-01-17 14:52:29 -05:00
Gnome Ann 3ba0e3f9d9 Dynamic TPU backend should support dynamic warpers and abort button 2022-01-17 14:10:32 -05:00
Gnome Ann 6502af086f Use `vars._actions` in `tpumtjgenerate` and its callbacks 2022-01-17 13:24:11 -05:00
Gnome Ann 45bfde8d5d `generated_cols` needs to be set properly by TPU static backend 2022-01-17 13:19:57 -05:00
Gnome Ann 9594b2db1c Fix soft prompt length calculation in `calcsubmitbudget()`
In TPU instances, `vars.sp.shape[0]` is not always the actual number of
tokens in the soft prompt. We have to use `vars.sp_length` to get an
accurate token count.
2022-01-17 13:17:20 -05:00
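A simplified sketch of the corrected calculation (names taken from the commit message; `vars` mirrors aiserver.py's settings object, and the surrounding budget math is elided):

```python
def soft_prompt_tokens(vars) -> int:
    """Token count of the soft prompt for budget purposes."""
    if vars.sp is None:
        return 0
    # On TPU, vars.sp.shape[0] may include padding rows, so the
    # explicit vars.sp_length is the accurate count.
    return vars.sp_length
```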
Gnome Ann 74f79081d1 Use `vars.model_type` to check for GPT-2 models 2022-01-17 13:13:54 -05:00
Gnome Ann 54a587d6a3 Show confirmation dialog when navigating away from UI window 2022-01-17 12:11:06 -05:00
Gnome Ann 1627afa8c5 Merge branch 'united' into patch 2022-01-17 10:44:34 -05:00
Gnome Ann 33f9f2dc82 Show message when TPU backend is compiling 2022-01-16 21:09:10 -05:00
Gnome Ann 03b16ed920 Merge branch 'united' into patch 2022-01-16 00:36:55 -05:00
Gnome Ann 4f0c8b6552 Merge branch 'united' into xmap 2022-01-15 23:32:12 -05:00
Gnome Ann f4eb896a69 Use original TPU backend if possible 2022-01-15 23:31:07 -05:00
henk717 9802d041aa Colab Optimizations
Breakmodel is useless on Colab, so for the sake of efficiency, if --colab is present we always assume a model is incompatible. The same applies to the conversion: Colab instances are discarded, so converting the model to a .bin file only wastes time since the HDD isn't fast. Finally, we automatically set all the useful variables for Colab, so that in the future this can be removed from ckds and other scripts.

Lastly, ckds has been adapted not to copy the examples folder and to add the new --colab parameter.

Local players are much better off running the old --remote command.
2022-01-16 00:56:03 +01:00
henk717 9bcc24c07e
Merge pull request #58 from VE-FORBRYDERNE/xmap
Dynamic TPU backend xmaps
2022-01-15 16:20:58 +01:00
Gnome Ann 877fa39b8a Change TPU regeneration indicator message 2022-01-14 23:21:27 -05:00
Gnome Ann bdfde33e8a Add an indicator for when dynamic WI scan is triggered in TPU Colabs 2022-01-14 23:13:55 -05:00
Gnome Ann e0fdce2cc6 Fix TPU generation modifier 2022-01-14 23:00:06 -05:00
Gnome Ann 932c393d6a Add TPU support for dynamic WI scan and generation modifiers 2022-01-14 21:39:02 -05:00
Gnome Ann cf9a4b7e6b Fix typos in error messages 2022-01-13 22:33:55 -05:00
henk717 53b91c6406 Small changes 2022-01-14 02:03:46 +01:00
Gnome Ann a3d6dc93e8 xmaps for moving things onto TPU 2022-01-12 21:45:30 -05:00
Gnome Ann f0b5cc137f Merge branch 'united' into patch 2022-01-12 19:50:01 -05:00
henk717 49e2bcab1a Allow unique chatnames in multiplayer
No longer updates the chatname outside of the config. This will not affect the singleplayer tab at all, but it allows people in multiplayer to chat under their own names.
2022-01-11 21:31:44 +01:00
henk717 3f88b4f840 Server clarification
To prevent confusion among users who have not used KoboldAI for a while, or who are following old tutorials, I have added a disclaimer informing people that most Colab links should not be used with this feature and should instead be opened in the browser.
2022-01-11 00:35:20 +01:00
henk717 d2947bd1cc Small model description update 2022-01-11 00:29:35 +01:00
Gnome Ann 43586c8f60 Fix some of the logic for generation aborting 2022-01-10 17:09:47 -05:00
Gnome Ann 902b6b0cee Always save all world info entries 2022-01-10 16:36:36 -05:00
Gnome Ann f718fdf65e Allow pressing send button again to stop generation 2022-01-10 16:36:15 -05:00
Gnome Ann c84d864021 Fix a bug with dynamic WI scan when using a soft prompt
The problem was that when a soft prompt is being used, the dynamic
scanning criteria search a different set of tokens for world info
keys than the `_generate()` function does, which results in generation
loops when a world info key appears in the former set of tokens but
not the latter.
2022-01-10 15:52:49 -05:00
Gnome Ann fbc3a73c0f Compile TPU backend in background 2022-01-07 13:47:21 -05:00
Gnome Ann 01479c29ea Fix the type hint for `bridged_kwarg` decorator 2022-01-04 20:48:34 -05:00
Gnome Ann fc6caa0df0 Easier method of adding kwargs to bridged in aiserver.py 2022-01-04 19:36:21 -05:00
Gnome Ann fbf5062074 Add option to `compute_context()` to not scan story 2022-01-04 19:26:59 -05:00
Gnome Ann 6edc6387f4 Accept command line arguments in `KOBOLDAI_ARGS` environment var
So that you can use gunicorn or whatever with command-line arguments by
passing the arguments in an environment variable.
2022-01-04 17:11:14 -05:00
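A sketch of how the variable can be folded into argument parsing (the shlex-style splitting is an assumption about the format):

```python
import argparse
import os
import shlex

parser = argparse.ArgumentParser()
parser.add_argument("--model")
# ... remaining arguments ...

# When launched through gunicorn (which owns sys.argv), arguments can
# be supplied via the KOBOLDAI_ARGS environment variable instead.
env_args = os.environ.get("KOBOLDAI_ARGS")
if env_args is not None:
    args = parser.parse_args(shlex.split(env_args))
else:
    args = parser.parse_args()
```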
Gnome Ann aa86c6001c `--breakmodel_gpublocks` should handle -1 properly now 2022-01-04 14:43:37 -05:00
Gnome Ann e20452ddd8 Retrying random story generation now also remembers memory 2022-01-04 14:40:10 -05:00
Gnome Ann f46ebd2359 Always pass 1.1 as repetition penalty to generator
The `dynamic_processor_wrap` makes it so that the repetition penalty is
read directly from `vars`, but this only works if the initial repetition
penalty sent to `generator` is not equal to 1. So we now force the
initial repetition penalty to be something other than 1.
2022-01-04 14:18:58 -05:00
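In other words, transformers only installs its repetition-penalty processor when the value differs from 1, and the dynamic wrapper can only patch a processor that exists. A hedged, self-contained sketch of the resulting call:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
input_ids = tokenizer("Hello", return_tensors="pt").input_ids

# Pass a dummy penalty != 1.0 so transformers installs its
# repetition-penalty processor at all; once wrapped by
# dynamic_processor_wrap, the processor reads the live value from
# vars.rep_pen each step instead of this placeholder.
output = model.generate(
    input_ids,
    do_sample=True,
    repetition_penalty=1.1,  # placeholder, never 1.0
    max_length=40,
)
print(tokenizer.decode(output[0]))
```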
Gnome Ann 63bb76b073 Make sure `vars.wifolders_u` is set up properly on loading a save 2022-01-04 14:13:36 -05:00
Gnome Ann b88d49e359 Make all WI commands use UIDs instead of nums 2021-12-31 21:22:51 -05:00
Gnome Ann ccfafe4f0a Lua API fixes for deleting/editing story chunks 2021-12-31 18:28:03 -05:00
Gnome Ann 7241188408 Make sure tokenizer is initialized when used in read-only mode 2021-12-31 17:13:11 -05:00
henk717 796c71b7f7 ANTemplate in Model Configs
This commit exposes antemplates to the model config, letting authors specify what kind of author's note template they would like to use for their model. Users can still change it if they desire.
2021-12-31 00:11:18 +01:00
henk717 455dbd503b
Merge pull request #52 from VE-FORBRYDERNE/memory
Random story persist, author's note template and changes to behaviour when not connected to server
2021-12-30 23:57:55 +01:00
henk717 557d062381 Finetuneanon Models
Uploaded with permission, so now Finetuneanon's models can be added to the main menu
2021-12-30 14:16:04 +01:00
Gnome Ann 4d06ebb45a Consistent capitalization of "Author's note" 2021-12-30 01:48:25 -05:00
Gnome Ann de8a5046df Make sure we don't keep the trimmed memory in `randomGameRequest()` 2021-12-30 01:45:27 -05:00
Gnome Ann 276f24029e Author's Note Template 2021-12-29 23:43:36 -05:00
Gnome Ann 7573f64bf2 Add Memory box to Random Story dialog and "Random Story Persist" 2021-12-29 23:15:59 -05:00
Gnome Ann 8e2e3baed5 Fix AI output text flash showing up on wrong chunk 2021-12-29 14:23:22 -05:00
henk717 7a4834b8d0 Chatname Fix
Sends the chatname to the client
2021-12-27 18:52:06 +01:00
henk717 88f6e8ca38 Chatmode improvements
Blank lines often appear in chat mode, so it is best played with blank line removal turned on; this is now forced. It's not compatible with Adventure mode, so they now turn each other off.
2021-12-27 13:32:25 +01:00
henk717 bbd68020a5
Merge pull request #50 from VE-FORBRYDERNE/potluck
Chat mode GUI, and Lua and random story generator bug fixes
2021-12-27 11:20:20 +01:00
Gnome Ann a4087b93e9 Fix random story retry, for real this time 2021-12-26 22:51:07 -05:00
Gnome Ann 1189781eac Show a text box for chat name when Chat Mode is enabled 2021-12-26 22:21:58 -05:00
henk717 5b22b0f344 Update aiserver.py 2021-12-27 02:44:36 +01:00
henk717 4d7f222758 Update aiserver.py 2021-12-27 02:32:59 +01:00
henk717 6d1bf76ef1 Path Fixes
Fixes the tokenizer cache being hit when we already have a local model
2021-12-27 01:56:59 +01:00
Gnome Ann 9288a3de2f Allow retry button to regenerate random story 2021-12-26 19:52:56 -05:00
Gnome Ann 1ff563ebda Fix random story generator when No Prompt Gen is enabled 2021-12-26 19:40:20 -05:00
henk717 1a64f8bdc4 More Models
Added more models to the menu; all the popular community models are now easily accessible. I also re-ordered the menu from large to small to make it a bit more sensible.
2021-12-27 01:02:05 +01:00
Gnome Ann 8742453f95 Add safeguards for token budget and text formatting
* Error messages are now shown when memory, author's note, etc. exceeds
  budget by itself
* Formatting options no longer break if there are empty chunks in the
  story (although there shouldn't be any in the first place)
* The number of generated tokens is now tracked from the Python side
2021-12-26 18:29:54 -05:00
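A sketch of the first safeguard (names hypothetical): report an error as soon as a single component exceeds the budget on its own, instead of producing a broken context.

```python
def check_budget(name: str, tokens: list, budget: int) -> None:
    """Raise a user-visible error if one component alone is over budget."""
    if len(tokens) > budget:
        raise ValueError(
            f"Your {name} is too long ({len(tokens)} tokens); "
            f"only {budget} tokens are available."
        )

# e.g. check_budget("memory", memory_tokens, max_length - genamt)
```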
henk717 261e8b67dc Windows Color Workaround
A different approach that activates this on Windows, hopefully now Linux compatible.
2021-12-26 22:46:15 +01:00
henk717 1107a1386d Enable Terminal Colors
Enables Windows Terminal color support based on feedback from Jojorne. If this causes issues on Linux, we can move it to play.bat instead.
2021-12-26 22:22:24 +01:00
Gnome Ann 32a0d7c453 More Lua API fixes
* Removed `vars.model_orig`
* `requirex()` in bridge.lua now maintains a separate module cache for each
  userscript instead of using the same cache for all userscripts
* `vars.lua_deleted` and `vars.lua_edited` are now erased right before running
  the input modifiers instead of right before each time the generation modifiers
  are run
2021-12-26 12:49:28 -05:00
henk717 b9729749ba Update aiserver.py 2021-12-26 02:01:57 +01:00
henk717 ddd9bded30 Store chatname
Also save the chatname in the settings for later re-use by the user.
2021-12-26 01:55:27 +01:00
henk717 d234f67a90 Chat Mode
The initial commit for Chat Mode. The nickname part of the UI is missing; other than that it should be fully functional. To use Chat Mode effectively, you first input a small dialogue (around 6 lines: 3 of your own inputs and 3 of the character's) formatted as Name :, and it will then automate the actions needed to chat properly. During this mode, single line mode is forced on and Trim Incomplete Sentences is forced off.
2021-12-26 01:51:32 +01:00
henk717 14e5fcd355 AutoTokenizer 2021-12-25 00:48:12 +01:00
henk717 e1cd34268b AutoTokenizer
Futureproofing for future tokenizers; for now this is not needed since everything uses GPT2, but when that changes we want to be prepared. Not all models have a proper tokenizer config, so if we can't find one we fall back to GPT2.
2021-12-25 00:44:26 +01:00
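A sketch of the fallback described above, assuming the transformers API (the function name is hypothetical):

```python
from transformers import AutoTokenizer, GPT2Tokenizer

def load_tokenizer(model_path: str):
    """Prefer the model's own tokenizer config; fall back to GPT2."""
    try:
        return AutoTokenizer.from_pretrained(model_path)
    except Exception:
        # No usable tokenizer config; everything currently is GPT2-based.
        return GPT2Tokenizer.from_pretrained("gpt2")
```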
henk717 00a0cea077 Update aiserver.py 2021-12-24 23:15:01 +01:00
henk717 1d3370c995 Community model
First batch; there will be more. We will also need to update the other VRAM displays with the changes that have happened, depending on how the 8-bit stuff goes.
2021-12-24 23:12:14 +01:00
henk717 6952938e88 Update aiserver.py
Crash fix
2021-12-24 19:57:08 +01:00
Gnome Ann b2def30d9d Update Lua modeltype and model API 2021-12-23 16:35:52 -05:00
Gnome Ann c305076cf3 Fix the behaviour of the `softprompt` setting 2021-12-23 16:14:09 -05:00
Gnome Ann 00f611b207 Save softprompt filename in settings 2021-12-23 13:02:11 -05:00
Gnome Ann 924c48a6d7 Merge branch 'united' into gui-and-scripting 2021-12-23 13:02:01 -05:00
henk717 cae0f279e2 Restore Lowmem
Accidentally got replaced in one of my test runs
2021-12-23 18:50:01 +01:00
henk717 25a6e489c1 Remove Replace from Huggingface
Accidentally ended up in the wrong section; for downloads we do not replace anything, only afterwards.
2021-12-23 17:27:09 +01:00
henk717 e7aa92cd86 Update aiserver.py 2021-12-23 17:12:42 +01:00
henk717 9d4113955f Replace NeoCustom
NeoCustom is now obsolete beyond the file selection and the CLI, so after the CLI we adapt the input to a generic model and then use the improved generic routine to handle it. This saves the duplicate effort of maintaining an almost identical routine now that models are handled by their type and not their name.
2021-12-23 17:10:02 +01:00
henk717 be351e384d Path loading improvements
This fixes a few scenarios from my commit yesterday: models that have a / are now first loaded from the corrected directory if it exists, before we fall back to the original name, to make sure the config is loaded from the correct location. It also includes cache dir fixes and an improved routine for path-loaded models that mimics the NeoCustom option, fixing models that have no model_type specified. Because GPT2 doesn't work well with this option and should be used exclusively with GPT2Custom, and GPT-J models should have a model_type, we assume it's a Neo model when none is specified.
2021-12-23 14:40:35 +01:00
Gnome Ann 8452940597 Merge branch 'united' into gui-and-scripting 2021-12-23 00:18:11 -05:00
Gnome Ann 2a4d7448be Make `dynamic_processor_wrap` execute warper conditionally
The top-k warper doesn't work properly with an argument of 0, so there
is now the ability to not execute the warper if a condition is not met.
2021-12-22 23:46:25 -05:00
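A sketch of the conditional execution (the wrapper shape is an assumption; the real dynamic_processor_wrap patches the transformers warper classes in place):

```python
def conditional_warper(warper, cond):
    """Apply `warper` only when `cond()` holds; otherwise pass logits through."""
    def apply(input_ids, scores):
        if not cond():
            return scores  # e.g. skip top-k entirely when top_k == 0
        return warper(input_ids, scores)
    return apply

# from transformers import TopKLogitsWarper
# top_k = conditional_warper(TopKLogitsWarper(top_k=40),
#                            cond=lambda: vars.top_k > 0)
```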
Gnome Ann 7e06c25011 Display icons for active userscripts and softprompts
Also fixes the userscript menu so that the active userscripts preserve
the previously selected order as was originally intended.
2021-12-22 23:33:27 -05:00
henk717 a2d8347939 Replace model path differently
The path correction was applied too soon and broke online loading; the replace is now applied where it is relevant instead.
2021-12-23 03:05:53 +01:00
henk717 4ff1a6e940 Model Type support
Automatically detect or assume the model type so we do not have to hardcode all the different models people might use. This makes the behavior of --model almost identical to the NeoCustom behavior as far as the CLI is concerned, but only if the model_type is defined in the model's config file.
2021-12-23 02:50:06 +01:00
henk717 2d7a00525e Path fix
My last commit didn't compensate for the file location properly; this is now fixed.
2021-12-23 01:47:05 +01:00
henk717 81120a0524 Compatibility Fixes
Rather than coding vars.custmodpath or vars.model handling into all the other parts of the code, I opted to just set vars.custmodpath to make the behavior more consistent now that it always loads from the same location.
2021-12-23 00:36:08 +01:00
Gnome Ann c549ea04a9 Always use all logit warpers
Now that the logit warper parameters can be changed mid-generation by
generation modifiers, the logit warpers have to be always on.
2021-12-22 17:29:07 -05:00
Gnome Ann 1e1b45d47a Add support for multiple library paths in bridge.lua 2021-12-22 14:24:31 -05:00
Gnome Ann fc04ff3a08 World info folders can now be collapsed by clicking on the folder icon 2021-12-22 13:12:35 -05:00
Gnome Ann d538782b1e Add individual configuration files for userscripts 2021-12-22 02:59:31 -05:00
Gnome Ann 380b54167a Make transformers warpers dynamically update their parameters
So that if you change, e.g., `top_p`, from a Lua generation modifier or
from the settings menu during generation, the rest of the generation
will use the new setting value instead of retaining the settings it had
when generation began.
2021-12-21 22:12:24 -05:00
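A sketch of the same idea as a self-contained warper that re-reads its parameter on every step (the real change patches the transformers warpers in place):

```python
from transformers import TopPLogitsWarper

def dynamic_top_p(settings):
    """Build a warper that reads top_p from `settings` at call time,
    so mid-generation changes take effect immediately."""
    def apply(input_ids, scores):
        return TopPLogitsWarper(top_p=settings.top_p)(input_ids, scores)
    return apply
```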
Gnome Ann caef3b7460 Disable `low_cpu_mem_usage` when using GPT-2
Attempting to use transformers 4.11.0's experimental `low_cpu_mem_usage`
feature with GPT-2 models usually results in the output repeating a
token over and over or otherwise containing an incoherent response.
2021-12-20 19:54:19 -05:00
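A sketch of the workaround, assuming the from_pretrained keyword from transformers 4.11 (the function name is hypothetical):

```python
from transformers import AutoModelForCausalLM

def load_model(path: str, model_type: str):
    # low_cpu_mem_usage is experimental and corrupts GPT-2 output,
    # so only enable it for non-GPT-2 models.
    lowmem = {} if model_type == "gpt2" else {"low_cpu_mem_usage": True}
    return AutoModelForCausalLM.from_pretrained(path, **lowmem)
```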
henk717 7b56940ed7
Merge pull request #47 from VE-FORBRYDERNE/scripting
Lua API fixes
2021-12-20 04:32:25 +01:00
Gnome Ann 341b153360 Lua API fixes
* `print()` and `warn()` now work correctly with `nil` arguments
* Typo: `gpt-neo-1.3M` has been corrected to `gpt-neo-1.3B`
* Regeneration is no longer triggered when writing to `keysecondary` of
  a non-selective key
* Handle `genamt` changes in generation modifier properly
* Writing to `kobold.settings.numseqs` from a generation modifier no
  longer affects
* Formatting options in `kobold.settings` have been fixed
* Added aliases for setting names
* Fix behaviour of editing story chunks from a generation modifier
* Warnings are now yellow instead of red
* kobold.logits is now the raw logits prior to being filtered, like
  the documentation says, rather than after being filtered
* Some erroneous comments and error messages have been corrected
* These parts of the API have now been implemented properly:
    * `compute_context()` methods
    * `kobold.authorsnote`
    * `kobold.restart_generation()`
2021-12-19 20:18:28 -05:00
Gnome Ann 6aba869fb7 Make sure uninitialized WI entries are given UIDs when loading saves 2021-12-18 18:00:06 -05:00
Gnome Ann 769333738d Fix behaviour of `kobold.outputs` with read-only and no prompt gen 2021-12-17 12:59:01 -05:00
henk717 6d9063fb8b No Prompt Gen
Allow people to enter a prompt without generating anything by the AI. Combined with Always Add Prompt, this is a very useful feature that allows people to write world information first and then do a specific action. This mimics the behavior previously seen in AI Dungeon forks, where it prompts for world information and then asks for an action, and can be particularly useful for people who want the prompt to always be part of the generation.
2021-12-16 12:47:44 +01:00
henk717 f3b4ecabca
Merge pull request #44 from VE-FORBRYDERNE/patch
Fix an error that occurs when all layers are on second GPU
2021-12-16 01:43:03 +01:00
henk717 e3d9c2d690 New download mechanism
Automatically converts Huggingface cache models to full models on (down)load.
WARNING: This wipes the old cache/ dir inside the KoboldAI folder; make a backup before you run these models if you are bandwidth constrained.
2021-12-16 01:40:04 +01:00
Gnome Ann 19d2356253 Fix an error that occurs when all layers are on second GPU 2021-12-15 19:03:49 -05:00
henk717 5e3e3f3578 Fix float16 models
Forcefully convert float16 models to work on the CPU
2021-12-16 00:31:51 +01:00
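A sketch of the forced conversion, assuming PyTorch (CPU kernels for half precision are missing or unusably slow):

```python
import torch

def prepare_for_cpu(model: torch.nn.Module) -> torch.nn.Module:
    """Upcast float16 weights so the model can run on the CPU."""
    if not torch.cuda.is_available():
        model = model.to(torch.float32)
    return model
```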
Gnome Ann 9097aac4a8 Show full stack trace for generator errors to help in diagnosing errors 2021-12-15 02:03:08 -05:00
Gnome Ann 2687135e05 Fix a strange bug where max tokens was capped at 1024
This seems to be related to the model config files, because only certain
models have this problem, and replacing ALL configuration files of a
"bad" model with those of a "good" model of the same type would fix the
problem.

Shouldn't be required anymore.
2021-12-15 00:45:41 -05:00
Gnome Ann 1551c45ba4 Prevent dynamic scanning from generating too many tokens 2021-12-14 23:39:04 -05:00