henk717
56c2e619f9
ColabKobold
...
A brand new launcher to power the Colabs; you can use https://henk.tech/ckds as a short URL that points to this GitHub.
2021-11-27 03:44:08 +01:00
henk717
3b976c9af7
Updated defaults
...
Official transformers by default, no more Git versions
2021-11-27 03:14:47 +01:00
henk717
6008d4f3a5
Merge pull request #39 from VE-FORBRYDERNE/breakmodel
...
Official transformers 6B breakmodel support and more RAM-efficient model loading
2021-11-27 01:11:48 +01:00
Gnome Ann
e5e2fb088a
Remember to actually import `GPTJModel`
2021-11-26 12:38:52 -05:00
Gnome Ann
871ed65570
Remove an unnecessary `**maybe_low_cpu_mem_usage()`
2021-11-26 11:42:04 -05:00
Gnome Ann
a93a76eb01
Load model directly in fp16 if using GPU or breakmodel
2021-11-26 10:55:52 -05:00
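For context, a minimal sketch of what loading directly in fp16 looks like with the Hugging Face API; the condition variable is a placeholder for KoboldAI's own device logic, and it assumes a transformers version that accepts the `torch_dtype` keyword:
```
# Sketch: load weights straight into fp16 when a GPU (or breakmodel) will
# host them, instead of loading in fp32 and calling .half() afterwards.
import torch
from transformers import AutoModelForCausalLM

use_gpu_or_breakmodel = True  # placeholder for the repository's device logic

model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B",
    torch_dtype=torch.float16 if use_gpu_or_breakmodel else None,
)
```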
Gnome Ann
95aff61781
Don't pin CPU layers after running out of pinned memory
2021-11-26 10:31:15 -05:00
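The fallback behaviour can be pictured as below; this is a hedged sketch, not the repository's code, and `pin_until_exhausted` is a hypothetical helper name:
```
import torch

def pin_until_exhausted(tensors):
    # Pin CPU tensors for faster host-to-device copies, but stop pinning
    # as soon as the allocator refuses to hand out more pinned memory.
    pinning = True
    out = []
    for t in tensors:
        if pinning:
            try:
                t = t.pin_memory()  # torch.Tensor.pin_memory() is a real API
            except RuntimeError:
                pinning = False  # out of pinned memory; leave the rest unpinned
        out.append(t)
    return out

layers = [torch.zeros(1024) for _ in range(4)]
layers = pin_until_exhausted(layers)
```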
Gnome Ann
32e1d4a7a8
Enable `low_cpu_mem_usage`
2021-11-25 18:09:25 -05:00
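A sketch of how the `low_cpu_mem_usage` flag (and the `maybe_low_cpu_mem_usage()` helper named two entries above) could be wired up; the version gate is an assumption, since older transformers releases reject the unknown keyword:
```
# Hypothetical reconstruction of maybe_low_cpu_mem_usage(): only pass
# low_cpu_mem_usage=True on transformers versions that accept the flag.
import packaging.version
import transformers

def maybe_low_cpu_mem_usage():
    if packaging.version.parse(transformers.__version__) >= packaging.version.parse("4.11.0"):
        return {"low_cpu_mem_usage": True}
    return {}

model = transformers.AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-j-6B", **maybe_low_cpu_mem_usage()
)
```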
Gnome Ann
25c9be5d02
Breakmodel support for GPTJModel
2021-11-25 18:09:16 -05:00
Gnome Ann
f8bcc3411b
In breakmodel mode, move layers to GPU as soon as model loads
...
Rather than during the first generation.
2021-11-25 11:44:41 -05:00
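Conceptually, the change amounts to something like this sketch; `gpu_layers` and the device strings are illustrative, though `model.transformer.h` is the real block list on GPT-Neo/GPT-J models:
```
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
gpu_layers = 14  # assumption: number of layers the user assigned to the GPU

# Move the first gpu_layers blocks to the GPU right after loading,
# rather than lazily during the first generation.
for i, block in enumerate(model.transformer.h):
    block.to("cuda:0" if i < gpu_layers else "cpu")
```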
henk717
978dc486a5
Merge pull request #38 from VE-FORBRYDERNE/warp
...
Move TFS warper code into aiserver.py
2021-11-24 23:45:28 +01:00
Gnome Ann
cbb6efb656
Move TFS warper code into aiserver.py
2021-11-24 13:36:54 -05:00
henk717
96e1d98b7e
Merge branch 'KoboldAI:main' into united
2021-11-24 08:24:08 +01:00
henk717
36b9161667
Portability Bugfix
...
Fix an issue where the launcher does not work if the drive is not C: on some systems.
2021-11-24 08:23:08 +01:00
henk717
a2c82bbcc8
num_layers fixes
...
As requested by VE-FORBRYDERNE. (Possibly implemented it in too many places; it needs testing, but since the other one is already broken I am committing it first so I can test more easily.)
2021-11-24 03:44:11 +01:00
henk717
d7a2424d2d
No Half on CPU
...
Should fix CPU execution
2021-11-23 17:14:01 +01:00
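The guard implied here is roughly the following; a sketch only, with a placeholder condition rather than the repository's variable:
```
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
use_gpu_or_breakmodel = False  # placeholder for the repository's device check

if use_gpu_or_breakmodel:
    model = model.half()   # fp16 is fine on the GPU
else:
    model = model.float()  # keep fp32 on CPU, where many fp16 ops fail or crawl
```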
henk717
11c64c3fe7
Merge pull request #37 from VE-FORBRYDERNE/patch
...
Use model.config.n_layer if model.config.num_layers doesn't exist
2021-11-23 17:02:51 +01:00
Gnome Ann
be0881a8d0
Use model.config.n_layer if model.config.num_layers doesn't exist
2021-11-23 10:09:24 -05:00
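A minimal sketch of the fallback: GPT-Neo configs expose `num_layers`, while GPT-2/GPT-J style configs call the same field `n_layer`:
```
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

num_layers = getattr(model.config, "num_layers", None)
if num_layers is None:
    num_layers = model.config.n_layer  # GPT-2/GPT-J naming
```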
henk717
c0df03fc55
Merge pull request #36 from VE-FORBRYDERNE/sp
...
Fix a typo in tpu_mtj_backend.py
2021-11-23 14:23:10 +01:00
Gnome Ann
691febacd6
Fix a typo in tpu_mtj_backend.py
2021-11-22 12:53:19 -05:00
henk717
d877190258
Merge pull request #35 from VE-FORBRYDERNE/sp
...
Softprompt support for the TPU backend
2021-11-22 00:33:31 +01:00
Gnome Ann
9b8bcb5516
Always convert soft prompt to float32 if using TPU backend
...
TPUs do not support float16. Attempting to use a float16 soft prompt
throws an error.
2021-11-21 18:22:10 -05:00
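The conversion itself is a one-liner; a sketch with stand-in data, since the real soft prompt is loaded from a file:
```
import numpy as np

# Stand-in for a soft prompt tensor that was saved in half precision:
soft_tokens = np.zeros((20, 4096), dtype=np.float16)

# TPU backend path: always upcast, since the TPU rejects float16 inputs.
soft_tokens = soft_tokens.astype(np.float32)
```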
Gnome Ann
e068aa9f26
Add soft prompt support to TPU backend
2021-11-21 18:08:04 -05:00
henk717
a60e7d3310
Merge pull request #34 from VE-FORBRYDERNE/comments
...
Add support for comments
2021-11-21 16:24:37 +01:00
Gnome Ann
df2768b745
Simplify the comment regex
2021-11-21 01:09:19 -05:00
Gnome Ann
7ab0d96b8a
Change the comment regex again to use fixed-length lookbehind
2021-11-21 01:06:31 -05:00
Gnome Ann
a1c378deea
Fix CSS issues when editing a chunk that has a comment
2021-11-21 00:48:43 -05:00
Gnome Ann
624cfbd5a4
Use a smarter regex for comments
...
If the beginning of the comment is at the beginning of a line AND the
end of the comment is at the end of a line, an additional newline will now
be ignored so that the AI doesn't see a blank line where the comment
was.
For example, consider the following message:
```
Hello
<|This is
a comment|>
World
```
The AI will now see this:
```
Hello
World
```
instead of this:
```
Hello

World
```
2021-11-21 00:42:57 -05:00
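A hedged reconstruction of such a regex (not necessarily the exact `comregex_ai` from aiserver.py): when the comment occupies lines of its own, the leading newline is consumed together with the comment, so no blank line survives:
```
import re

# First alternative: a comment that starts right after a newline and is
# followed by a newline (or end of string) -- swallow the leading newline.
# Second alternative: an inline comment -- remove it plus a trailing newline.
comregex_ai = re.compile(r"(?:\n<\|(?:.|\n)*?\|>(?=\n|$))|(?:<\|(?:.|\n)*?\|>\n?)")

text = "Hello\n<|This is\na comment|>\nWorld"
print(comregex_ai.sub("", text))  # prints "Hello" and "World" on adjacent lines
```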
Gnome Ann
a51f88aeb3
Also apply comment formatting to prompt in `refresh_story()`
2021-11-21 00:26:45 -05:00
Gnome Ann
1968be82bb
Remove comments from prompt in WI processor and InferKit mode
2021-11-20 22:23:06 -05:00
Gnome Ann
8ce8e621ce
Fix typo (one of the `comregex_ui` should be `comregex_ai`)
2021-11-20 22:19:12 -05:00
henk717
d7edd9d04b
Merge pull request #33 from VE-FORBRYDERNE/loader
...
Fix a typo in requirements_mtj.txt
2021-11-21 04:09:12 +01:00
Gnome Ann
c2ed31de28
Add syntax for comments <|...|>
2021-11-20 01:27:57 -05:00
Gnome Ann
68e4b66fc5
Fix a typo in requirements_mtj.txt
2021-11-19 22:28:34 -05:00
henk717
409be6645a
Finetune version of ROCm
...
Separate file so people can easily go back to the legacy implementation based on finetune's fork (recommended until Hugging Face compatibility improves). You can install and use both.
2021-11-20 03:14:18 +01:00
henk717
50defbaa04
Merge pull request #32 from VE-FORBRYDERNE/loader
...
Move the TPU backend code into this repository
2021-11-20 01:01:18 +01:00
Gnome Ann
286ed51534
Add a requirements.txt for TPU backend
2021-11-19 18:20:02 -05:00
Gnome Ann
a65c4de840
Integrate TPU backend
...
This commit puts the TPU backend code directly into the KoboldAI codebase
to make it easier to modify.
2021-11-19 18:06:57 -05:00
henk717
b926170fb0
Merge branch 'KoboldAI:main' into united
2021-11-19 00:05:21 +01:00
henk717
4e791b2f2d
Merge pull request #82 from VE-FORBRYDERNE/editor
...
Fix some editor issues in Firefox and possibly mobile browsers
2021-11-19 00:04:31 +01:00
Gnome Ann
bb51198f40
Fix some editor issues in Firefox and possibly mobile browsers
...
When Firefox 93.0 was released, it broke the ability to edit text
across multiple chunks or across multiple paragraphs; if you tried,
nothing would happen.
Also, we are no longer using MutationObserver to detect when a chunk
is modified. We are now using the beforeinput event.
2021-11-18 13:18:18 -05:00
henk717
4a678deaa5
Merge branch 'KoboldAI:main' into united
2021-11-18 06:51:44 +01:00
henk717
9b73d6a913
Merge pull request #81 from VE-FORBRYDERNE/patch
...
Replace slashes in model name with underscores
2021-11-18 06:51:21 +01:00
henk717
b25c54cf91
Polishing and Optimizations
...
Multiple things have changed. For now, models default to half mode even on the official transformers to make sure they are as efficient on the GPU as finetune's fork. GPU selection is streamlined, and cache files are now stored inside the KoboldAI folder (for the most part). A command line parameter that forces models to run at their full size still needs to be added for the few users who would want a quality bump at the cost of RAM.
2021-11-18 00:06:57 +01:00
henk717
27ee45b9cc
Merge pull request #31 from VE-FORBRYDERNE/cpu
...
Fix gen_in device logic in generate()
2021-11-17 22:42:31 +01:00
Gnome Ann
2f0b673b28
Fix gen_in device logic in generate()
2021-11-17 16:37:37 -05:00
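The underlying invariant can be sketched like this (illustrative, not the repository's exact code): the input ids must live on the same device as the model's input embeddings before `generate()` runs:
```
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")

gen_in = tokenizer("Hello", return_tensors="pt").input_ids

# Whether the model is on CPU, one GPU, or split via breakmodel, the
# embedding layer's device is the one the input tensor must match.
device = model.get_input_embeddings().weight.device
gen_in = gen_in.to(device)

out = model.generate(gen_in, max_new_tokens=10)
```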
henk717
e71271933a
Merge pull request #29 from VE-FORBRYDERNE/hidden-size
...
Fix hidden size detection for GPTJForCausalLM
2021-11-17 22:30:24 +01:00
henk717
26eb2cb6ce
Merge pull request #30 from VE-FORBRYDERNE/dynamic-scan
...
Support for multiple gens per action with dynamic scan
2021-11-17 22:30:12 +01:00
Gnome Ann
a1bc10246c
Support for multiple gens per action with dynamic scan
2021-11-17 16:17:59 -05:00
henk717
485034b6bb
ROCm Conda
...
Allows anyone to easily create a ROCm-compatible conda environment. Currently set to the newer transformers; you can edit the GitHub link if you want the finetune one.
2021-11-17 22:15:01 +01:00