Commit Graph

980 Commits

Author SHA1 Message Date
Noli 6ed50ee1e9 make the throttle timer a dict to keep track of which slider has been changed 2022-03-25 20:37:45 +01:00
nolialsea 1de4944d46
Add throttle closure for settings sliders
Adds a throttling closure to add a waiting time before calling a callback;
uses this closure to throttle the event fired by socketio on slider value change
2022-03-25 20:08:56 +01:00
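
The two commits above describe a per-slider throttle: a closure delays the settings callback, and the pending timer lives in a dict keyed by slider so that moving one slider does not cancel another slider's pending update. A minimal Python sketch of the technique (all names here are illustrative; the project's actual handler lives in its frontend code):

```python
# Minimal sketch of the per-slider throttle described above.
# Identifiers are hypothetical, not the project's actual ones.
import threading

_timers = {}  # slider id -> pending threading.Timer

def throttle(slider_id, callback, wait=0.5):
    """Invoke `callback` only after `wait` seconds pass with no further
    calls for the same slider; each slider tracks its own timer."""
    pending = _timers.get(slider_id)
    if pending is not None:
        pending.cancel()  # restart the waiting period for this slider only
    timer = threading.Timer(wait, callback)
    _timers[slider_id] = timer
    timer.start()

# e.g. on every slider-change event:
# throttle("temperature", lambda: emit_setting("temperature", value))
```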
henk717 e4c72ca2e5
Merge pull request #104 from VE-FORBRYDERNE/retry-randomgame
Allow regenerating random story using Retry button
2022-03-24 12:57:04 +01:00
Gnome Ann 0348970b19 Make sure AI is not busy when using retry to regenerate random story 2022-03-23 22:09:35 -04:00
Gnome Ann 4832dd6f37 Allow regenerating random story using Retry button
Commit b55e5a8e0b removed this feature, so this commit adds it back.
2022-03-23 13:39:46 -04:00
henk717 38d78d10db
Merge pull request #103 from VE-FORBRYDERNE/neox
Divide GPT-NeoX replicated bias layers by 4 again instead of by 8
2022-03-21 02:19:32 +01:00
henk717 cf99f02ca5 Merge branch 'main' into united 2022-03-20 19:22:53 +01:00
henk717 20eab085dd Fix AutoSave Toggle 2022-03-20 19:12:11 +01:00
henk717 5c795609e4 KML Fix 2022-03-20 13:10:56 +01:00
Gnome Ann 73aecc0510 Divide NeoX replicated bias layers by 4 again instead of by 8 2022-03-20 01:04:55 -04:00
henk717 f1487a4551 New Linux Runtime 2022-03-20 00:00:21 +01:00
henk717 a7f652f293
Merge pull request #101 from VE-FORBRYDERNE/neox
GPT-NeoX-20B support in Colab TPU instances
2022-03-19 09:56:15 +01:00
Gnome Ann 05fc46b253 Changing this again to divide by 8 2022-03-19 02:09:41 -04:00
Gnome Ann b1125a6705 Add EOS and padding token to default NeoX badwords 2022-03-19 01:30:02 -04:00
Gnome Ann 6c20d0d657 Never mind, dividing by 4 is actually correct... 2022-03-19 00:55:04 -04:00
Gnome Ann f16b61ec77 Should divide NeoX replicated parameters by 8 (not by 4)
Also suppresses the PyTorch 1.11 warning about transposing tensors with ndim != 2 in the new code
2022-03-19 00:48:33 -04:00
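
The back-and-forth over dividing NeoX's replicated bias layers by 4 vs. 8 comes down to one invariant: if each of N shards stores bias/N and the forward pass sums the shards' partial results, the original bias is recovered exactly, so the divisor must match the effective replica count. A toy illustration of that arithmetic (not the repo's loading code):

```python
# Toy illustration only: why a replicated parameter gets divided by the
# number of shards whose partial outputs are later summed (psum-style).
import numpy as np

shards = 4
bias = np.array([1.0, 2.0, 3.0])
per_shard = bias / shards                       # what each shard stores
summed = sum(per_shard for _ in range(shards))  # what the sum recovers
assert np.allclose(summed, bias)
```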
Gnome Ann c2c139e940 Change default PE type for NeoX to `neox_rotary` 2022-03-19 00:26:04 -04:00
Gnome Ann 85a4959efa Merge branch 'united' into neox 2022-03-18 11:19:03 -04:00
henk717 f581fe89cb Torch version changes 2022-03-17 21:11:36 +01:00
henk717 9e9c1c3fe0
Merge pull request #100 from VE-FORBRYDERNE/patch
Add PyTorch 1.11 support for lazy loader
2022-03-17 21:06:38 +01:00
Gnome Ann c444260eac Silence PyTorch warning about transposing tensors with dimension != 2 2022-03-17 15:16:56 -04:00
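For reference, silencing that class of warning is typically a message filter, roughly as sketched below (the warning's exact wording is approximated here, not quoted from PyTorch):

```python
# Sketch: suppress PyTorch 1.11's deprecation warning about using .T on
# tensors with ndim != 2. The message pattern is approximate.
import warnings

warnings.filterwarnings(
    "ignore",
    message=r"The use of `x\.T` on tensors of dimension other than 2",
)
```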
Gnome Ann ef21ab9c91 PyTorch 1.9 lazy loader compatibility bugfix 2022-03-17 14:10:51 -04:00
Gnome Ann eaf190469d Add PyTorch 1.11 support for lazy loader 2022-03-17 12:51:41 -04:00
henk717 9235754eb9 Dependency Fixes 2022-03-17 00:35:59 +01:00
henk717 a3e5e052b3 Newer umamba + slope tweak 2022-03-16 18:34:02 +01:00
Gnome Ann 95c4251db9 Print two newlines before loading HF models 2022-03-15 13:58:53 -04:00
Gnome Ann 9e2848e48f Show parameter count when loading GPT-NeoX in Colab TPU instance 2022-03-15 13:55:27 -04:00
Gnome Ann 9dc48b15f0 Add custom badwords and pad token ID for GPT-NeoX 2022-03-14 23:31:49 -04:00
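"Badwords" here are token IDs the sampler is forbidden to emit; in Hugging Face-style generation they are passed as bad_words_ids, and an explicit pad token ID covers models that define none. A hedged sketch (token choices illustrative, not the repo's actual NeoX list):

```python
# Illustrative only: how badword token IDs and a pad token ID are wired
# into Hugging Face-style generation. The repo's NeoX list differs.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
bad_words_ids = [[tokenizer.eos_token_id]]  # forbid ending the text early

# model.generate(input_ids,
#                bad_words_ids=bad_words_ids,
#                pad_token_id=tokenizer.eos_token_id)
```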
Gnome Ann 88f247d535 GPT-NeoX-20B support in Colab TPU instances 2022-03-14 23:14:20 -04:00
henk717 4892556059 Model saving for colab mode 2022-03-13 11:22:44 +01:00
henk717 ccadeabbde
Merge pull request #99 from VE-FORBRYDERNE/model-patch
Model loading fixes
2022-03-13 11:10:15 +01:00
Gnome Ann 2b8c46338e Change current working directory to KoboldAI folder 2022-03-13 01:22:11 -05:00
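That fix is the standard "anchor relative paths to the script's own folder" idiom, roughly:

```python
# Sketch: make relative paths (models, saves, configs) resolve against
# the KoboldAI folder no matter where the process was launched from.
import os

os.chdir(os.path.dirname(os.path.realpath(__file__)))
```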
Gnome Ann 48d07adb54 Also fallback to generic GPT2 tokenizer in Colab TPU instances 2022-03-12 23:19:35 -05:00
henk717 d29a629320
Merge pull request #98 from ebolam/united
Fix for retry
2022-03-12 16:52:07 +01:00
ebolam 45eed78d21 Merge branch 'united' of https://github.com/ebolam/KoboldAI into united 2022-03-12 10:33:01 -05:00
ebolam b55e5a8e0b Retry Bug Fix 2022-03-12 10:32:27 -05:00
henk717 2e1b3c82f9
Merge pull request #97 from ebolam/united
Fix for retry causing issues for future redo actions
2022-03-11 17:41:49 +01:00
ebolam ae854bab3d Fix for retry causing issues for future redo actions 2022-03-11 11:40:55 -05:00
henk717 2c66461c14
Merge pull request #96 from VE-FORBRYDERNE/dlpack
Use DLPack to convert PyTorch tensors to JAX arrays
2022-03-10 22:00:38 +01:00
Gnome Ann a99eb8724d Use DLPack to convert PyTorch tensors to JAX arrays 2022-03-10 15:12:42 -05:00
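DLPack lets the two frameworks hand off a tensor without copying through NumPy. A minimal sketch of the conversion (API details vary across JAX versions):

```python
# Minimal sketch: zero-copy handoff of a PyTorch tensor to JAX via DLPack.
import torch
import jax.dlpack
from torch.utils import dlpack as torch_dlpack

t = torch.arange(4, dtype=torch.float32)
capsule = torch_dlpack.to_dlpack(t)    # export the tensor as a DLPack capsule
arr = jax.dlpack.from_dlpack(capsule)  # wrap it as a JAX array
```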
henk717 b02d5e8696 Allows missing model_config again 2022-03-10 19:59:10 +01:00
henk717 172a548fa1 Fallback to generic GPT2 Tokenizer 2022-03-10 19:52:15 +01:00
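The fallback pattern behind this commit (and the Colab TPU follow-up above), sketched with simplified error handling:

```python
# Sketch of the tokenizer fallback: prefer the model's own tokenizer,
# and fall back to the stock GPT-2 tokenizer if loading fails (many
# models in this family share GPT-2's BPE vocabulary).
from transformers import AutoTokenizer, GPT2Tokenizer

def load_tokenizer(model_path):
    try:
        return AutoTokenizer.from_pretrained(model_path)
    except Exception:
        return GPT2Tokenizer.from_pretrained("gpt2")
```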
henk717 68281184bf Remove Lowmem from TPU 2022-03-09 19:21:15 +01:00
henk717 9dee9b5c6d Ignore incorrect problems 2022-03-09 12:03:37 +01:00
henk717 a28e553412 Remove unused gettokenids 2022-03-09 11:59:33 +01:00
henk717 7434c9221b Expand OAI Setting Compatibility 2022-03-07 08:56:47 +01:00
ebolam f6c95f18fa
Fix for Redo (#94)
* Corrected redo to skip blank steps (blank from "deleting" the chunk with the edit function)

* Removed debug code
2022-03-06 23:18:14 +01:00
henk717 f857696224 OAI ConfigName Bugfix 2022-03-06 20:18:42 +01:00
henk717 3ddc9647eb Basic GooseAI Support 2022-03-06 20:10:30 +01:00
henk717 f1b0ea711e Merge branch 'KoboldAI:main' into united 2022-03-06 19:02:59 +01:00