73aecc0510 | Gnome Ann | 2022-03-20 01:04:55 -04:00
    Divide NeoX replicated bias layers by 4 again instead of by 8

f1487a4551 | henk717 | 2022-03-20 00:00:21 +01:00
    New Linux Runtime

a7f652f293 | henk717 | 2022-03-19 09:56:15 +01:00
    Merge pull request #101 from VE-FORBRYDERNE/neox
    GPT-NeoX-20B support in Colab TPU instances

05fc46b253 | Gnome Ann | 2022-03-19 02:09:41 -04:00
    Changing this again to divide by 8

b1125a6705 | Gnome Ann | 2022-03-19 01:30:02 -04:00
    Add EOS and padding token to default NeoX badwords
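
The commit above adds the model's end-of-sequence and padding token IDs to the default bad-words list, so the sampler can never emit them mid-story. A minimal sketch of that idea; the token IDs below are placeholders, not the real GPT-NeoX-20B vocabulary IDs:

```python
# Hypothetical base list of banned token IDs (illustrative value only).
DEFAULT_BADWORDS = [28734]

def build_badwords(eos_token_id, pad_token_id, base=DEFAULT_BADWORDS):
    """Return the base bad-words list plus the EOS and padding token IDs,
    deduplicated and with the original order preserved."""
    out = list(base)
    for tok in (eos_token_id, pad_token_id):
        if tok not in out:
            out.append(tok)
    return out
```

With hypothetical IDs 0 (EOS) and 1 (padding), `build_badwords(0, 1)` yields `[28734, 0, 1]`.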
6c20d0d657 | Gnome Ann | 2022-03-19 00:55:04 -04:00
    Nevermind, dividing by 4 is actually correct...

f16b61ec77 | Gnome Ann | 2022-03-19 00:48:33 -04:00
    Should divide NeoX replicated parameters by 8 (not by 4)
    Also, suppresses the PyTorch 1.11 warning about transposing tensors with
    ndim != 2 in the new code
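
The back-and-forth above (divide by 4, then 8, then 4 again) concerns a parameter that every model-parallel shard holds a copy of: when the shards' copies are summed, dividing by the replica count recovers the original value, and the wrong divisor scales the bias by a factor of two. A pure-Python sketch of that reasoning, with `num_shards` standing in for the model-parallel width (the real code runs on TPU shards):

```python
def merge_replicated(shard_values, num_shards):
    """Average a parameter replicated across `num_shards` model-parallel
    shards. Summing the copies and dividing by the replica count recovers
    the original value; the 4-vs-8 commits above are fixing this divisor."""
    assert len(shard_values) == num_shards
    return sum(shard_values) / num_shards
```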
c2c139e940 | Gnome Ann | 2022-03-19 00:26:04 -04:00
    Change default PE type for NeoX to `neox_rotary`

85a4959efa | Gnome Ann | 2022-03-18 11:19:03 -04:00
    Merge branch 'united' into neox

f581fe89cb | henk717 | 2022-03-17 21:11:36 +01:00
    Torch version changes

9e9c1c3fe0 | henk717 | 2022-03-17 21:06:38 +01:00
    Merge pull request #100 from VE-FORBRYDERNE/patch
    Add PyTorch 1.11 support for lazy loader

c444260eac | Gnome Ann | 2022-03-17 15:16:56 -04:00
    Silence PyTorch warning about transposing tensors with dimension != 2
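
One common way to silence a specific warning like the PyTorch 1.11 transpose warning is the standard-library `warnings` filter, matched by message pattern so other warnings still surface (a sketch of the technique; the commit's actual mechanism may differ, and the warning text below is a stand-in):

```python
import warnings

def noisy_transpose():
    # Stand-in for the PyTorch call that warns when transposing a tensor
    # whose number of dimensions is not 2.
    warnings.warn("An output with one or more elements was transposed ...",
                  UserWarning)
    return "transposed"

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")          # surface every warning...
    warnings.filterwarnings("ignore",        # ...except ones matching this
                            message=".*transposed.*", category=UserWarning)
    result = noisy_transpose()
```

Scoping the filter with `catch_warnings` keeps the suppression local instead of muting the warning process-wide.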
ef21ab9c91 | Gnome Ann | 2022-03-17 14:10:51 -04:00
    PyTorch 1.9 lazy loader compatibility bugfix

eaf190469d | Gnome Ann | 2022-03-17 12:51:41 -04:00
    Add PyTorch 1.11 support for lazy loader
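
The lazy loader these commits maintain defers reading checkpoint tensors until they are actually needed, which keeps peak RAM low. A language-level sketch of that deferral idea using a proxy object (the real code hooks into torch's unpickling machinery instead):

```python
class LazyTensor:
    """Placeholder that records how to read a tensor from the checkpoint
    and only materializes it on first access."""

    def __init__(self, loader):
        self._loader = loader      # callable performing the expensive read
        self._value = None
        self.materialized = False

    def get(self):
        if not self.materialized:  # read from "disk" at most once
            self._value = self._loader()
            self.materialized = True
        return self._value

expensive_reads = []

def read_from_disk():
    expensive_reads.append(1)      # count how often the "disk" is touched
    return [1.0, 2.0, 3.0]

t = LazyTensor(read_from_disk)
```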
9235754eb9 | henk717 | 2022-03-17 00:35:59 +01:00
    Dependency Fixes

a3e5e052b3 | henk717 | 2022-03-16 18:34:02 +01:00
    Newer umamba + slope tweak

95c4251db9 | Gnome Ann | 2022-03-15 13:58:53 -04:00
    Print two newlines before loading HF models

9e2848e48f | Gnome Ann | 2022-03-15 13:55:27 -04:00
    Show parameter count when loading GPT-NeoX in Colab TPU instance

9dc48b15f0 | Gnome Ann | 2022-03-14 23:31:49 -04:00
    Add custom badwords and pad token ID for GPT-NeoX

88f247d535 | Gnome Ann | 2022-03-14 23:14:20 -04:00
    GPT-NeoX-20B support in Colab TPU instances

4892556059 | henk717 | 2022-03-13 11:22:44 +01:00
    Model saving for colab mode

ccadeabbde | henk717 | 2022-03-13 11:10:15 +01:00
    Merge pull request #99 from VE-FORBRYDERNE/model-patch
    Model loading fixes

2b8c46338e | Gnome Ann | 2022-03-13 01:22:11 -05:00
    Change current working directory to KoboldAI folder

48d07adb54 | Gnome Ann | 2022-03-12 23:19:35 -05:00
    Also fallback to generic GPT2 tokenizer in Colab TPU instances

d29a629320 | henk717 | 2022-03-12 16:52:07 +01:00
    Merge pull request #98 from ebolam/united
    Fix for retry

45eed78d21 | ebolam | 2022-03-12 10:33:01 -05:00
    Merge branch 'united' of https://github.com/ebolam/KoboldAI into united

b55e5a8e0b | ebolam | 2022-03-12 10:32:27 -05:00
    Retry Bug Fix

2e1b3c82f9 | henk717 | 2022-03-11 17:41:49 +01:00
    Merge pull request #97 from ebolam/united
    Fix for retry causing issues for future redo actions

ae854bab3d | ebolam | 2022-03-11 11:40:55 -05:00
    Fix for retry causing issues for future redo actions
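
The retry/redo fixes above deal with blank story chunks, left behind when a chunk is "deleted" by editing it to an empty string, confusing later redo actions. A sketch of the skip-blanks idea against a hypothetical list of future actions (not KoboldAI's actual story data structure):

```python
def redo_target(future_actions):
    """Return the next non-blank action to redo, skipping blank entries
    left behind when a chunk was 'deleted' via the edit function."""
    for action in future_actions:
        if action.strip():
            return action
    return None
```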
2c66461c14 | henk717 | 2022-03-10 22:00:38 +01:00
    Merge pull request #96 from VE-FORBRYDERNE/dlpack
    Use DLPack to convert PyTorch tensors to JAX arrays

a99eb8724d | Gnome Ann | 2022-03-10 15:12:42 -05:00
    Use DLPack to convert PyTorch tensors to JAX arrays
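
DLPack is a zero-copy tensor-exchange protocol: a PyTorch tensor exported with `torch.utils.dlpack.to_dlpack` and consumed with `jax.dlpack.from_dlpack` shares the same underlying buffer rather than being copied through an intermediate format. Since neither framework is assumed available here, a standard-library analogue of the same principle, two views over one buffer, is:

```python
import array

# A buffer of doubles, playing the role of the PyTorch tensor's storage.
buf = array.array("d", [1.0, 2.0, 3.0, 4.0])

# A zero-copy view over that storage, playing the role of the JAX array
# produced from a DLPack capsule: no bytes are duplicated.
view = memoryview(buf)

# Writes through one handle are visible through the other, which is
# exactly what "zero-copy" buys over a convert-and-copy path.
buf[0] = 99.0
```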
b02d5e8696 | henk717 | 2022-03-10 19:59:10 +01:00
    Allows missing model_config again

172a548fa1 | henk717 | 2022-03-10 19:52:15 +01:00
    Fallback to generic GPT2 Tokenizer
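
The tokenizer fallback above is a try-in-order pattern: attempt the model's own tokenizer first and fall back to a generic GPT2 tokenizer if that fails. A sketch with hypothetical loader callables standing in for the real `from_pretrained` calls:

```python
def load_tokenizer(model_id, loaders):
    """Try each tokenizer loader in order, moving to the next on failure.
    The last entry plays the role of the generic GPT2 fallback."""
    last_err = None
    for load in loaders:
        try:
            return load(model_id)
        except Exception as err:
            last_err = err
    raise last_err

def model_specific(model_id):
    # Stand-in for loading the model's own tokenizer files.
    raise OSError("no tokenizer files found for " + model_id)

def generic_gpt2(model_id):
    # Stand-in for the generic GPT2 tokenizer fallback.
    return "gpt2-tokenizer"

tok = load_tokenizer("some/model", [model_specific, generic_gpt2])
```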
68281184bf | henk717 | 2022-03-09 19:21:15 +01:00
    Remove Lowmem from TPU

9dee9b5c6d | henk717 | 2022-03-09 12:03:37 +01:00
    Ignore incorrect problems

a28e553412 | henk717 | 2022-03-09 11:59:33 +01:00
    Remove unused gettokenids

7434c9221b | henk717 | 2022-03-07 08:56:47 +01:00
    Expand OAI Setting Compatibility

f6c95f18fa | ebolam | 2022-03-06 23:18:14 +01:00
    Fix for Redo (#94)
    * Corrected redo to skip blank steps (blank from "deleting" the chunk with the edit function)
    * Removed debug code

f857696224 | henk717 | 2022-03-06 20:18:42 +01:00
    OAI ConfigName Bugfix

3ddc9647eb | henk717 | 2022-03-06 20:10:30 +01:00
    Basic GooseAI Support
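
GooseAI exposes an OpenAI-compatible completions API, so basic support largely means pointing the existing OAI request code at a different base URL. A sketch of that routing; the URLs, engine name, and request fields below are illustrative assumptions, not a verified API reference:

```python
def build_completion_request(provider, prompt, max_tokens=80):
    """Assemble an OpenAI-style completion request for either backend.
    Base URLs and fields are assumptions for illustration; check the
    provider's documentation before relying on them."""
    base_urls = {
        "oai": "https://api.openai.com/v1",
        "goose": "https://api.goose.ai/v1",  # assumed GooseAI base URL
    }
    return {
        "url": base_urls[provider] + "/engines/example-engine/completions",
        "json": {"prompt": prompt, "max_tokens": max_tokens},
    }

req = build_completion_request("goose", "Once upon a time")
```

Keeping the provider difference to a URL lookup is what lets one settings path serve both backends.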
f1b0ea711e | henk717 | 2022-03-06 19:02:59 +01:00
    Merge branch 'KoboldAI:main' into united

932aabc2f3 | henk717 | 2022-03-06 19:02:38 +01:00
    Merge pull request #103 from henk717/main
    Modern ROCm Docker

4332074c89 | henk717 | 2022-03-06 19:01:25 +01:00
    Modern ROCm Docker
    Brings the ROCm container up to a modern standard in line with the CUDA docker.

4835192041 | henk717 | 2022-03-06 14:12:01 +01:00
    Load TK on demand

daea4b8d15 | henk717 | 2022-03-06 08:26:50 +01:00
    Fix Breakmodel RAM Regression

105d3831b5 | henk717 | 2022-03-06 07:56:04 +01:00
    Lazy Load Float32 for CPU

77cc2ee789 | henk717 | 2022-03-05 20:32:31 +01:00
    Merge pull request #93 from VE-FORBRYDERNE/lazy-loader
    Lazy loader

373f7b9bd5 | Gnome Ann | 2022-03-05 14:30:26 -05:00
    Don't convert tensors to float16 if using CPU-only mode
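
The two float-precision commits above ("Lazy Load Float32 for CPU", "Don't convert tensors to float16 if using CPU-only mode") share one rule: half precision is a memory win on accelerators, but CPUs handle float16 poorly, so CPU-only mode should keep float32. A sketch of that load-time decision (dtype names as strings; the real code operates on torch tensors):

```python
def target_dtype(device):
    """Dtype tensors should be cast to at load time: half precision on an
    accelerator, full float32 in CPU-only mode where float16 is slow or
    unsupported."""
    return "float32" if device == "cpu" else "float16"
```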
579e85820c | Gnome Ann | 2022-03-05 14:13:56 -05:00
    Resolve merge conflict

2e19ea1bb6 | Gnome Ann | 2022-03-05 14:07:23 -05:00
    Auto detect if we're in a Colab TPU instance
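
One common way to auto-detect a Colab TPU runtime is the `COLAB_TPU_ADDR` environment variable, which Colab sets to the TPU's grpc address when a TPU is attached. A sketch of that check (the detection mechanism is an assumption about this commit, and the address below is a made-up example):

```python
import os

def in_colab_tpu_instance(environ=os.environ):
    """Heuristic TPU detection: true when COLAB_TPU_ADDR is set and
    non-empty. Assumed mechanism, shown for illustration only."""
    return bool(environ.get("COLAB_TPU_ADDR", "").strip())
```

Passing the environment as a parameter keeps the check testable without touching the real process environment.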