661cca63e8
Make sure stopping criteria still work with dynamic scan off
2021-12-13 18:10:51 -05:00
338d437ea3
Use eventlet instead of gevent-websocket
2021-12-13 17:19:04 -05:00
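For context, a minimal sketch of what the async backend swap typically looks like with Flask-SocketIO, which picks eventlet automatically once it is installed (the exact server setup in aiserver.py may differ):
```
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
# Pin the async backend explicitly; with gevent-websocket this would have
# been async_mode="gevent" plus the gevent-websocket transport.
socketio = SocketIO(app, async_mode="eventlet")
```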
34c52a1a23
Remove escape characters from all error messages
2021-12-13 11:47:34 -05:00
11f9866dbe
Enable more of the IO library in Lua sandbox
...
Also changes the Lua warning color to red.
2021-12-13 11:22:58 -05:00
28e86563b8
Change self.scores to scores in aiserver.py
2021-12-13 11:18:01 -05:00
82e149ee02
Catch Lua errors properly
2021-12-13 02:32:09 -05:00
5f06d20085
Format Lua printed messages and warnings
2021-12-13 01:59:53 -05:00
d2f5544468
Add Userscripts menu into GUI
2021-12-13 01:03:26 -05:00
5d13339a52
Allow the retry button to call the Lua scripts properly
2021-12-12 20:48:10 -05:00
39bfb0862a
Allow user input to be modified from Lua
...
Also adds some handlers in the Lua code for when the game is not started
yet
2021-12-12 20:44:03 -05:00
fbf3e7615b
Add API for generated tokens and output text
2021-12-12 19:27:20 -05:00
ceabd2ef7b
Add Lua API for editing logits during generation
...
TPU backend not supported yet.
2021-12-12 16:18:45 -05:00
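A hypothetical sketch of how such an API plugs into the transformers sampling loop: a LogitsProcessor that hands the raw scores to the Lua side on every generation step and feeds the edited values back in. The lua_genmod stand-in below is an assumed name, not necessarily the project's actual bridge hook:
```
from transformers import LogitsProcessor

def lua_genmod(input_ids, scores):
    # Stand-in for the Lua bridge call; the real project would forward
    # the scores to user scripts through bridge.lua.
    return scores

class LuaLogitsProcessor(LogitsProcessor):
    def __call__(self, input_ids, scores):
        return lua_genmod(input_ids, scores)
```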
e2c3ac041b
Complete the Lua generation halting API
2021-12-12 12:52:03 -05:00
d76dd35791
Add Lua API for reading model information
2021-12-12 12:09:59 -05:00
00eb125ad0
Allow Lua API to toggle dynamic scan
2021-12-12 01:55:46 -05:00
5692a7dfe2
Add Lua API for reading the text the user submitted to the AI
2021-12-12 01:52:42 -05:00
03453c4e27
Change script directory tree
...
Userscripts have been moved from /scripts/userscripts to /userscripts.
Core scripts have been moved from /scripts/corescripts to /cores.
2021-12-11 23:46:30 -05:00
36209bfe69
Add Lua API for story chunks
2021-12-11 23:44:07 -05:00
8e6a62259e
Fix the Lua tokenizer API
2021-12-11 21:24:34 -05:00
67974947b2
Fix numerous problems in the Lua world info API
2021-12-11 19:11:38 -05:00
3327f1b471
Fix Lua settings API
2021-12-11 17:01:41 -05:00
f8aa578f41
Enable generation modifiers for transformers backend only
2021-12-11 16:28:25 -05:00
e289a0d360
Connect bridge.lua to aiserver.py
...
Also enables the use of input modifiers and output modifiers, but not
generation modifiers.
2021-12-11 12:45:45 -05:00
35966b2007
Upload bridge.lua, default.lua and some Lua libs
...
base64
inspect
json.lua
Lua-hashings
Lua-nums
Moses
mt19937ar-lua
Penlight
Serpent
2021-12-10 19:45:57 -05:00
683bcb824f
Merge branch 'united' into world-info
2021-12-05 13:06:32 -05:00
6d8517e224
Fix some minor coding errors
2021-12-05 11:39:59 -05:00
150ce033c9
TPU backend no longer needs to recompile after changing softprompt
2021-12-05 02:49:15 -05:00
b99ac92a52
WI folders and WI drag-and-drop
2021-12-04 23:59:28 -05:00
44d8068bab
Ngrok Support
...
Not recommended for home users due to DDoS risks, but might make Colab tunnels more reliable.
2021-11-29 18:11:14 +01:00
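One way to open such a tunnel, assuming the pyngrok package (illustrative only; the commit may wire this up differently):
```
from pyngrok import ngrok

# Expose the local server through an ngrok tunnel; 5000 is assumed to be
# the Flask port.
tunnel = ngrok.connect(5000)
print(tunnel.public_url)
```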
9f51c42dd4
Allow bad words filter to ban <|endoftext|> token
...
The official transformers bad words filter doesn't allow this by
default; Finetune's version, however, does.
2021-11-27 11:42:06 -05:00
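A minimal sketch of one way around that restriction: a custom processor that bans arbitrary token ids unconditionally, including the <|endoftext|> id (illustrative, not necessarily this commit's exact approach):
```
from transformers import LogitsProcessor

class BanTokensLogitsProcessor(LogitsProcessor):
    """Sets the given token ids to -inf so they can never be sampled,
    even when one of them is the model's <|endoftext|> token."""

    def __init__(self, banned_ids):
        self.banned_ids = banned_ids

    def __call__(self, input_ids, scores):
        scores[:, self.banned_ids] = -float("inf")
        return scores
```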
2bc93ba37a
Whitelist 6B in breakmodel
...
Now that we properly support it, allow the menu option to use breakmodel
2021-11-27 10:09:54 +01:00
b56ee07ffa
Fix for CPU mode
...
Recent optimizations caused the CPU version to load the model in an incompatible format; we now convert it back to the correct format after first loading it efficiently.
2021-11-27 05:34:29 +01:00
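Read together with the fp16 and low_cpu_mem_usage commits below, the load path sketched here captures the idea (model_path is a placeholder): load efficiently in half precision, then up-cast on CPU, whose kernels generally lack fp16 support:
```
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    model_path,  # placeholder for the selected model directory
    torch_dtype=torch.float16,
    low_cpu_mem_usage=True,
)
if not torch.cuda.is_available():
    # CPU inference needs float32; convert back after the efficient load.
    model = model.float()
```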
e5e2fb088a
Remember to actually import GPTJModel
2021-11-26 12:38:52 -05:00
871ed65570
Remove an unnecessary **maybe_low_cpu_mem_usage()
2021-11-26 11:42:04 -05:00
a93a76eb01
Load model directly in fp16 if using GPU or breakmodel
2021-11-26 10:55:52 -05:00
32e1d4a7a8
Enable low_cpu_mem_usage
2021-11-25 18:09:25 -05:00
25c9be5d02
Breakmodel support for GPTJModel
2021-11-25 18:09:16 -05:00
f8bcc3411b
In breakmodel mode, move layers to GPU as soon as model loads
...
Rather than during the first generation.
2021-11-25 11:44:41 -05:00
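A hypothetical sketch of that eager placement for a GPT-2/GPT-J style model, where model.transformer.h holds the transformer blocks and gpu_layers is an assumed setting:
```
def move_layers_to_gpu(model, gpu_layers):
    # Done right after the weights load, rather than lazily during the
    # first generation: the first gpu_layers blocks go to the GPU, the
    # rest stay on the CPU.
    for i, block in enumerate(model.transformer.h):
        block.to("cuda:0" if i < gpu_layers else "cpu")
```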
cbb6efb656
Move TFS warper code into aiserver.py
2021-11-24 13:36:54 -05:00
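For reference, tail free sampling as a transformers LogitsWarper looks roughly like this (a sketch of the algorithm, not necessarily the exact code that was moved):
```
import torch
from transformers import LogitsWarper

class TailFreeLogitsWarper(LogitsWarper):
    def __init__(self, tfs: float, filter_value: float = -float("inf")):
        self.tfs = tfs
        self.filter_value = filter_value

    def __call__(self, input_ids, scores):
        if self.tfs >= 1.0:
            return scores
        sorted_logits, sorted_indices = torch.sort(scores, descending=True)
        probs = sorted_logits.softmax(dim=-1)

        # Absolute second finite difference of the sorted distribution,
        # normalized so its cumulative sum forms a CDF.
        d2 = probs.diff(dim=-1).diff(dim=-1).abs()
        d2 = d2 / d2.sum(dim=-1, keepdim=True)
        cdf = d2.cumsum(dim=-1)

        remove = cdf > self.tfs
        # diff().diff() is two elements shorter than probs: always keep
        # the highest-probability token and always drop the last one.
        batch = scores.shape[0]
        remove = torch.cat(
            (
                torch.zeros(batch, 1, dtype=torch.bool, device=scores.device),
                remove,
                torch.ones(batch, 1, dtype=torch.bool, device=scores.device),
            ),
            dim=-1,
        )
        indices_to_remove = remove.scatter(1, sorted_indices, remove)
        return scores.masked_fill(indices_to_remove, self.filter_value)
```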
a2c82bbcc8
num_layers fixes
...
As requested by VE_FORBRYDERNE. (Possibly implemented in too many places; needs testing, but since the current version is already broken, I am committing this first so I can test more easily.)
2021-11-24 03:44:11 +01:00
d7a2424d2d
No Half on CPU
...
Should fix CPU execution
2021-11-23 17:14:01 +01:00
be0881a8d0
Use model.config.n_layer if model.config.num_layers doesn't exist
2021-11-23 10:09:24 -05:00
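The fallback amounts to a two-line attribute check (GPT-Neo configs expose num_layers, while GPT-2/GPT-J configs expose n_layer):
```
def get_num_layers(config):
    # GPT-Neo configs expose num_layers; GPT-2/GPT-J expose n_layer.
    return getattr(config, "num_layers", None) or config.n_layer
```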
9b8bcb5516
Always convert soft prompt to float32 if using TPU backend
...
TPUs do not support float16. Attempting to use a float16 soft prompt
throws an error.
2021-11-21 18:22:10 -05:00
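The conversion itself is a one-line up-cast on the loaded soft prompt array (the function and parameter names here are placeholders):
```
import numpy as np

def to_tpu_dtype(tensor: np.ndarray) -> np.ndarray:
    # TPUs reject float16 parameters, so up-cast the soft prompt.
    return tensor.astype(np.float32) if tensor.dtype == np.float16 else tensor
```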
e068aa9f26
Add soft prompt support to TPU backend
2021-11-21 18:08:04 -05:00
df2768b745
Simplify the comment regex
2021-11-21 01:09:19 -05:00
7ab0d96b8a
Change the comment regex again to use fixed-length lookbehind
2021-11-21 01:06:31 -05:00
624cfbd5a4
Use a smarter regex for comments
...
If the beginning of the comment is at the beginning of a line AND the
end of a comment is at the end of a line, an additional newline will now
be ignored so that the AI doesn't see a blank line where the comment
was.
For example, consider the following message:
```
Hello
<|This is
a comment|>
World
```
The AI will now see this:
```
Hello
World
```
instead of this:
```
Hello

World
```
2021-11-21 00:42:57 -05:00
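An illustrative pattern with a fixed-length lookbehind that reproduces this behavior (not necessarily the project's exact comregex_ai):
```
import re

# When a <|comment|> starts right after a newline, consume its trailing
# newline too so no blank line is left behind; otherwise remove just the
# comment itself.
comregex_ai = re.compile(r"(?<=\n)<\|(?:.|\n)*?\|>\n|<\|(?:.|\n)*?\|>")

text = "Hello\n<|This is\na comment|>\nWorld"
print(comregex_ai.sub("", text))  # prints "Hello" then "World"
```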
a51f88aeb3
Also apply comment formatting to prompt in refresh_story()
2021-11-21 00:26:45 -05:00
1968be82bb
Remove comments from prompt in WI processor and InferKit mode
2021-11-20 22:23:06 -05:00
8ce8e621ce
Fix typo (one of the comregex_ui should be comregex_ai)
2021-11-20 22:19:12 -05:00