Compare commits

...

10 Commits

Author SHA1 Message Date
Stepan Zuev 21b69ad676 configurable tmp path 2024-04-05 07:42:11 +03:00
Stepan Zuev 6f71fa65fb smart transcript fix 2024-04-05 07:00:07 +03:00
Stepan Zuev 871ae700df Merge branch 'master' of github.com:zuev-stepan/VoiceCraft 2024-04-05 04:44:54 +03:00
Stepan Zuev 94e9f9bd42 README update, gradio_app.ipynb update, debug print removed 2024-04-05 04:40:57 +03:00
Puyuan Peng 0d19fa5d03 fix whisperx loading issue, update generation instruction 2024-04-04 20:31:07 -05:00
Puyuan Peng 97b1f51947 install whisperx deps 2024-04-04 20:26:28 -05:00
zuev-stepan bbe3437b8d Merge branch 'jasonppy:master' into master 2024-04-05 02:56:01 +03:00
jason-on-salt-a40 2506954b64 modify the Dockerfile, download correct lib versions 2024-04-04 12:49:37 -07:00
Dat Tran d7bd72237f update start jupyter script 2024-04-03 14:23:28 +02:00
Dat Tran 1a1390e587 add Dockerfile 2024-04-03 14:21:46 +02:00
7 changed files with 97 additions and 212 deletions

Dockerfile (new file, +26 lines)

@@ -0,0 +1,26 @@
FROM jupyter/base-notebook:python-3.9.13
USER root
# Install OS dependencies
RUN apt-get update && apt-get install -y git-core ffmpeg espeak-ng && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# Update Conda, create the voicecraft environment, and install dependencies
RUN conda update -y -n base -c conda-forge conda && \
conda create -y -n voicecraft python=3.9.16 && \
conda run -n voicecraft conda install -y -c conda-forge montreal-forced-aligner=2.2.17 openfst=1.8.2 kaldi=5.5.1068 && \
conda run -n voicecraft pip install -e git+https://github.com/facebookresearch/audiocraft.git@c5157b5bf14bf83449c17ea1eeb66c19fb4bc7f0#egg=audiocraft && \
conda run -n voicecraft pip install xformers==0.0.22 && \
conda run -n voicecraft pip install torch==2.0.1 && \
conda run -n voicecraft pip install torchaudio==2.0.2 && \
conda run -n voicecraft pip install tensorboard==2.16.2 && \
conda run -n voicecraft pip install phonemizer==3.2.1 && \
conda run -n voicecraft pip install datasets==2.16.0 && \
conda run -n voicecraft pip install torchmetrics==0.11.1
# Install the Jupyter kernel
RUN conda install -n voicecraft ipykernel --update-deps --force-reinstall -y && \
conda run -n voicecraft python -m ipykernel install --name=voicecraft
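The image pins exact versions of every dependency. After a build, a quick sanity check along these lines (a hypothetical helper script, not part of this change) confirms the pins resolved inside the voicecraft environment:

```python
# check_env.py -- hypothetical sanity check, not part of this change.
# Run inside the image: conda run -n voicecraft python check_env.py
import torch
import torchaudio

# Version pins taken from the Dockerfile above.
assert torch.__version__.startswith("2.0.1"), torch.__version__
assert torchaudio.__version__.startswith("2.0.2"), torchaudio.__version__
print("CUDA available:", torch.cuda.is_available())
```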

README.md

@@ -21,8 +21,8 @@ To clone or edit an unseen voice, VoiceCraft needs only a few seconds of referen
- [ ] HuggingFace Spaces demo
- [ ] Better guidance on training/finetuning
## How to run TTS inference
There are two ways:
1. with docker. see [quickstart](#quickstart)
2. without docker. see [environment setup](#environment-setup)
@@ -31,7 +31,7 @@ When you are inside the docker image or you have installed all dependencies, Che
If you want to do model development such as training/finetuning, I recommend following [environment setup](#environment-setup) and [training](#training).
## QuickStart
:star: To try out TTS inference with VoiceCraft, the best way is using docker. Thanks to [@ubergarm](https://github.com/ubergarm) and [@jayc88](https://github.com/jay-c88) for making this happen.
Tested on Linux and Windows and should work with any host with docker installed.
```bash
@@ -43,23 +43,26 @@ cd VoiceCraft
# https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/1.13.5/install-guide.html
# sudo apt-get install -y nvidia-container-toolkit-base || yay -Syu nvidia-container-toolkit || echo etc...
-# 3. Try to start an existing container otherwise create a new one passing in all GPUs
+# 3. First build the docker image
+docker build --tag "voicecraft" .
+# 4. Try to start an existing container otherwise create a new one passing in all GPUs
./start-jupyter.sh # linux
start-jupyter.bat # windows
-# 4. now open a webpage on the host box to the URL shown at the bottom of:
+# 5. now open a webpage on the host box to the URL shown at the bottom of:
docker logs jupyter
-# 5. optionally look inside from another terminal
+# 6. optionally look inside from another terminal
docker exec -it jupyter /bin/bash
export USER=(your_linux_username_used_above)
export HOME=/home/$USER
sudo apt-get update
-# 6. confirm video card(s) are visible inside container
+# 7. confirm video card(s) are visible inside container
nvidia-smi
-# 7. Now in browser, open inference_tts.ipynb and work through one cell at a time
+# 8. Now in browser, open inference_tts.ipynb and work through one cell at a time
echo GOOD LUCK
```
@@ -83,6 +86,10 @@ conda install -c conda-forge montreal-forced-aligner=2.2.17 openfst=1.8.2 kaldi=
# to run ipynb
conda install -n voicecraft ipykernel --no-deps --force-reinstall
# below is only needed if you want to run gradio_app.py
sudo apt-get install espeak # NOTE: only required if you want to run gradio_app.py, which uses whisperx for forced alignment
sudo apt-get install libespeak-dev # NOTE: only required if you want to run gradio_app.py, which uses whisperx for forced alignment
```
If you encounter version issues when running things, check [environment.yml](./environment.yml) for exact version matching.
@@ -93,12 +100,18 @@ Checkout [`inference_speech_editing.ipynb`](./inference_speech_editing.ipynb) an
## Gradio
After environment setup, install additional dependencies:
```bash
apt-get install -y espeak espeak-data libespeak1 libespeak-dev
apt-get install -y festival*
apt-get install -y build-essential
apt-get install -y flac libasound2-dev libsndfile1-dev vorbis-tools
apt-get install -y libxml2-dev libxslt-dev zlib1g-dev
pip install -r gradio_requirements.txt
```
Run the gradio server from a terminal or from [`gradio_app.ipynb`](./gradio_app.ipynb):
```bash
python gradio_app.py
TMP_PATH=/tmp python gradio_app.py # if you want to change tmp folder path
```
The app is then ready to use at the [default URL](http://127.0.0.1:7860).
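The `TMP_PATH` override comes from the "configurable tmp path" commit. A minimal sketch of the pattern, assuming the app only needs the directory to exist:

```python
# Sketch of the configurable tmp-path pattern added to gradio_app.py.
# "./demo/temp" is the previous hard-coded default, kept as the fallback.
import os

TMP_PATH = os.getenv("TMP_PATH", "./demo/temp")
os.makedirs(TMP_PATH, exist_ok=True)  # assumption: create it if missing
print("temporary files will be written to", TMP_PATH)
```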
@@ -121,13 +134,13 @@ Long TTS mode: Easy TTS on long texts
## Training
To train a VoiceCraft model, you need to prepare the following parts:
1. utterances and their transcripts
2. encode the utterances into codes using e.g. Encodec
3. convert transcripts into phoneme sequences, and a phoneme set (we named it vocab.txt)
4. manifest (i.e. metadata)
Steps 1, 2, and 3 are handled in [./data/phonemize_encodec_encode_hf.py](./data/phonemize_encodec_encode_hf.py), where
1. Gigaspeech is downloaded through HuggingFace. Note that you need to sign an agreement in order to download the dataset (it needs your auth token)
2. phoneme sequence and encodec codes are also extracted using the script.
@@ -149,7 +162,7 @@ python phonemize_encodec_encode_hf.py \
where encodec_model_path is available [here](https://huggingface.co/pyp1/VoiceCraft). This model is trained on Gigaspeech XL; it has 56M parameters, 4 codebooks, and 2048 codes per codebook. Details are described in our [paper](https://jasonppy.github.io/assets/pdfs/VoiceCraft.pdf). If you encounter OOM during extraction, try decreasing the batch_size and/or max_len.
The extracted codes, phonemes, and vocab.txt will be stored at `path/to/store_extracted_codes_and_phonemes/${dataset_size}/{encodec_16khz_4codebooks,phonemes,vocab.txt}`.
As for manifest, please download train.txt and validation.txt from [here](https://huggingface.co/datasets/pyp1/VoiceCraft_RealEdit/tree/main), and put them under `path/to/store_extracted_codes_and_phonemes/manifest/`. Please also download vocab.txt from [here](https://huggingface.co/datasets/pyp1/VoiceCraft_RealEdit/tree/main) if you want to use our pretrained VoiceCraft model (so that the phoneme-to-token matching is the same).
Now, you are good to start training!
@@ -168,7 +181,7 @@ first install it with `pip install g2p`
```python
from g2p import make_g2p
transducer = make_g2p('eng', 'eng-ipa')
transducer("hello").output_string
# it will output: 'hʌloʊ'
``` -->

gradio_app.ipynb

@@ -8,84 +8,28 @@
"### Only do the below if you are using docker"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "270aa2cc",
"metadata": {},
"outputs": [],
"source": [
"# install OS deps\n",
"!sudo apt-get update && sudo apt-get install -y \\\n",
" git-core \\\n",
" ffmpeg \\\n",
" espeak-ng"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8ba5f452",
"metadata": {},
"outputs": [],
"source": [
"# Update and setup Conda voicecraft environment\n",
"!conda update -y -n base -c conda-forge conda\n",
"!conda create -y -n voicecraft python=3.9.16 && \\\n",
" conda init bash"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "4ef2935c",
"metadata": {},
"outputs": [],
"source": [
"# install conda and pip stuff in the activated conda above context\n",
"!echo -e \"Grab a cup a coffee and a slice of pizza...\\n\\n\"\n",
"\n",
"# make sure $HOME and $USER are setup so this will source the conda environment\n",
"!source ~/.bashrc && \\\n",
" conda activate voicecraft && \\\n",
" conda install -y -c conda-forge montreal-forced-aligner=2.2.17 openfst=1.8.2 kaldi=5.5.1068 && \\\n",
" pip install torch==2.0.1 && \\\n",
" pip install tensorboard==2.16.2 && \\\n",
" pip install phonemizer==3.2.1 && \\\n",
" pip install torchaudio==2.0.2 && \\\n",
" pip install datasets==2.16.0 && \\\n",
" pip install torchmetrics==0.11.1\n",
"\n",
"# do this one last otherwise you'll get an error about torch compiler missing due to xformer mismatch\n",
"!source ~/.bashrc && \\\n",
" conda activate voicecraft && \\\n",
" pip install -e git+https://github.com/facebookresearch/audiocraft.git@c5157b5bf14bf83449c17ea1eeb66c19fb4bc7f0#egg=audiocraft"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2fca57eb",
"metadata": {},
"outputs": [],
"source": [
"# okay setup the conda environment such that jupyter notebook can find the kernel\n",
"!source ~/.bashrc && \\\n",
" conda activate voicecraft && \\\n",
" conda install -y -n voicecraft ipykernel --update-deps --force-reinstall\n",
"\n",
"# installs the Jupyter kernel into /home/myusername/.local/share/jupyter/kernels/voicecraft\n",
"!source ~/.bashrc && \\\n",
" conda activate voicecraft && \\\n",
" python3 -m ipykernel install --user --name=voicecraft"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "961faa43",
"metadata": {},
"outputs": [],
"source": [
"!source ~/.bashrc && \\\n",
" apt-get update && \\\n",
" apt-get install -y espeak espeak-data libespeak1 libespeak-dev && \\\n",
" apt-get install -y festival* && \\\n",
" apt-get install -y build-essential && \\\n",
" apt-get install -y flac libasound2-dev libsndfile1-dev vorbis-tools && \\\n",
" apt-get install -y libxml2-dev libxslt-dev zlib1g-dev"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "598d75cf",
"metadata": {},
"outputs": [],
"source": [
"!source ~/.bashrc && \\\n",
" conda activate voicecraft && \\\n",

gradio_app.py

@@ -1,3 +1,6 @@
+import os
+# os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
+# os.environ["CUDA_VISIBLE_DEVICES"] = "0" # for local use
import gradio as gr
import torch
import torchaudio
@@ -6,14 +9,13 @@ from data.tokenizer import (
TextTokenizer,
)
from models import voicecraft
-import os
import io
import numpy as np
import random
import uuid
TMP_PATH = "./demo/temp"
TMP_PATH = os.getenv("TMP_PATH", "./demo/temp")
device = "cuda" if torch.cuda.is_available() else "cpu"
whisper_model, align_model, voicecraft_model = None, None, None
@@ -64,7 +66,7 @@ class WhisperModel:
class WhisperxModel:
def __init__(self, model_name, align_model: WhisperxAlignModel):
from whisperx import load_model
-self.model = load_model(model_name, device, asr_options={"suppress_numerals": True})
+self.model = load_model(model_name, device, asr_options={"suppress_numerals": True, "max_new_tokens": None, "clip_timestamps": None, "hallucination_silence_threshold": None})
self.align_model = align_model
def transcribe(self, audio_path):
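Context for the `asr_options` change above: newer whisperx releases expect these additional keys, so the fix passes explicit `None` defaults to stay compatible. A standalone sketch, with an illustrative model name:

```python
# Sketch of the whisperx loading fix; "base.en" is illustrative, and the
# extra asr_options keys are the ones newer whisperx versions require.
import torch
from whisperx import load_model

device = "cuda" if torch.cuda.is_available() else "cpu"
model = load_model(
    "base.en",
    device,
    asr_options={
        "suppress_numerals": True,
        "max_new_tokens": None,
        "clip_timestamps": None,
        "hallucination_silence_threshold": None,
    },
)
```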
@@ -75,9 +77,6 @@
def load_models(whisper_backend_name, whisper_model_name, alignment_model_name, voicecraft_model_name):
global transcribe_model, align_model, voicecraft_model
-os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
-os.environ["CUDA_VISIBLE_DEVICES"] = "0"
if alignment_model_name is not None:
align_model = WhisperxAlignModel()
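The removed `os.environ` lines hard-coded GPU pinning inside `load_models`; the replacement comments at the top of the file reflect that `CUDA_VISIBLE_DEVICES` must be set before torch first initializes CUDA, which gradio_app.py already does at import time via `torch.cuda.is_available()`. A sketch of the ordering constraint:

```python
# The visibility variables must be set before torch first touches CUDA;
# setting them later (e.g. inside load_models) has no effect.
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # hypothetical: pin to GPU 0

import torch  # CUDA initializes lazily, after the environment is in place
print(torch.cuda.device_count())  # counts only the visible device(s)
```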
@@ -178,7 +177,6 @@ def align(seed, transcript, audio_path):
} for fragment in fragments["fragments"]]
segments = align_model.align(segments, audio_path)
state = get_transcribe_state(segments)
-print(state)
return [
state["transcript_with_start_time"], state["transcript_with_end_time"],
@@ -237,10 +235,10 @@ def run(seed, left_margin, right_margin, codec_audio_sr, codec_sr, top_k, top_p,
target_transcript = ""
for word in transcribe_state["words_info"]:
if word["end"] < prompt_end_time:
-target_transcript += word["word"]
+target_transcript += word["word"] + (" " if word["word"][-1] != " " else "")
elif (word["start"] + word["end"]) / 2 < prompt_end_time:
# include part of the word if it's big, but adjust prompt_end_time
-target_transcript += word["word"]
+target_transcript += word["word"] + (" " if word["word"][-1] != " " else "")
prompt_end_time = word["end"]
break
else:
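The repeated `+ (" " if ...)` edits are the "smart transcript fix": word entries from the aligner carry no trailing whitespace, so plain concatenation glued words together. A standalone illustration with made-up words:

```python
# Why the join fix matters: aligner word entries have no trailing space.
words_info = [{"word": "hello"}, {"word": "world"}]

naive, fixed = "", ""
for word in words_info:
    naive += word["word"]  # old behavior
    fixed += word["word"] + (" " if word["word"][-1] != " " else "")

print(naive)  # -> helloworld
print(fixed)  # -> "hello world " (the trailing space is harmless)
```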
@@ -265,13 +263,13 @@ def run(seed, left_margin, right_margin, codec_audio_sr, codec_sr, top_k, top_p,
target_transcript = ""
for word in transcribe_state["words_info"]:
if word["start"] < edit_start_time:
-target_transcript += word["word"]
+target_transcript += word["word"] + (" " if word["word"][-1] != " " else "")
else:
break
target_transcript += f" {sentence}"
for word in transcribe_state["words_info"]:
if word["end"] > edit_end_time:
-target_transcript += word["word"]
+target_transcript += word["word"] + (" " if word["word"][-1] != " " else "")
else:
target_transcript = sentence
@@ -443,7 +441,7 @@ with gr.Blocks() as app:
with gr.Row():
with gr.Column(scale=2):
-input_audio = gr.Audio(value="./demo/84_121550_000074_000000.wav", label="Input Audio", type="filepath")
+input_audio = gr.Audio(value="./demo/84_121550_000074_000000.wav", label="Input Audio", type="filepath", interactive=True)
with gr.Group():
original_transcript = gr.Textbox(label="Original transcript", lines=5, value=demo_original_transcript,
info="Use whisper model to get the transcript. Fix and align it if necessary.")
@@ -496,22 +494,22 @@
rerun_btn = gr.Button(value="Rerun")
with gr.Row():
with gr.Accordion("VoiceCraft config", open=False):
-seed = gr.Number(label="seed", value=-1, precision=0)
-left_margin = gr.Number(label="left_margin", value=0.08)
-right_margin = gr.Number(label="right_margin", value=0.08)
-codec_audio_sr = gr.Number(label="codec_audio_sr", value=16000)
-codec_sr = gr.Number(label="codec_sr", value=50)
-top_k = gr.Number(label="top_k", value=0)
-top_p = gr.Number(label="top_p", value=0.8)
-temperature = gr.Number(label="temperature", value=1)
-stop_repetition = gr.Radio(label="stop_repetition", choices=[-1, 1, 2, 3], value=3,
-info="if there are long silence in the generated audio, reduce the stop_repetition to 3, 2 or even 1, -1 = disabled")
-sample_batch_size = gr.Number(label="sample_batch_size", value=4, precision=0,
-info="generate this many samples and choose the shortest one")
+with gr.Accordion("Generation Parameters - change these if you are unhappy with the generation", open=False):
+stop_repetition = gr.Radio(label="stop_repetition", choices=[-1, 1, 2, 3, 4], value=3,
+info="if there are long silence in the generated audio, reduce the stop_repetition to 2 or 1. -1 = disabled")
+sample_batch_size = gr.Number(label="speech rate", value=4, precision=0,
+info="The higher the number, the faster the output will be. Under the hood, the model will generate this many samples and choose the shortest one")
+seed = gr.Number(label="seed", value=-1, precision=0, info="random seeds always works :)")
kvcache = gr.Radio(label="kvcache", choices=[0, 1], value=1,
info="set to 0 to use less VRAM, but with slower inference")
-silence_tokens = gr.Textbox(label="silence tokens", value="[1388,1898,131]")
+left_margin = gr.Number(label="left_margin", value=0.08, info="margin to the left of the editing segment")
+right_margin = gr.Number(label="right_margin", value=0.08, info="margin to the right of the editing segment")
+top_p = gr.Number(label="top_p", value=0.8, info="0.8 is a good value, 0.9 is also good")
+temperature = gr.Number(label="temperature", value=1, info="haven't try other values, do not recommend to change")
+top_k = gr.Number(label="top_k", value=0, info="0 means we don't use topk sampling, because we use topp sampling")
+codec_audio_sr = gr.Number(label="codec_audio_sr", value=16000, info='encodec specific, Do not change')
+codec_sr = gr.Number(label="codec_sr", value=50, info='encodec specific, Do not change')
+silence_tokens = gr.Textbox(label="silence tokens", value="[1388,1898,131]", info="encodec specific, do not change")
audio_tensors = gr.State()
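The accordion rework above nests the frequently tuned generation knobs inside the existing "VoiceCraft config" panel and moves the encodec constants behind "do not change" notes. A minimal sketch of the nesting pattern, abbreviated to two components:

```python
# Minimal sketch of the nested-accordion layout: tunable generation
# parameters up front, fixed codec constants kept separate.
import gradio as gr

with gr.Blocks() as demo:
    with gr.Accordion("VoiceCraft config", open=False):
        with gr.Accordion("Generation Parameters", open=False):
            stop_repetition = gr.Radio(label="stop_repetition",
                                       choices=[-1, 1, 2, 3, 4], value=3)
        codec_sr = gr.Number(label="codec_sr", value=50)  # do not change

demo.launch()
```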
@@ -592,4 +590,4 @@
if __name__ == "__main__":
app.launch()

inference_tts.ipynb

@@ -5,106 +5,14 @@
"metadata": {},
"source": [
"VoiceCraft Inference Text To Speech Demo\n",
"===\n",
"This will install a ton of dependencies all over so consider using the provided docker container start-jupyter script to keep the cruft off your dev box.\n",
"\n",
"Run the next cells one at a time up until the *STOP* and follow those instructions before continuing. You only have to do this the first time to setup the container."
"==="
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Only do the below if you are using docker"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# install OS deps\n",
"!sudo apt-get update && sudo apt-get install -y \\\n",
" git-core \\\n",
" ffmpeg \\\n",
" espeak-ng"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Update and setup Conda voicecraft environment\n",
"!conda update -y -n base -c conda-forge conda\n",
"!conda create -y -n voicecraft python=3.9.16 && \\\n",
" conda init bash"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# install conda and pip stuff in the activated conda above context\n",
"!echo -e \"Grab a cup a coffee and a slice of pizza...\\n\\n\"\n",
"\n",
"# make sure $HOME and $USER are setup so this will source the conda environment\n",
"!source ~/.bashrc && \\\n",
" conda activate voicecraft && \\\n",
" conda install -y -c conda-forge montreal-forced-aligner=2.2.17 openfst=1.8.2 kaldi=5.5.1068 && \\\n",
" pip install torch==2.0.1 && \\\n",
" pip install tensorboard==2.16.2 && \\\n",
" pip install phonemizer==3.2.1 && \\\n",
" pip install torchaudio==2.0.2 && \\\n",
" pip install datasets==2.16.0 && \\\n",
" pip install torchmetrics==0.11.1\n",
"\n",
"# do this one last otherwise you'll get an error about torch compiler missing due to xformer mismatch\n",
"!source ~/.bashrc && \\\n",
" conda activate voicecraft && \\\n",
" pip install -e git+https://github.com/facebookresearch/audiocraft.git@c5157b5bf14bf83449c17ea1eeb66c19fb4bc7f0#egg=audiocraft"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# okay setup the conda environment such that jupyter notebook can find the kernel\n",
"!source ~/.bashrc && \\\n",
" conda activate voicecraft && \\\n",
" conda install -y -n voicecraft ipykernel --update-deps --force-reinstall\n",
"\n",
"# installs the Jupyter kernel into /home/myusername/.local/share/jupyter/kernels/voicecraft\n",
"!source ~/.bashrc && \\\n",
" conda activate voicecraft && \\\n",
" python3 -m ipykernel install --user --name=voicecraft"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# STOP\n",
"You have to do this part manually using the mouse/keyboard and the tabs at the top.\n",
"\n",
"* Refresh your browser to make sure it picks up the new kernel.\n",
"* Kernel -> Change Kernel -> Select Kernel -> voicecraft\n",
"* Kernel -> Restart Kernel -> Yes\n",
"\n",
"Now you can run the rest of the notebook and get an audio sample output. It will automatically download more models and such. The next time you use this container, you can just start below here as the dependencies will remain available until you delete the docker container."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Only do the above if you are using docker"
"### Select 'voicecraft' as the kernel"
]
},
{
@@ -280,10 +188,6 @@
"# torchaudio.save(seg_save_fn_gen, gen_audio, codec_audio_sr)\n",
"# torchaudio.save(seg_save_fn_concat, concated_audio, codec_audio_sr)\n",
"\n",
"# if you get error importing T5 in transformers\n",
"# try \n",
"# pip uninstall Pillow\n",
"# pip install Pillow\n",
"# you are might get warnings like WARNING:phonemizer:words count mismatch on 300.0% of the lines (3/1), this can be safely ignored"
]
}

start-jupyter.bat

@@ -14,11 +14,11 @@ docker run -it -d ^
-e JUPYTER_TOKEN=mytoken ^
-w "/home/%username%" ^
-v "%cd%":"/home/%username%/work" ^
-jupyter/base-notebook
+voicecraft
if %errorlevel% == 0 (
-echo Jupyter container created and running.
+echo Jupyter container is running.
echo To access the Jupyter web UI, please follow these steps:
echo 1. Open your web browser

start-jupyter.sh

@@ -16,7 +16,7 @@ docker run -it \
-e GRANT_SUDO=yes \
-w "/home/${NB_USER}" \
-v "$PWD":"/home/$USER/work" \
-jupyter/base-notebook
+voicecraft
## `docker logs jupyter` to get the URL link and token e.g.
## http://127.0.0.1:8888/lab?token=blahblahblahblabhlaabhalbhalbhal