Merge branch 'jasonppy:master' into master

This commit is contained in:
zuev-stepan 2024-04-05 02:56:01 +03:00 committed by GitHub
commit bbe3437b8d
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
5 changed files with 46 additions and 113 deletions

Dockerfile (new file, 26 lines)

@@ -0,0 +1,26 @@
FROM jupyter/base-notebook:python-3.9.13

USER root

# Install OS dependencies
RUN apt-get update && apt-get install -y git-core ffmpeg espeak-ng && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

# Update Conda, create the voicecraft environment, and install dependencies
RUN conda update -y -n base -c conda-forge conda && \
    conda create -y -n voicecraft python=3.9.16 && \
    conda run -n voicecraft conda install -y -c conda-forge montreal-forced-aligner=2.2.17 openfst=1.8.2 kaldi=5.5.1068 && \
    conda run -n voicecraft pip install -e git+https://github.com/facebookresearch/audiocraft.git@c5157b5bf14bf83449c17ea1eeb66c19fb4bc7f0#egg=audiocraft && \
    conda run -n voicecraft pip install xformers==0.0.22 && \
    conda run -n voicecraft pip install torch==2.0.1 && \
    conda run -n voicecraft pip install torchaudio==2.0.2 && \
    conda run -n voicecraft pip install tensorboard==2.16.2 && \
    conda run -n voicecraft pip install phonemizer==3.2.1 && \
    conda run -n voicecraft pip install datasets==2.16.0 && \
    conda run -n voicecraft pip install torchmetrics==0.11.1

# Install the Jupyter kernel
RUN conda install -n voicecraft ipykernel --update-deps --force-reinstall -y && \
    conda run -n voicecraft python -m ipykernel install --name=voicecraft

README.md

@@ -21,8 +21,8 @@ To clone or edit an unseen voice, VoiceCraft needs only a few seconds of referen
- [ ] HuggingFace Spaces demo
- [ ] Better guidance on training/finetuning
## How to run TTS inference
There are two ways:
1. with docker. see [quickstart](#quickstart)
2. without docker. see [environment setup](#environment-setup)
@@ -31,7 +31,7 @@ When you are inside the docker image or you have installed all dependencies, Che
If you want to do model development such as training/finetuning, I recommend following [environment setup](#environment-setup) and [training](#training).
## QuickStart
:star: To try out TTS inference with VoiceCraft, the best way is using docker. Thanks to [@ubergarm](https://github.com/ubergarm) and [@jayc88](https://github.com/jay-c88) for making this happen.
Tested on Linux and Windows and should work with any host with docker installed.
```bash
@@ -43,23 +43,26 @@ cd VoiceCraft
# https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/1.13.5/install-guide.html
# sudo apt-get install -y nvidia-container-toolkit-base || yay -Syu nvidia-container-toolkit || echo etc...
# 3. First build the docker image
docker build --tag "voicecraft" .
# 4. Try to start an existing container otherwise create a new one passing in all GPUs
./start-jupyter.sh # linux
start-jupyter.bat # windows
# 5. now open a webpage on the host box to the URL shown at the bottom of:
docker logs jupyter
# 6. optionally look inside from another terminal
docker exec -it jupyter /bin/bash
export USER=(your_linux_username_used_above)
export HOME=/home/$USER
sudo apt-get update
# 7. confirm video card(s) are visible inside container
nvidia-smi
# 8. Now in browser, open inference_tts.ipynb and work through one cell at a time
echo GOOD LUCK
```
@@ -121,13 +124,13 @@ Long TTS mode: Easy TTS on long texts
## Training
To train a VoiceCraft model, you need to prepare the following parts:
1. utterances and their transcripts
2. encode the utterances into codes using e.g. Encodec
3. convert transcripts into phoneme sequence, and a phoneme set (we named it vocab.txt)
4. manifest (i.e. metadata)
Steps 1, 2, and 3 are handled in [./data/phonemize_encodec_encode_hf.py](./data/phonemize_encodec_encode_hf.py), where
1. Gigaspeech is downloaded through HuggingFace. Note that you need to sign an agreement in order to download the dataset (it needs your auth token)
2. phoneme sequence and encodec codes are also extracted using the script.
@@ -149,7 +152,7 @@ python phonemize_encodec_encode_hf.py \
where encodec_model_path is available [here](https://huggingface.co/pyp1/VoiceCraft). This model is trained on Gigaspeech XL; it has 56M parameters and 4 codebooks, each codebook with 2048 codes. Details are described in our [paper](https://jasonppy.github.io/assets/pdfs/VoiceCraft.pdf). If you encounter OOM during extraction, try decreasing the batch_size and/or max_len.
The extracted codes, phonemes, and vocab.txt will be stored at `path/to/store_extracted_codes_and_phonemes/${dataset_size}/{encodec_16khz_4codebooks,phonemes,vocab.txt}`.
As for manifest, please download train.txt and validation.txt from [here](https://huggingface.co/datasets/pyp1/VoiceCraft_RealEdit/tree/main), and put them under `path/to/store_extracted_codes_and_phonemes/manifest/`. Please also download vocab.txt from [here](https://huggingface.co/datasets/pyp1/VoiceCraft_RealEdit/tree/main) if you want to use our pretrained VoiceCraft model (so that the phoneme-to-token matching is the same).
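Put together, the expected on-disk layout can be sketched with Python's `pathlib`; the root directory and `dataset_size` value below are placeholders, not paths the scripts hardcode:

```python
from pathlib import Path

# placeholder root; substitute the directory you passed to the extraction script
root = Path("path/to/store_extracted_codes_and_phonemes")
dataset_size = "xl"  # placeholder dataset-size label

# outputs of phonemize_encodec_encode_hf.py
codes_dir = root / dataset_size / "encodec_16khz_4codebooks"
phonemes_dir = root / dataset_size / "phonemes"
vocab_file = root / dataset_size / "vocab.txt"

# manifest files (train.txt, validation.txt) downloaded from HuggingFace go here
manifest_dir = root / "manifest"

print(codes_dir.as_posix())
```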
Now, you are good to start training!
@@ -168,7 +171,7 @@ first install it with `pip install g2p`
```python
from g2p import make_g2p
transducer = make_g2p('eng', 'eng-ipa')
transducer("hello").output_string
# it will output: 'hʌloʊ'
``` -->

inference_tts.ipynb

@@ -5,106 +5,14 @@
"metadata": {},
"source": [
"VoiceCraft Inference Text To Speech Demo\n",
"===\n",
"This will install a ton of dependencies all over so consider using the provided docker container start-jupyter script to keep the cruft off your dev box.\n",
"\n",
"Run the next cells one at a time up until the *STOP* and follow those instructions before continuing. You only have to do this the first time to setup the container."
"==="
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Only do the below if you are using docker"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# install OS deps\n",
"!sudo apt-get update && sudo apt-get install -y \\\n",
" git-core \\\n",
" ffmpeg \\\n",
" espeak-ng"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Update and setup Conda voicecraft environment\n",
"!conda update -y -n base -c conda-forge conda\n",
"!conda create -y -n voicecraft python=3.9.16 && \\\n",
" conda init bash"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# install conda and pip packages inside the conda environment activated above\n",
"!echo -e \"Grab a cup of coffee and a slice of pizza...\\n\\n\"\n",
"\n",
"# make sure $HOME and $USER are setup so this will source the conda environment\n",
"!source ~/.bashrc && \\\n",
" conda activate voicecraft && \\\n",
" conda install -y -c conda-forge montreal-forced-aligner=2.2.17 openfst=1.8.2 kaldi=5.5.1068 && \\\n",
" pip install torch==2.0.1 && \\\n",
" pip install tensorboard==2.16.2 && \\\n",
" pip install phonemizer==3.2.1 && \\\n",
" pip install torchaudio==2.0.2 && \\\n",
" pip install datasets==2.16.0 && \\\n",
" pip install torchmetrics==0.11.1\n",
"\n",
"# do this one last otherwise you'll get an error about torch compiler missing due to xformer mismatch\n",
"!source ~/.bashrc && \\\n",
" conda activate voicecraft && \\\n",
" pip install -e git+https://github.com/facebookresearch/audiocraft.git@c5157b5bf14bf83449c17ea1eeb66c19fb4bc7f0#egg=audiocraft"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# set up the conda environment so that the Jupyter notebook can find the kernel\n",
"!source ~/.bashrc && \\\n",
" conda activate voicecraft && \\\n",
" conda install -y -n voicecraft ipykernel --update-deps --force-reinstall\n",
"\n",
"# installs the Jupyter kernel into /home/myusername/.local/share/jupyter/kernels/voicecraft\n",
"!source ~/.bashrc && \\\n",
" conda activate voicecraft && \\\n",
" python3 -m ipykernel install --user --name=voicecraft"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# STOP\n",
"You have to do this part manually using the mouse/keyboard and the tabs at the top.\n",
"\n",
"* Refresh your browser to make sure it picks up the new kernel.\n",
"* Kernel -> Change Kernel -> Select Kernel -> voicecraft\n",
"* Kernel -> Restart Kernel -> Yes\n",
"\n",
"Now you can run the rest of the notebook and get an audio sample output. It will automatically download more models and such. The next time you use this container, you can just start below here as the dependencies will remain available until you delete the docker container."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Only do the above if you are using docker"
"### Select 'voicecraft' as the kernel"
]
},
{
@@ -280,10 +188,6 @@
"# torchaudio.save(seg_save_fn_gen, gen_audio, codec_audio_sr)\n",
"# torchaudio.save(seg_save_fn_concat, concated_audio, codec_audio_sr)\n",
"\n",
"# if you get error importing T5 in transformers\n",
"# try \n",
"# pip uninstall Pillow\n",
"# pip install Pillow\n",
"# you might get warnings like WARNING:phonemizer:words count mismatch on 300.0% of the lines (3/1); this can be safely ignored"
]
}

start-jupyter.bat

@@ -14,11 +14,11 @@ docker run -it -d ^
-e JUPYTER_TOKEN=mytoken ^
-w "/home/%username%" ^
-v "%cd%":"/home/%username%/work" ^
jupyter/base-notebook
voicecraft
if %errorlevel% == 0 (
echo Jupyter container created and running.
echo Jupyter container is running.
echo To access the Jupyter web UI, please follow these steps:
echo 1. Open your web browser

start-jupyter.sh

@@ -16,7 +16,7 @@ docker run -it \
-e GRANT_SUDO=yes \
-w "/home/${NB_USER}" \
-v "$PWD":"/home/$USER/work" \
jupyter/base-notebook
voicecraft
## `docker logs jupyter` to get the URL link and token e.g.
## http://127.0.0.1:8888/lab?token=blahblahblahblabhlaabhalbhalbhal
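If you want to grab that token programmatically (e.g. from a wrapper script), it can be parsed out of the logged URL with Python's standard `urllib.parse`; the URL below is just the placeholder shown in the comment above:

```python
from urllib.parse import urlparse, parse_qs

# placeholder URL of the shape printed by `docker logs jupyter`
url = "http://127.0.0.1:8888/lab?token=blahblahblahblabhlaabhalbhalbhal"

# parse_qs returns a dict of lists, e.g. {"token": ["..."]}
token = parse_qs(urlparse(url).query)["token"][0]
print(token)
```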