Compare commits
51 commits: 8814295e98 ... 0bf07d2214
Author | SHA1 | Date |
---|---|---|
Pranay Gosar | 0bf07d2214 | |
Pranay Gosar | dc2239c58b | |
Pranay Gosar | 00d1b11a13 | |
Pranay Gosar | da8c441762 | |
pyp_l40 | 013a21c70d | |
pyp_l40 | ef9d65433c | |
Pranay Gosar | 1a896d21fe | |
pyp_l40 | 77d1d5a69c | |
Puyuan Peng | da6d34e26e | |
Martin Morávek | 47f808df4c | |
pyp_l40 | fd20265324 | |
pgosar | 9fb6d948d0 | |
Pranay Gosar | 1850da9210 | |
Pranay Gosar | 59877c085e | |
Pranay Gosar | b8bb2ab592 | |
Pranay Gosar | 63736f7269 | |
Pranay Gosar | fc4de13071 | |
pyp_l40 | 4a3a8f11a7 | |
pyp_l40 | 8d1177149b | |
pyp_l40 | 4ff9930b8e | |
pyp_l40 | 96f6f9fc7a | |
chenxwh | ee3955d57e | |
chenxwh | 87f4fa5d21 | |
pyp_l40 | eb8d89f618 | |
pyp_l40 | 9a50faf45b | |
pyp_l40 | a39f426212 | |
pyp_l40 | eb4c6f62f4 | |
pyp_l40 | ce39ca89c1 | |
jason-on-salt-a40 | b10a245b44 | |
pyp_l40 | 13e52470c3 | |
jason-on-salt-a40 | 98a8abd4dd | |
jason-on-salt-a40 | 8e19cf17ea | |
chenxwh | 2a2ee984b6 | |
chenxwh | 729d0ec69e | |
chenxwh | ef3dd8285b | |
chenxwh | 9746a1f60c | |
Chenxi | 4bd7b83b57 | |
yoesak | 160cef0186 | |
yoesak | caf60a4ce7 | |
jason-on-salt-a40 | 7efcb3ee66 | |
pgosar | 1e0eaeba2b | |
Puyuan Peng | ddfef83331 | |
Chenxi | 6e5382584c | |
chenxwh | 0da8ee4b7a | |
Chenxi | e3fc926ca4 | |
chenxwh | 0c6942fd2a | |
chenxwh | f649f9216b | |
Chenxi | 1e2f8391a7 | |
chenxwh | b8eca5a2d4 | |
chenxwh | 023d4b1c6c | |
chenxwh | 49a648fa54 |
.dockerignore (new file)

@@ -0,0 +1,17 @@
# The .dockerignore file excludes files from the container build process.
#
# https://docs.docker.com/engine/reference/builder/#dockerignore-file

# Exclude Git files
.git
.github
.gitignore

# Exclude Python cache files
__pycache__
.mypy_cache
.pytest_cache
.ruff_cache

# Exclude Python virtual environment
/venv
.gitignore

@@ -17,6 +17,7 @@ thumbs.db
*.mp3
*.pth
*.th
*.json

*durip*
*rtx*

@@ -26,4 +27,7 @@ thumbs.db
src/audiocraft

!/demo/
!/demo/*
!/demo/*
/demo/temp/*.txt
!/demo/temp/84_121550_000074_000000.txt
.cog/tmp/*
README.md (28 changed lines)

@@ -1,5 +1,6 @@
# VoiceCraft: Zero-Shot Speech Editing and Text-to-Speech in the Wild
[![Paper](https://img.shields.io/badge/arXiv-2301.12503-brightgreen.svg?style=flat-square)](https://jasonppy.github.io/assets/pdfs/VoiceCraft.pdf) [![githubio](https://img.shields.io/badge/GitHub.io-Audio_Samples-blue?logo=Github&style=flat-square)](https://jasonppy.github.io/VoiceCraft_web/) [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/pyp1/VoiceCraft_gradio) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1IOjpglQyMTO2C3Y94LD9FY0Ocn-RJRg6?usp=sharing)
[![Paper](https://img.shields.io/badge/arXiv-2403.16973-brightgreen.svg?style=flat-square)](https://arxiv.org/pdf/2403.16973.pdf) [![HuggingFace](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/pyp1/VoiceCraft_gradio) [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1IOjpglQyMTO2C3Y94LD9FY0Ocn-RJRg6?usp=sharing) [![Replicate](https://replicate.com/cjwbw/voicecraft/badge)](https://replicate.com/cjwbw/voicecraft) [![YouTube demo](https://img.shields.io/youtube/views/eikybOi8iwU)](https://youtu.be/eikybOi8iwU) [![Demo page](https://img.shields.io/badge/Audio_Samples-blue?logo=Github&style=flat-square)](https://jasonppy.github.io/VoiceCraft_web/)

### TL;DR
VoiceCraft is a token-infilling neural codec language model that achieves state-of-the-art performance on both **speech editing** and **zero-shot text-to-speech (TTS)** on in-the-wild data including audiobooks, internet videos, and podcasts.
@@ -12,13 +13,17 @@ There are three ways (besides running Gradio in Colab):
1. More flexible inference beyond the Gradio UI in Google Colab. See [quickstart colab](#quickstart-colab)
2. With docker. See [quickstart docker](#quickstart-docker)
3. Without docker. See [environment setup](#environment-setup). You can also run Gradio locally if you choose this option
4. As a standalone script that you can easily integrate into other projects. See [quickstart command line](#quickstart-command-line).

When you are inside the docker image or have installed all dependencies, check out [`inference_tts.ipynb`](./inference_tts.ipynb).

If you want to do model development such as training/finetuning, I recommend following [environment setup](#environment-setup) and [training](#training).

## News
:star: 04/11/2024: VoiceCraft Gradio is now available on HuggingFace Spaces [here](https://huggingface.co/spaces/pyp1/VoiceCraft_gradio)! Major thanks to [@zuev-stepan](https://github.com/zuev-stepan), [@Sewlell](https://github.com/Sewlell), [@pgsoar](https://github.com/pgosar) [@Ph0rk0z](https://github.com/Ph0rk0z).
:star: 04/22/2024: 330M/830M TTS Enhanced Models are up [here](https://huggingface.co/pyp1), load them through [`gradio_app.py`](./gradio_app.py) or [`inference_tts.ipynb`](./inference_tts.ipynb)! Replicate demo is up, major thanks to [@chenxwh](https://github.com/chenxwh)!

:star: 04/11/2024: VoiceCraft Gradio is now available on HuggingFace Spaces [here](https://huggingface.co/spaces/pyp1/VoiceCraft_gradio)! Major thanks to [@zuev-stepan](https://github.com/zuev-stepan), [@Sewlell](https://github.com/Sewlell), [@pgsoar](https://github.com/pgosar) [@Ph0rk0z](https://github.com/Ph0rk0z).

:star: 04/05/2024: I finetuned giga330M with the TTS objective on GigaSpeech and 1/5 of LibriLight. Weights are [here](https://huggingface.co/pyp1/VoiceCraft/tree/main). Make sure the maximal prompt + generation length is <= 16 seconds (due to our limited compute, we had to drop utterances longer than 16s in training data). Even stronger models forthcoming, stay tuned!
@@ -30,15 +35,13 @@ If you want to do model development such as training/finetuning, I recommend fol
- [x] Inference demo for speech editing and TTS
- [x] Training guidance
- [x] RealEdit dataset and training manifest
- [x] Model weights (giga330M.pth, giga830M.pth, and gigaHalfLibri330M_TTSEnhanced_max16s.pth)
- [x] Model weights
- [x] Better guidance on training/finetuning
- [x] Colab notebooks
- [x] HuggingFace Spaces demo
- [ ] Command line
- [x] Command line
- [ ] Improve efficiency

## QuickStart Colab

:star: To try out speech editing or TTS Inference with VoiceCraft, the simplest way is using Google Colab.
@@ -47,6 +50,15 @@ Instructions to run are on the Colab itself.
1. To try [Speech Editing](https://colab.research.google.com/drive/1FV7EC36dl8UioePY1xXijXTMl7X47kR_?usp=sharing)
2. To try [TTS Inference](https://colab.research.google.com/drive/1lch_6it5-JpXgAQlUTRRI2z2_rk5K67Z?usp=sharing)

## QuickStart Command Line

:star: To use it as a standalone script, check out tts_demo.py and speech_editing_demo.py.
Be sure to first [set up your environment](#environment-setup).
Run without arguments, they use the standard demo arguments used as examples elsewhere
in this repository. You can use the command-line arguments to specify unique input audios,
target transcripts, and inference hyperparameters. Run the help command for more information:
`python3 tts_demo.py -h`
## QuickStart Docker
:star: To try out TTS inference with VoiceCraft, you can also use docker. Thanks to [@ubergarm](https://github.com/ubergarm) and [@jayc88](https://github.com/jay-c88) for making this happen.
@@ -194,7 +206,7 @@ cd ./z_scripts
bash e830M.sh
```

It's the same procedure to prepare your own custom dataset. Make sure that if

## Finetuning
You also need to do steps 1-4 as in Training, and I recommend using AdamW for optimization if you finetune a pretrained model, for better stability. Check out the script `./z_scripts/e830M_ft.sh`.
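Since the finetuning advice above comes down to swapping the optimizer, here is what that looks like in PyTorch (a minimal sketch; the learning rate and weight decay are illustrative placeholders, the real values live in `./z_scripts/e830M_ft.sh`):

```python
import torch

# `model` is the loaded VoiceCraft checkpoint being finetuned (see the notebooks below).
# Hyperparameter values here are placeholders, not taken from the repo's scripts.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=0.01)
```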
@@ -210,7 +222,7 @@ We thank Feiteng for his [VALL-E reproduction](https://github.com/lifeiteng/vall
## Citation
```
@article{peng2024voicecraft,
  author  = {Peng, Puyuan and Huang, Po-Yao and Li, Daniel and Mohamed, Abdelrahman and Harwath, David},
  author  = {Peng, Puyuan and Huang, Po-Yao and Mohamed, Abdelrahman and Harwath, David},
  title   = {VoiceCraft: Zero-Shot Speech Editing and Text-to-Speech in the Wild},
  journal = {arXiv},
  year    = {2024},
cog.yaml (new file)

@@ -0,0 +1,24 @@
# Configuration for Cog ⚙️
# Reference: https://github.com/replicate/cog/blob/main/docs/yaml.md

build:
  gpu: true
  system_packages:
    - libgl1-mesa-glx
    - libglib2.0-0
    - ffmpeg
    - espeak-ng
  python_version: "3.11"
  python_packages:
    - torch==2.1.0
    - torchaudio==2.1.0
    - xformers
    - phonemizer==3.2.1
    - whisperx==3.1.1
    - openai-whisper>=20231117
  run:
    - git clone https://github.com/facebookresearch/audiocraft && pip install -e ./audiocraft
    - pip install "pydantic<2.0.0"
    - curl -o /usr/local/bin/pget -L "https://github.com/replicate/pget/releases/download/v0.6.0/pget_linux_x86_64" && chmod +x /usr/local/bin/pget
    - mkdir -p /root/.cache/torch/hub/checkpoints/ && wget --output-document "/root/.cache/torch/hub/checkpoints/wav2vec2_fairseq_base_ls960_asr_ls960.pth" "https://download.pytorch.org/torchaudio/models/wav2vec2_fairseq_base_ls960_asr_ls960.pth"
predict: "predict.py:Predictor"
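Once built and pushed, a Cog model like this is normally called through Replicate's Python client; a minimal sketch (the version hash and input names are placeholders, the real ones are defined by `Predictor.predict()` in predict.py further below):

```python
import replicate

# Placeholder version hash and input names, for illustration only.
output = replicate.run(
    "cjwbw/voicecraft:<version-hash>",
    input={"task": "zero-shot text-to-speech", "target_transcript": "Hello world"},
)
print(output)
```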
Binary file not shown.
5895_34622_000026_000002.csv (new file: MFA word/phone alignment for the new demo audio)

@@ -0,0 +1,106 @@
Begin,End,Label,Type,Speaker
0.04,0.58,gwynplaine,words,temp
0.58,0.94,had,words,temp
0.94,1.45,besides,words,temp
1.45,1.62,for,words,temp
1.62,1.86,his,words,temp
1.86,2.16,work,words,temp
2.16,2.31,and,words,temp
2.31,2.49,for,words,temp
2.49,2.71,his,words,temp
2.71,3.03,feats,words,temp
3.03,3.12,of,words,temp
3.12,3.61,strength,words,temp
3.95,4.25,round,words,temp
4.25,4.45,his,words,temp
4.45,4.7,neck,words,temp
4.7,4.81,and,words,temp
4.81,5.04,over,words,temp
5.04,5.22,his,words,temp
5.22,5.83,shoulders,words,temp
6.16,6.31,an,words,temp
6.41,7.15,esclavine,words,temp
7.15,7.29,of,words,temp
7.29,7.7,leather,words,temp
0.04,0.1,G,phones,temp
0.1,0.13,W,phones,temp
0.13,0.22,IH1,phones,temp
0.22,0.3,N,phones,temp
0.3,0.38,P,phones,temp
0.38,0.42,L,phones,temp
0.42,0.53,EY1,phones,temp
0.53,0.58,N,phones,temp
0.58,0.71,HH,phones,temp
0.71,0.86,AE1,phones,temp
0.86,0.94,D,phones,temp
0.94,0.97,B,phones,temp
0.97,1.01,IH0,phones,temp
1.01,1.14,S,phones,temp
1.14,1.34,AY1,phones,temp
1.34,1.4,D,phones,temp
1.4,1.45,Z,phones,temp
1.45,1.52,F,phones,temp
1.52,1.55,AO1,phones,temp
1.55,1.62,R,phones,temp
1.62,1.69,HH,phones,temp
1.69,1.76,IH1,phones,temp
1.76,1.86,Z,phones,temp
1.86,1.95,W,phones,temp
1.95,2.07,ER1,phones,temp
2.07,2.16,K,phones,temp
2.16,2.23,AH0,phones,temp
2.23,2.26,N,phones,temp
2.26,2.31,D,phones,temp
2.31,2.38,F,phones,temp
2.38,2.41,AO1,phones,temp
2.41,2.49,R,phones,temp
2.49,2.55,HH,phones,temp
2.55,2.62,IH1,phones,temp
2.62,2.71,Z,phones,temp
2.71,2.8,F,phones,temp
2.8,2.9,IY1,phones,temp
2.9,2.98,T,phones,temp
2.98,3.03,S,phones,temp
3.03,3.07,AH0,phones,temp
3.07,3.12,V,phones,temp
3.12,3.2,S,phones,temp
3.2,3.26,T,phones,temp
3.26,3.32,R,phones,temp
3.32,3.39,EH1,phones,temp
3.39,3.48,NG,phones,temp
3.48,3.53,K,phones,temp
3.53,3.61,TH,phones,temp
3.95,4.03,R,phones,temp
4.03,4.16,AW1,phones,temp
4.16,4.21,N,phones,temp
4.21,4.25,D,phones,temp
4.25,4.29,HH,phones,temp
4.29,4.36,IH1,phones,temp
4.36,4.45,Z,phones,temp
4.45,4.53,N,phones,temp
4.53,4.62,EH1,phones,temp
4.62,4.7,K,phones,temp
4.7,4.74,AH0,phones,temp
4.74,4.77,N,phones,temp
4.77,4.81,D,phones,temp
4.81,4.92,OW1,phones,temp
4.92,4.97,V,phones,temp
4.97,5.04,ER0,phones,temp
5.04,5.11,HH,phones,temp
5.11,5.18,IH1,phones,temp
5.18,5.22,Z,phones,temp
5.22,5.34,SH,phones,temp
5.34,5.47,OW1,phones,temp
5.47,5.51,L,phones,temp
5.51,5.58,D,phones,temp
5.58,5.71,ER0,phones,temp
5.71,5.83,Z,phones,temp
6.16,6.23,AE1,phones,temp
6.23,6.31,N,phones,temp
6.41,7.15,spn,phones,temp
7.15,7.21,AH0,phones,temp
7.21,7.29,V,phones,temp
7.29,7.36,L,phones,temp
7.36,7.44,EH1,phones,temp
7.44,7.49,DH,phones,temp
7.49,7.7,ER0,phones,temp
@@ -308,7 +308,7 @@ dependencies:
- h11==0.14.0
- httpcore==1.0.4
- httpx==0.27.0
- huggingface-hub==0.22.4
- huggingface-hub==0.22.2
- hydra-colorlog==1.2.0
- hydra-core==1.3.2
- ipython==8.12.3
gradio_app.py (103 changed lines)

@@ -1,4 +1,6 @@
import os
import re
from num2words import num2words
import gradio as gr
import torch
import torchaudio

@@ -11,7 +13,8 @@ import io
import numpy as np
import random
import uuid

import nltk
nltk.download('punkt')

DEMO_PATH = os.getenv("DEMO_PATH", "./demo")
TMP_PATH = os.getenv("TMP_PATH", "./demo/temp")
@@ -71,14 +74,22 @@ class WhisperxModel:

    def transcribe(self, audio_path):
        segments = self.model.transcribe(audio_path, batch_size=8)["segments"]
        for segment in segments:
            segment['text'] = replace_numbers_with_words(segment['text'])
        return self.align_model.align(segments, audio_path)


def load_models(whisper_backend_name, whisper_model_name, alignment_model_name, voicecraft_model_name):
    global transcribe_model, align_model, voicecraft_model

    if voicecraft_model_name == "giga330M_TTSEnhanced":
        voicecraft_model_name = "gigaHalfLibri330M_TTSEnhanced_max16s"
    if voicecraft_model_name == "330M":
        voicecraft_model_name = "giga330M"
    elif voicecraft_model_name == "830M":
        voicecraft_model_name = "giga830M"
    elif voicecraft_model_name == "330M_TTSEnhanced":
        voicecraft_model_name = "330M_TTSEnhanced"
    elif voicecraft_model_name == "830M_TTSEnhanced":
        voicecraft_model_name = "830M_TTSEnhanced"

    if alignment_model_name is not None:
        align_model = WhisperxAlignModel()
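The if/elif chain above just maps UI labels to checkpoint names (the two `*_TTSEnhanced` branches are no-ops because the UI label already matches the checkpoint name). A lookup table is an equivalent way to read it, sketched under that assumption:

```python
# Assumed equivalent of the branches above: UI label -> checkpoint name on the pyp1 hub.
MODEL_NAME_MAP = {
    "330M": "giga330M",
    "830M": "giga830M",
    "330M_TTSEnhanced": "330M_TTSEnhanced",
    "830M_TTSEnhanced": "830M_TTSEnhanced",
}
voicecraft_model_name = MODEL_NAME_MAP.get(voicecraft_model_name, voicecraft_model_name)
```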
@@ -99,7 +110,7 @@ def load_models(whisper_backend_name, whisper_model_name, alignment_model_name,

    encodec_fn = f"{MODELS_PATH}/encodec_4cb2048_giga.th"
    if not os.path.exists(encodec_fn):
        os.system(f"wget https://huggingface.co/pyp1/VoiceCraft/resolve/main/encodec_4cb2048_giga.th")
        os.system(f"wget https://huggingface.co/pyp1/VoiceCraft/resolve/main/encodec_4cb2048_giga.th -O " + encodec_fn)

    voicecraft_model = {
        "config": config,

@@ -113,9 +124,11 @@ def load_models(whisper_backend_name, whisper_model_name, alignment_model_name,

def get_transcribe_state(segments):
    words_info = [word_info for segment in segments for word_info in segment["words"]]
    transcript = " ".join([segment["text"] for segment in segments])
    transcript = transcript[1:] if transcript[0] == " " else transcript
    return {
        "segments": segments,
        "transcript": " ".join([segment["text"] for segment in segments]),
        "transcript": transcript,
        "words_info": words_info,
        "transcript_with_start_time": " ".join([f"{word['start']} {word['word']}" for word in words_info]),
        "transcript_with_end_time": " ".join([f"{word['word']} {word['end']}" for word in words_info]),
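To make the data shapes concrete, here is a toy run of the logic in `get_transcribe_state` (the segment dicts mimic whisperX's aligned output; all values are made up):

```python
# Made-up whisperX-style aligned segments.
segments = [
    {"text": " Gwynplaine had,", "words": [
        {"word": "Gwynplaine", "start": 0.07, "end": 0.61},
        {"word": "had,", "start": 0.67, "end": 0.91},
    ]},
]

words_info = [w for seg in segments for w in seg["words"]]
transcript = " ".join(seg["text"] for seg in segments)
transcript = transcript[1:] if transcript[0] == " " else transcript  # strip the leading space

print(transcript)
# Gwynplaine had,
print(" ".join(f"{w['start']} {w['word']}" for w in words_info))
# 0.07 Gwynplaine 0.67 had,
```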
@@ -166,7 +179,7 @@ def align(seed, transcript, audio_path):
    if align_model is None:
        raise gr.Error("Align model not loaded")
    seed_everything(seed)

    transcript = replace_numbers_with_words(transcript).replace("  ", " ").replace("  ", " ")
    fragments = align_segments(transcript, audio_path)
    segments = [{
        "start": float(fragment["begin"]),
@@ -192,6 +205,15 @@ def get_output_audio(audio_tensors, codec_audio_sr):
    buffer.seek(0)
    return buffer.read()

def replace_numbers_with_words(sentence):
    sentence = re.sub(r'(\d+)', r' \1 ', sentence)  # add spaces around numbers
    def replace_with_words(match):
        num = match.group(0)
        try:
            return num2words(num)  # Convert numbers to words
        except:
            return num  # In case num2words fails (unlikely with digits but just to be safe)
    return re.sub(r'\b\d+\b', replace_with_words, sentence)  # Regular expression that matches numbers

def run(seed, left_margin, right_margin, codec_audio_sr, codec_sr, top_k, top_p, temperature,
        stop_repetition, sample_batch_size, kvcache, silence_tokens,
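As a standalone illustration of the helper above (`int()` is used here so the sketch does not depend on whether a given num2words version accepts digit strings; the doubled spaces it leaves behind are collapsed by the `.replace("  ", " ")` calls in `align` and `run`):

```python
import re
from num2words import num2words

def numbers_to_words(sentence):
    sentence = re.sub(r'(\d+)', r' \1 ', sentence)  # pad digits with spaces first
    return re.sub(r'\b\d+\b', lambda m: num2words(int(m.group(0))), sentence)

print(numbers_to_words("Chapter 84, verse 7"))
# Chapter  eighty-four , verse  seven
```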
@@ -204,6 +226,8 @@ def run(seed, left_margin, right_margin, codec_audio_sr, codec_sr, top_k, top_p,
        raise gr.Error("Can't use smart transcript: whisper transcript not found")

    seed_everything(seed)
    transcript = replace_numbers_with_words(transcript).replace("  ", " ").replace("  ", " ")  # replace numbers with words, so that the phonemizer can do a better job

    if mode == "Long TTS":
        if split_text == "Newline":
            sentences = transcript.split('\n')
@@ -362,50 +386,32 @@ If disabled, you should write the target transcript yourself:</br>
- In Edit mode write full prompt</br>
"""

demo_original_transcript = " But when I had approached so near to them, the common object, which the sense deceives, lost not by distance any of its marks."
demo_original_transcript = "Gwynplaine had, besides, for his work and for his feats of strength, round his neck and over his shoulders, an esclavine of leather."

demo_text = {
    "TTS": {
        "smart": "I cannot believe that the same model can also do text to speech synthesis too!",
        "regular": "But when I had approached so near to them, the common I cannot believe that the same model can also do text to speech synthesis too!"
        "regular": "Gwynplaine had, besides, for his work and for his feats of strength, I cannot believe that the same model can also do text to speech synthesis too!"
    },
    "Edit": {
        "smart": "saw the mirage of the lake in the distance,",
        "regular": "But when I saw the mirage of the lake in the distance, which the sense deceives, Lost not by distance any of its marks,"
        "smart": "take over the stage for half an hour,",
        "regular": "Gwynplaine had, besides, for his work and for his feats of strength, take over the stage for half an hour, an esclavine of leather."
    },
    "Long TTS": {
        "smart": "You can run the model on a big text!\n"
                 "Just write it line-by-line. Or sentence-by-sentence.\n"
                 "If some sentences sound odd, just rerun the model on them, no need to generate the whole text again!",
        "regular": "But when I had approached so near to them, the common You can run the model on a big text!\n"
                   "But when I had approached so near to them, the common Just write it line-by-line. Or sentence-by-sentence.\n"
                   "But when I had approached so near to them, the common If some sentences sound odd, just rerun the model on them, no need to generate the whole text again!"
        "regular": "Gwynplaine had, besides, for his work and for his feats of strength, You can run the model on a big text!\n"
                   "Gwynplaine had, besides, for his work and for his feats of strength, Just write it line-by-line. Or sentence-by-sentence.\n"
                   "Gwynplaine had, besides, for his work and for his feats of strength, If some sentences sound odd, just rerun the model on them, no need to generate the whole text again!"
    }
}

all_demo_texts = {vv for k, v in demo_text.items() for kk, vv in v.items()}

demo_words = [
    '0.029 But 0.149', '0.189 when 0.33', '0.43 I 0.49', '0.53 had 0.65', '0.711 approached 1.152', '1.352 so 1.593',
    '1.693 near 1.933', '1.994 to 2.074', '2.134 them, 2.354', '2.535 the 2.655', '2.695 common 3.016', '3.196 object, 3.577',
    '3.717 which 3.898', '3.958 the 4.058', '4.098 sense 4.359', '4.419 deceives, 4.92', '5.101 lost 5.481', '5.682 not 5.963',
    '6.043 by 6.183', '6.223 distance 6.644', '6.905 any 7.065', '7.125 of 7.185', '7.245 its 7.346', '7.406 marks. 7.727'
]
demo_words = ['0.069 Gwynplain 0.611', '0.671 had, 0.912', '0.952 besides, 1.414', '1.494 for 1.634', '1.695 his 1.835', '1.915 work 2.136', '2.196 and 2.297', '2.337 for 2.517', '2.557 his 2.678', '2.758 feats 3.019', '3.079 of 3.139', '3.2 strength, 3.561', '4.022 round 4.263', '4.303 his 4.444', '4.524 neck 4.705', '4.745 and 4.825', '4.905 over 5.086', '5.146 his 5.266', '5.307 shoulders, 5.768', '6.23 an 6.33', '6.531 esclavine 7.133', '7.213 of 7.293', '7.353 leather. 7.614']

demo_words_info = [
    {'word': 'But', 'start': 0.029, 'end': 0.149, 'score': 0.834}, {'word': 'when', 'start': 0.189, 'end': 0.33, 'score': 0.879},
    {'word': 'I', 'start': 0.43, 'end': 0.49, 'score': 0.984}, {'word': 'had', 'start': 0.53, 'end': 0.65, 'score': 0.998},
    {'word': 'approached', 'start': 0.711, 'end': 1.152, 'score': 0.822}, {'word': 'so', 'start': 1.352, 'end': 1.593, 'score': 0.822},
    {'word': 'near', 'start': 1.693, 'end': 1.933, 'score': 0.752}, {'word': 'to', 'start': 1.994, 'end': 2.074, 'score': 0.924},
    {'word': 'them,', 'start': 2.134, 'end': 2.354, 'score': 0.914}, {'word': 'the', 'start': 2.535, 'end': 2.655, 'score': 0.818},
    {'word': 'common', 'start': 2.695, 'end': 3.016, 'score': 0.971}, {'word': 'object,', 'start': 3.196, 'end': 3.577, 'score': 0.823},
    {'word': 'which', 'start': 3.717, 'end': 3.898, 'score': 0.701}, {'word': 'the', 'start': 3.958, 'end': 4.058, 'score': 0.798},
    {'word': 'sense', 'start': 4.098, 'end': 4.359, 'score': 0.797}, {'word': 'deceives,', 'start': 4.419, 'end': 4.92, 'score': 0.802},
    {'word': 'lost', 'start': 5.101, 'end': 5.481, 'score': 0.71}, {'word': 'not', 'start': 5.682, 'end': 5.963, 'score': 0.781},
    {'word': 'by', 'start': 6.043, 'end': 6.183, 'score': 0.834}, {'word': 'distance', 'start': 6.223, 'end': 6.644, 'score': 0.899},
    {'word': 'any', 'start': 6.905, 'end': 7.065, 'score': 0.893}, {'word': 'of', 'start': 7.125, 'end': 7.185, 'score': 0.772},
    {'word': 'its', 'start': 7.245, 'end': 7.346, 'score': 0.778}, {'word': 'marks.', 'start': 7.406, 'end': 7.727, 'score': 0.955}
]
demo_words_info = [{'word': 'Gwynplain', 'start': 0.069, 'end': 0.611, 'score': 0.833}, {'word': 'had,', 'start': 0.671, 'end': 0.912, 'score': 0.879}, {'word': 'besides,', 'start': 0.952, 'end': 1.414, 'score': 0.863}, {'word': 'for', 'start': 1.494, 'end': 1.634, 'score': 0.89}, {'word': 'his', 'start': 1.695, 'end': 1.835, 'score': 0.669}, {'word': 'work', 'start': 1.915, 'end': 2.136, 'score': 0.916}, {'word': 'and', 'start': 2.196, 'end': 2.297, 'score': 0.766}, {'word': 'for', 'start': 2.337, 'end': 2.517, 'score': 0.808}, {'word': 'his', 'start': 2.557, 'end': 2.678, 'score': 0.786}, {'word': 'feats', 'start': 2.758, 'end': 3.019, 'score': 0.97}, {'word': 'of', 'start': 3.079, 'end': 3.139, 'score': 0.752}, {'word': 'strength,', 'start': 3.2, 'end': 3.561, 'score': 0.742}, {'word': 'round', 'start': 4.022, 'end': 4.263, 'score': 0.916}, {'word': 'his', 'start': 4.303, 'end': 4.444, 'score': 0.666}, {'word': 'neck', 'start': 4.524, 'end': 4.705, 'score': 0.908}, {'word': 'and', 'start': 4.745, 'end': 4.825, 'score': 0.882}, {'word': 'over', 'start': 4.905, 'end': 5.086, 'score': 0.847}, {'word': 'his', 'start': 5.146, 'end': 5.266, 'score': 0.791}, {'word': 'shoulders,', 'start': 5.307, 'end': 5.768, 'score': 0.729}, {'word': 'an', 'start': 6.23, 'end': 6.33, 'score': 0.854}, {'word': 'esclavine', 'start': 6.531, 'end': 7.133, 'score': 0.803}, {'word': 'of', 'start': 7.213, 'end': 7.293, 'score': 0.772}, {'word': 'leather.', 'start': 7.353, 'end': 7.614, 'score': 0.896}]


def update_demo(mode, smart_transcript, edit_word_mode, transcript, edit_from_word, edit_to_word):
@@ -432,19 +438,19 @@ def get_app():
        with gr.Column(scale=5):
            with gr.Accordion("Select models", open=False) as models_selector:
                with gr.Row():
                    voicecraft_model_choice = gr.Radio(label="VoiceCraft model", value="giga830M",
                                                       choices=["giga330M", "giga830M", "giga330M_TTSEnhanced"])
                    whisper_backend_choice = gr.Radio(label="Whisper backend", value="whisperX", choices=["whisper", "whisperX"])
                    voicecraft_model_choice = gr.Radio(label="VoiceCraft model", value="830M_TTSEnhanced",
                                                       choices=["330M", "830M", "330M_TTSEnhanced", "830M_TTSEnhanced"])
                    whisper_backend_choice = gr.Radio(label="Whisper backend", value="whisperX", choices=["whisperX", "whisper"])
                    whisper_model_choice = gr.Radio(label="Whisper model", value="base.en",
                                                    choices=[None, "base.en", "small.en", "medium.en", "large"])
                    align_model_choice = gr.Radio(label="Forced alignment model", value="whisperX", choices=[None, "whisperX"])
                    align_model_choice = gr.Radio(label="Forced alignment model", value="whisperX", choices=["whisperX", None])

        with gr.Row():
            with gr.Column(scale=2):
                input_audio = gr.Audio(value=f"{DEMO_PATH}/84_121550_000074_000000.wav", label="Input Audio", type="filepath", interactive=True)
                input_audio = gr.Audio(value=f"{DEMO_PATH}/5895_34622_000026_000002.wav", label="Input Audio", type="filepath", interactive=True)
                with gr.Group():
                    original_transcript = gr.Textbox(label="Original transcript", lines=5, value=demo_original_transcript,
                                                     info="Use whisper model to get the transcript. Fix and align it if necessary.")
                                                     info="Use whisperx model to get the transcript. Fix and align it if necessary.")
                    with gr.Accordion("Word start time", open=False):
                        transcript_with_start_time = gr.Textbox(label="Start time", lines=5, interactive=False, info="Start time before each word")
                    with gr.Accordion("Word end time", open=False):
@@ -465,20 +471,20 @@ def get_app():
                mode = gr.Radio(label="Mode", choices=["TTS", "Edit", "Long TTS"], value="TTS")
                split_text = gr.Radio(label="Split text", choices=["Newline", "Sentence"], value="Newline",
                                      info="Split text into parts and run TTS for each part.", visible=False)
                edit_word_mode = gr.Radio(label="Edit word mode", choices=["Replace half", "Replace all"], value="Replace half",
                edit_word_mode = gr.Radio(label="Edit word mode", choices=["Replace half", "Replace all"], value="Replace all",
                                          info="What to do with first and last word", visible=False)

                with gr.Group() as tts_mode_controls:
                    prompt_to_word = gr.Dropdown(label="Last word in prompt", choices=demo_words, value=demo_words[10], interactive=True)
                    prompt_end_time = gr.Slider(label="Prompt end time", minimum=0, maximum=7.93, step=0.001, value=3.016)
                    prompt_to_word = gr.Dropdown(label="Last word in prompt", choices=demo_words, value=demo_words[11], interactive=True)
                    prompt_end_time = gr.Slider(label="Prompt end time", minimum=0, maximum=7.614, step=0.001, value=3.600)

                with gr.Group(visible=False) as edit_mode_controls:
                    with gr.Row():
                        edit_from_word = gr.Dropdown(label="First word to edit", choices=demo_words, value=demo_words[2], interactive=True)
                        edit_to_word = gr.Dropdown(label="Last word to edit", choices=demo_words, value=demo_words[12], interactive=True)
                        edit_from_word = gr.Dropdown(label="First word to edit", choices=demo_words, value=demo_words[12], interactive=True)
                        edit_to_word = gr.Dropdown(label="Last word to edit", choices=demo_words, value=demo_words[18], interactive=True)
                    with gr.Row():
                        edit_start_time = gr.Slider(label="Edit from time", minimum=0, maximum=7.93, step=0.001, value=0.46)
                        edit_end_time = gr.Slider(label="Edit to time", minimum=0, maximum=7.93, step=0.001, value=3.808)
                        edit_start_time = gr.Slider(label="Edit from time", minimum=0, maximum=7.614, step=0.001, value=4.022)
                        edit_end_time = gr.Slider(label="Edit to time", minimum=0, maximum=7.614, step=0.001, value=5.768)

                run_btn = gr.Button(value="Run")
@@ -497,7 +503,7 @@ def get_app():
                with gr.Accordion("Generation Parameters - change these if you are unhappy with the generation", open=False):
                    stop_repetition = gr.Radio(label="stop_repetition", choices=[-1, 1, 2, 3, 4], value=3,
                                               info="if there are long silence in the generated audio, reduce the stop_repetition to 2 or 1. -1 = disabled")
                    sample_batch_size = gr.Number(label="speech rate", value=4, precision=0,
                    sample_batch_size = gr.Number(label="speech rate", value=3, precision=0,
                                                  info="The higher the number, the faster the output will be. "
                                                       "Under the hood, the model will generate this many samples and choose the shortest one. "
                                                       "For giga330M_TTSEnhanced, 1 or 2 should be fine since the model is trained to do TTS.")
@@ -602,6 +608,7 @@ if __name__ == "__main__":
    parser.add_argument("--models-path", default="./pretrained_models", help="Path to voicecraft models directory")
    parser.add_argument("--port", default=7860, type=int, help="App port")
    parser.add_argument("--share", action="store_true", help="Launch with public url")
    parser.add_argument("--server_name", default="127.0.0.1", type=str, help="Server name for launching the app. 127.0.0.1 for localhost; 0.0.0.0 to allow access from other machines on the local network. Might also give access to external users depending on the firewall settings.")

    os.environ["USER"] = os.getenv("USER", "user")
    args = parser.parse_args()

@@ -610,4 +617,4 @@ if __name__ == "__main__":
    MODELS_PATH = args.models_path

    app = get_app()
    app.queue().launch(share=args.share, server_port=args.port)
    app.queue().launch(share=args.share, server_name=args.server_name, server_port=args.port)
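In practice the new flag means the app can be reached from other machines with `python gradio_app.py --server_name 0.0.0.0 --port 7860`, while the default `127.0.0.1` keeps it local-only; `--share` additionally creates a temporary public Gradio URL.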
@@ -2,4 +2,6 @@ gradio==3.50.2
nltk>=3.8.1
openai-whisper>=20231117
aeneas>=1.7.3.0
whisperx>=3.1.1
whisperx>=3.1.1
huggingface_hub==0.22.2
num2words==0.5.13
inference_speech_editing.ipynb (the diff shows the old notebook JSON followed by the new version)

@@ -1,267 +1,254 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"id": "SiWiiUpv5iQg"
},
"outputs": [],
"source": [
"# import libs\n",
"import torch\n",
"import torchaudio\n",
"import os\n",
"import numpy as np\n",
"import random\n",
"os.environ[\"CUDA_DEVICE_ORDER\"]=\"PCI_BUS_ID\"\n",
"os.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0\"\n",
"os.environ[\"USER\"] = \"YOUR_USERNAME\" # TODO change this to your username\n",
"\n",
"from data.tokenizer import (\n",
"    AudioTokenizer,\n",
"    TextTokenizer,\n",
")\n",
"\n",
"from models import voicecraft\n",
"from edit_utils import parse_edit, get_edits"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"id": "a0pIv_pA5k0C"
},
"outputs": [],
"source": [
"# hyperparameters for inference\n",
"left_margin = 0.08\n",
"right_margin = 0.08\n",
"sub_amount = 0.01\n",
"codec_audio_sr = 16000\n",
"codec_sr = 50\n",
"top_k = 0\n",
"top_p = 0.8\n",
"temperature = 1\n",
"kvcache = 0\n",
"# NOTE: adjust the below three arguments if the generation is not as good\n",
"seed = 1 # random seed magic\n",
"silence_tokens = [1388,1898,131]\n",
"stop_repetition = -1 # if there are long silences in the generated audio, reduce the stop_repetition to 3, 2 or even 1\n",
"# what this will do to the model is that the model will run sample_batch_size examples of the same audio, and pick the one that's the shortest\n",
"def seed_everything(seed):\n",
"    os.environ['PYTHONHASHSEED'] = str(seed)\n",
"    random.seed(seed)\n",
"    np.random.seed(seed)\n",
"    torch.manual_seed(seed)\n",
"    torch.cuda.manual_seed(seed)\n",
"    torch.backends.cudnn.benchmark = False\n",
"    torch.backends.cudnn.deterministic = True\n",
"seed_everything(seed)\n",
"device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
"\n",
"# point to the original file or record the file\n",
"# write down the transcript for the file, or run whisper to get the transcript (and you can modify it if it's not accurate), save it as a .txt file\n",
"orig_audio = \"./demo/84_121550_000074_000000.wav\"\n",
"orig_transcript = \"But when I had approached so near to them The common object, which the sense deceives, Lost not by distance any of its marks,\"\n",
"# move the audio and transcript to temp folder\n",
"temp_folder = \"./demo/temp\"\n",
"os.makedirs(temp_folder, exist_ok=True)\n",
"os.system(f\"cp {orig_audio} {temp_folder}\")\n",
"filename = os.path.splitext(orig_audio.split(\"/\")[-1])[0]\n",
"with open(f\"{temp_folder}/{filename}.txt\", \"w\") as f:\n",
"    f.write(orig_transcript)\n",
"# run MFA to get the alignment\n",
"align_temp = f\"{temp_folder}/mfa_alignments\"\n",
"os.makedirs(align_temp, exist_ok=True)\n",
"os.system(f\"mfa align -j 1 --clean --output_format csv {temp_folder} english_us_arpa english_us_arpa {align_temp}\")\n",
"# if it fails, it could be because the audio is too hard for the alignment model; increasing the beam size usually solves the issue\n",
"# os.system(f\"mfa align -j 1 --clean --output_format csv {temp_folder} english_us_arpa english_us_arpa {align_temp} --beam 1000 --retry_beam 2000\")\n",
"audio_fn = f\"{temp_folder}/{filename}.wav\"\n",
"transcript_fn = f\"{temp_folder}/{filename}.txt\"\n",
"align_fn = f\"{align_temp}/{filename}.csv\""
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"id": "iIPNTtibF4OL"
},
"outputs": [],
"source": [
"def get_mask_interval(ali_fn, word_span_ind, editType):\n",
"    with open(ali_fn, \"r\") as rf:\n",
"        data = [l.strip().split(\",\") for l in rf.readlines()]\n",
"        data = data[1:]\n",
"    tmp = word_span_ind.split(\",\")\n",
"    s, e = int(tmp[0]), int(tmp[-1])\n",
"    start = None\n",
"    for j, item in enumerate(data):\n",
"        if j == s and item[3] == \"words\":\n",
"            if editType == 'insertion':\n",
"                start = float(item[1])\n",
"            else:\n",
"                start = float(item[0])\n",
"        if j == e and item[3] == \"words\":\n",
"            if editType == 'insertion':\n",
"                end = float(item[0])\n",
"            else:\n",
"                end = float(item[1])\n",
"            assert start != None\n",
"            break\n",
"    return (start, end)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 280
},
"id": "krbq1mBM6GDE",
"outputId": "d9267aef-05b2-4276-ee8b-5687cab5c612"
},
"outputs": [],
"source": [
"# propose what you want the target modified transcript to be\n",
"orig_transcript = \"But when I had approached so near to them which the sense deceives, Lost not by distance any of its marks,\"\n",
"target_transcript = \"But I did approached so near to them which the sense deceives, Lost not by distance any of its marks,\"\n",
"\n",
"# from edit_utils import parse_edit, get_edits\n",
"\n",
"# run the script to turn user input to the format that the model can take\n",
"operations, orig_span, new_span = parse_edit(orig_transcript, target_transcript)\n",
"\n",
"used_edits = get_edits(operations)\n",
"print(used_edits)\n",
"\n",
"def process_span(span):\n",
"    if span[0] > span[1]:\n",
"        raise RuntimeError(f\"example {audio_fn} failed\")\n",
"    if span[0] == span[1]:\n",
"        return [span[0]]\n",
"    return span\n",
"\n",
"print(\"orig_span: \", orig_span)\n",
"print(\"new_span: \", new_span)\n",
"orig_span_save = [process_span(span) for span in orig_span]\n",
"new_span_save = [process_span(span) for span in new_span]\n",
"\n",
"orig_span_saves = [\",\".join([str(item) for item in span]) for span in orig_span_save]\n",
"new_span_saves = [\",\".join([str(item) for item in span]) for span in new_span_save]\n",
"\n",
"starting_intervals = []\n",
"ending_intervals = []\n",
"for i, orig_span_save in enumerate(orig_span_saves):\n",
"    start, end = get_mask_interval(align_fn, orig_span_save, used_edits[i])\n",
"    starting_intervals.append(start)\n",
"    ending_intervals.append(end)\n",
"\n",
"info = torchaudio.info(audio_fn)\n",
"audio_dur = info.num_frames / info.sample_rate\n",
"\n",
"def resolve_overlap(starting_intervals, ending_intervals, audio_dur, codec_sr, left_margin, right_margin, sub_amount):\n",
"    while True:\n",
"        morphed_span = [(max(start - left_margin, 1/codec_sr), min(end + right_margin, audio_dur))\n",
"                        for start, end in zip(starting_intervals, ending_intervals)] # in seconds\n",
"        mask_interval = [[round(span[0]*codec_sr), round(span[1]*codec_sr)] for span in morphed_span]\n",
"        # Check for overlap\n",
"        overlapping = any(a[1] >= b[0] for a, b in zip(mask_interval, mask_interval[1:]))\n",
"        if not overlapping:\n",
"            break\n",
"\n",
"        # Reduce margins\n",
"        left_margin -= sub_amount\n",
"        right_margin -= sub_amount\n",
"\n",
"    return mask_interval\n",
"\n",
"\n",
"# span in codec frames\n",
"mask_interval = resolve_overlap(starting_intervals, ending_intervals, audio_dur, codec_sr, left_margin, right_margin, sub_amount)\n",
"mask_interval = torch.LongTensor(mask_interval) # [M,2], M==1 for now\n",
"\n",
"# load model, tokenizer, and other necessary files\n",
"voicecraft_name=\"giga330M.pth\" # or giga830M.pth, or the newer models at https://huggingface.co/pyp1/VoiceCraft/tree/main\n",
"ckpt_fn =f\"./pretrained_models/{voicecraft_name}\"\n",
"encodec_fn = \"./pretrained_models/encodec_4cb2048_giga.th\"\n",
"if not os.path.exists(ckpt_fn):\n",
"    os.system(f\"wget https://huggingface.co/pyp1/VoiceCraft/resolve/main/{voicecraft_name}\\?download\\=true\")\n",
"    os.system(f\"mv {voicecraft_name}\\?download\\=true ./pretrained_models/{voicecraft_name}\")\n",
"if not os.path.exists(encodec_fn):\n",
"    os.system(f\"wget https://huggingface.co/pyp1/VoiceCraft/resolve/main/encodec_4cb2048_giga.th\")\n",
"    os.system(f\"mv encodec_4cb2048_giga.th ./pretrained_models/encodec_4cb2048_giga.th\")\n",
"ckpt = torch.load(ckpt_fn, map_location=\"cpu\")\n",
"model = voicecraft.VoiceCraft(ckpt[\"config\"])\n",
"model.load_state_dict(ckpt[\"model\"])\n",
"model.to(device)\n",
"model.eval()\n",
"\n",
"phn2num = ckpt['phn2num']\n",
"\n",
"text_tokenizer = TextTokenizer(backend=\"espeak\")\n",
"audio_tokenizer = AudioTokenizer(signature=encodec_fn) # will also put the neural codec model on gpu\n",
"\n",
"# run the model to get the output\n",
"from inference_speech_editing_scale import inference_one_sample\n",
"\n",
"decode_config = {'top_k': top_k, 'top_p': top_p, 'temperature': temperature, 'stop_repetition': stop_repetition, 'kvcache': kvcache, \"codec_audio_sr\": codec_audio_sr, \"codec_sr\": codec_sr, \"silence_tokens\": silence_tokens}\n",
"orig_audio, new_audio = inference_one_sample(model, ckpt[\"config\"], phn2num, text_tokenizer, audio_tokenizer, audio_fn, target_transcript, mask_interval, device, decode_config)\n",
"\n",
"# save segments for comparison\n",
"orig_audio, new_audio = orig_audio[0].cpu(), new_audio[0].cpu()\n",
"# logging.info(f\"length of the resynthesize orig audio: {orig_audio.shape}\")\n",
"\n",
"# display the audio\n",
"from IPython.display import Audio\n",
"print(\"original:\")\n",
"display(Audio(orig_audio, rate=codec_audio_sr))\n",
"\n",
"print(\"edited:\")\n",
"display(Audio(new_audio, rate=codec_audio_sr))\n",
"\n",
"# # save the audio\n",
"# # output_dir\n",
"# output_dir = \"./demo/generated_se\"\n",
"# os.makedirs(output_dir, exist_ok=True)\n",
"\n",
"# save_fn_new = f\"{output_dir}/{os.path.basename(audio_fn)[:-4]}_new_seed{seed}.wav\"\n",
"\n",
"# torchaudio.save(save_fn_new, new_audio, codec_audio_sr)\n",
"\n",
"# save_fn_orig = f\"{output_dir}/{os.path.basename(audio_fn)[:-4]}_orig.wav\"\n",
"# if not os.path.isfile(save_fn_orig):\n",
"#     orig_audio, orig_sr = torchaudio.load(audio_fn)\n",
"#     if orig_sr != codec_audio_sr:\n",
"#         orig_audio = torchaudio.transforms.Resample(orig_sr, codec_audio_sr)(orig_audio)\n",
"#     torchaudio.save(save_fn_orig, orig_audio, codec_audio_sr)\n",
"\n",
"# # if you get error importing T5 in transformers\n",
"# # try\n",
"# # pip uninstall Pillow\n",
"# # pip install Pillow\n",
"# # you are likely to get warning looks like WARNING:phonemizer:words count mismatch on 300.0% of the lines (3/1), this can be safely ignored"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"gpuType": "T4",
"provenance": []
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"name": "python"
}
"cells": [
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"# import libs\n",
"import torch\n",
"import torchaudio\n",
"import os\n",
"import numpy as np\n",
"import random\n",
"os.environ[\"CUDA_DEVICE_ORDER\"]=\"PCI_BUS_ID\"\n",
"os.environ[\"CUDA_VISIBLE_DEVICES\"]=\"0\"\n",
"os.environ[\"USER\"] = \"YOUR_USERNAME\" # TODO change this to your username\n",
"\n",
"from data.tokenizer import (\n",
"    AudioTokenizer,\n",
"    TextTokenizer,\n",
")\n",
"from inference_speech_editing_scale import get_mask_interval, inference_one_sample\n",
"from edit_utils import get_edits, parse_edit\n",
"\n",
"from argparse import Namespace\n",
"from models import voicecraft"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"# install MFA models and dictionaries if you haven't done so already\n",
"# !source ~/.bashrc && \\\n",
"#     conda activate voicecraft && \\\n",
"#     mfa model download dictionary english_us_arpa && \\\n",
"#     mfa model download acoustic english_us_arpa"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# hyperparameters for inference\n",
"left_margin = 0.08\n",
"right_margin = 0.08\n",
"sub_amount = 0.01\n",
"codec_audio_sr = 16000\n",
"codec_sr = 50\n",
"top_k = 0\n",
"top_p = 0.8\n",
"temperature = 1\n",
"kvcache = 0\n",
"# adjust the below three arguments if the generation is not as good\n",
"seed = 1 # random seed magic\n",
"silence_tokens = [1388,1898,131] # if there are long silences in the generated audio, reduce the stop_repetition to 3, 2 or even 1\n",
"stop_repetition = -1 # -1 means do not adjust prob of silence tokens. if there are long silences or unnaturally stretched words, increase sample_batch_size to 2, 3 or even 4\n",
"# what this will do to the model is that the model will run sample_batch_size examples of the same audio, and pick the one that's the shortest\n",
"def seed_everything(seed):\n",
"    os.environ['PYTHONHASHSEED'] = str(seed)\n",
"    random.seed(seed)\n",
"    np.random.seed(seed)\n",
"    torch.manual_seed(seed)\n",
"    torch.cuda.manual_seed(seed)\n",
"    torch.backends.cudnn.benchmark = False\n",
"    torch.backends.cudnn.deterministic = True\n",
"seed_everything(seed)\n",
"device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
"# load model, tokenizer, and other necessary files\n",
"voicecraft_name=\"giga330M.pth\" # or gigaHalfLibri330M_TTSEnhanced_max16s.pth, giga830M.pth\n",
"\n",
"# the new way of loading the model, with huggingface, recommended\n",
"model = voicecraft.VoiceCraft.from_pretrained(f\"pyp1/VoiceCraft_{voicecraft_name.replace('.pth', '')}\")\n",
"phn2num = model.args.phn2num\n",
"config = vars(model.args)\n",
"model.to(device)\n",
"\n",
"# # the old way of loading the model\n",
"# from models import voicecraft\n",
"# filepath = hf_hub_download(repo_id=\"pyp1/VoiceCraft\", filename=voicecraft_name, repo_type=\"model\")\n",
"# ckpt = torch.load(filepath, map_location=\"cpu\")\n",
"# model = voicecraft.VoiceCraft(ckpt[\"config\"])\n",
"# model.load_state_dict(ckpt[\"model\"])\n",
"# config = vars(model.args)\n",
"# phn2num = ckpt[\"phn2num\"]\n",
"# model.to(device)\n",
"# model.eval()\n",
"\n",
"encodec_fn = \"./pretrained_models/encodec_4cb2048_giga.th\"\n",
"if not os.path.exists(encodec_fn):\n",
"    os.system(f\"wget https://huggingface.co/pyp1/VoiceCraft/resolve/main/encodec_4cb2048_giga.th\")\n",
"    os.system(f\"mv encodec_4cb2048_giga.th ./pretrained_models/encodec_4cb2048_giga.th\")\n",
"audio_tokenizer = AudioTokenizer(signature=encodec_fn) # will also put the neural codec model on gpu\n",
"\n",
"text_tokenizer = TextTokenizer(backend=\"espeak\")\n",
"\n",
"# point to the original file or record the file\n",
"# write down the transcript for the file, or run whisper to get the transcript (and you can modify it if it's not accurate), save it as a .txt file\n",
"orig_audio = \"./demo/84_121550_000074_000000.wav\"\n",
"orig_transcript = \"But when I had approached so near to them The common object, which the sense deceives, Lost not by distance any of its marks,\"\n",
"# move the audio and transcript to temp folder\n",
"temp_folder = \"./demo/temp\"\n",
"os.makedirs(temp_folder, exist_ok=True)\n",
"os.system(f\"cp {orig_audio} {temp_folder}\")\n",
"filename = os.path.splitext(orig_audio.split(\"/\")[-1])[0]\n",
"with open(f\"{temp_folder}/{filename}.txt\", \"w\") as f:\n",
"    f.write(orig_transcript)\n",
"# run MFA to get the alignment\n",
"align_temp = f\"{temp_folder}/mfa_alignments\"\n",
"os.makedirs(align_temp, exist_ok=True)\n",
"os.system(f\"mfa align -j 1 --output_format csv {temp_folder} english_us_arpa english_us_arpa {align_temp}\")\n",
"# if it fails, it could be because the audio is too hard for the alignment model; increasing the beam size usually solves the issue\n",
"# os.system(f\"mfa align -j 1 --output_format csv {temp_folder} english_us_arpa english_us_arpa {align_temp} --beam 1000 --retry_beam 2000\")\n",
"audio_fn = f\"{temp_folder}/{filename}.wav\"\n",
"transcript_fn = f\"{temp_folder}/{filename}.txt\"\n",
"align_fn = f\"{align_temp}/{filename}.csv\"\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# propose what you want the target modified transcript to be\n",
"target_transcript = \"But when I had approached so near, that the sense deceives, Lost not by farness any of its marks,\"\n",
"print(\"orig: \", orig_transcript)\n",
"print(\"trgt: \", target_transcript)\n",
"\n",
"# run the script to turn user input to the format that the model can take\n",
"operations, orig_span, new_span = parse_edit(orig_transcript, target_transcript)\n",
"if operations[-1] == 'i':\n",
"    raise RuntimeError(\"The last operation should not be insertion. Please use text to speech instead\")\n",
"print(operations)\n",
"used_edits = get_edits(operations)\n",
"print(used_edits)\n",
"\n",
"def process_span(span):\n",
"    if span[0] > span[1]:\n",
"        raise RuntimeError(f\"example {audio_fn} failed\")\n",
"    if span[0] == span[1]:\n",
"        return [span[0]]\n",
"    return span\n",
"\n",
"print(\"orig_span: \", orig_span)\n",
"print(\"new_span: \", new_span)\n",
"orig_span_save = [process_span(span) for span in orig_span]\n",
"new_span_save = [process_span(span) for span in new_span]\n",
"\n",
"orig_span_saves = [\",\".join([str(item) for item in span]) for span in orig_span_save]\n",
"new_span_saves = [\",\".join([str(item) for item in span]) for span in new_span_save]\n",
"\n",
"starting_intervals = []\n",
"ending_intervals = []\n",
"for i, orig_span_save in enumerate(orig_span_saves):\n",
"    start, end = get_mask_interval(align_fn, orig_span_save, used_edits[i])\n",
"    starting_intervals.append(start)\n",
"    ending_intervals.append(end)\n",
"\n",
"print(\"intervals: \", starting_intervals, ending_intervals)\n",
"\n",
"info = torchaudio.info(audio_fn)\n",
"audio_dur = info.num_frames / info.sample_rate\n",
"\n",
"def resolve_overlap(starting_intervals, ending_intervals, audio_dur, codec_sr, left_margin, right_margin, sub_amount):\n",
"    while True:\n",
"        morphed_span = [(max(start - left_margin, 1/codec_sr), min(end + right_margin, audio_dur))\n",
"                        for start, end in zip(starting_intervals, ending_intervals)] # in seconds\n",
"        mask_interval = [[round(span[0]*codec_sr), round(span[1]*codec_sr)] for span in morphed_span]\n",
"        # Check for overlap\n",
"        overlapping = any(a[1] >= b[0] for a, b in zip(mask_interval, mask_interval[1:]))\n",
"        if not overlapping:\n",
"            break\n",
"\n",
"        # Reduce margins\n",
"        left_margin -= sub_amount\n",
"        right_margin -= sub_amount\n",
"\n",
"    return mask_interval\n",
"\n",
"\n",
"# span in codec frames\n",
"mask_interval = resolve_overlap(starting_intervals, ending_intervals, audio_dur, codec_sr, left_margin, right_margin, sub_amount)\n",
"mask_interval = torch.LongTensor(mask_interval) # [M,2], M==1 for now\n",
"# run the model to get the output\n",
"decode_config = {'top_k': top_k, 'top_p': top_p, 'temperature': temperature, 'stop_repetition': stop_repetition, 'kvcache': kvcache, \"codec_audio_sr\": codec_audio_sr, \"codec_sr\": codec_sr, \"silence_tokens\": silence_tokens}\n",
"orig_audio, new_audio = inference_one_sample(model, Namespace(**config), phn2num, text_tokenizer, audio_tokenizer, audio_fn, target_transcript, mask_interval, device, decode_config)\n",
"\n",
"# save segments for comparison\n",
"orig_audio, new_audio = orig_audio[0].cpu(), new_audio[0].cpu()\n",
"# logging.info(f\"length of the resynthesize orig audio: {orig_audio.shape}\")\n",
"\n",
"# display the audio\n",
"from IPython.display import Audio\n",
"print(\"original:\")\n",
"display(Audio(orig_audio, rate=codec_audio_sr))\n",
"\n",
"print(\"edited:\")\n",
"display(Audio(new_audio, rate=codec_audio_sr))\n",
"\n",
"# # save the audio\n",
"# # output_dir\n",
"# output_dir = \"./demo/generated_se\"\n",
"# os.makedirs(output_dir, exist_ok=True)\n",
"\n",
"# save_fn_new = f\"{output_dir}/{os.path.basename(audio_fn)[:-4]}_new_seed{seed}.wav\"\n",
"\n",
"# torchaudio.save(save_fn_new, new_audio, codec_audio_sr)\n",
"\n",
"# save_fn_orig = f\"{output_dir}/{os.path.basename(audio_fn)[:-4]}_orig.wav\"\n",
"# if not os.path.isfile(save_fn_orig):\n",
"#     orig_audio, orig_sr = torchaudio.load(audio_fn)\n",
"#     if orig_sr != codec_audio_sr:\n",
"#         orig_audio = torchaudio.transforms.Resample(orig_sr, codec_audio_sr)(orig_audio)\n",
"#     torchaudio.save(save_fn_orig, orig_audio, codec_audio_sr)\n",
"\n",
"# # if you get error importing T5 in transformers\n",
"# # try\n",
"# # pip uninstall Pillow\n",
"# # pip install Pillow\n",
"# # you are likely to get warning looks like WARNING:phonemizer:words count mismatch on 300.0% of the lines (3/1), this can be safely ignored"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "voicecraft",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.16"
},
"nbformat": 4,
"nbformat_minor": 0
},
"nbformat": 4,
"nbformat_minor": 2
}
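The `resolve_overlap` helper in the notebook shrinks the left/right margins until the padded edit spans no longer collide in codec frames. A standalone run with made-up spans shows the behavior (values chosen so the spans initially overlap):

```python
codec_sr = 50                 # codec frames per second, as in the notebook
left = right = 0.08           # initial margins in seconds
sub_amount = 0.01             # how much to shrink per iteration
starts, ends = [1.00, 1.20], [1.15, 1.40]   # two made-up edit spans that nearly touch
audio_dur = 8.0

while True:
    spans = [(max(s - left, 1 / codec_sr), min(e + right, audio_dur))
             for s, e in zip(starts, ends)]
    mask_interval = [[round(a * codec_sr), round(b * codec_sr)] for a, b in spans]
    if not any(a[1] >= b[0] for a, b in zip(mask_interval, mask_interval[1:])):
        break                 # no overlap left between consecutive intervals
    left -= sub_amount        # otherwise shrink the margins and try again
    right -= sub_amount

print(mask_interval)          # e.g. [[49, 58], [59, 71]] once the overlap is gone
```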
@ -71,7 +71,7 @@
|
|||
"# load model, encodec, and phn2num\n",
|
||||
"# # load model, tokenizer, and other necessary files\n",
|
||||
"device = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
|
||||
"voicecraft_name=\"giga330M.pth\" # or gigaHalfLibri330M_TTSEnhanced_max16s.pth, giga830M.pth\n",
|
||||
"voicecraft_name=\"830M_TTSEnhanced.pth\" # or giga330M.pth, 330M_TTSEnhanced.pth, giga830M.pth\n",
|
||||
"\n",
|
||||
"# the new way of loading the model, with huggingface, recommended\n",
|
||||
"from models import voicecraft\n",
|
||||
|
@ -111,8 +111,8 @@
|
|||
"# Prepare your audio\n",
|
||||
"# point to the original audio whose speech you want to clone\n",
|
||||
"# write down the transcript for the file, or run whisper to get the transcript (and you can modify it if it's not accurate), save it as a .txt file\n",
|
||||
"orig_audio = \"./demo/84_121550_000074_000000.wav\"\n",
|
||||
"orig_transcript = \"But when I had approached so near to them The common object, which the sense deceives, Lost not by distance any of its marks,\"\n",
|
||||
"orig_audio = \"./demo/5895_34622_000026_000002.wav\"\n",
|
||||
"orig_transcript = \"Gwynplaine had, besides, for his work and for his feats of strength, round his neck and over his shoulders, an esclavine of leather.\"\n",
|
||||
"\n",
|
||||
"# move the audio and transcript to temp folder\n",
|
||||
"temp_folder = \"./demo/temp\"\n",
|
||||
|
@ -143,8 +143,8 @@
"outputs": [],
"source": [
"# take a look at demo/temp/mfa_alignments, decide which part of the audio to use as the prompt\n",
"cut_off_sec = 3.01 # NOTE: according to the forced-alignment file demo/temp/mfa_alignments/84_121550_000074_000000.csv, the word \"common\" stops at 3.01 sec; this will differ for different audio\n",
"target_transcript = \"But when I had approached so near to them The common I cannot believe that the same model can also do text to speech synthesis as well!\"\n",
"cut_off_sec = 3.6 # NOTE: according to the forced-alignment file demo/temp/mfa_alignments/5895_34622_000026_000002.csv, the word \"strength\" stops at 3.561 sec, so we use the first 3.6 sec as the prompt; this will differ for different audio\n",
"target_transcript = \"Gwynplaine had, besides, for his work and for his feats of strength, I cannot believe that the same model can also do text to speech synthesis too!\"\n",
"# NOTE: 3 sec of reference is generally enough for high-quality voice cloning, but longer is generally better, try e.g. 3~6 sec.\n",
"audio_fn = f\"{temp_folder}/{filename}.wav\"\n",
"info = torchaudio.info(audio_fn)\n",
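If you are unsure what to set cut_off_sec to, a minimal sketch of scanning the MFA csv for word end times; the begin,end,label column order is an assumption based on how the csv is parsed elsewhere in this diff:

# hedged sketch: list word end times from the MFA alignment csv so you can
# pick a cut_off_sec that falls on a word boundary; column order is assumed
import csv

with open("./demo/temp/mfa_alignments/5895_34622_000026_000002.csv") as f:
    reader = csv.reader(f)
    next(reader)  # skip the header row
    for begin, end, label, *rest in reader:
        print(f"{label}: {begin}-{end} sec")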
@ -165,7 +165,7 @@
"\n",
"# NOTE adjust the below three arguments if the generation is not as good\n",
"stop_repetition = 3 # NOTE if the model generates long silences, reduce stop_repetition to 3, 2 or even 1\n",
"sample_batch_size = 5 # NOTE: if there are long silences or unnaturally stretched words, increase sample_batch_size to 5 or higher. The model will then generate sample_batch_size samples for the same input and pick the shortest one. So if the speech rate of the generation is too fast, change it to a smaller number.\n",
"sample_batch_size = 3 # NOTE: if there are long silences or unnaturally stretched words, increase sample_batch_size to 4 or higher. The model will then generate sample_batch_size samples for the same input and pick the shortest one. So if the speech rate of the generation is too fast, change it to a smaller number.\n",
"seed = 1 # change seed if you are still unhappy with the result\n",
"\n",
"def seed_everything(seed):\n",
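The sample_batch_size note above describes a generate-then-select heuristic; a minimal sketch, with generate_once() as a hypothetical stand-in for one sampling pass of the model:

# hedged sketch of the pick-the-shortest heuristic described in the NOTE above;
# generate_once() is a hypothetical stand-in, not a function from this repo
def pick_shortest(generate_once, sample_batch_size):
    candidates = [generate_once() for _ in range(sample_batch_size)]
    # shorter waveforms usually mean fewer long silences or stretched words
    return min(candidates, key=lambda audio: audio.shape[-1])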
@ -0,0 +1,389 @@
# Prediction interface for Cog ⚙️
# https://github.com/replicate/cog/blob/main/docs/python.md

import os
import time
import random
import getpass
import shutil
import subprocess
import torch
import numpy as np
import torchaudio
from cog import BasePredictor, Input, Path, BaseModel

os.environ["USER"] = getpass.getuser()

from data.tokenizer import (
    AudioTokenizer,
    TextTokenizer,
)
from models import voicecraft
from inference_tts_scale import inference_one_sample
from edit_utils import get_span
from inference_speech_editing_scale import (
    inference_one_sample as inference_one_sample_editing,
)


MODEL_URL = "https://weights.replicate.delivery/default/pyp1/VoiceCraft-models.tar"  # all the models are cached and uploaded to replicate.delivery for faster booting
MODEL_CACHE = "model_cache"


class ModelOutput(BaseModel):
    whisper_transcript_orig_audio: str
    generated_audio: Path


class WhisperxAlignModel:
    def __init__(self):
        from whisperx import load_align_model

        self.model, self.metadata = load_align_model(
            language_code="en", device="cuda:0"
        )

    def align(self, segments, audio_path):
        from whisperx import align, load_audio

        audio = load_audio(audio_path)
        return align(
            segments,
            self.model,
            self.metadata,
            audio,
            device="cuda:0",
            return_char_alignments=False,
        )["segments"]


class WhisperxModel:
    def __init__(self, model_name, align_model: WhisperxAlignModel, device="cuda"):
        from whisperx import load_model

        # the model weights are cached from Systran/faster-whisper-base.en etc
        self.model = load_model(
            model_name,
            device,
            asr_options={
                "suppress_numerals": True,
                "max_new_tokens": None,
                "clip_timestamps": None,
                "hallucination_silence_threshold": None,
            },
        )
        self.align_model = align_model

    def transcribe(self, audio_path):
        segments = self.model.transcribe(audio_path, language="en", batch_size=8)[
            "segments"
        ]
        return self.align_model.align(segments, audio_path)

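For reference, the two wrappers above compose like this; a minimal sketch, assuming the model_cache layout used in Predictor.setup() below:

# hedged usage sketch for the wrappers above; the checkpoint path follows the
# model_cache layout assumed in Predictor.setup() below
align_model = WhisperxAlignModel()
asr = WhisperxModel("model_cache/whisperx_base", align_model)
word_segments = asr.transcribe("some_audio.wav")  # transcribe, then word-align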
def download_weights(url, dest):
    start = time.time()
    print("downloading url: ", url)
    print("downloading to: ", dest)
    subprocess.check_call(["pget", "-x", url, dest], close_fds=False)
    print("downloading took: ", time.time() - start)


class Predictor(BasePredictor):
    def setup(self):
        """Load the model into memory to make running multiple predictions efficient"""
        self.device = "cuda"

        if not os.path.exists(MODEL_CACHE):
            download_weights(MODEL_URL, MODEL_CACHE)

        encodec_fn = f"{MODEL_CACHE}/encodec_4cb2048_giga.th"
        self.models, self.ckpt, self.phn2num = {}, {}, {}
        for voicecraft_name in [
            "giga830M.pth",
            "giga330M.pth",
            "gigaHalfLibri330M_TTSEnhanced_max16s.pth",
        ]:
            ckpt_fn = f"{MODEL_CACHE}/{voicecraft_name}"

            self.ckpt[voicecraft_name] = torch.load(ckpt_fn, map_location="cpu")
            self.models[voicecraft_name] = voicecraft.VoiceCraft(
                self.ckpt[voicecraft_name]["config"]
            )
            self.models[voicecraft_name].load_state_dict(
                self.ckpt[voicecraft_name]["model"]
            )
            self.models[voicecraft_name].to(self.device)
            self.models[voicecraft_name].eval()

            self.phn2num[voicecraft_name] = self.ckpt[voicecraft_name]["phn2num"]

        self.text_tokenizer = TextTokenizer(backend="espeak")
        self.audio_tokenizer = AudioTokenizer(signature=encodec_fn, device=self.device)

        align_model = WhisperxAlignModel()
        self.transcribe_models = {
            k: WhisperxModel(f"{MODEL_CACHE}/whisperx_{k.split('.')[0]}", align_model)
            for k in ["base.en", "small.en", "medium.en"]
        }

    def predict(
        self,
        task: str = Input(
            description="Choose a task",
            choices=[
                "speech_editing-substitution",
                "speech_editing-insertion",
                "speech_editing-deletion",
                "zero-shot text-to-speech",
            ],
            default="zero-shot text-to-speech",
        ),
        voicecraft_model: str = Input(
            description="Choose a model",
            choices=["giga830M.pth", "giga330M.pth", "giga330M_TTSEnhanced.pth"],
            default="giga330M_TTSEnhanced.pth",
        ),
        orig_audio: Path = Input(description="Original audio file"),
        orig_transcript: str = Input(
            description="Optionally provide the transcript of the input audio. Leave it blank to use the WhisperX model below to generate the transcript. Inaccurate transcription may lead to erroneous TTS or speech editing",
            default="",
        ),
        whisperx_model: str = Input(
            description="If orig_transcript is not provided above, choose a WhisperX model for generating the transcript. Inaccurate transcription may lead to erroneous TTS or speech editing. You can modify the generated transcript and provide it directly to orig_transcript above",
            choices=[
                "base.en",
                "small.en",
                "medium.en",
            ],
            default="base.en",
        ),
        target_transcript: str = Input(
            description="Transcript of the target audio file",
        ),
        cut_off_sec: float = Input(
            description="Only used for the zero-shot text-to-speech task. The first seconds of the original audio are used as the voice prompt. 3 sec of reference is generally enough for high-quality voice cloning, but longer is generally better; try e.g. 3~6 sec",
            default=3.01,
        ),
        kvcache: int = Input(
            description="Set to 0 to use less VRAM, at the cost of slower inference",
            choices=[0, 1],
            default=1,
        ),
        left_margin: float = Input(
            description="Margin to the left of the editing segment",
            default=0.08,
        ),
        right_margin: float = Input(
            description="Margin to the right of the editing segment",
            default=0.08,
        ),
        temperature: float = Input(
            description="Adjusts randomness of outputs; greater than 1 is more random and 0 is deterministic. Changing it is not recommended",
            default=1,
        ),
        top_p: float = Input(
            description="Default value for TTS is 0.9, and 0.8 for speech editing",
            default=0.9,
        ),
        stop_repetition: int = Input(
            default=3,
            description="Default value for TTS is 3, and -1 for speech editing. -1 means do not adjust the probability of silence tokens. If there are long silences or unnaturally stretched words, increase sample_batch_size to 2, 3 or even 4",
        ),
        sample_batch_size: int = Input(
            description="Default value for TTS is 4, and 1 for speech editing. The higher the number, the faster the resulting speech will be: under the hood, the model generates this many samples and keeps the shortest one",
            default=4,
        ),
        seed: int = Input(
            description="Random seed. Leave blank to randomize the seed", default=None
        ),
    ) -> ModelOutput:
"""Run a single prediction on the model"""
|
||||
|
||||
if seed is None:
|
||||
seed = int.from_bytes(os.urandom(2), "big")
|
||||
print(f"Using seed: {seed}")
|
||||
|
||||
seed_everything(seed)
|
||||
|
||||
segments = self.transcribe_models[whisperx_model].transcribe(
|
||||
str(orig_audio)
|
||||
)
|
||||
|
||||
state = get_transcribe_state(segments)
|
||||
|
||||
whisper_transcript = state["transcript"].strip()
|
||||
|
||||
if len(orig_transcript.strip()) == 0:
|
||||
orig_transcript = whisper_transcript
|
||||
|
||||
print(f"The transcript from the Whisper model: {whisper_transcript}")
|
||||
|
||||
temp_folder = "exp_dir"
|
||||
if os.path.exists(temp_folder):
|
||||
shutil.rmtree(temp_folder)
|
||||
|
||||
os.makedirs(temp_folder)
|
||||
|
||||
filename = "orig_audio"
|
||||
audio_fn = str(orig_audio)
|
||||
|
||||
info = torchaudio.info(audio_fn)
|
||||
audio_dur = info.num_frames / info.sample_rate
|
||||
|
||||
# hyperparameters for inference
|
||||
codec_audio_sr = 16000
|
||||
codec_sr = 50
|
||||
top_k = 0
|
||||
silence_tokens = [1388, 1898, 131]
|
||||
|
||||
if voicecraft_model == "giga330M_TTSEnhanced.pth":
|
||||
voicecraft_model = "gigaHalfLibri330M_TTSEnhanced_max16s.pth"
|
||||
|
||||
if task == "zero-shot text-to-speech":
|
||||
assert (
|
||||
cut_off_sec < audio_dur
|
||||
), f"cut_off_sec {cut_off_sec} is larger than the audio duration {audio_dur}"
|
||||
prompt_end_frame = int(cut_off_sec * info.sample_rate)
|
||||
|
||||
idx = find_closest_cut_off_word(state["word_bounds"], cut_off_sec)
|
||||
orig_transcript_until_cutoff_time = " ".join(
|
||||
[word_bound["word"] for word_bound in state["word_bounds"][: idx + 1]]
|
||||
)
|
||||
else:
|
||||
edit_type = task.split("-")[-1]
|
||||
orig_span, new_span = get_span(
|
||||
orig_transcript, target_transcript, edit_type
|
||||
)
|
||||
if orig_span[0] > orig_span[1]:
|
||||
RuntimeError(f"example {audio_fn} failed")
|
||||
if orig_span[0] == orig_span[1]:
|
||||
orig_span_save = [orig_span[0]]
|
||||
else:
|
||||
orig_span_save = orig_span
|
||||
if new_span[0] == new_span[1]:
|
||||
new_span_save = [new_span[0]]
|
||||
else:
|
||||
new_span_save = new_span
|
||||
orig_span_save = ",".join([str(item) for item in orig_span_save])
|
||||
new_span_save = ",".join([str(item) for item in new_span_save])
|
||||
|
||||
start, end = get_mask_interval_from_word_bounds(
|
||||
state["word_bounds"], orig_span_save, edit_type
|
||||
)
|
||||
|
||||
# span in codec frames
|
||||
morphed_span = (
|
||||
max(start - left_margin, 1 / codec_sr),
|
||||
min(end + right_margin, audio_dur),
|
||||
) # in seconds
|
||||
mask_interval = [
|
||||
[round(morphed_span[0] * codec_sr), round(morphed_span[1] * codec_sr)]
|
||||
]
|
||||
mask_interval = torch.LongTensor(mask_interval) # [M,2], M==1 for now
|
||||
|
||||
decode_config = {
|
||||
"top_k": top_k,
|
||||
"top_p": top_p,
|
||||
"temperature": temperature,
|
||||
"stop_repetition": stop_repetition,
|
||||
"kvcache": kvcache,
|
||||
"codec_audio_sr": codec_audio_sr,
|
||||
"codec_sr": codec_sr,
|
||||
"silence_tokens": silence_tokens,
|
||||
}
|
||||
|
||||
if task == "zero-shot text-to-speech":
|
||||
decode_config["sample_batch_size"] = sample_batch_size
|
||||
_, gen_audio = inference_one_sample(
|
||||
self.models[voicecraft_model],
|
||||
self.ckpt[voicecraft_model]["config"],
|
||||
self.phn2num[voicecraft_model],
|
||||
self.text_tokenizer,
|
||||
self.audio_tokenizer,
|
||||
audio_fn,
|
||||
orig_transcript_until_cutoff_time.strip()
|
||||
+ " "
|
||||
+ target_transcript.strip(),
|
||||
self.device,
|
||||
decode_config,
|
||||
prompt_end_frame,
|
||||
)
|
||||
else:
|
||||
_, gen_audio = inference_one_sample_editing(
|
||||
self.models[voicecraft_model],
|
||||
self.ckpt[voicecraft_model]["config"],
|
||||
self.phn2num[voicecraft_model],
|
||||
self.text_tokenizer,
|
||||
self.audio_tokenizer,
|
||||
audio_fn,
|
||||
target_transcript,
|
||||
mask_interval,
|
||||
self.device,
|
||||
decode_config,
|
||||
)
|
||||
|
||||
# save segments for comparison
|
||||
gen_audio = gen_audio[0].cpu()
|
||||
|
||||
out = "/tmp/out.wav"
|
||||
torchaudio.save(out, gen_audio, codec_audio_sr)
|
||||
return ModelOutput(
|
||||
generated_audio=Path(out), whisper_transcript_orig_audio=whisper_transcript
|
||||
)
|
||||
|
||||
|
||||
def seed_everything(seed):
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True

def get_transcribe_state(segments):
    words_info = [word_info for segment in segments for word_info in segment["words"]]
    return {
        "transcript": " ".join([segment["text"].strip() for segment in segments]),
        "word_bounds": [
            {"word": word["word"], "start": word["start"], "end": word["end"]}
            for word in words_info
        ],
    }
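For illustration, a hedged example of the state this builds (the timings are made up):

# hedged illustration of get_transcribe_state's output; numbers are made up
example_state = get_transcribe_state(
    [{"text": " hello world", "words": [
        {"word": "hello", "start": 0.05, "end": 0.40},
        {"word": "world", "start": 0.48, "end": 0.90},
    ]}]
)
# -> {"transcript": "hello world",
#     "word_bounds": [{"word": "hello", "start": 0.05, "end": 0.4}, ...]}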

def find_closest_cut_off_word(word_bounds, cut_off_sec):
    min_distance = float("inf")

    for i, word_bound in enumerate(word_bounds):
        distance = abs(word_bound["start"] - cut_off_sec)

        if distance < min_distance:
            min_distance = distance

        if word_bound["end"] > cut_off_sec:
            break

    return i
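A quick toy check of the boundary search above (fabricated word bounds):

# toy check: find_closest_cut_off_word returns the index of the first word
# whose end time passes cut_off_sec
bounds = [
    {"word": "hello", "start": 0.05, "end": 0.40},
    {"word": "world", "start": 0.48, "end": 0.90},
]
assert find_closest_cut_off_word(bounds, 0.45) == 1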

def get_mask_interval_from_word_bounds(word_bounds, word_span_ind, editType):
    tmp = word_span_ind.split(",")
    s, e = int(tmp[0]), int(tmp[-1])
    start = None
    for j, item in enumerate(word_bounds):
        if j == s:
            if editType == "insertion":
                start = float(item["end"])
            else:
                start = float(item["start"])
        if j == e:
            if editType == "insertion":
                end = float(item["start"])
            else:
                end = float(item["end"])
            assert start is not None
            break
    return (start, end)
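And a worked example of the mask-interval logic with the same toy bounds:

# worked example with fabricated word bounds: a substitution masks the word's
# own start..end, while an insertion masks the gap between adjacent words
bounds = [
    {"word": "hello", "start": 0.05, "end": 0.40},
    {"word": "world", "start": 0.48, "end": 0.90},
]
assert get_mask_interval_from_word_bounds(bounds, "1", "substitution") == (0.48, 0.90)
assert get_mask_interval_from_word_bounds(bounds, "0,1", "insertion") == (0.40, 0.48)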

@ -78,7 +78,6 @@ class Trainer:

        if self.rank == 0 and self.progress['step'] % self.args.tb_write_every_n_steps == 0:
            self.writer.add_scalar("train/lr", cur_lr, self.progress['step'])
            self.wandb.log({"train/lr": cur_lr}, step=self.progress['step'])

        all_inds = list(range(len(batch['y'])))
        sum_losses = 0
@ -0,0 +1,216 @@
"""
This script will allow you to run TTS inference with VoiceCraft.
Before getting started, be sure to follow the environment setup.
"""

from inference_tts_scale import inference_one_sample
from models import voicecraft
from data.tokenizer import (
    AudioTokenizer,
    TextTokenizer,
)
import argparse
import random
import numpy as np
import torchaudio
import torch
import os
os.environ["USER"] = "me"  # TODO change this to your username

device = "cuda" if torch.cuda.is_available() else "cpu"


def parse_arguments():
    parser = argparse.ArgumentParser(
        description="VoiceCraft TTS Inference: see the script for more information on the options")

    parser.add_argument("-m", "--model_name", type=str, default="giga830M", choices=[
                        "giga330M", "giga830M", "giga330M_TTSEnhanced", "giga830M_TTSEnhanced"],
                        help="VoiceCraft model to use")
    parser.add_argument("-st", "--silence_tokens", type=int, nargs="*",
                        default=[1388, 1898, 131], help="Silence token IDs")
    parser.add_argument("-casr", "--codec_audio_sr", type=int,
                        default=16000, help="Codec audio sample rate.")
    parser.add_argument("-csr", "--codec_sr", type=int, default=50,
                        help="Codec sample rate.")
    parser.add_argument("-k", "--top_k", type=float,
                        default=0, help="Top k value.")
    parser.add_argument("-p", "--top_p", type=float,
                        default=0.8, help="Top p value.")
    parser.add_argument("-t", "--temperature", type=float,
                        default=1, help="Temperature value.")
    parser.add_argument("-kv", "--kvcache", type=float, choices=[0, 1],
                        default=0, help="Kvcache value.")
    parser.add_argument("-sr", "--stop_repetition", type=int,
                        default=-1, help="Stop repetition for generation")
    parser.add_argument("--sample_batch_size", type=int,
                        default=3, help="Batch size for sampling")
    parser.add_argument("-s", "--seed", type=int,
                        default=1, help="Seed value.")
    parser.add_argument("-bs", "--beam_size", type=int, default=50,
                        help="beam size for MFA alignment")
    parser.add_argument("-rbs", "--retry_beam_size", type=int, default=200,
                        help="retry beam size for MFA alignment")
    parser.add_argument("--output_dir", type=str, default="./generated_tts",
                        help="directory to save generated audio")
    parser.add_argument("-oa", "--original_audio", type=str,
                        default="./demo/5895_34622_000026_000002.wav", help="location of audio file")
    parser.add_argument("-ot", "--original_transcript", type=str,
                        default="Gwynplaine had, besides, for his work and for his feats of strength, round his neck and over his shoulders, an esclavine of leather.",
                        help="original transcript")
    parser.add_argument("-tt", "--target_transcript", type=str,
                        default="I cannot believe that the same model can also do text to speech synthesis too!",
                        help="target transcript")
    parser.add_argument("-co", "--cut_off_sec", type=float, default=3.6,
                        help="cut off point in seconds for the input prompt")
    parser.add_argument("-ma", "--margin", type=float, default=0.04,
                        help="margin in seconds between the end of the cutoff word and the start of the next word. If the next word does not immediately follow the cutoff word, the algorithm is more tolerant to word alignment errors")
    parser.add_argument("-cuttol", "--cutoff_tolerance", type=float, default=1,
                        help="tolerance in seconds for the cutoff time. If no suitable word boundary is found within cut_off_sec plus this tolerance, we fall back to the best cutoff time found, i.e. likely with no margin or a very small margin between the end of the cutoff word and the start of the next word")

    args = parser.parse_args()
    return args

args = parse_arguments()
voicecraft_name = args.model_name
# hyperparameters for inference
codec_audio_sr = args.codec_audio_sr
codec_sr = args.codec_sr
top_k = args.top_k
top_p = args.top_p  # defaults to 0.9; can also try 0.8, but 0.9 seems to work better
temperature = args.temperature
silence_tokens = args.silence_tokens
kvcache = args.kvcache  # NOTE if OOM, change this to 0, or try the 330M model

# NOTE adjust the below three arguments if the generation is not as good
# NOTE if the model generates long silences, reduce stop_repetition to 3, 2 or even 1
stop_repetition = args.stop_repetition

# NOTE: if there are long silences or unnaturally stretched words,
# increase sample_batch_size to 4 or higher. The model will then generate
# sample_batch_size samples for the same input and pick the shortest one.
# So if the speech rate of the generation is too fast, change it to a smaller number.
sample_batch_size = args.sample_batch_size
seed = args.seed  # change seed if you are still unhappy with the result

# load the model; map CLI names to the Hugging Face checkpoint names
if voicecraft_name == "330M":
    voicecraft_name = "giga330M"
elif voicecraft_name == "830M":
    voicecraft_name = "giga830M"
elif voicecraft_name == "giga330M_TTSEnhanced":
    voicecraft_name = "330M_TTSEnhanced"
elif voicecraft_name == "giga830M_TTSEnhanced":
    voicecraft_name = "830M_TTSEnhanced"
model = voicecraft.VoiceCraft.from_pretrained(
    f"pyp1/VoiceCraft_{voicecraft_name.replace('.pth', '')}")
phn2num = model.args.phn2num
config = vars(model.args)
model.to(device)

encodec_fn = "./pretrained_models/encodec_4cb2048_giga.th"
if not os.path.exists(encodec_fn):
    os.system(
        f"wget https://huggingface.co/pyp1/VoiceCraft/resolve/main/encodec_4cb2048_giga.th -O ./pretrained_models/encodec_4cb2048_giga.th")
# will also put the neural codec model on gpu
audio_tokenizer = AudioTokenizer(signature=encodec_fn, device=device)

text_tokenizer = TextTokenizer(backend="espeak")

# Prepare your audio
# point to the original audio whose speech you want to clone
# write down the transcript for the file, or run whisper to get the transcript (you can modify it if it's not accurate), and save it as a .txt file
orig_audio = args.original_audio
orig_transcript = args.original_transcript

# move the audio and transcript to temp folder
temp_folder = "./demo/temp"
os.makedirs(temp_folder, exist_ok=True)
os.system(f"cp {orig_audio} {temp_folder}")
filename = os.path.splitext(orig_audio.split("/")[-1])[0]
with open(f"{temp_folder}/{filename}.txt", "w") as f:
    f.write(orig_transcript)
# run MFA to get the alignment
align_temp = f"{temp_folder}/mfa_alignments"
beam_size = args.beam_size
retry_beam_size = args.retry_beam_size
alignments = f"{temp_folder}/mfa_alignments/{filename}.csv"
if not os.path.isfile(alignments):
    os.system(f"mfa align -v --clean -j 1 --output_format csv {temp_folder} \
        english_us_arpa english_us_arpa {align_temp} --beam {beam_size} --retry_beam {retry_beam_size}")
    # if the above fails, it could be because the audio is too hard for the alignment model;
    # increasing beam_size and retry_beam_size usually solves the issue


def find_closest_word_boundary(alignments, cut_off_sec, margin, cutoff_tolerance=1):
    with open(alignments, 'r') as file:
        # skip header
        next(file)
        cutoff_time = None
        cutoff_index = None
        cutoff_time_best = None
        cutoff_index_best = None
        lines = file.readlines()
        for i, line in enumerate(lines):
            end = float(line.strip().split(',')[1])
            if end >= cut_off_sec and cutoff_time is None:
                cutoff_time = end
                cutoff_index = i
            if end >= cut_off_sec and end < cut_off_sec + cutoff_tolerance and float(lines[i + 1].strip().split(',')[0]) - end >= margin:
                cutoff_time_best = end + margin * 2 / 3
                cutoff_index_best = i
                break
    if cutoff_time_best is not None:
        cutoff_time = cutoff_time_best
        cutoff_index = cutoff_index_best
    return cutoff_time, cutoff_index
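A hedged toy run of the boundary search above (the csv rows are fabricated and follow the begin,end,... column order parsed by the function):

# hedged toy check of find_closest_word_boundary; the csv content is made up
import tempfile

rows = "Begin,End,Label,Type,Speaker\n0.05,0.40,hello,words,spk\n0.48,0.90,world,words,spk\n"
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as tmp:
    tmp.write(rows)
cut_time, cut_idx = find_closest_word_boundary(tmp.name, cut_off_sec=0.3, margin=0.04)
# "hello" ends at 0.40s and the next word starts 0.08s >= margin later, so the
# cutoff lands just past 0.40 (0.40 + margin * 2 / 3) at word index 0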

# take a look at demo/temp/mfa_alignments, decide which part of the audio to use as the prompt
# NOTE: according to the forced-alignment file demo/temp/mfa_alignments/5895_34622_000026_000002.csv, the word "strength" stops at 3.561 sec, so we use the first 3.6 sec as the prompt. this will differ for different audio
cut_off_sec = args.cut_off_sec
margin = args.margin
audio_fn = f"{temp_folder}/{filename}.wav"

cut_off_sec, cut_off_word_idx = find_closest_word_boundary(alignments, cut_off_sec, margin, args.cutoff_tolerance)
target_transcript = " ".join(orig_transcript.split(" ")[:cut_off_word_idx + 1]) + " " + args.target_transcript
# NOTE: 3 sec of reference is generally enough for high-quality voice cloning, but longer is generally better, try e.g. 3~6 sec.
info = torchaudio.info(audio_fn)
audio_dur = info.num_frames / info.sample_rate

assert cut_off_sec < audio_dur, f"cut_off_sec {cut_off_sec} is larger than the audio duration {audio_dur}"
prompt_end_frame = int(cut_off_sec * info.sample_rate)


def seed_everything(seed):
    os.environ['PYTHONHASHSEED'] = str(seed)
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed(seed)
    torch.backends.cudnn.benchmark = False
    torch.backends.cudnn.deterministic = True


seed_everything(seed)

# inference
decode_config = {'top_k': top_k, 'top_p': top_p, 'temperature': temperature, 'stop_repetition': stop_repetition, 'kvcache': kvcache,
                 "codec_audio_sr": codec_audio_sr, "codec_sr": codec_sr, "silence_tokens": silence_tokens, "sample_batch_size": sample_batch_size}
concated_audio, gen_audio = inference_one_sample(model, argparse.Namespace(
    **config), phn2num, text_tokenizer, audio_tokenizer, audio_fn, target_transcript, device, decode_config, prompt_end_frame)

# save segments for comparison
concated_audio, gen_audio = concated_audio[0].cpu(), gen_audio[0].cpu()
# logging.info(f"length of the resynthesized orig audio: {orig_audio.shape}")

# save the audio to output_dir
output_dir = args.output_dir
os.makedirs(output_dir, exist_ok=True)
seg_save_fn_gen = f"{output_dir}/{os.path.basename(audio_fn)[:-4]}_gen_seed{seed}.wav"
seg_save_fn_concat = f"{output_dir}/{os.path.basename(audio_fn)[:-4]}_concat_seed{seed}.wav"

torchaudio.save(seg_save_fn_gen, gen_audio, codec_audio_sr)
torchaudio.save(seg_save_fn_concat, concated_audio, codec_audio_sr)

# you might get warnings like WARNING:phonemizer:words count mismatch on 300.0% of the lines (3/1); this can be safely ignored
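Putting it together, an example invocation; the script filename is an assumption (use whatever name this file has in the repo):

# hedged example invocation; "inference_tts.py" is an assumed filename
# python inference_tts.py -m giga830M -co 3.6 --output_dir ./generated_tts \
#     -oa ./demo/5895_34622_000026_000002.wav \
#     -tt "I cannot believe that the same model can also do text to speech synthesis too!"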

@ -42,7 +42,8 @@
"\n",
"!pip install -e git+https://github.com/facebookresearch/audiocraft.git@c5157b5bf14bf83449c17ea1eeb66c19fb4bc7f0#egg=audiocraft\n",
"\n",
"!pip install -r \"/content/VoiceCraft/gradio_requirements.txt\""
"!pip install -r \"/content/VoiceCraft/gradio_requirements.txt\"\n",
"!pip install typer==0.7.0"
]
},
{