initial commit

py/README.md (new file, 145 lines)

# TTS ONNX Inference Examples

This guide provides examples for running TTS inference using `example_onnx.py`.

## 📰 Update News

**2026.01.06** - 🎉 **Supertonic 2** released with multilingual support! Now supports English (`en`), Korean (`ko`), Spanish (`es`), Portuguese (`pt`), and French (`fr`). [Demo](https://huggingface.co/spaces/Supertone/supertonic-2) | [Models](https://huggingface.co/Supertone/supertonic-2)

**2025.12.10** - Added `supertonic` PyPI package! Install via `pip install supertonic` for a streamlined experience. This is a separate usage method from the ONNX examples in this directory. For more details, visit the [supertonic-py documentation](https://supertone-inc.github.io/supertonic-py) and see `example_pypi.py` for usage.

**2025.12.10** - Added [6 new voice styles](https://huggingface.co/Supertone/supertonic/tree/b10dbaf18b316159be75b34d24f740008fddd381) (M3, M4, M5, F3, F4, F5). See [Voices](https://supertone-inc.github.io/supertonic-py/voices/) for details.

**2025.12.08** - Optimized ONNX models via [OnnxSlim](https://github.com/inisis/OnnxSlim) are now available on [Hugging Face Models](https://huggingface.co/Supertone/supertonic).

**2025.11.23** - Enhanced text preprocessing with comprehensive normalization, emoji removal, symbol replacement, and punctuation handling for improved synthesis quality.

**2025.11.19** - Added `--speed` parameter to control speech synthesis speed. Adjust the speed factor to make speech faster or slower while maintaining natural quality.

**2025.11.19** - Added automatic text chunking for long-form inference. Long texts are split into chunks and synthesized with natural pauses.
|
||||
|
||||
## Installation

This project uses [uv](https://docs.astral.sh/uv/) for fast package management.

### Install uv (if not already installed)

```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

### Install dependencies

```bash
uv sync
```

Or if you prefer using traditional pip with `requirements.txt`:

```bash
pip install -r requirements.txt
```
|
||||
|
||||
## Basic Usage

### Example 1: Default Inference

Run inference with default settings:

```bash
uv run example_onnx.py
```

This will use:

- Voice style: `assets/voice_styles/M1.json`
- Text: "This morning, I took a walk in the park, and the sound of the birds and the breeze was so pleasant that I stopped for a long time just to listen."
- Output directory: `results/`
- Total steps: 5
- Number of generations: 4
|
||||
|
||||
### Example 2: Batch Inference

Process multiple voice styles and texts at once:

```bash
uv run example_onnx.py \
    --voice-style assets/voice_styles/M1.json assets/voice_styles/F1.json \
    --text "The sun sets behind the mountains, painting the sky in shades of pink and orange." "오늘 아침에 공원을 산책했는데, 새소리와 바람 소리가 너무 좋아서 한참을 멈춰 서서 들었어요." \
    --lang en ko \
    --batch
```

This will:

- Use the `--batch` flag to enable batch processing mode
- Generate speech for 2 different voice-text pairs
- Use the male voice style (M1.json) for the first, English text
- Use the female voice style (F1.json) for the second, Korean text
- Process both samples in a single batch (automatic text chunking disabled)
|
||||
|
||||
### Example 3: High Quality Inference

Increase denoising steps for better quality:

```bash
uv run example_onnx.py \
    --total-step 10 \
    --voice-style assets/voice_styles/M1.json \
    --text "Increasing the number of denoising steps improves the output's fidelity and overall quality."
```

This will:

- Use 10 denoising steps instead of the default 5
- Produce higher quality output at the cost of slower inference
|
||||
|
||||
### Example 4: Long-Form Inference

For long texts, the system automatically chunks the text into manageable segments and generates a single audio file:

```bash
uv run example_onnx.py \
    --voice-style assets/voice_styles/M1.json \
    --text "Once upon a time, in a small village nestled between rolling hills, there lived a young artist named Clara. Every morning, she would wake up before dawn to capture the first light of day. The golden rays streaming through her window inspired countless paintings. Her work was known throughout the region for its vibrant colors and emotional depth. People from far and wide came to see her gallery, and many said her paintings could tell stories that words never could."
```

This will:

- Automatically split the long text into smaller chunks (max 300 characters by default)
- Process each chunk separately while maintaining natural speech flow
- Insert brief silences (0.3 seconds) between chunks for natural pacing
- Combine all chunks into a single output audio file

**Note**: When using batch mode (`--batch`), automatic text chunking is disabled. Use non-batch mode for long-form text synthesis.
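The chunking behavior can be sketched with a simplified standalone splitter (the real `chunk_text` in `helper.py` also splits on paragraphs and skips common abbreviations; this sketch keeps only the greedy sentence-packing step):

```python
import re

def chunk_text(text: str, max_len: int = 300) -> list[str]:
    """Greedily pack sentences into chunks of at most max_len characters."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if len(current) + len(sentence) + 1 <= max_len:
            current += (" " if current else "") + sentence
        else:
            if current:
                chunks.append(current)
            current = sentence
    if current:
        chunks.append(current)
    return chunks

text = "First sentence here. Second sentence follows. " * 10
chunks = chunk_text(text, max_len=100)
print(len(chunks))  # 5 chunks, each under 100 characters
```

Each chunk stays under the limit, so every synthesis call sees a bounded input length.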
|
||||
|
||||
### Example 5: Adjusting Speech Speed

Control the speed of speech synthesis:

```bash
# Faster speech (speed > 1.0)
uv run example_onnx.py \
    --voice-style assets/voice_styles/F2.json \
    --text "This text will be synthesized at a faster pace." \
    --speed 1.2

# Slower speech (speed < 1.0)
uv run example_onnx.py \
    --voice-style assets/voice_styles/M2.json \
    --text "This text will be synthesized at a slower, more deliberate pace." \
    --speed 0.9
```

This will:

- Use `--speed 1.2` to generate faster speech
- Use `--speed 0.9` to generate slower speech
- Default speed is 1.05 if not specified
- The recommended speed range is 0.9 to 1.5 for natural-sounding results
|
||||
|
||||
## Available Arguments

| Argument | Type | Default | Description |
|----------|------|---------|-------------|
| `--use-gpu` | flag | False | Use GPU for inference (not yet supported; raises an error) |
| `--onnx-dir` | str | `assets/onnx` | Path to ONNX model directory |
| `--total-step` | int | 5 | Number of denoising steps (higher = better quality, slower) |
| `--speed` | float | 1.05 | Speech speed factor (higher = faster, lower = slower) |
| `--n-test` | int | 4 | Number of times to generate each sample |
| `--voice-style` | str+ | `assets/voice_styles/M1.json` | Voice style file path(s) |
| `--text` | str+ | (long default text) | Text(s) to synthesize |
| `--lang` | str+ | `en` | Language(s) for text(s): `en`, `ko`, `es`, `pt`, `fr` |
| `--save-dir` | str | `results` | Output directory |
| `--batch` | flag | False | Enable batch mode (disables automatic text chunking) |
|
||||
|
||||
## Notes

- **Batch Processing**: The number of `--voice-style` files must match the number of `--text` entries
- **Multilingual Support**: Use `--lang` to specify language(s). Available: `en` (English), `ko` (Korean), `es` (Spanish), `pt` (Portuguese), `fr` (French)
- **Long-Form Inference**: Without the `--batch` flag, long texts are automatically chunked and combined into a single audio file with natural pauses
- **Quality vs Speed**: Higher `--total-step` values produce better quality but take longer
- **GPU Support**: GPU mode is not supported yet

||||
py/example_onnx.py (new file, 116 lines)

import argparse
import os

import soundfile as sf

from helper import load_text_to_speech, timer, sanitize_filename, load_voice_style


def parse_args():
    parser = argparse.ArgumentParser(description="TTS Inference with ONNX")

    # Device settings
    parser.add_argument(
        "--use-gpu", action="store_true", help="Use GPU for inference (default: CPU)"
    )

    # Model settings
    parser.add_argument(
        "--onnx-dir",
        type=str,
        default="assets/onnx",
        help="Path to ONNX model directory",
    )

    # Synthesis parameters
    parser.add_argument(
        "--total-step", type=int, default=5, help="Number of denoising steps"
    )
    parser.add_argument(
        "--speed",
        type=float,
        default=1.05,
        help="Speech speed (default: 1.05, higher = faster)",
    )
    parser.add_argument(
        "--n-test", type=int, default=4, help="Number of times to generate"
    )

    # Batch processing
    parser.add_argument("--batch", action="store_true", help="Batch processing")

    # Input/Output
    parser.add_argument(
        "--voice-style",
        type=str,
        nargs="+",
        default=["assets/voice_styles/M1.json"],
        help="Voice style file path(s). Can specify multiple files for batch processing",
    )
    parser.add_argument(
        "--text",
        type=str,
        nargs="+",
        default=[
            "This morning, I took a walk in the park, and the sound of the birds and the breeze was so pleasant that I stopped for a long time just to listen."
        ],
        help="Text(s) to synthesize. Can specify multiple texts for batch processing",
    )
    parser.add_argument(
        "--lang",
        type=str,
        nargs="+",
        default=["en"],
        help="Language(s) of the text(s). Can specify multiple languages for batch processing",
    )
    parser.add_argument(
        "--save-dir", type=str, default="results", help="Output directory"
    )

    return parser.parse_args()


print("=== TTS Inference with ONNX Runtime (Python) ===\n")

# --- 1. Parse arguments --- #
args = parse_args()
total_step = args.total_step
speed = args.speed
n_test = args.n_test
save_dir = args.save_dir
voice_style_paths = args.voice_style
text_list = args.text
lang_list = args.lang
batch = args.batch

assert len(voice_style_paths) == len(
    text_list
), f"Number of voice styles ({len(voice_style_paths)}) must match number of texts ({len(text_list)})"
bsz = len(voice_style_paths)

# --- 2. Load Text to Speech --- #
text_to_speech = load_text_to_speech(args.onnx_dir, args.use_gpu)

# --- 3. Load Voice Style --- #
style = load_voice_style(voice_style_paths, verbose=True)

# --- 4. Synthesize Speech --- #
for n in range(n_test):
    print(f"\n[{n+1}/{n_test}] Starting synthesis...")
    with timer("Generating speech from text"):
        if batch:
            wav, duration = text_to_speech.batch(
                text_list, lang_list, style, total_step, speed
            )
        else:
            wav, duration = text_to_speech(
                text_list[0], lang_list[0], style, total_step, speed
            )
    if not os.path.exists(save_dir):
        os.makedirs(save_dir)
    for b in range(bsz):
        fname = f"{sanitize_filename(text_list[b], 20)}_{n+1}.wav"
        w = wav[b, : int(text_to_speech.sample_rate * duration[b].item())]  # [T_trim]
        sf.write(os.path.join(save_dir, fname), w, text_to_speech.sample_rate)
        print(f"Saved: {save_dir}/{fname}")

print("\n=== Synthesis completed successfully! ===")
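The per-sample trimming in the save loop above can be illustrated with a standalone sketch (toy numbers, not real model output): batched waveforms are padded to the longest sample, and each row is cut back to its own predicted duration.

```python
import numpy as np

sample_rate = 44100                 # assumed rate, for illustration only
durations = np.array([0.5, 1.0])    # seconds per batch item

# Padded batch: both rows share the length of the longest sample.
wav = np.random.randn(2, int(sample_rate * durations.max())).astype(np.float32)

# Trim each row back to its own duration, as the save loop does.
trimmed = [wav[b, : int(sample_rate * durations[b])] for b in range(len(durations))]
print([t.shape[0] for t in trimmed])  # [22050, 44100]
```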

py/example_pypi.py (new file, 16 lines)

from supertonic import TTS

# Note: First run downloads model automatically (~260MB)
tts = TTS(auto_download=True)

# Get a voice style
style = tts.get_voice_style(voice_name="M4")

# Generate speech
text = "This morning, I took a walk in the park, and the sound of the birds and the breeze was so pleasant that I stopped for a long time just to listen."
wav, duration = tts.synthesize(text, voice_style=style)
# wav: np.ndarray, shape = (1, num_samples)
# duration: np.ndarray, shape = (1,)

# Save to file
tts.save_audio(wav, "results/example_pypi.wav")

py/helper.py (new file, 429 lines)

import json
import os
import re
import time
from contextlib import contextmanager
from typing import Optional
from unicodedata import normalize

import numpy as np
import onnxruntime as ort

AVAILABLE_LANGS = ["en", "ko", "es", "pt", "fr"]


class UnicodeProcessor:
    def __init__(self, unicode_indexer_path: str):
        with open(unicode_indexer_path, "r") as f:
            self.indexer = json.load(f)

    def _preprocess_text(self, text: str, lang: str) -> str:
        # TODO: Need advanced normalizer for better performance
        text = normalize("NFKD", text)

        # Remove emojis (wide Unicode range)
        emoji_pattern = re.compile(
            "[\U0001f600-\U0001f64f"  # emoticons
            "\U0001f300-\U0001f5ff"  # symbols & pictographs
            "\U0001f680-\U0001f6ff"  # transport & map symbols
            "\U0001f700-\U0001f77f"
            "\U0001f780-\U0001f7ff"
            "\U0001f800-\U0001f8ff"
            "\U0001f900-\U0001f9ff"
            "\U0001fa00-\U0001fa6f"
            "\U0001fa70-\U0001faff"
            "\u2600-\u26ff"
            "\u2700-\u27bf"
            "\U0001f1e6-\U0001f1ff]+",
            flags=re.UNICODE,
        )
        text = emoji_pattern.sub("", text)

        # Replace various dashes and symbols
        replacements = {
            "–": "-",
            "‑": "-",
            "—": "-",
            "_": " ",
            "\u201c": '"',  # left double quote
            "\u201d": '"',  # right double quote
            "\u2018": "'",  # left single quote
            "\u2019": "'",  # right single quote
            "´": "'",
            "`": "'",
            "[": " ",
            "]": " ",
            "|": " ",
            "/": " ",
            "#": " ",
            "→": " ",
            "←": " ",
        }
        for k, v in replacements.items():
            text = text.replace(k, v)

        # Remove special symbols
        text = re.sub(r"[♥☆♡©\\]", "", text)

        # Replace known expressions
        expr_replacements = {
            "@": " at ",
            "e.g.,": "for example, ",
            "i.e.,": "that is, ",
        }
        for k, v in expr_replacements.items():
            text = text.replace(k, v)

        # Fix spacing around punctuation
        text = re.sub(r" ,", ",", text)
        text = re.sub(r" \.", ".", text)
        text = re.sub(r" !", "!", text)
        text = re.sub(r" \?", "?", text)
        text = re.sub(r" ;", ";", text)
        text = re.sub(r" :", ":", text)
        text = re.sub(r" '", "'", text)

        # Remove duplicate quotes
        while '""' in text:
            text = text.replace('""', '"')
        while "''" in text:
            text = text.replace("''", "'")
        while "``" in text:
            text = text.replace("``", "`")

        # Remove extra spaces
        text = re.sub(r"\s+", " ", text).strip()

        # If text doesn't end with punctuation, quotes, or closing brackets, add a period
        if not re.search(r"[.!?;:,'\"')\]}…。」』】〉》›»]$", text):
            text += "."

        if lang not in AVAILABLE_LANGS:
            raise ValueError(f"Invalid language: {lang}")
        text = f"<{lang}>" + text + f"</{lang}>"
        return text

    def _get_text_mask(self, text_ids_lengths: np.ndarray) -> np.ndarray:
        text_mask = length_to_mask(text_ids_lengths)
        return text_mask

    def _text_to_unicode_values(self, text: str) -> np.ndarray:
        unicode_values = np.array(
            [ord(char) for char in text], dtype=np.uint16
        )  # 2 bytes
        return unicode_values

    def __call__(
        self, text_list: list[str], lang_list: list[str]
    ) -> tuple[np.ndarray, np.ndarray]:
        text_list = [
            self._preprocess_text(t, lang) for t, lang in zip(text_list, lang_list)
        ]
        text_ids_lengths = np.array([len(text) for text in text_list], dtype=np.int64)
        text_ids = np.zeros((len(text_list), text_ids_lengths.max()), dtype=np.int64)
        for i, text in enumerate(text_list):
            unicode_vals = self._text_to_unicode_values(text)
            text_ids[i, : len(unicode_vals)] = np.array(
                [self.indexer[val] for val in unicode_vals], dtype=np.int64
            )
        text_mask = self._get_text_mask(text_ids_lengths)
        return text_ids, text_mask
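The padding performed in `__call__` above can be illustrated standalone, using a toy identity indexer for ASCII (the real mapping is loaded from `unicode_indexer.json`):

```python
import numpy as np

# Toy indexer: identity mapping for the first 128 code points (ASCII only).
indexer = list(range(128))

texts = ["<en>Hi.</en>", "<en>Hello there.</en>"]
lengths = np.array([len(t) for t in texts], dtype=np.int64)

# Pad every row to the longest text; index 0 acts as padding.
text_ids = np.zeros((len(texts), lengths.max()), dtype=np.int64)
for i, t in enumerate(texts):
    text_ids[i, : len(t)] = [indexer[ord(c)] for c in t]

print(text_ids.shape)  # (2, 21): shorter row is zero-padded past position 12
```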


class Style:
    def __init__(self, style_ttl_onnx: np.ndarray, style_dp_onnx: np.ndarray):
        self.ttl = style_ttl_onnx
        self.dp = style_dp_onnx


class TextToSpeech:
    def __init__(
        self,
        cfgs: dict,
        text_processor: UnicodeProcessor,
        dp_ort: ort.InferenceSession,
        text_enc_ort: ort.InferenceSession,
        vector_est_ort: ort.InferenceSession,
        vocoder_ort: ort.InferenceSession,
    ):
        self.cfgs = cfgs
        self.text_processor = text_processor
        self.dp_ort = dp_ort
        self.text_enc_ort = text_enc_ort
        self.vector_est_ort = vector_est_ort
        self.vocoder_ort = vocoder_ort
        self.sample_rate = cfgs["ae"]["sample_rate"]
        self.base_chunk_size = cfgs["ae"]["base_chunk_size"]
        self.chunk_compress_factor = cfgs["ttl"]["chunk_compress_factor"]
        self.ldim = cfgs["ttl"]["latent_dim"]

    def sample_noisy_latent(
        self, duration: np.ndarray
    ) -> tuple[np.ndarray, np.ndarray]:
        bsz = len(duration)
        wav_len_max = duration.max() * self.sample_rate
        wav_lengths = (duration * self.sample_rate).astype(np.int64)
        chunk_size = self.base_chunk_size * self.chunk_compress_factor
        latent_len = ((wav_len_max + chunk_size - 1) / chunk_size).astype(np.int32)
        latent_dim = self.ldim * self.chunk_compress_factor
        noisy_latent = np.random.randn(bsz, latent_dim, latent_len).astype(np.float32)
        latent_mask = get_latent_mask(
            wav_lengths, self.base_chunk_size, self.chunk_compress_factor
        )
        noisy_latent = noisy_latent * latent_mask
        return noisy_latent, latent_mask
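The latent sizing above amounts to a ceiling division of the longest waveform by the effective chunk size. A standalone sketch with made-up config values (the real ones come from `tts.json`):

```python
import numpy as np

base_chunk_size = 512          # assumed values, for illustration only
chunk_compress_factor = 4
latent_dim = 24

duration = np.array([1.0, 2.0])  # seconds per batch item
sample_rate = 44100

chunk_size = base_chunk_size * chunk_compress_factor   # 2048 samples per latent frame
wav_len_max = duration.max() * sample_rate             # 88200.0 samples
latent_len = int((wav_len_max + chunk_size - 1) // chunk_size)  # ceil(88200 / 2048) = 44

noise = np.random.randn(len(duration), latent_dim * chunk_compress_factor, latent_len)
print(noise.shape)  # (2, 96, 44)
```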

    def _infer(
        self,
        text_list: list[str],
        lang_list: list[str],
        style: Style,
        total_step: int,
        speed: float = 1.05,
    ) -> tuple[np.ndarray, np.ndarray]:
        assert (
            len(text_list) == style.ttl.shape[0]
        ), "Number of texts must match number of style vectors"
        bsz = len(text_list)
        text_ids, text_mask = self.text_processor(text_list, lang_list)
        dur_onnx, *_ = self.dp_ort.run(
            None, {"text_ids": text_ids, "style_dp": style.dp, "text_mask": text_mask}
        )  # dur_onnx: [bsz]
        dur_onnx = dur_onnx / speed
        text_emb_onnx, *_ = self.text_enc_ort.run(
            None,
            {"text_ids": text_ids, "style_ttl": style.ttl, "text_mask": text_mask},
        )
        xt, latent_mask = self.sample_noisy_latent(dur_onnx)
        total_step_np = np.array([total_step] * bsz, dtype=np.float32)
        for step in range(total_step):
            current_step = np.array([step] * bsz, dtype=np.float32)
            xt, *_ = self.vector_est_ort.run(
                None,
                {
                    "noisy_latent": xt,
                    "text_emb": text_emb_onnx,
                    "style_ttl": style.ttl,
                    "text_mask": text_mask,
                    "latent_mask": latent_mask,
                    "current_step": current_step,
                    "total_step": total_step_np,
                },
            )
        wav, *_ = self.vocoder_ort.run(None, {"latent": xt})
        return wav, dur_onnx

    def __call__(
        self,
        text: str,
        lang: str,
        style: Style,
        total_step: int,
        speed: float = 1.05,
        silence_duration: float = 0.3,
    ) -> tuple[np.ndarray, np.ndarray]:
        assert (
            style.ttl.shape[0] == 1
        ), "Single speaker text to speech only supports single style"
        max_len = 120 if lang == "ko" else 300
        text_list = chunk_text(text, max_len=max_len)
        wav_cat = None
        dur_cat = None
        for text in text_list:
            wav, dur_onnx = self._infer([text], [lang], style, total_step, speed)
            if wav_cat is None:
                wav_cat = wav
                dur_cat = dur_onnx
            else:
                silence = np.zeros(
                    (1, int(silence_duration * self.sample_rate)), dtype=np.float32
                )
                wav_cat = np.concatenate([wav_cat, silence, wav], axis=1)
                dur_cat += dur_onnx + silence_duration
        return wav_cat, dur_cat
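The concatenation step above can be sketched in isolation: audio chunks are joined along the time axis with a fixed block of silence between them (toy sample rate to keep the numbers small):

```python
import numpy as np

sample_rate = 1000          # toy rate, for illustration only
silence_duration = 0.3      # seconds, matching the default above

chunk_a = np.ones((1, 500), dtype=np.float32)   # 0.5 s of audio
chunk_b = np.ones((1, 700), dtype=np.float32)   # 0.7 s of audio

# Zero samples between chunks produce the audible pause.
silence = np.zeros((1, int(silence_duration * sample_rate)), dtype=np.float32)
combined = np.concatenate([chunk_a, silence, chunk_b], axis=1)
print(combined.shape)  # (1, 1500): 500 + 300 + 700 samples
```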

    def batch(
        self,
        text_list: list[str],
        lang_list: list[str],
        style: Style,
        total_step: int,
        speed: float = 1.05,
    ) -> tuple[np.ndarray, np.ndarray]:
        return self._infer(text_list, lang_list, style, total_step, speed)


def length_to_mask(lengths: np.ndarray, max_len: Optional[int] = None) -> np.ndarray:
    """
    Convert lengths to binary mask.

    Args:
        lengths: (B,)
        max_len: int

    Returns:
        mask: (B, 1, max_len)
    """
    max_len = max_len or lengths.max()
    ids = np.arange(0, max_len)
    mask = (ids < np.expand_dims(lengths, axis=1)).astype(np.float32)
    return mask.reshape(-1, 1, max_len)


def get_latent_mask(
    wav_lengths: np.ndarray, base_chunk_size: int, chunk_compress_factor: int
) -> np.ndarray:
    latent_size = base_chunk_size * chunk_compress_factor
    latent_lengths = (wav_lengths + latent_size - 1) // latent_size
    latent_mask = length_to_mask(latent_lengths)
    return latent_mask
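The masking logic can be verified with a standalone replica of `length_to_mask` (copied from the function above):

```python
import numpy as np

def length_to_mask(lengths, max_len=None):
    # 1.0 for valid positions, 0.0 for padding; shape (B, 1, max_len).
    max_len = max_len or lengths.max()
    ids = np.arange(0, max_len)
    mask = (ids < np.expand_dims(lengths, axis=1)).astype(np.float32)
    return mask.reshape(-1, 1, max_len)

mask = length_to_mask(np.array([2, 4]))
print(mask.shape)   # (2, 1, 4)
print(mask[0, 0])   # [1. 1. 0. 0.]
```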


def load_onnx(
    onnx_path: str, opts: ort.SessionOptions, providers: list[str]
) -> ort.InferenceSession:
    return ort.InferenceSession(onnx_path, sess_options=opts, providers=providers)


def load_onnx_all(
    onnx_dir: str, opts: ort.SessionOptions, providers: list[str]
) -> tuple[
    ort.InferenceSession,
    ort.InferenceSession,
    ort.InferenceSession,
    ort.InferenceSession,
]:
    dp_onnx_path = os.path.join(onnx_dir, "duration_predictor.onnx")
    text_enc_onnx_path = os.path.join(onnx_dir, "text_encoder.onnx")
    vector_est_onnx_path = os.path.join(onnx_dir, "vector_estimator.onnx")
    vocoder_onnx_path = os.path.join(onnx_dir, "vocoder.onnx")

    dp_ort = load_onnx(dp_onnx_path, opts, providers)
    text_enc_ort = load_onnx(text_enc_onnx_path, opts, providers)
    vector_est_ort = load_onnx(vector_est_onnx_path, opts, providers)
    vocoder_ort = load_onnx(vocoder_onnx_path, opts, providers)
    return dp_ort, text_enc_ort, vector_est_ort, vocoder_ort


def load_cfgs(onnx_dir: str) -> dict:
    cfg_path = os.path.join(onnx_dir, "tts.json")
    with open(cfg_path, "r") as f:
        cfgs = json.load(f)
    return cfgs


def load_text_processor(onnx_dir: str) -> UnicodeProcessor:
    unicode_indexer_path = os.path.join(onnx_dir, "unicode_indexer.json")
    text_processor = UnicodeProcessor(unicode_indexer_path)
    return text_processor


def load_text_to_speech(onnx_dir: str, use_gpu: bool = False) -> TextToSpeech:
    opts = ort.SessionOptions()
    if use_gpu:
        raise NotImplementedError("GPU mode is not fully tested")
    else:
        providers = ["CPUExecutionProvider"]
        print("Using CPU for inference")
    cfgs = load_cfgs(onnx_dir)
    dp_ort, text_enc_ort, vector_est_ort, vocoder_ort = load_onnx_all(
        onnx_dir, opts, providers
    )
    text_processor = load_text_processor(onnx_dir)
    return TextToSpeech(
        cfgs, text_processor, dp_ort, text_enc_ort, vector_est_ort, vocoder_ort
    )


def load_voice_style(voice_style_paths: list[str], verbose: bool = False) -> Style:
    bsz = len(voice_style_paths)

    # Read first file to get dimensions
    with open(voice_style_paths[0], "r") as f:
        first_style = json.load(f)
    ttl_dims = first_style["style_ttl"]["dims"]
    dp_dims = first_style["style_dp"]["dims"]

    # Pre-allocate arrays with full batch size
    ttl_style = np.zeros([bsz, ttl_dims[1], ttl_dims[2]], dtype=np.float32)
    dp_style = np.zeros([bsz, dp_dims[1], dp_dims[2]], dtype=np.float32)

    # Fill in the data
    for i, voice_style_path in enumerate(voice_style_paths):
        with open(voice_style_path, "r") as f:
            voice_style = json.load(f)

        ttl_data = np.array(
            voice_style["style_ttl"]["data"], dtype=np.float32
        ).flatten()
        ttl_style[i] = ttl_data.reshape(ttl_dims[1], ttl_dims[2])

        dp_data = np.array(voice_style["style_dp"]["data"], dtype=np.float32).flatten()
        dp_style[i] = dp_data.reshape(dp_dims[1], dp_dims[2])

    if verbose:
        print(f"Loaded {bsz} voice styles")
    return Style(ttl_style, dp_style)


@contextmanager
def timer(name: str):
    start = time.time()
    print(f"{name}...")
    yield
    print(f" -> {name} completed in {time.time() - start:.2f} sec")


def sanitize_filename(text: str, max_len: int) -> str:
    """Sanitize filename by replacing non-alphanumeric characters with underscores (supports Unicode)."""
    prefix = text[:max_len]
    # \w matches Unicode word characters (letters, digits, underscore)
    return re.sub(r"[^\w]", "_", prefix, flags=re.UNICODE)
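The behavior of `sanitize_filename` can be checked standalone (a replica of the one-liner above, including a Korean prefix since `\w` matches Unicode word characters):

```python
import re

def sanitize_filename(text: str, max_len: int) -> str:
    # Keep Unicode word characters; everything else becomes "_".
    return re.sub(r"[^\w]", "_", text[:max_len], flags=re.UNICODE)

print(sanitize_filename("Hello, world!", 20))   # Hello__world_
print(sanitize_filename("오늘 아침에 공원을", 5))  # 오늘_아침
```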


def chunk_text(text: str, max_len: int = 300) -> list[str]:
    """
    Split text into chunks by paragraphs and sentences.

    Args:
        text: Input text to chunk
        max_len: Maximum length of each chunk (default: 300)

    Returns:
        List of text chunks
    """
    # Split by paragraph (two or more newlines)
    paragraphs = [p.strip() for p in re.split(r"\n\s*\n+", text.strip()) if p.strip()]

    chunks = []

    for paragraph in paragraphs:
        paragraph = paragraph.strip()
        if not paragraph:
            continue

        # Split by sentence boundaries (period, question mark, exclamation mark followed by space),
        # excluding common abbreviations like Mr., Mrs., Dr., etc. and single capital letters like F.
        pattern = r"(?<!Mr\.)(?<!Mrs\.)(?<!Ms\.)(?<!Dr\.)(?<!Prof\.)(?<!Sr\.)(?<!Jr\.)(?<!Ph\.D\.)(?<!etc\.)(?<!e\.g\.)(?<!i\.e\.)(?<!vs\.)(?<!Inc\.)(?<!Ltd\.)(?<!Co\.)(?<!Corp\.)(?<!St\.)(?<!Ave\.)(?<!Blvd\.)(?<!\b[A-Z]\.)(?<=[.!?])\s+"
        sentences = re.split(pattern, paragraph)

        current_chunk = ""

        for sentence in sentences:
            if len(current_chunk) + len(sentence) + 1 <= max_len:
                current_chunk += (" " if current_chunk else "") + sentence
            else:
                if current_chunk:
                    chunks.append(current_chunk.strip())
                current_chunk = sentence

        if current_chunk:
            chunks.append(current_chunk.strip())

    return chunks

py/pyproject.toml (new file, 20 lines)

[project]
name = "tts-onnx"
version = "1.0.0"
description = "TTS ONNX Inference"
requires-python = ">=3.10"
dependencies = [
    "onnxruntime==1.23.1",
    "numpy>=1.26.0",
    "soundfile>=0.12.1",
    "librosa>=0.10.0",
    "PyYAML>=6.0",
]

[tool.setuptools]
py-modules = []

[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"

py/requirements.txt (new file, 5 lines)

onnxruntime==1.23.1
numpy>=1.26.0
soundfile>=0.12.1
librosa>=0.10.0
PyYAML>=6.0

py/uv.lock (generated, 1142 lines; diff suppressed because it is too large)