* A `dataset.jsonl` file with one line per utterance (JSON objects); a sample record is shown after this list
    * `phoneme_ids` (required)
        * List of ids for each utterance phoneme (0 <= id < `num_symbols`)
    * `audio_norm_path` (required)
        * Absolute path to [normalized audio](https://github.com/rhasspy/piper/tree/master/src/python/piper_train/norm_audio) file (`.pt`)
    * `audio_spec_path` (required)
        * Absolute path to [audio spectrogram](https://github.com/rhasspy/piper/blob/fda64e7a5104810a24eb102b880fc5c2ac596a38/src/python/piper_train/vits/mel_processing.py#L40) file (`.pt`)
    * `speaker_id` (required for multi-speaker)
        * Id of the utterance's speaker (0 <= id < `num_speakers`)
    * `audio_path`
        * Absolute path to original audio file
    * `text`
        * Original text of utterance before phonemization
    * `phonemes`
        * Phonemes from utterance text before converting to ids
    * `speaker`
        * Name of utterance speaker (from `speaker_id_map`)
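For reference, here is one way to inspect a record, along with a hypothetical example of what one looks like (all values below are illustrative, not from a real dataset):

```sh
# Pretty-print the first utterance record (each record occupies exactly one line in the file)
head -n 1 /path/to/training_dir/dataset.jsonl | python3 -m json.tool

# Hypothetical output:
# {
#     "phoneme_ids": [1, 24, 0, 31, 2],
#     "audio_norm_path": "/path/to/training_dir/cache/1234.pt",
#     "audio_spec_path": "/path/to/training_dir/cache/1234.spec.pt",
#     "speaker_id": 0,
#     "audio_path": "/path/to/dataset_dir/wav/1234.wav",
#     "text": "It rained all morning.",
#     "phonemes": ["ɪ", "t", "r", "eɪ", "n", "d"],
#     "speaker": "alice"
# }
```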
### Dataset Format
The pre-processing script expects data to be a directory with:
* `metadata.csv` - CSV file with text, audio filenames, and speaker names
* `wav/` - directory with audio files
The `metadata.csv` file uses `|` as a delimiter and has two or three columns, depending on whether the dataset has a single speaker or multiple speakers.
There is no header row.
For single speaker datasets:
```csv
id|text
```
where `id` is the name of the WAV file in the `wav` directory. For example, an `id` of `1234` means that `wav/1234.wav` should exist.
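For example, a few rows of a single-speaker `metadata.csv` could look like this (file ids and text are made up):

```sh
# Write two illustrative rows; wav/1234.wav and wav/1235.wav must then exist
cat > /path/to/dataset_dir/metadata.csv << 'EOF'
1234|It rained all morning.
1235|The train was late again.
EOF
```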
For multi-speaker datasets:
```csv
id|speaker|text
```
where `speaker` is the name of the utterance's speaker. Speaker ids will automatically be assigned based on the number of utterances per speaker (speaker id 0 has the most utterances).
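A multi-speaker `metadata.csv` follows the same pattern with the extra column (speaker names and text are again made up; here `alice` has the most utterances, so she would receive speaker id 0):

```sh
cat > /path/to/dataset_dir/metadata.csv << 'EOF'
1234|alice|It rained all morning.
1235|alice|The train was late again.
2001|bob|Good morning, everyone.
EOF
```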
### Pre-processing
An example of pre-processing a single speaker dataset:
```sh
python3 -m piper_train.preprocess \
    --language en-us \
    --input-dir /path/to/dataset_dir/ \
    --output-dir /path/to/training_dir/ \
    --dataset-format ljspeech \
    --single-speaker \
    --sample-rate 22050
```
The `--language` argument refers to an [espeak-ng voice](https://github.com/espeak-ng/espeak-ng/) by default, such as `de` for German.
To pre-process a multi-speaker dataset, remove the `--single-speaker` flag and ensure that your dataset has three columns: `id|speaker|text`.
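For example (the same invocation as above with `--single-speaker` removed):

```sh
python3 -m piper_train.preprocess \
    --language en-us \
    --input-dir /path/to/dataset_dir/ \
    --output-dir /path/to/training_dir/ \
    --dataset-format ljspeech \
    --sample-rate 22050
```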
Verify the number of speakers in the generated `config.json` file before proceeding.
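One quick way to check (a sketch, assuming `num_speakers` is a top-level key in the generated `config.json`):

```sh
python3 -c 'import json; print(json.load(open("/path/to/training_dir/config.json"))["num_speakers"])'
```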
## Training a Model
Once you have a `config.json`, `dataset.jsonl`, and audio files (`.pt`) from pre-processing, you can begin the training process with `python3 -m piper_train`.
For most cases, you should fine-tune from [an existing model](https://huggingface.co/datasets/rhasspy/piper-checkpoints/tree/main). The model must have the same audio quality and sample rate, but does not necessarily need to be in the same language.
It is **highly recommended** to train with the following `Dockerfile`:
```dockerfile
FROM nvcr.io/nvidia/pytorch:22.03-py3

RUN pip3 install \
    'pytorch-lightning'

ENV NUMBA_CACHE_DIR=.numba_cache
```
As an example, we will fine-tune the [medium quality lessac voice](https://huggingface.co/datasets/rhasspy/piper-checkpoints/tree/main/en/en_US/lessac/medium). Download the `.ckpt` file and run the following command in your training environment:
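The sketch below shows a typical fine-tuning invocation; the paths are placeholders, and you may need to adjust the flags for your hardware and dataset:

```sh
python3 -m piper_train \
    --dataset-dir /path/to/training_dir/ \
    --accelerator gpu \
    --devices 1 \
    --batch-size 32 \
    --validation-split 0.0 \
    --num-test-examples 0 \
    --max_epochs 10000 \
    --resume_from_checkpoint /path/to/lessac-medium.ckpt \
    --checkpoint-epochs 1 \
    --precision 32
```

Add `--max-phoneme-ids <N>` to this command if you run out of GPU memory (see the batch size discussion below).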
Use `--quality high` to train a [larger voice model](https://github.com/rhasspy/piper/blob/master/src/python/piper_train/vits/config.py#L45) (sounds better, but is much slower).
You can adjust the validation split (e.g., 0.05 for 5%) and the number of test examples for your specific dataset. For fine-tuning, both are often set to 0 because the target dataset is very small.
Batch size can be tricky to get right. It depends on the size of your GPU's vRAM, the model's quality/size, and the length of the longest sentence in your dataset. The `--max-phoneme-ids <N>` argument to `piper_train` will drop sentences that have more than `N` phoneme ids. In practice, using `--batch-size 32` and `--max-phoneme-ids 400` will work for 24 GB of vRAM (RTX 3090/4090).
### Multi-Speaker Fine-Tuning
If you're training a multi-speaker model, use `--resume_from_single_speaker_checkpoint` instead of `--resume_from_checkpoint`. This will be *much* faster than training your multi-speaker model from scratch.
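A sketch of the corresponding invocation (same placeholders as the single-speaker example above):

```sh
python3 -m piper_train \
    --dataset-dir /path/to/training_dir/ \
    --accelerator gpu \
    --devices 1 \
    --batch-size 32 \
    --validation-split 0.0 \
    --num-test-examples 0 \
    --max_epochs 10000 \
    --resume_from_single_speaker_checkpoint /path/to/single-speaker.ckpt \
    --checkpoint-epochs 1 \
    --precision 32
```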
### Testing
To test your voice during training, you can use [these test sentences](https://github.com/rhasspy/piper/tree/master/etc/test_sentences) or generate your own with [piper-phonemize](https://github.com/rhasspy/piper-phonemize/). Run the following command to generate audio files:
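A sketch of such a command (flag names are assumptions based on `piper_train.infer`'s options; it reads JSONL utterances from stdin and writes one WAV file per line to the output directory):

```sh
cat /path/to/test_sentences.jsonl | \
    python3 -m piper_train.infer \
        --sample-rate 22050 \
        --checkpoint /path/to/checkpoint.ckpt \
        --output-dir /path/to/output_dir
```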
The input format to `piper_train.infer` is the same as `dataset.jsonl`: one line of JSON per utterance with `phoneme_ids` and `speaker_id` (multi-speaker only). Generate your own test file with [piper-phonemize](https://github.com/rhasspy/piper-phonemize/):
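A minimal hand-written test file has the same shape (the phoneme id values below are illustrative only, not real encodings):

```sh
cat > /path/to/test_sentences.jsonl << 'EOF'
{"phoneme_ids": [1, 15, 22, 7, 2]}
{"phoneme_ids": [1, 9, 31, 4, 2], "speaker_id": 0}
EOF
```

You can also monitor training with TensorBoard, pointed at the log directory that PyTorch Lightning creates inside your training directory (a sketch, assuming the default `lightning_logs` location):

```sh
tensorboard --logdir /path/to/training_dir/lightning_logs
```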
Click on the scalars tab and look at both `loss_disc_all` and `loss_gen_all`. In general, the model is "done" when `loss_disc_all` levels off. We've found that 2000 epochs is usually good for models trained from scratch, and that an additional 1000 epochs is usually enough when fine-tuning.
## Exporting a Model
When your model is finished training, export it to ONNX with:
```sh
python3 -m piper_train.export_onnx \
    /path/to/model.ckpt \
    /path/to/model.onnx

cp /path/to/training_dir/config.json \
   /path/to/model.onnx.json
```
The [export script](https://github.com/rhasspy/piper-samples/blob/master/_script/export.sh) does additional optimization of the model with [onnx-simplifier](https://github.com/daquexian/onnx-simplifier).
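To apply a similar optimization by hand, onnx-simplifier ships an `onnxsim` command line tool (a sketch; install it first with `pip3 install onnxsim`):

```sh
onnxsim /path/to/model.onnx /path/to/model.simplified.onnx
```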
If the export is successful, you can now use your voice with Piper:
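For example (assuming the `piper` binary is on your `PATH`; `--model` and `--output_file` are the options shown in Piper's README):

```sh
echo 'This is a test sentence.' | \
    piper --model /path/to/model.onnx --output_file test.wav
```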