Text-to-speech with linear spectrogram prediction for quality and speed improvement

  • Yoon, Hyebin (Department of English Language and Literature, Korea University)
  • Received : 2021.08.01
  • Accepted : 2021.09.19
  • Published : 2021.09.30

Abstract

Most neural-network-based speech synthesis models rely on neural vocoders to convert mel-scaled spectrograms into high-quality, human-like speech. However, neural vocoders combined with mel-scaled spectrogram prediction models demand considerable memory and time during training and suffer from slow inference in environments without a GPU. Linear spectrogram prediction models avoid this problem because they use no neural vocoder, but they produce speech of low quality. As a solution, this paper proposes a Tacotron 2 and Transformer-based linear spectrogram prediction model that produces high-quality speech without a neural vocoder. Experiments suggest that this model can serve as the foundation of a high-quality text-to-speech model with fast inference speed.

Most neural-network-based speech synthesis models use a vocoder model to generate natural, high-quality speech: the vocoder is combined with a mel-spectrogram prediction model and converts the predicted mel spectrogram into a waveform. However, vocoder models require large amounts of memory and long training times, and in real service environments without a GPU, synthesis is slow. Existing linear spectrogram prediction models do not use a vocoder and therefore avoid this problem, but they fail to generate high-quality speech. This paper presents a Tacotron 2 and Transformer-based linear spectrogram prediction model that generates high-quality speech without a neural vocoder. In experiments measuring quality and speed, the model was slightly superior to vocoder-based models on both counts, and it is therefore expected to serve as a stepping stone for research on speech synthesis models that generate high-quality speech quickly.
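
The abstract's key design point is that a linear (full-band magnitude) spectrogram, unlike a mel spectrogram, retains enough spectral detail to be inverted to a waveform by a signal-processing method alone; the Griffin & Lim (1984) and fast Griffin-Lim entries in the reference list suggest that algorithm is the intended inversion step. Below is a minimal sketch of that step using librosa; the STFT settings and iteration count are illustrative assumptions, not the paper's actual configuration.

    # Minimal sketch: recovering audio from a linear-magnitude spectrogram
    # with Griffin-Lim, i.e., without any neural vocoder. All parameter
    # values here are assumptions for illustration.
    import numpy as np
    import librosa

    N_FFT = 1024  # assumed analysis window size
    HOP = 256     # assumed hop length

    def linear_spec_to_audio(magnitude: np.ndarray, n_iter: int = 60) -> np.ndarray:
        # `magnitude` has shape (1 + N_FFT // 2, frames): the kind of output
        # a linear spectrogram prediction model would emit.
        return librosa.griffinlim(magnitude, n_iter=n_iter,
                                  hop_length=HOP, win_length=N_FFT)

    # Round trip on a known signal to check the reconstruction.
    y, sr = librosa.load(librosa.ex("trumpet"))
    S = np.abs(librosa.stft(y, n_fft=N_FFT, hop_length=HOP))
    y_hat = linear_spec_to_audio(S)

A mel spectrogram, by contrast, discards phase and compresses the frequency axis, so recovering a waveform from it generally requires a trained neural vocoder such as WaveNet or WaveGlow; that is the memory and speed cost the abstract describes.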

References

  1. Arjovsky, M., Chintala, S., & Bottou, L. (2017). Wasserstein GAN. Retrieved from https://arxiv.org/abs/1701.07875
  2. Chen, J., Tan, X., Luan, J., Qin, T., & Liu, T. Y. (2020). HiFiSinger: Towards high-fidelity neural singing voice synthesis. Retrieved from https://arxiv.org/abs/2009.01776
  3. Griffin, D., & Lim, J. (1984). Signal estimation from modified short-time Fourier transform. IEEE Transactions on Acoustics, Speech, and Signal Processing, 32(2), 236-243. https://doi.org/10.1109/TASSP.1984.1164317
  4. Hsu, P., Wang, C., Liu, A. T., & Lee, H. (2020). Towards robust neural vocoding for speech generation: A survey. Retrieved from https://arxiv.org/abs/1912.02461
  5. Kumar, K., Kumar, R., de Boissiere, T., Gestin, L., Teoh, W. Z., Sotelo, J., de Brebisson, A., ... Courville, A. (2019). MelGAN: Generative adversarial networks for conditional waveform synthesis. Retrieved from https://arxiv.org/abs/1910.06711
  6. Li, N., Liu, S., Liu, Y., Zhao, S., Liu, M., & Zhou, M. (2019). Neural speech synthesis with transformer network. Retrieved from https://arxiv.org/abs/1809.08895
  7. Perraudin, N., Balazs, P., & Sondergaard, P. L. (2013, October). A fast Griffin-Lim algorithm. Proceedings of the 2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (pp. 1-4). New Paltz, NY.
  8. Prenger, R., Valle, R., & Catanzaro, B. (2018). WaveGlow: A flow-based generative network for speech synthesis. Retrieved from https://arxiv.org/abs/1811.00002
  9. Ren, Y., Ruan, Y., Tan, X., Qin, T., Zhao, S., Zhao, Z., & Liu, T. Y. (2019, December). FastSpeech: Fast, robust and controllable text to speech. Proceedings of the 33rd Annual Conference on Neural Information Processing Systems (pp. 3156-3164). Vancouver, BC.
  10. Sharma, A., Kumar, P., Maddukuri, V., Madamshetti, N., Kishore, K. G., Kavuru, S. S. S., Raman, B., ... Roy, P. P. (2020). Fast Griffin Lim based waveform generation strategy for text-to-speech synthesis. Multimedia Tools and Applications, 79(41), 30205-30233. https://doi.org/10.1007/s11042-020-09321-7
  11. Shen, J., Pang, R., Weiss, R. J., Schuster, M., Jaitly, N., Yang, Z., Chen, Z., ... Wu, Y. (2018, April). Natural TTS synthesis by conditioning WaveNet on Mel spectrogram predictions. Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 4779-4783). Calgary, AB.
  12. Song, W., Xu, G., Zhang, Z., Zhang, C., He, X., & Zhou, B. (2020, October). Efficient WaveGlow: An improved WaveGlow vocoder with enhanced speed. Proceedings of the 21st Annual Conference of the International Speech Communication Association (pp. 225-229). Shanghai, China.
  13. Tachibana, H., Uenoyama, K., & Aihara, S. (2018, April). Efficiently trainable text-to-speech system based on deep convolutional networks with guided attention. Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 4784-4788). Calgary, AB.
  14. van den Oord, A., Dieleman, S., Zen, H., Simonyan, K., Vinyals, O., Graves, A., Kalchbrenner, N., ... Kavukcuoglu, K. (2016). WaveNet: A generative model for raw audio. Retrieved from https://arxiv.org/abs/1609.03499
  15. Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., ... Polosukhin, I. (2017). Attention is all you need. Retrieved from https://arxiv.org/abs/1706.03762
  16. Wang, Y., Skerry-Ryan, R. J., Stanton, D., Wu, Y., Weiss, R. J., Jaitly, N., Yang, Z., ... Saurous, R. A. (2017, August). Tacotron: Towards end-to-end speech synthesis. Proceedings of the 18th Annual Conference of the International Speech Communication Association (pp. 4006-4010). Stockholm, Sweden.
  17. Zhu, X., Beauregard, G. T., & Wyse, L. (2006, July). Real-time iterative spectrum inversion with look-ahead. Proceedings of the 2006 IEEE International Conference on Multimedia and Expo (pp. 229-232). Toronto, ON.