A Simulcast System for Live Streaming and Virtual Avatar Concerts

  • Received : 2023.02.18
  • Accepted : 2023.04.03
  • Published : 2023.06.01

Abstract

Due to the recent COVID-19 pandemic, many performances have been conducted using computer-mediated communication technologies. Many took the form of online concerts built on live streaming, and some artists held virtual avatar concerts on virtual reality platforms to provide greater immersion and a wider variety of scenes and effects. Conventional virtual avatar concerts typically require performers to wear motion capture equipment so their movements can be tracked in real time, which limits performers' choice of outfits and makes it difficult to run a virtual concert and an online concert as a single performance. To overcome these limitations, we propose a simulcast system design for live streaming and virtual avatar concerts. The proposed system tracks performers' movements with RGB-D sensors, allowing them to wear the costumes they want. We implemented a prototype of the proposed system using three Microsoft Azure Kinects and one iPhone and measured its performance. We validated the prototype by conducting a test performance with Kyung-rok Kim, a well-known Korean singer from the group V.O.S. Finally, based on the limitations identified in this study, we suggest future research directions toward a virtual avatar concert system that allows greater flexibility and creativity in performers' outfits and movements. We expect the proposed system to improve the efficiency of concert production by enabling a single concert to be broadcast simultaneously across multiple platforms, which in turn can help attract a larger audience and increase the concert's exposure and reach.
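The abstract does not detail how estimates from several RGB-D sensors are combined, so the following is only a minimal, hypothetical Python sketch of the general idea: each calibrated sensor's per-joint positions are transformed into a shared world frame and merged by a confidence-weighted average. The function names, the 32-joint count (matching Azure Kinect body tracking), and the weighting scheme are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Azure Kinect body tracking reports 32 joints per skeleton.
NUM_JOINTS = 32

def to_world(skeleton_cam, extrinsic):
    """Transform per-joint positions from one sensor's camera frame to a
    shared world frame, using a 4x4 extrinsic matrix from calibration."""
    homo = np.hstack([skeleton_cam, np.ones((len(skeleton_cam), 1))])
    return (homo @ extrinsic.T)[:, :3]

def fuse_skeletons(skeletons, confidences):
    """Confidence-weighted average of per-joint positions from N sensors.

    skeletons  : (N, J, 3) joint positions, already in the world frame
    confidences: (N, J) per-joint confidence in [0, 1]; occluded joints ~0
    returns    : (J, 3) fused skeleton
    """
    w = confidences[..., None]          # (N, J, 1) broadcastable weights
    total = w.sum(axis=0)               # (J, 1) summed weight per joint
    total[total == 0.0] = 1.0           # avoid division by zero
    return (skeletons * w).sum(axis=0) / total

# Example with three sensors and identity extrinsics (illustration only).
sensors = [np.random.rand(NUM_JOINTS, 3) for _ in range(3)]
extrinsics = [np.eye(4)] * 3
world = np.stack([to_world(s, E) for s, E in zip(sensors, extrinsics)])
conf = np.random.rand(3, NUM_JOINTS)
fused = fuse_skeletons(world, conf)     # (32, 3) merged skeleton
```

A confidence-weighted average degrades gracefully under occlusion: when a joint is hidden from one viewpoint, that sensor's weight drops toward zero and the remaining sensors dominate, which is what lets a multi-sensor RGB-D setup replace worn motion capture equipment.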

Acknowledgement

We thank CEO Myung-gu Ji, Director Jung-woon Park, and the staff of IOFX for their help in building the virtual space for this performance. This research was conducted as part of the 2022 Culture, Sports and Tourism R&D Program of the Ministry of Culture, Sports and Tourism and the Korea Creative Content Agency (Project Name: Development of a multi-channel content production platform technology enabling real-time free-viewpoint remote viewing over 5G, Project Number: R2020040275, Contribution Rate: 100%).
