Acknowledgement
We would like to thank CEO 지명구, Director 박정운, and the rest of the IOFX staff for their help in building the virtual space for this performance. This research was conducted as part of the 2022 Culture, Sports and Tourism R&D Program funded by the Ministry of Culture, Sports and Tourism and the Korea Creative Content Agency (Project Title: Development of a production platform technology for multi-channel content enabling 5G-based real-time free-viewpoint remote viewing; Project Number: R2020040275; Contribution Rate: 100%).
References
- Beyondlive. [Online]. Available: https://beyondlive.com
- Lakus. [Online]. Available: https://www.lakus.live
- J. Choi and J. Lee, "Analysis of chat interactions in online idol performances," in Proceedings of HCI Korea 2023, 2023, pp. 291-296.
- J. Bieber. (2021) Wave Presents: Justin Bieber - an interactive virtual experience. [Online]. Available: https://www.youtube.com/watch?v=UAhGvhvcoyY
- B. Carlton. (2020) John Legend performs on Wave to raise awareness towards mass incarceration. [Online]. Available: https://vrscout.com/news/john-legend-live-vr-concert-wave/
- J. Aswad. (2021) Justin Bieber to stage interactive virtual concert with Wave. [Online]. Available: https://variety.com/2021/digital/news/justin-bieber-interactive-virtual-concert-wave-1235108070/
- TheWaveXR. (2021) Behind the Battle - Pentakill: The Lost Chapter interactive album experience. [Online]. Available: https://www.youtube.com/watch?v=H-qNxQPvGWU
- J. Lanier, "The sound of one hand," Whole Earth Review, no. 79, pp. 30-35, 1993.
- C. W. Sul, K. C. Lee, and K. Wohn, "Virtual stage: a location-based karaoke system," IEEE MultiMedia, vol. 5, no. 2, pp. 42-52, 1998. https://doi.org/10.1109/93.682524
- W. S. Meador, T. J. Rogers, K. O'Neal, E. Kurt, and C. Cunningham, "Mixing dance realities: Collaborative development of live-motion capture in a performing arts environment," Comput. Entertain., vol. 2, no. 2, p. 12, 2004.
- 류종화. (2012) IU to hold an exclusive live concert in Aion [in Korean]. [Online]. Available: https://www.gamemeca.com/view.php?gid=257268
- H. McIntyre. (2021) BTS's latest 'Bang Bang Con' was their biggest yet. [Online]. Available: https://www.forbes.com/sites/hughmcintyre/2021/04/19/btss-latest-bang-bang-con-was-their-biggest-yet/?sh=388de91f2977
- R. Aniftos. (2020) Blackpink announces 'The Show' global livestream concert experience. [Online]. Available: https://www.billboard.com/music/pop/blackpink-the-show-global-livestream-concert-9493117/
- Billboard. (2020) Travis Scott's 'Fortnite' in-game concert 'Astronomical' garners 12.3M viewers - Billboard News. [Online]. Available: https://www.billboard.com/video/travis-scotts-fortnite-in-game-concert-astronomical-garners-12-3m-viewers-billboard-news/
- Z. Zhang, "Microsoft kinect sensor and its effect," IEEE MultiMedia, vol. 19, no. 2, pp. 4-10, 2012. https://doi.org/10.1109/MMUL.2012.24
- Z. Marquardt, J. Beira, N. Em, I. Paiva, and S. Kox, "Super Mirror: A Kinect interface for ballet dancers," in CHI '12 Extended Abstracts on Human Factors in Computing Systems, ser. CHI EA '12. New York, NY, USA: Association for Computing Machinery, 2012, pp. 1619-1624.
- Q. Wang, P. Turaga, G. Coleman, and T. Ingalls, "SOMAtech: An exploratory interface for altering movement habits," in CHI '14 Extended Abstracts on Human Factors in Computing Systems, ser. CHI EA '14. New York, NY, USA: Association for Computing Machinery, 2014, pp. 1765-1770.
- D. G. Rodrigues, E. Grenader, F. d. S. Nos, M. d. S. Dall'Agnol, T. E. Hansen, and N. Weibel, "MotionDraw: A tool for enhancing art and performance using Kinect," in CHI '13 Extended Abstracts on Human Factors in Computing Systems, ser. CHI EA '13. New York, NY, USA: Association for Computing Machinery, 2013, pp. 1197-1202.
- S. I. Park, "Motion correction captured by kinect based on synchronized motion database," Journal of the Korea Computer Graphics Society, vol. 23, no. 2, pp. 41-47, 2017. https://doi.org/10.15701/KCGS.2017.23.2.41
- S.-h. Lee, D.-W. Lee, K. Jun, W. Lee, and M. S. Kim, "Markerless 3D skeleton tracking algorithm by merging multiple inaccurate skeleton data from multiple RGB-D sensors," Sensors, vol. 22, no. 9, p. 3155, 2022.
- J. Kim, D. Kang, Y. Lee, and T. Kwon, "Real-time interactive animation system for low-priced motion capture sensors," Journal of the Korea Computer Graphics Society, vol. 28, no. 2, pp. 29-41, 2022. https://doi.org/10.15701/kcgs.2022.28.2.29
- H. W. Byun, "Interactive vfx system for tv virtual studio," Journal of the Korea Computer Graphics Society, vol. 21, no. 5, pp. 21-27, 2015. https://doi.org/10.15701/KCGS.2015.21.5.21
- Apple Developer. Face tracking with ARKit. [Online]. Available: https://developer.apple.com/videos/play/tech-talks/601/
- M. T. Tang, V. L. Zhu, and V. Popescu, "AlterEcho: Loose avatar-streamer coupling for expressive VTubing," in 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2021, pp. 128-137.
- Z. Cao, T. Simon, S.-E. Wei, and Y. Sheikh, "Realtime multi-person 2D pose estimation using part affinity fields," in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 1302-1310.
- Y. Wang, S. Hou, B. Ning, and W. Liang, "Photo standout: Photography with virtual character," in Proceedings of the 28th ACM International Conference on Multimedia, ser. MM '20. Association for Computing Machinery, 2020, pp. 781-788.
- K. Umetsu, N. Kubota, and J. Woo, "Effects of the audience robot on robot interactive theater considering the state of audiences," in 2019 IEEE Symposium Series on Computational Intelligence (SSCI), 2019, pp. 1430-1434.
- W. Song, X. Wang, Y. Gao, A. Hao, and X. Hou, "Real-time expressive avatar animation generation based on monocular videos," in 2022 IEEE International Symposium on Mixed and Augmented Reality Adjunct (ISMAR-Adjunct), 2022, pp. 429-434.
- M. Tolgyessy, M. Dekan, and L. Chovanec, "Skeleton tracking accuracy and precision evaluation of Kinect v1, Kinect v2, and the Azure Kinect," Applied Sciences, vol. 11, no. 12, 2021. [Online]. Available: https://www.mdpi.com/2076-3417/11/12/5756
- Opus codec. [Online]. Available: https://opus-codec.org
- gRPC. [Online]. Available: https://grpc.io
- M. Jang, S. Jung, and J. Noh, "Speech animation synthesis based on a Korean co-articulation model," Journal of the Korea Computer Graphics Society, vol. 26, no. 3, pp. 49-59, 2020. https://doi.org/10.15701/kcgs.2020.26.3.49
- T. Karras, T. Aila, S. Laine, A. Herva, and J. Lehtinen, "Audio-driven facial animation by joint end-to-end learning of pose and emotion," ACM Trans. Graph., vol. 36, no. 4, Jul. 2017. https://doi.org/10.1145/3072959.3073658