• Title/Summary/Keyword: performance video

A Study on the Transmission of 'Soeburi-Song' in Ulsan (울산쇠부리소리의 전승 양상)

  • Yang, Young-Jin
    • (The) Research of the performance art and culture
    • /
    • no.37
    • /
    • pp.157-186
    • /
    • 2018
  • The Ulsan Soeburi Song was reenacted in the 1980s based on the testimony and songs, recorded in August 1981, of the late Choi Jae-man (d. 1987), the last blacksmith of the iron production plant at Dalcheon-dong, Ulsan. The purpose of this study is to analyze the Soeburi Song from a musical perspective based on 13 sources, including the 1981 video, and to identify how it has changed in the course of transmission. The results are summarized as follows. In the 2017 Soeburi Song data, the instrumentation consists of two kkwaenggwari (leading and following small gongs), two jing (large gongs), four buk (drums), four janggu (double-headed drums), and one taepyongso (Korean shawm), and five jangdan (rhythmic cycles) are used: Jilgut, Jajinmori, Dadeuraegi, Deotbaegi, and Jajin Deotbaegi. The vocal parts are sung either to the triplet-based Deotbaegi and Jajin Deotbaegi rhythms or without accompaniment. The scale is mostly the three notes Mi-La-do or the four notes Mi-La-do-re, the range does not exceed one octave, and every cadence ends on La. Observation of Soeburi Song performances from the 1981 excavation to the present reveals four major changes. First, the composition of the music has been differentiated into slow and fast ('long' and 'Jajin') sections, and new pieces have been added. Second, since the 1980s reenactment the singing has been organized as a single lead singer with multiple responding singers, with a single responding singer specified from time to time; in addition, a piece performed in 2013 became the foundation of a later one. Third, a melodic change was observed: the beat structure remains triplet-based throughout, but the tempo has slowed, the Mi-La-do skeleton has been supplemented with a high re and a low sol, and the characteristics of Menari-tori (the mode of the eastern provinces of the Korean peninsula) have become clear. Lastly, the four percussion instruments kkwaenggwari, jing, janggu, and buk are used throughout, and depending on the performance, sogo (hand drum), taepyongso, and yoryeong (bell) are added; Jajinmori, Dadeuraegi, Deotbaegi, and Jajin Deotbaegi jangdan were played from the beginning, and the Jilgut jangdan was added later. These results confirm that the simple form documented at the first excavation has since been transformed into a men's labor song whose purpose has changed, and that the music has become more performance-oriented, developing into a staged performing art.

H.264/SVC Spatial Scalability Coding based Terrestrial Multi-channel Hybrid HD Broadcasting Service Framework and Performance Analysis on H.264/SVC (H.264/SVC 공간 계위 부호화 기반 지상파 다채널 하이브리드 고화질 방송 서비스 프레임워크 및 H.264/SVC 부호화 성능 평가)

  • Kim, Dae-Eun;Lee, Bum-Shik;Kim, Mun-Churl;Kim, Byung-Sun;Hahm, Sang-Jin;Lee, Keun-Sik
    • Journal of Broadcast Engineering
    • /
    • v.17 no.4
    • /
    • pp.640-658
    • /
    • 2012
  • One of the existing terrestrial multi-channel DTV service frameworks, called KoreaView, provides four programs within a single 6 MHz frequency band: one MPEG-2 HD video and three H.264/AVC SD videos. However, the three additional SD videos cannot provide sufficient quality because of their reduced spatial resolution and low target bitrates. In this paper, we propose a framework, called a terrestrial multi-channel high-quality hybrid DTV service, to overcome this weakness of the KoreaView service. In the proposed framework, the three additional SD videos are encoded as an H.264/SVC spatial base layer, which is compliant with H.264/AVC, and are delivered over the broadcasting network, while the corresponding three HD videos are encoded as an H.264/SVC spatial enhancement layer and transmitted over broadband networks such as the Internet, allowing users to receive the three additional channels with a better quality of experience. To verify the effectiveness of the proposed framework, experimental results are provided for real video contents used in DTV services. First, when the SD sequences are encoded in the H.264/SVC spatial base layer at a target bitrate of 1.5 Mbps, the resulting PSNR values range from 34.5 dB to 42.9 dB, a sufficient level of service quality. It is also noted that 690 kbps to 8,200 kbps are needed for the HD test sequences encoded in the H.264/SVC spatial enhancement layer to reach PSNR values similar to those of the same HD sequences encoded by MPEG-2 at a target bitrate of 12 Mbps.
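
As a reference for how PSNR figures such as those above are typically computed (an editorial sketch, not the authors' evaluation code), the following Python/NumPy function measures PSNR between an original frame and its decoded counterpart; the frame contents here are synthetic placeholders.

```python
import numpy as np

def psnr(original: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two equally sized frames."""
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Hypothetical usage: in practice the frames would come from decoded YUV files;
# here a noisy copy of a random frame stands in for the decoded output.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (576, 720)).astype(np.uint8)
dec = np.clip(ref + rng.normal(0.0, 2.0, ref.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, dec):.1f} dB")
```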

A Study on Facial Expression Acting in Genre Drama - with Focus on K-Drama Voice2 - (장르 드라마에서의 표정연기연구 - 드라마 '보이스2'를 중심으로 -)

  • Oh, Youn-Hong
    • Journal of Korea Entertainment Industry Association
    • /
    • v.13 no.8
    • /
    • pp.313-323
    • /
    • 2019
  • For actors on screen, facial expression acting can easily become 'forced expression' or 'over-acting', while too much self-restraint results in 'flat acting' with insufficient emotion. Raising questions about such facial expression acting methods, this study analyzed the facial expression acting of actors in genre dramas with strong commercial aspects. In conclusion, the facial expression acting of actors in genre dramas was carried out in a typical way: within the visual conventions of screen acting, the aesthetic standard has become the most important criterion for facial expression acting. In genre dramas, the emotions of the characters are often revealed in close-up shots. Within the close-up, the most important expressive medium of the 'zoomed-in face' is the pupil of the eye, and emotions are mostly expressed through the movements of the eyes and the surrounding muscles. The second most important expressive medium is the mouth: differences in how far it opens or closes convey diverse emotions together with the expression of the eyes. In addition, tension in the facial muscles greatly hinders the expression of emotion, and the movement of the facial muscles must be minimized so that excessive wrinkles do not form on the surface of the face. Facial expressions are not completed by muscle movement alone; ultimately, muscle movement is the result of emotion, and facial expression acting takes place after the emotion is felt. For this, the actor needs to go through the process of 'personalization' of a character, using psychological acting techniques of Stanislavsky such as 'emotional memory', 'concentration', and 'relaxation', and must understand the characteristics of close-up shots that visually reveal the inner world. The study also found that facial expression acting is reaction acting that provides important turning points in the unfolding of the narrative, and that the required method of facial expression and the size of the shot differ depending on whether the role is a main or supporting character.

A Study on 'Seungininsangmu' of Haejugwonbeon (<성인인상무>에 대한 연구)

  • Kim, Young-Hee;Kim, Kyung-Sook
    • (The) Research of the performance art and culture
    • /
    • no.35
    • /
    • pp.93-123
    • /
    • 2017
  • The Buddhist dance, considered the essence of Korean folk dance, has changed and developed over many years and has a profound relationship with Buddhism in terms of its origin, source, title, and costumes. Today the Buddhist dance is performed in two fixed types, the Jangsam dance and the Buk dance, but given its historicity and the various theories of its origin, it is presumed that there were diverse forms of Buddhist dance during the period of Japanese rule. It was around 1940 that Jang Yang-seon, the master of Haejugwonbeon, turned 'Seungininsangmu' into a staged work, which was handed down through Yang So-woon. The present study analyzed the video of 'Seungininsangmu' performed at the 'Performance in Memory of Yang So-woon' in 2010, with the following results. First, the dance has a clear message in its title and embodies an origination theory of the Buddhist dance which holds that it was created by a Buddhist monk who underwent agony and corruption during his ascetic practice and later returned to Buddhism. Second, the sequence of Jangsam dance - Buknori - Bara dance - Heoteun dance - Hoisimgok - Guiui presents the thematic consciousness of the dance clearly and in order. Finally, the dance combines various expressive methods according to the story and its development, including the Bara dance performed in Buddhist ceremonies, the Heoteun dance, which is strongly characterized by the individuality and spontaneity of folk dance, and Hoisimgok, a form of Buddhist music. These findings indicate that the dance reflects well the early-twentieth-century trend of staging the Buddhist dance and turning it into a choreographed work. Compared with the fixed types of Buddhist dance such as the Jangsam dance and Buk dance, 'Seungininsangmu' conveys in its content the meanings that the original Buddhist dance tried to express, and in its form reflects the diversity of combined Akgamu (music, song, and dance) and theatrical elements. The present study is significant in that it offers implications for the future-oriented development of the Buddhist dance.

Evaluation of Robustness of Deep Learning-Based Object Detection Models for Invertebrate Grazers Detection and Monitoring (조식동물 탐지 및 모니터링을 위한 딥러닝 기반 객체 탐지 모델의 강인성 평가)

  • Suho Bak;Heung-Min Kim;Tak-Young Kim;Jae-Young Lim;Seon Woong Jang
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.3
    • /
    • pp.297-309
    • /
    • 2023
  • The degradation of coastal ecosystems and fishery environments has been accelerating due to the recent proliferation of invertebrate grazers. To effectively monitor this phenomenon and implement preventive measures over extensive maritime areas, the adoption of remote sensing-based monitoring technology is imperative. In this study, we compared and analyzed the robustness of deep learning-based object detection models for detecting and monitoring invertebrate grazers in underwater videos. We constructed an image dataset of seven representative species of invertebrate grazers in the coastal waters of South Korea and used it to train the deep learning-based object detectors You Only Look Once (YOLO)v7 and YOLOv8. We evaluated the detection performance and speed of six YOLO models in total (YOLOv7, YOLOv7x, YOLOv8s, YOLOv8m, YOLOv8l, YOLOv8x) and conducted robustness evaluations considering various image distortions that may occur during underwater filming. The results showed that the YOLOv8 models achieved higher detection speeds (approximately 71 to 141 frames per second [FPS]) relative to their number of parameters. In terms of detection performance, the YOLOv8 models (mean average precision [mAP] 0.848 to 0.882) performed better than the YOLOv7 models (mAP 0.847 to 0.850). Regarding robustness, the YOLOv7 models were more robust to shape distortions, while the YOLOv8 models were relatively more robust to color distortions. Considering that shape distortions occur less frequently in underwater video recordings while color distortions are frequent in coastal areas, the YOLOv8 models are a valid choice for invertebrate grazer detection and monitoring in coastal waters.
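
As an illustration of the general training, evaluation, and detection workflow with the open-source Ultralytics YOLOv8 package (a minimal sketch, not the authors' code; the dataset config and video path are hypothetical placeholders):

```python
from ultralytics import YOLO  # pip install ultralytics

DATA_CFG = "grazers.yaml"        # hypothetical dataset config for the seven grazer classes
VIDEO = "underwater_survey.mp4"  # hypothetical underwater survey clip

# Train a small YOLOv8 variant on the custom dataset (epochs/imgsz are illustrative).
model = YOLO("yolov8s.pt")
model.train(data=DATA_CFG, epochs=100, imgsz=640)

# Validate; metrics.box.map is the COCO-style mAP50-95, comparable in spirit to
# the mAP figures cited in the abstract.
metrics = model.val()
print(metrics.box.map)

# Run detection on the underwater video, frame by frame.
for result in model.predict(source=VIDEO, stream=True, conf=0.25):
    print(len(result.boxes), "grazers detected in this frame")
```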

A Calibration-Free 14b 70MS/s 0.13um CMOS Pipeline A/D Converter with High-Matching 3-D Symmetric Capacitors (높은 정확도의 3차원 대칭 커패시터를 가진 보정기법을 사용하지 않는 14비트 70MS/s 0.13um CMOS 파이프라인 A/D 변환기)

  • Moon, Kyoung-Jun;Lee, Kyung-Hoon;Lee, Seung-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.43 no.12 s.354
    • /
    • pp.55-64
    • /
    • 2006
  • This work proposes a calibration-free 14b 70MS/s 0.13um CMOS ADC for high-performance integrated systems, such as WLAN and high-definition video systems, that simultaneously require high resolution, low power, and small size at high speed. The proposed ADC employs signal-insensitive 3-D fully symmetric layout techniques in the two MDACs to achieve high matching accuracy without any calibration. A three-stage pipeline architecture minimizes power consumption and chip area at the target resolution and sampling rate. The input SHA, with a controlled transconductance ratio between its two amplifier stages, simultaneously achieves high gain and high phase margin, and gate-bootstrapped sampling switches provide 14b input accuracy at the Nyquist frequency. A back-end sub-ranging flash ADC with open-loop offset cancellation and interpolation achieves 6b accuracy at 70MS/s. Low-noise current and voltage references are integrated on chip, with optional off-chip reference voltages. The prototype ADC, implemented in a 0.13um CMOS process, is based on a 0.35um minimum channel length for 2.5V applications. The measured DNL and INL are within 0.65LSB and 1.80LSB, respectively. The prototype ADC shows a maximum SNDR of 66dB and SFDR of 81dB, with a power consumption of 235mW at 70MS/s. The active die area is $3.3mm^2$.
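
As a point of reference (an editorial note, not a figure reported in the paper), the standard relation $ENOB = (SNDR - 1.76\,dB)/6.02$ applied to the measured peak SNDR gives $(66 - 1.76)/6.02 \approx 10.7$ effective bits.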

A Study on an Open/Closed Eye Detection Algorithm for Drowsy Driver Detection (운전자 졸음 검출을 위한 눈 개폐 검출 알고리즘 연구)

  • Kim, TaeHyeong;Lim, Woong;Sim, Donggyu
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.53 no.7
    • /
    • pp.67-77
    • /
    • 2016
  • In this paper, we propose an algorithm for open/closed eye detection based on the modified Hausdorff distance. The proposed algorithm consists of two parts: face detection and open/closed eye detection. To detect faces in an image, the Modified Census Transform (MCT) is employed, based on local structure characteristics that use relative pixel values within a fixed-size area. The eye coordinates are then found, and open/closed eyes are detected using the Modified Hausdorff Distance (MHD) in the detected face region. First, the face detection stage creates MCT images from various face images and extracts reference features offline by Principal Component Analysis (PCA). After the reference features are extracted, a face region is detected by comparing the features newly extracted from the input face image with the reference features using the Euclidean distance. The eye coordinates are then located, and open/closed eyes are detected in each eye region using MHD-based template matching. In the performance evaluation, the proposed algorithm achieved an average accuracy of 94.04% for open/closed eye detection on grayscale test video sequences at 30 FPS and $320{\times}180$ resolution.
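
The MHD used above is the standard modified Hausdorff distance of Dubuisson and Jain; a minimal NumPy sketch of that distance between two 2-D point sets (an editorial illustration, not the authors' implementation) is:

```python
import numpy as np

def directed_avg_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Average distance from each point in A to its nearest point in B."""
    diffs = A[:, None, :] - B[None, :, :]          # pairwise differences, shape (m, n, 2)
    dists = np.sqrt((diffs ** 2).sum(axis=2))      # pairwise Euclidean distances
    return dists.min(axis=1).mean()

def modified_hausdorff(A: np.ndarray, B: np.ndarray) -> float:
    """Modified Hausdorff distance (Dubuisson & Jain, 1994) between two point sets."""
    return max(directed_avg_distance(A, B), directed_avg_distance(B, A))

# Hypothetical usage: A and B would be edge-point coordinates from an eye template
# and from the detected eye region; lower values indicate a closer match.
A = np.array([[0, 0], [1, 0], [2, 1]], dtype=float)
B = np.array([[0, 1], [1, 1], [2, 2]], dtype=float)
print(modified_hausdorff(A, B))
```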

Amplitude Panning Algorithm for Virtual Sound Source Rendering in the Multichannel Loudspeaker System (다채널 스피커 환경에서 가상 음원을 생성하기 위한 레벨 패닝 알고리즘)

  • Jeon, Se-Woon;Park, Young-Cheol;Lee, Seok-Pil;Youn, Dae-Hee
    • The Journal of the Acoustical Society of Korea
    • /
    • v.30 no.4
    • /
    • pp.197-206
    • /
    • 2011
  • In this paper, we propose a virtual sound source panning algorithm for multichannel loudspeaker systems. Recently, high-definition (HD) and ultra-high-definition (UHD) video formats have been adopted for multimedia applications, providing higher resolution and a wider viewing angle. Audio formats accordingly need to generate a wider sound field and more immersive sound effects, but the conventional stereo system cannot satisfy the sound quality desired in the latest multimedia systems. Therefore, various multichannel systems that provide improved sound field generation have been proposed. In multichannel systems, conventional panning algorithms suffer from acoustic problems concerning the directivity and timbre of the virtual sound source. To solve these problems in arbitrarily positioned multichannel loudspeaker systems, we propose a virtual sound source panning algorithm using multiple vector bases with nonnegative amplitude panning gains. The proposed algorithm can be controlled by a gain control function to localize the virtual sound source accurately, and it is applicable to both symmetric and asymmetric loudspeaker layouts. Its sound localization performance is evaluated by subjective tests against conventional amplitude panning algorithms such as VBAP and MDAP in symmetric and asymmetric layouts.
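
For context on the VBAP baseline mentioned above, conventional pairwise 2-D VBAP obtains the amplitude gains by inverting the matrix of loudspeaker base vectors; a minimal sketch of that baseline (not the proposed multiple-vector-base method) is:

```python
import numpy as np

def vbap_2d_gains(source_az_deg: float, spk_az_deg: tuple) -> np.ndarray:
    """Pairwise 2-D VBAP: amplitude gains for a source panned between two loudspeakers."""
    def unit(az):
        a = np.deg2rad(az)
        return np.array([np.cos(a), np.sin(a)])

    p = unit(source_az_deg)                              # source direction vector
    L = np.column_stack([unit(a) for a in spk_az_deg])   # loudspeaker base vectors as columns
    g = np.linalg.solve(L, p)                            # solve p = L @ g
    g = np.clip(g, 0.0, None)                            # keep gains nonnegative
    return g / np.linalg.norm(g)                         # constant-power normalization

# Hypothetical usage: a source at 10 degrees panned between speakers at +30 and -30 degrees.
print(vbap_2d_gains(10.0, (30.0, -30.0)))
```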

Estimation of Fractional Frequency Offset for the Next-Generation Digital Broadcasting System (차세대 디지털 방송시스템을 위한 소수배 주파수 오프셋 추정)

  • Kim, Ho Jae;Kang, In-Woong;Seo, Jae Hyun;Kim, Heung Mook;Kim, Hyoung-Nam
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.41 no.11
    • /
    • pp.1364-1373
    • /
    • 2016
  • Ultra High Definition Television (UHDTV) has attracted much attention as one of the next-generation broadcasting services. For the commercialization of UHD broadcasting services, standardization groups including DVB (Digital Video Broadcasting) and ATSC (Advanced Television Systems Committee) have decided to adopt Orthogonal Frequency Division Multiplexing (OFDM) for signal transmission. However, when the carrier frequency is not properly synchronized at the receiver, inter-symbol interference (ISI) and inter-carrier interference (ICI) may occur. To avoid the performance degradation resulting from ISI or ICI, receivers should synchronize the carrier frequency by using preambles and pilot symbols. However, little has been published on frequency offset estimation methods for next-generation terrestrial broadcasting. In this respect, this paper proposes a method to estimate the timing and fractional frequency offsets for the ATSC 3.0 system using the preamble bootstrap symbol. The proposed detector estimates the fractional frequency offset by adding a complex-conjugate product to the conventional estimator, which can estimate only the timing offset.
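
For context, estimators of this family rely on a conjugate-product correlation over repeated samples; a minimal sketch of cyclic-prefix-based fractional CFO estimation for a generic OFDM symbol (an illustrative example, not the paper's bootstrap-based ATSC 3.0 detector) is:

```python
import numpy as np

def estimate_fractional_cfo(rx: np.ndarray, n_fft: int, cp_len: int) -> float:
    """Fractional carrier frequency offset (in subcarrier spacings) from the
    cyclic-prefix conjugate-product correlation of one OFDM symbol.

    Assumes rx starts at the beginning of the symbol's cyclic prefix (timing known);
    valid for offsets within +/- 0.5 subcarrier spacing.
    """
    cp = rx[:cp_len]
    tail = rx[n_fft:n_fft + cp_len]           # same samples, one FFT length later
    corr = np.sum(np.conj(cp) * tail)         # phase rotates by 2*pi*eps over n_fft samples
    return np.angle(corr) / (2.0 * np.pi)

# Hypothetical self-test: synthesize one OFDM symbol with a known offset of 0.12.
n_fft, cp_len, true_eps = 1024, 128, 0.12
rng = np.random.default_rng(1)
data = np.fft.ifft(np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n_fft))))
symbol = np.concatenate([data[-cp_len:], data])                  # prepend cyclic prefix
n = np.arange(symbol.size)
rx = symbol * np.exp(2j * np.pi * true_eps * n / n_fft)          # apply fractional CFO
print(estimate_fractional_cfo(rx, n_fft, cp_len))                # ~0.12
```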

Illumination Environment Adaptive Real-time Video Surveillance System for Security of Important Area (중요지역 보안을 위한 조명환경 적응형 실시간 영상 감시 시스템)

  • An, Sung-Jin;Lee, Kwan-Hee;Kwon, Goo-Rak;Kim, Nam-Hyung;Ko, Sung-Jea
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.2 s.314
    • /
    • pp.116-125
    • /
    • 2007
  • In this paper, we propose an illumination-environment-adaptive real-time surveillance system for the security of important areas such as military bases, prisons, and strategic infrastructure. The proposed system recognizes the movement of objects in bright environments as well as under dark illumination. The procedure of the proposed system is summarized as follows. First, the system discriminates between bright and dark conditions from the input image distribution. If the input image is dark, a pre-processing step applies Multi-Scale Retinex with Color Restoration (MSRCR) to enhance the contrast of images captured in dark environments. Second, the enhanced input image is subtracted from the updated background image, and morphological processing is applied to extract the objects correctly. Finally, the bounding box enclosing each object is tracked; the center point of each bounding box obtained by the proposed algorithm provides more accurate tracking information. Experimental results show that the proposed system performs well even when an object moves very fast and the background is quite dark.
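
A generic OpenCV sketch of the subtraction, morphology, and bounding-box stages described above (an editorial illustration: the MSRCR pre-processing and the paper's own background model are not reproduced, and the video path is a hypothetical placeholder):

```python
import cv2

cap = cv2.VideoCapture("surveillance.mp4")  # hypothetical input clip
backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = backsub.apply(frame)                               # foreground mask via background subtraction
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel)       # remove speckle noise
    fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE, kernel)      # fill small holes
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 200:                        # ignore tiny blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        cx, cy = x + w // 2, y + h // 2                     # center point used for tracking
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("objects", frame)
    if cv2.waitKey(1) == 27:                                # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```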