• Title/Summary/Keyword: Clips

Search results: 370

Sequence-based Similar Music Retrieval Scheme

  • Jun, Sang-Hoon; Hwang, Een-Jun
    • Journal of IKEEE, v.13 no.2, pp.167-174, 2009
  • Music evokes human emotions or creates musical moods through various low-level musical features. A typical music clip consists of one or more moods, and this can be used as an important criterion for determining the similarity between music clips. In this paper, we propose a new music retrieval scheme based on the mood change patterns of music clips. For this, we first divide music clips into segments based on low-level musical features. Then, we apply the K-means clustering algorithm to group the segments into clusters with similar features. By assigning a unique mood symbol to each cluster, we can represent each music clip by a sequence of mood symbols. Finally, to estimate the similarity of music clips, we measure the similarity of their mood sequences using the Longest Common Subsequence (LCS) algorithm. To evaluate the performance of our scheme, we carried out various experiments and user evaluations, and we report some of the results.
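
A minimal sketch of the mood-sequence matching step described in this abstract, assuming each clip has already been reduced to a string of cluster/mood symbols (the symbol values and the normalization by sequence length are illustrative assumptions, not taken from the paper):

```python
# Dynamic-programming LCS over mood-symbol sequences (illustrative sketch).
def lcs_length(a: str, b: str) -> int:
    # prev[j] holds the LCS length of the processed prefix of a and b[:j].
    prev = [0] * (len(b) + 1)
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, 1):
            curr.append(prev[j - 1] + 1 if ca == cb else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

def mood_similarity(a: str, b: str) -> float:
    # Normalize by the longer sequence so the score lies in [0, 1].
    return lcs_length(a, b) / max(len(a), len(b)) if a and b else 0.0

# Each letter stands for a mood symbol assigned by K-means clustering.
print(mood_similarity("ABBCA", "ABCCA"))  # 0.8
```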


Connecting Online Video Clips to a TV Program: Watching Online Video Clips on a TV Screen with a Related Program

  • Cho, Jae-Hoon
    • Journal of Broadcast Engineering, v.12 no.5, pp.435-444, 2007
  • In this paper, we present the concept of and some methods for watching online video clips related to a TV program on a television, a lean-back medium, and we simulate our concept on a PC system. The key point of this research is to suggest a new service model for TV viewers and the TV industry that provides simple and easy ways to watch online video clips on a TV screen. The paper defines new metadata tags and an algorithm for the model, and then shows a simple example using the metadata. Finally, it discusses the use of the model in the digital broadcasting environment and the issues that should be handled in future work.
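
The abstract does not reproduce the tag set itself; as a purely hypothetical illustration of the idea, metadata like the following could link an online clip to a broadcast program (every field name below is invented for this sketch, not the paper's schema):

```python
# Hypothetical metadata record linking an online video clip to a TV program.
clip_metadata = {
    "clip_url": "http://example.com/clips/12345",   # placeholder URL
    "related_program": {
        "broadcaster": "Example Broadcasting",
        "program_id": "EX-2007-001",
        "episode": 42,
    },
    "keywords": ["highlight", "interview"],
    "duration_sec": 180,
}

def matches_program(meta: dict, program_id: str) -> bool:
    # A set-top application could use a check like this to pull online
    # clips related to the program currently being watched.
    return meta["related_program"]["program_id"] == program_id

print(matches_program(clip_metadata, "EX-2007-001"))  # True
```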

Multimodal Biometrics Recognition from Facial Video with Missing Modalities Using Deep Learning

  • Maity, Sayan; Abdel-Mottaleb, Mohamed; Asfour, Shihab S.
    • Journal of Information Processing Systems, v.16 no.1, pp.6-29, 2020
  • Biometric identification using multiple modalities has attracted the attention of many researchers, as it produces more robust and trustworthy results than single-modality biometrics. In this paper, we present a novel multimodal recognition system that trains a deep learning network to automatically learn features after extracting multiple biometric modalities from a single data source, i.e., facial video clips. Utilizing the different modalities present in the facial video clips, i.e., left ear, left profile face, frontal face, right profile face, and right ear, we train supervised denoising auto-encoders to automatically extract robust and non-redundant features. The automatically learned features are then used to train modality-specific sparse classifiers to perform multimodal recognition. Moreover, the proposed technique has proven robust when some of the above modalities were missing during testing. The proposed system has three main components: detection, which consists of modality-specific detectors that automatically detect images of the different modalities present in facial video clips; feature selection, which uses a supervised denoising sparse auto-encoder network to capture discriminative representations that are robust to illumination and pose variations; and classification, which consists of a set of modality-specific sparse representation classifiers for unimodal recognition, followed by score-level fusion of the recognition results of the available modalities. Experiments conducted on the constrained facial video dataset (WVU) and the unconstrained facial video dataset (HONDA/UCSD) resulted in 99.17% and 97.14% Rank-1 recognition rates, respectively. The multimodal recognition accuracy demonstrates the superiority and robustness of the proposed approach irrespective of the illumination, non-planar movement, and pose variations present in the video clips, even when modalities are missing.
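
A minimal sketch of the score-level fusion step over whichever modalities are available, assuming each unimodal sparse classifier returns a per-identity match-score vector (the min-max normalization and equal weighting are illustrative choices, not the paper's exact scheme):

```python
import numpy as np

def fuse_scores(scores: dict) -> int:
    """Sum normalized per-identity scores across available modalities."""
    fused = np.zeros_like(next(iter(scores.values())), dtype=float)
    for vec in scores.values():
        rng = vec.max() - vec.min()
        # Min-max normalize each modality before summing.
        fused += (vec - vec.min()) / rng if rng > 0 else vec
    return int(np.argmax(fused))  # index of the predicted identity

# Missing modalities (e.g., an undetected right ear) are simply absent.
available = {
    "frontal_face": np.array([0.2, 0.9, 0.1]),
    "left_ear": np.array([0.3, 0.7, 0.2]),
}
print(fuse_scores(available))  # identity 1
```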

An Analysis of Performers' Contribution to Entertainment Show Clips on AVOD Platform

  • Ko, Jeong-Min; Choi, Yong-Seok; Jeong, Yuna; Kim, Dong-Young; Kong, Tae-Hyeon
    • The Journal of the Korea Contents Association, v.22 no.8, pp.115-125, 2022
  • This study examines the effect of performers on the number of views and likes of entertainment show clips consumed on an AVOD short-form platform. Multiple regression analysis was performed, with program viewing factors and the performer's topicality index as independent variables and the number of views and likes of clips as dependent variables. The analysis showed that the performer's topicality index had a positive (+) effect on both dependent variables. In terms of standardized coefficients, the performer's topicality index had the second-highest coefficient for the number of views and the highest for the number of likes. The results suggest that performers contribute substantially to the success of clips on AVOD short-form platforms.
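
A minimal sketch of how standardized coefficients like those reported here can be computed, using ordinary least squares on z-scored variables (the variable names and toy data are assumptions for illustration):

```python
import numpy as np

def standardized_coefs(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    # Z-score predictors and outcome; OLS betas are then directly
    # comparable across predictors regardless of their original units.
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)
    yz = (y - y.mean()) / y.std()
    coefs, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return coefs

rng = np.random.default_rng(0)
topicality = rng.normal(size=200)   # performer's topicality index (toy)
viewing = rng.normal(size=200)      # a program-viewing factor (toy)
views = 2.0 * topicality + 0.5 * viewing + rng.normal(size=200)

X = np.column_stack([topicality, viewing])
print(standardized_coefs(X, views))  # topicality's beta dominates
```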

Endoscopic Intervention for Anastomotic Leakage After Gastrectomy

  • Ji Yoon Kim; Hyunsoo Chung
    • Journal of Gastric Cancer, v.24 no.1, pp.108-121, 2024
  • Anastomotic leaks and fistulas are significant complications of gastric surgery that potentially lead to increased postoperative morbidity and mortality. Surgical intervention is reserved for cases with severe symptoms or hemodynamic instability; however, surgery carries a higher risk of complications. With advancements in endoscopic treatment options, endoscopic approaches have emerged as the primary choice for managing these complications. Endoscopic clipping is a traditional method comprising 2 main categories: through-the-scope clips and over-the-scope clips. Through-the-scope clips are user-friendly and adaptable to various clinical scenarios, whereas over-the-scope clips can close larger defects. Another promising approach is endoscopic stent insertion, which has shown a high success rate for leak closure, although vigilant surveillance for stent migration is required. Infection control is essential in post-surgical leakage cases, and endoscopic internal drainage provides a relatively safe and noninvasive means to manage fluids, contributing to infection control and the promotion of wound healing. Endoscopic suturing offers full-thickness wound closure but requires additional training and endoscopic versatility. As a promising tool, endoscopic vacuum therapy potentially surpasses stent therapy by draining inflammatory materials and closing defects. Furthermore, the use of tissue sealants, such as fibrin glue and cyanoacrylate, has been reported to be effective in selected situations. The choice of endoscopic device should be tailored to individual cases and specific patient conditions, with careful consideration of the nature of the defect. Further extensive studies involving larger patient populations are required to provide more robust evidence on the efficacy of endoscopic approaches in managing post-gastric anastomotic leaks.

Analysis of the Movement of Surgical Clips Implanted in Tumor Bed during Normal Breathing for Breast Cancer Patients

  • Lee, Re-Na; Chung, Eun-Ah; Suh, Hyun-Suk; Lee, Kyung-Ja; Lee, Ji-Hye
    • Radiation Oncology Journal, v.24 no.3, pp.192-200, 2006
  • Purpose: To evaluate the movement of surgical clips implanted in the breast tumor bed during normal breathing. Materials and Methods: Seven patients receiving post-operative breast radiotherapy were selected for this study. Each patient was simulated in a common treatment position. Fluoroscopic images were recorded every 0.033 s (30 frames per second) for 10 seconds in the anterior-to-posterior (AP), lateral, and tangential directions, except for one patient, whose images were recorded at 15 frames per second. The movement of the surgical clips was recorded and measured, and the maximal displacement of each clip was calculated in the AP, lateral, tangential, and superior-to-inferior (SI) directions. For comparison, we also measured the movement of the diaphragm in the SI direction. Results: In the AP-direction images, the average movement of the surgical clips in the lateral and SI directions was 0.8±0.5 mm and 0.9±0.2 mm, with maximal movements of 1.9 mm and 1.2 mm, respectively. In the lateral-direction images, the clips moved on average 1.3±0.7 mm and 1.3±0.5 mm in the AP and SI directions, with a maximal movement of 2.6 mm in each direction. In the tangential-direction images, the average and maximal movements of the clips were 1.2±0.5 mm and 2.4 mm in the tangential direction and 0.9±0.4 mm and 1.7 mm in the SI direction. The diaphragm moved 14.0±2.4 mm on average and 18.8 mm maximally in the SI direction. Conclusion: The movement of the clips caused by breathing was not as significant as the movement of the diaphragm, and all clip movements were within 3 mm in all directions. These results suggest that, for breast radiotherapy, it may not be necessary to use breath-holding techniques or devices to control breathing.
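
A minimal sketch of the displacement computation implied above, assuming each clip's position has already been tracked frame by frame in the fluoroscopic images (the tracking itself and the toy breathing trace are assumptions):

```python
import numpy as np

def max_displacement_mm(positions_mm: np.ndarray) -> float:
    # Maximal excursion along one axis over the whole recording.
    return float(positions_mm.max() - positions_mm.min())

t = np.arange(0, 10, 1 / 30)                   # 300 frames over 10 s at 30 fps
breathing = 0.6 * np.sin(2 * np.pi * t / 4.0)  # toy ~4 s breathing cycle, mm
noise = 0.05 * np.random.default_rng(1).normal(size=t.size)
si_track = breathing + noise                   # simulated SI clip positions

print(f"SI max displacement: {max_displacement_mm(si_track):.1f} mm")
```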

Sensibility Evaluation of Internet Shoppers with the Sportswear Rustling Sounds

  • Baek, Gyeong-Rang; Jo, Gil-Su
    • Proceedings of the Korean Society for Emotion and Sensibility Conference, 2009.05a, pp.177-180, 2009
  • This study investigates consumers' perception of different fabrics when provided with a video clip containing the rustling sounds of the fabric. We utilized sportswear products currently on the market and evaluated the emotional responses of internet shoppers by measuring physiological and psychological responses. Three kinds of vapor-permeable, water-repellent fabric were selected to generate video clips, each containing the fabric's rustling sound and images of exercise activities performed while wearing sportswear made of the respective fabric. A new experimental website contained the video clips and was compared with the original website, which served as a control. Thirty subjects with experience buying clothing online took part in the evaluation. Electroencephalography (EEG) was used to measure the physiological response, while the psychological response consisted of evaluating accurate perception of the fabric, satisfaction, and consumer interest. When we offered video clips with the fabric's rustling sound on the website, subjects answered that they could obtain more accurate and rapid information for deciding whether to purchase the products than when shopping without such information. However, the rustling sounds somewhat annoyed customers, as shown by the psychological and physiological responses. Our study is a critical step in evaluating consumers' emotional responses to sportswear fabrics, which can promote sales, reduce return rates, and aid the development of new sportswear fabrics and the further evolution of the industry.


Generation of Video Clips Utilizing Shot Boundary Detection

  • Kim, Hyeok-Man; Cho, Seong-Kil
    • Journal of KIISE: Computing Practices and Letters, v.7 no.6, pp.582-592, 2001
  • Video indexing plays an important role in applications such as digital video libraries or web VOD, which archive large volumes of digital video. Video indexing is usually based on video segmentation. In this paper, we propose a software tool called V2Web Studio, which can generate video clips utilizing a shot boundary detection algorithm. With the V2Web Studio, the process of clip generation consists of the following four steps: 1) automatic detection of shot boundaries by parsing the video, 2) elimination of errors by manually verifying the detection results, 3) building a logical hierarchy model from the verified shots, and 4) generating multiple video clips corresponding to each logically modeled segment. These steps are performed by the shot detector, shot verifier, video modeler, and clip generator of the V2Web Studio, respectively.
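
A minimal sketch of the automatic shot-boundary detection step (step 1), using a frame-to-frame color-histogram difference with a fixed threshold; OpenCV is assumed to be available, and the histogram metric and threshold are illustrative choices rather than the V2Web Studio algorithm:

```python
import cv2

def detect_shot_boundaries(path: str, threshold: float = 0.5) -> list:
    """Return frame indices where a hard cut is likely."""
    cap = cv2.VideoCapture(path)
    boundaries, prev_hist, idx = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None,
                            [8, 8, 8], [0, 256] * 3)
        hist = cv2.normalize(hist, hist).flatten()
        # A large histogram distance between consecutive frames => cut.
        if prev_hist is not None and cv2.norm(prev_hist, hist) > threshold:
            boundaries.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return boundaries

# Detected boundaries can then be verified manually and used to cut clips.
print(detect_shot_boundaries("sample.mp4"))
```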


Development of an Expert System to Improve the Methods of Parameter Estimation

  • Lee, Beom-Hui; Lee, Gil-Seong
    • Journal of Korea Water Resources Association, v.31 no.6, pp.641-655, 1998
  • Methods for developing and applying an expert system are suggested to solve more efficiently the water quantity and quality problems induced by rapid urbanization. Major parameters of water quantity and quality in urban areas are selected, and their characteristics are presented through sensitivity analysis. Based on these characteristics, rules for deciding the parameters effectively are proposed. The ESPE (Expert System for Parameter Estimation), an expert system based on these 'facts' and 'rules', is developed using CLIPS 6.0 and applied to the basin of the An-Yang stream. The results of estimating the water quantity parameters show high applicability, but those for water quality imply the need to improve the present methods, due to both the complexity of the estimation processes and the lack of decision rules.
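
A minimal sketch of the facts-and-rules pattern the ESPE builds on, written here in plain Python rather than CLIPS syntax (the facts, rule conditions, and derived parameter values are all invented for illustration):

```python
# Tiny forward-chaining knowledge base in the spirit of CLIPS facts/rules.
facts = {"land_use": "urban", "slope": "steep"}

rules = [
    # (condition over the fact base, fact to assert when it fires)
    (lambda f: f.get("land_use") == "urban",
     ("runoff_coefficient", "high")),
    (lambda f: f.get("slope") == "steep"
               and f.get("runoff_coefficient") == "high",
     ("time_of_concentration", "short")),
]

# Fire rules repeatedly until no new facts can be derived.
changed = True
while changed:
    changed = False
    for cond, (key, value) in rules:
        if cond(facts) and facts.get(key) != value:
            facts[key] = value
            changed = True

print(facts)  # derived parameter decisions appear alongside input facts
```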


Psychophysiological Responses Evoked by Fear and Disgust Emotion Using Audiovisual Film Clips in Children

  • Jang, Eun-Hye; Woo, Tae-Je; Lee, Young-Chang; Sohn, Jin-Hun
    • Science of Emotion and Sensibility, v.10 no.2, pp.273-280, 2007
  • This study examines the psychophysiological responses evoked by negative emotions (fear and disgust) in children. Forty-seven children (11-13 years old, 23 boys) participated in the study. While the children were experiencing fear or disgust induced by audio-visual film clips, ECG, EDA, PPG, and SKT were measured. An emotion assessment scale, administered as a self-report, was used to confirm that the emotions elicited by the film clips were significantly noticeable. The film clips proved appropriate for eliciting fear and disgust in 100% and 89.4% of cases, respectively. The emotional intensity the children experienced was rated 4.05 and 4.07 on a 1-5 scale for fear and disgust, respectively. ANS responses to fear and disgust differed significantly between the resting state and the induced emotional state. Fear showed significant increases in SCL, NSCR, HR, RSA, RESP, and HF. There was a significant difference in SCL and NSCR between the two emotions.
