• Title/Summary/Keyword: distinctive feature

A Study on Disney Feature-length Drawn Animation Since Using Digital Technology (디지털 테크놀로지발전에 따른 디즈니 장편 드로잉 애니메이션 연구)

  • Park, Jae-Yoon
    • Cartoon and Animation Studies / s.26 / pp.57-78 / 2012
  • This article deals with the changes in Disney feature-length drawn animation since the adoption of digital technology. The spread of digital technology into the territory of traditional animation brought a massive transformation. The analogue production system for Disney drawn animation was gradually replaced by a digital production system, and in the process the overall quality of Disney drawn animation improved, even though the drawn animation industry had declined considerably due to the success of 3D computer animation. Currently, Disney drawn animations not only adopt digital technology but also use it to express the distinctive sensibility of drawn animation.

Knowledge-driven speech features for detection of Korean-speaking children with autism spectrum disorder

  • Seonwoo Lee;Eun Jung Yeo;Sunhee Kim;Minhwa Chung
    • Phonetics and Speech Sciences / v.15 no.2 / pp.53-59 / 2023
  • Detection of children with autism spectrum disorder (ASD) based on speech has relied on predefined feature sets because of their ease of use and the capabilities of speech analysis. However, clinical impressions may not be adequately captured because of the broad range and large number of features included. This paper demonstrates that knowledge-driven speech features (KDSFs), specifically tailored to the speech traits of ASD, are more effective and efficient for distinguishing the speech of children with ASD from that of children with typical development (TD) than a predefined feature set, the extended Geneva Minimalistic Acoustic Standard Parameter Set (eGeMAPS). The KDSFs encompass various speech characteristics related to frequency, voice quality, speech rate, and spectral features that have been identified as corresponding to distinctive speech attributes of children with ASD. The speech dataset used for the experiments consists of 63 children with ASD and 9 TD children. To alleviate the imbalance in the number of training utterances, a data augmentation technique was applied to the TD children's utterances. The support vector machine (SVM) classifier trained with the KDSFs achieved an accuracy of 91.25%, surpassing the 88.08% obtained using the predefined set. This result underscores the importance of incorporating domain knowledge into the development of speech technologies for individuals with disorders.
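
A minimal sketch of the classification step described in this abstract, assuming per-utterance features have already been extracted; the feature matrix, labels, and dimensions below are hypothetical stand-ins, and the paper's own pipeline (KDSF extraction plus data augmentation) is not reproduced here.

```python
# Sketch only: SVM classification of utterances from pre-extracted speech features.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical stand-in data: rows = utterances, columns = knowledge-driven measures
# (e.g., frequency, voice quality, speech-rate, and spectral statistics).
X = rng.normal(size=(400, 24))
y = rng.integers(0, 2, size=400)        # 1 = ASD, 0 = TD (placeholder labels)

# RBF-kernel SVM with feature standardization; class_weight="balanced" is one way
# to handle the ASD/TD imbalance (the paper itself uses data augmentation instead).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, class_weight="balanced"))
scores = cross_val_score(clf, X, y, cv=5)
print("mean CV accuracy:", scores.mean())
```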

Traffic Object Tracking Based on an Adaptive Fusion Framework for Discriminative Attributes (차별적인 영상특징들에 적응 가능한 융합구조에 의한 도로상의 물체추적)

  • Kim Sam-Yong;Oh Se-Young
    • Journal of the Institute of Electronics Engineers of Korea SC / v.43 no.5 s.311 / pp.1-9 / 2006
  • Because most applications of vision-based object tracking demonstrate satisfactory operation only under very constrained environments with simplifying assumptions or specific visual attributes, these approaches cannot track target objects in highly variable, unstructured, and dynamic environments such as a traffic scene. An adaptive fusion framework is essential that takes advantage of the richness of visual information such as color, appearance, shape, and so on, especially in cluttered and dynamically changing scenes with partial occlusion [1]. This paper develops a particle-filter-based adaptive fusion framework and improves its robustness and adaptability by adding a new distinctive visual attribute, an image feature descriptor using SIFT (Scale Invariant Feature Transform) [2], together with an automatic learning scheme for the SIFT feature library that adapts to viewpoint, illumination, and background changes. The proposed algorithm is applied to tracking various traffic objects such as vehicles, pedestrians, and bikes in a driver assistance system, an important component of the Intelligent Transportation System.
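
A minimal sketch of the SIFT ingredient only, not the full particle-filter fusion framework described above; the image file names are placeholders.

```python
# Sketch: extract SIFT keypoints/descriptors from a target template and match them
# against a new frame. Requires opencv-python (cv2); file names are placeholders.
import cv2

template = cv2.imread("target_template.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("current_frame.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(template, None)
kp2, des2 = sift.detectAndCompute(frame, None)

# Ratio-test matching; surviving matches could serve as one visual cue
# (alongside color and shape) inside an adaptive fusion tracker.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
print(f"{len(good)} SIFT matches between template and frame")
```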

An Efficient Illumination Preprocessing Algorithm based on Anisotropic Smoothing for Face Recognition (얼굴 인식을 위한 Anisotropic Smoothing 기반 효율적 조명 전처리)

  • Kim, Sang-Hoon;Jung, Sou-Hwan;Cho, Seong-Won;Chung, Sun-Tae
    • The Journal of the Korea Contents Association / v.8 no.1 / pp.236-245 / 2008
  • Robust face recognition under various illumination environments is very difficult and needs to be achieved for successful commercialization. In this paper, we propose an efficient illumination preprocessing method for face recognition. The illumination preprocessing algorithm based on anisotropic smoothing is well known to be effective among illumination normalization methods, but it deteriorates the intensity contrast of the original image and produces less sharp edges. The method proposed in this paper improves the previous anisotropic-smoothing-based illumination normalization so that it increases the intensity contrast and enhances the edges while diminishing the effects of illumination. As a result of these improvements, face images preprocessed by the proposed illumination preprocessing method come to have more distinctive feature vectors (Gabor feature vectors). Through face recognition experiments using Gabor jet similarity, the effectiveness of the proposed illumination preprocessing method is verified.
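
A minimal sketch of the general idea behind anisotropic-smoothing illumination normalization (standard Perona-Malik diffusion, not the authors' improved variant): estimate the slowly varying illumination with edge-preserving smoothing, then divide it out to obtain a reflectance-like image for feature extraction.

```python
# Sketch: Perona-Malik anisotropic diffusion as an illumination estimate,
# followed by division to approximate reflectance. Not the paper's exact method.
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=30.0, gamma=0.15):
    """Edge-preserving diffusion on a float image with grey levels in [0, 255]."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite differences to the four neighbours (wrap-around kept for brevity).
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # Edge-stopping conductance: small across strong edges, so edges survive.
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return u

def normalize_illumination(img, eps=1e-3):
    illumination = anisotropic_diffusion(img)
    return img / (illumination + eps)   # reflectance-like output around 1.0

# e.g. normalized = normalize_illumination(face_gray.astype(np.float64))
# where face_gray is a hypothetical grayscale face crop.
```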

Region Decision Using Modified ICM Method (변형된 ICM 방식에 의한 영역판별)

  • Hwang Jae-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.5 s.311 / pp.37-44 / 2006
  • In this paper, a new version of the ICM method (MICM, modified ICM), in which the contextual information is modelled by Markov random fields (MRF), is introduced. To extract the feature, a new local MRF model with a fitting block neighbourhood is proposed. This model selects contextual information not only from the relative intensity levels but also from the geometrically directional positions of neighbouring cliques. Feature extraction depends on each block's contribution to the local variance, which discriminates the image into several regions, for example context and background. Boundaries between these regions are also distinctive. The proposed algorithm performs segmentation using a directional block-fitting procedure that confines merging to spatially adjacent elements and generates a partition such that pixels in a unified cluster have a homogeneous intensity level. From experiments with ink-rubbed copy images (Takbon, 拓本), this method is found to be quite effective for feature identification. In particular, the new algorithm preserves the details of the images well, without the over- and under-smoothing problems that occur in general iterated conditional modes (ICM). It may also be noted that this method is applicable to handwriting recognition.
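
A minimal sketch of plain ICM with a Potts-style smoothness prior, for orientation only; the paper's modified ICM adds the directional block-fitting neighbourhood on top of this, which is not reproduced here.

```python
# Sketch: two-region labelling by ICM-style energy minimization
# (synchronous updates for brevity; classic ICM visits pixels sequentially).
import numpy as np

def icm_segment(img, mu=(0.2, 0.8), beta=0.1, n_iter=5):
    """img: float image in [0, 1]; mu: assumed mean intensity per region label."""
    labels = np.argmin([(img - m) ** 2 for m in mu], axis=0)
    for _ in range(n_iter):
        energies = []
        for lab, m in enumerate(mu):
            data = (img - m) ** 2                          # data (likelihood) term
            neigh_disagree = np.zeros_like(img)
            for ax, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
                neigh_disagree += (np.roll(labels, shift, axis=ax) != lab)
            energies.append(data + beta * neigh_disagree)  # Potts smoothness term
        labels = np.argmin(energies, axis=0)
    return labels

# e.g. segmentation = icm_segment(takbon_gray / 255.0)  # takbon_gray: hypothetical input
```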

Study of the Haar Wavelet Feature Detector for Image Retrieval (이미지 검색을 위한 Haar 웨이블릿 특징 검출자에 대한 연구)

  • Peng, Shao-Hu;Kim, Hyun-Soo;Muzzammil, Khairul;Kim, Deok-Hwan
    • Journal of the Institute of Electronics Engineers of Korea CI / v.47 no.1 / pp.160-170 / 2010
  • This paper proposes a Haar Wavelet Feature Detector (HWFD) based on the Haar wavelet transform and average box filter. By decomposing the original image using the Haar wavelet transform, the proposed detector obtains the variance information of the image, making it possible to extract more distinctive features from the original image. For detection of interest points that represent the regions whose variance is the highest among their neighbor regions, we apply the average box filter to evaluate the local variance information and use the integral image technique for fast computation. Due to utilization of the Haar wavelet transform and the average box filter, the proposed detector is robust to illumination change, scale change, and rotation of the image. Experimental results show that even though the proposed method detects fewer interest points, it achieves higher repeatability, higher efficiency and higher matching accuracy compared with the DoG detector and Harris corner detector.
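
A simplified sketch of the detector's main ingredients (not the exact HWFD): a one-level Haar decomposition, local detail energy averaged with a box filter computed from an integral image, and interest points taken as local maxima of that response.

```python
# Sketch: Haar detail bands -> local variance-like response via integral-image
# box filtering -> interest points as local maxima of the response.
import numpy as np
from scipy.ndimage import maximum_filter

def haar_level1(img):
    """One-level 2x2 Haar decomposition; img must have even height and width."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]; c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0
    lh = (a + b - c - d) / 4.0
    hl = (a - b + c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, lh, hl, hh

def box_mean(x, r):
    """Mean over a (2r+1)^2 window, computed from an integral image."""
    pad = np.pad(x, ((r + 1, r), (r + 1, r)), mode="edge")
    ii = pad.cumsum(0).cumsum(1)
    s = (ii[2*r+1:, 2*r+1:] - ii[:-2*r-1, 2*r+1:]
         - ii[2*r+1:, :-2*r-1] + ii[:-2*r-1, :-2*r-1])
    return s / (2 * r + 1) ** 2

img = np.random.rand(128, 128)                      # placeholder image
_, lh, hl, hh = haar_level1(img)
response = box_mean(lh**2 + hl**2 + hh**2, r=2)     # local detail energy

# Interest points: local maxima of the response above its mean.
peaks = (response == maximum_filter(response, size=5)) & (response > response.mean())
ys, xs = np.nonzero(peaks)
print(len(xs), "interest points (half-resolution coordinates)")
```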

A Study on the Morphological Feature of Baeja Excavated from the Tomb of Sim, Su-ryun(沈秀崙) (심수륜(沈秀崙)묘 출토 배자(背子)의 형태적 특징 고찰)

  • Lee, Young Min;Cho, Woo Hyun
    • Journal of the Korean Society of Costume / v.64 no.8 / pp.55-66 / 2014
  • Baeja (背子), which was excavated from the tomb of Sim, Su-ryun (沈秀崙, 1534-1589), a civil official, has a distinctive pattern. Two rectangles are connected by button knots on both shoulders and below the armpits and surround the front and back of the upper body. Also, the back is shorter than the front, while the center front is not opened. It also has a round neckline without a collar. Jeojuji (楮注紙), a traditional Korean paper made from mulberry bark, is placed between the outer shell and the lining of this garment as an interlining. The purpose of this study is to perform a morphological analysis of the Baeja to examine its characteristics and name; clothes with similar features, attire relics, pictorial and ceramic materials, as well as precedent studies, were used in the analysis. The Baeja excavated from the tomb of Sim, Su-ryun has the same pattern as the Yangdang (裲檔), which was worn in the ancient northern region and China. Its construction and the way it was worn are very simple. The shorter back length can also be taken as evidence that it was worn as everyday outerwear rather than ceremonial dress. The Jeojuji interlining made the garment easy to sew, helped it keep its shape, and provided warmth. Therefore, this Baeja is presumed to be outerwear worn simply in everyday life for convenience and warmth. In terms of its morphological features, it was most likely a Yangdang of the Joseon Dynasty.

Nonmigrating tidal characteristics in the thermospheric neutral mass density

  • Kwak, Young-Sil;Kil, Hyosub;Lee, Woo-Kyoung;Oh, Seung-Jun;Yang, Tae-Yong
    • The Bulletin of The Korean Astronomical Society / v.37 no.2 / pp.125.1-125.1 / 2012
  • The wave number 4 (wave-4) and wave number 3 (wave-3) longitudinal structures in the thermospheric neutral mass density are understood as tidal structures driven by the diurnal eastward-propagating zonal wave number 3 (DE3) and wave number 2 (DE2) tides, respectively. However, those structures have been identified using data from limited time periods, and their consistency and recurrence have not yet been examined using long-term observation data. We examine the persistence of those structures by analyzing the neutral mass density data for the years 2001-2008 taken by the CHAllenging Minisatellite Payload (CHAMP) satellite. During years of low solar activity, the amplitude of the wave-4 structure is pronounced during August and September, and the wave-4 phase shows a consistent eastward progression of $90^{\circ}$ within 24 h local time in different months and years. During years of high solar activity, the wave-4 amplitude is small and does not show a distinctive annual pattern, but the tendency of an eastward phase shift at a rate of $90^{\circ}$/24 h remains. Thus the DE3 signature in the wave-4 structure is considered a persistent feature. The wave-3 structure is weak in most months and years, and its amplitude and phase do not show a notable solar cycle dependence. Among the tidal modes contributing to the wave-3 structure, the DE2 amplitude is the most pronounced. This result may suggest that the DE2 signature, although weak, is a perceivable persistent feature in the thermosphere.
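
An illustrative sketch, not the authors' processing chain: how the amplitude and phase of a wave-4 longitudinal structure can be read off from density samples binned on a regular longitude grid at fixed local time, shown here with synthetic data.

```python
# Sketch: recover the wave-4 component rho ~ A*cos(4*lon + phi) by a longitudinal FFT.
import numpy as np

lon = np.deg2rad(np.arange(0, 360, 15.0))        # 24 longitude bins
rho = 1.0 + 0.05 * np.cos(4 * lon + 0.8)         # synthetic wave-4 signal (A=0.05, phi=0.8)

spec = np.fft.rfft(rho - rho.mean()) / lon.size
k = 4
amplitude = 2 * np.abs(spec[k])                  # recovers A ~ 0.05
phase = np.angle(spec[k])                        # recovers phi ~ 0.8 rad
print(f"wave-4 amplitude ~ {amplitude:.3f}, phase ~ {phase:.2f} rad")
```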

Dual-stream Co-enhanced Network for Unsupervised Video Object Segmentation

  • Hongliang Zhu;Hui Yin;Yanting Liu;Ning Chen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.4 / pp.938-958 / 2024
  • Unsupervised Video Object Segmentation (UVOS) is a highly challenging problem in computer vision because no annotation of the target object in the test video is available at all. The main difficulty is to handle effectively the complicated and changeable motion state of the target object and the confusion caused by similar background objects in the video sequence. In this paper, we propose a novel deep Dual-stream Co-enhanced Network (DC-Net) for UVOS via bidirectional motion-cue refinement and multi-level feature aggregation, which can take full advantage of motion cues and effectively integrate features at different levels to produce a high-quality segmentation mask. DC-Net is a dual-stream architecture in which the two streams are co-enhanced by each other. One is a motion stream with a Motion-cues Refine Module (MRM), which learns from bidirectional optical-flow images and produces a fine-grained, complete, and distinctive motion saliency map; the other is an appearance stream with a Multi-level Feature Aggregation Module (MFAM) and a Context Attention Module (CAM), which are designed to integrate the different-level features effectively. Specifically, the motion saliency map obtained by the motion stream is fused with each stage of the decoder in the appearance stream to improve the segmentation, and in turn the segmentation loss in the appearance stream feeds back into the motion stream to enhance the motion refinement. Experimental results on three datasets (DAVIS2016, VideoSD, SegTrack-v2) demonstrate that DC-Net achieves results comparable to some state-of-the-art methods.
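
A schematic sketch with hypothetical layer names (not the released DC-Net code) of the co-enhancement step described above: a motion-stream saliency map gating one decoder stage of the appearance stream.

```python
# Sketch: fuse a single-channel motion saliency map with appearance decoder features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SaliencyFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feat, motion_saliency):
        # Resize the saliency map to the feature resolution and use it to
        # re-weight (gate) the appearance features, with a residual path.
        sal = F.interpolate(motion_saliency, size=feat.shape[-2:],
                            mode="bilinear", align_corners=False)
        gated = feat * torch.sigmoid(sal) + feat
        return self.conv(gated)

# Hypothetical shapes: one decoder feature map and a coarser motion saliency map.
feat = torch.randn(1, 64, 60, 107)
saliency = torch.randn(1, 1, 120, 214)
fused = SaliencyFusion(64)(feat, saliency)
print(fused.shape)   # torch.Size([1, 64, 60, 107])
```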

Numerical Investigation on the Flapping Wing Sound (플래핑 날개의 음향 특성에 대한 수치 연구)

  • Bae, Young-Min;Moon, Young-J.
    • Proceedings of the KSME Conference / 2007.05b / pp.3209-3214 / 2007
  • This study numerically investigates the unsteady flow and acoustic characteristics of a flapping wing using a hydrodynamic/acoustic splitting method. The Reynolds number based on the maximum translation velocity of the wing is Re = 8800 and the Mach number is M = 0.0485. The flow around the flapping wing is predicted by solving the two-dimensional incompressible Navier-Stokes equations (INS), and the acoustic field is calculated with the linearized perturbed compressible equations (LPCE), both solved in moving coordinates. Numerical results show that the hovering sound is largely generated by the wing translation (transverse and tangential motions), which act as dipole sources with different mechanisms. As a distinctive feature of the flapping sound, it is also shown that the dominant frequency varies around the wing.
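
For orientation, the variable decomposition underlying a hydrodynamic/acoustic splitting approach of this kind, written in general form; the exact LPCE operators used in the paper are not reproduced here.

```latex
% General hydrodynamic/acoustic splitting of the flow variables:
\begin{aligned}
\rho(\mathbf{x},t) &= \rho_0 + \rho'(\mathbf{x},t), \\
\mathbf{u}(\mathbf{x},t) &= \mathbf{U}(\mathbf{x},t) + \mathbf{u}'(\mathbf{x},t), \\
p(\mathbf{x},t) &= P(\mathbf{x},t) + p'(\mathbf{x},t).
\end{aligned}
```

The incompressible Navier-Stokes solution supplies $(\mathbf{U}, P)$, while the primed perturbations, governed by the LPCE in the moving coordinates, carry the acoustic fluctuations radiated by the flapping wing.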
