• Title/Summary/Keyword: time domain data

Search results: 1,309

A Priority of Documentary Films' Success Factor: AHP Analysis (AHP기법을 통한 다큐멘터리 영화 성공요인 중요도 분석)

  • Im, So-Yeon;Lee, Yun-Cheol
    • The Journal of the Korea Contents Association / v.17 no.12 / pp.644-657 / 2017
  • A golden age for Korean documentary films began in the 2010s, and the success of documentary films is attracting more interest than at any time in history. To meet this timely demand, the present study is the first to investigate the success factors of documentary films. It adopted the three categories of Litman's film success factors (creative domain, distribution domain, and marketing domain), applied them to the cases of Korean documentary films, and extracted 12 success factors of documentary films through an expert validation process. The study then applied the AHP method to examine the relative importance of these success factors. The results were as follows. The significant success factors were mostly found within the creative domain. Documentaries featuring common themes rated the distribution/marketing factors highly, whereas documentaries covering socio-political themes rated government funding and infrastructure building highly. With regard to online viewer participation, there was a gap in perception: the production field rated it high, while the distribution/marketing field rated it relatively low. These success factors and their priorities can be used as baseline data in diverse fields related to documentary films and provide significant implications for documentary film staff, policy makers, and researchers.
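
The AHP step described above boils down to deriving priority weights from pairwise comparison matrices and checking their consistency. Below is a minimal sketch of that computation in Python (numpy), using a hypothetical 3×3 comparison of the top-level domains rather than the authors' actual survey data or 12-factor hierarchy.

```python
# Minimal AHP sketch (illustrative, not the authors' survey data).
import numpy as np

# Hypothetical pairwise comparisons (Saaty 1-9 scale) among the
# creative, distribution, and marketing domains.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Priority vector = normalized principal eigenvector of the comparison matrix.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
weights = w / w.sum()

# Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index
cr = ci / ri

for name, wt in zip(["creative", "distribution", "marketing"], weights):
    print(f"{name}: {wt:.3f}")
print(f"consistency ratio: {cr:.3f}")       # usually accepted if < 0.1
```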

The Principles and Practice of Induced Polarization Method (유도분극 탐사의 원리 및 활용)

  • Kim, Bitnarae;Nam, Myung Jin;Jang, Hannuree;Jang, Hangilro;Son, Jeong-Sul;Kim, Hee Jun
    • Geophysics and Geophysical Exploration / v.20 no.2 / pp.100-113 / 2017
  • The induced polarization (IP) method is based on the measurement of a polarization effect of the ground known as overvoltage. IP techniques have traditionally been used to find mineral deposits, but nowadays they are also widely applied to hydrogeological investigations, surveys of groundwater pollution, and foundation studies on construction sites. IP surveys can be classified by source type and measured quantity: time-domain IP estimates chargeability, frequency-domain IP measures the frequency effect (FE), and complex resistivity (CR) and spectral IP (SIP) measure complex resistivity. Recently, electromagnetic-based IP has been studied to avoid the need for spike electrodes placed in the ground. To give an overview of IP methods, this study: 1) classifies IP surveys by source type and measured data and illustrates their basic theories, 2) describes the historical development of forward modeling and inversion algorithms for each IP method, and 3) introduces various case studies of IP measurements.
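
The two basic observables mentioned above, time-domain chargeability and the frequency effect, follow directly from their definitions. The sketch below illustrates them on synthetic numbers (not field data); the decay curve and resistivity values are hypothetical.

```python
# Illustrative IP observables on synthetic values (not field data).
import numpy as np

# Synthetic overvoltage decay V(t) after current shut-off, primary voltage Vp.
t = np.linspace(0.01, 1.0, 200)            # s
Vp = 1.0                                   # V
V = 0.08 * Vp * np.exp(-t / 0.4)           # V, hypothetical decay curve

# Apparent chargeability: integral of the decay over a time window,
# normalized by the primary voltage (trapezoid rule; often quoted in ms).
m = np.sum(0.5 * (V[1:] + V[:-1]) * np.diff(t)) / Vp
print(f"chargeability ~ {m * 1000:.1f} ms")

# Frequency effect from apparent resistivities at a low and a high frequency.
rho_dc, rho_ac = 120.0, 112.0              # ohm*m, hypothetical
fe = (rho_dc - rho_ac) / rho_ac
print(f"frequency effect = {fe:.3f}, PFE = {100 * fe:.1f}%")
```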

A New Algorithm for Extracting Fetal ECG from Multi-Channel ECG using Singular Value Decomposition in a Discrete Cosine Transform Domain (산모의 다채널 심전도 신호로부터 이산여현변환영역에서 특이값 분해를 이용한 태아 심전도 분리 알고리듬)

  • Song In-Ho;Lee Sang-Min;Kim In-Young;Lee Doo-Soo;Kim Sun I.
    • Journal of Biomedical Engineering Research / v.25 no.6 / pp.589-598 / 2004
  • We propose a new algorithm to extract the fetal electrocardiogram (FECG) from a multi-channel electrocardiogram (ECG) recorded at the chest and abdomen of a pregnant woman. To extract the FECG from the composite abdominal ECG, the classical time-domain method based on singular value decomposition (SVD) has generally been used. However, this method has some disadvantages, such as its high computational complexity and the necessary assumption that the vectors of the FECG and the maternal electrocardiogram (MECG) are orthogonal. The proposed algorithm, which uses SVD in a discrete cosine transform (DCT) domain, compensates for these disadvantages. To perform SVD with lower computational complexity, DCT coefficients corresponding to high-frequency components were eliminated on the basis of the properties of the DCT coefficients and the frequency characteristics of the FECG. Moreover, to extract the pure FECG with little dependence on the relative direction of the FECG and MECG vectors, three new channels were derived by suppressing the MECG in the composite abdominal ECG and appended to the original multi-channel ECG. The performance of the proposed algorithm and the classical time-domain method based on SVD were compared using simulated and real data. It was experimentally verified that the proposed algorithm can extract the pure FECG with reduced computational complexity.
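
The abstract outlines the core pipeline: transform each channel with the DCT, discard high-frequency coefficients, and apply SVD to the reduced matrix. The following sketch illustrates that idea on synthetic two-source mixtures; the signal model, channel count, and coefficient cutoff are illustrative assumptions, not the authors' implementation.

```python
# Rough sketch of DCT-domain SVD on synthetic maternal/fetal-like mixtures.
import numpy as np
from scipy.fft import dct

rng = np.random.default_rng(0)
fs, T, n_ch = 500, 10, 4                    # Hz, seconds, channels (synthetic)
n = fs * T
t = np.arange(n) / fs

# Hypothetical composite abdominal recordings: strong maternal component
# (~1.2 Hz beat), weaker fetal component (~2.3 Hz beat), plus noise.
mecg = np.sin(2 * np.pi * 1.2 * t) ** 15
fecg = 0.2 * np.sin(2 * np.pi * 2.3 * t) ** 15
X = np.vstack([a * mecg + b * fecg for a, b in rng.uniform(0.5, 1.5, (n_ch, 2))])
X += 0.01 * rng.standard_normal(X.shape)

# DCT along time, then keep only low-frequency coefficients (cheaper SVD).
C = dct(X, axis=1, norm="ortho")
keep = n // 4                               # cutoff chosen for illustration
C_low = C[:, :keep]

# SVD of the reduced matrix; the large singular values capture the dominant
# (maternal) subspace, the small ones the weaker fetal-plus-noise subspace.
U, s, Vt = np.linalg.svd(C_low, full_matrices=False)
print("singular values:", np.round(s, 2))
```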

A Comparison Study on the Speech Signal Parameters for Chinese Learners' Korean Pronunciation Errors - Focused on Korean /ㄹ/ Sound (중국인 학습자의 한국어 발음 오류에 대한 음성 신호 파라미터들의 비교 연구 - 한국어의 /ㄹ/ 발음을 중심으로)

  • Lee, Kang-Hee;You, Kwang-Bock;Lim, Ha-Young
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.7 no.6 / pp.239-246 / 2017
  • This paper compares speech signal parameters between Korean native speakers and Chinese learners for the Korean pronunciation /ㄹ/, which causes many errors among Chinese learners. The allophones of /ㄹ/ in Korean are divided into a lateral group and a tap group. The reasons for these errors were investigated by studying the similarities and differences between the Korean /ㄹ/ pronunciation and its corresponding Chinese pronunciation. For the comparison, speech signal parameters such as signal energy, the waveform in the time domain, the spectrogram in the frequency domain, the pitch (F0) based on the autocorrelation function (ACF), and the formant frequencies (f1, f2, f3, and f4) are measured and compared. The data consist of a group of Korean words selected through a philological investigation. According to the results for energy and the spectrogram, there are meaningful differences between Korean native speakers and Chinese learners for the Korean /ㄹ/ pronunciation, and some differences also appear in the other parameters. It can be expected that Chinese learners will be able to reduce these errors considerably by exploiting the parameters used in this paper.
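
Two of the parameters compared in the study, short-time energy and ACF-based pitch (F0), can be computed as in the sketch below. The synthetic vowel-like frame, sampling rate, and F0 search range are assumptions for illustration only.

```python
# Short-time energy and ACF-based pitch on a synthetic voiced frame.
import numpy as np

fs = 16000                                  # Hz (assumed sampling rate)
f0_true = 120.0                             # Hz, hypothetical voiced frame
t = np.arange(int(0.04 * fs)) / fs          # 40 ms frame
frame = np.sin(2 * np.pi * f0_true * t) + 0.3 * np.sin(2 * np.pi * 2 * f0_true * t)
frame *= np.hamming(len(frame))

# Short-time energy of the windowed frame.
energy = np.sum(frame ** 2)

# Autocorrelation-based pitch: pick the lag of the strongest ACF peak
# inside a plausible F0 search range (here 60-400 Hz).
acf = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
lo, hi = int(fs / 400), int(fs / 60)
lag = lo + np.argmax(acf[lo:hi])
f0_est = fs / lag

print(f"energy = {energy:.2f}, estimated F0 = {f0_est:.1f} Hz")
```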

Web Services Based Biological Data Analysis Tool

  • Kim, Min Kyung;Choi, Yo Hahn;Yoo, Seong Joon;Park, Hyun Seok
    • Genomics & Informatics
    • /
    • v.2 no.3
    • /
    • pp.142-146
    • /
    • 2004
  • Biological data and analysis tools are accumulated in distributed databases and web servers. For this reason, biologists who want to find information on the web need to know where the various resources are located and how each is retrieved. Integrating data from heterogeneous biological resources enables biologists to discover new knowledge across specific domain boundaries, from sequences to expression, structure, and pathway. Biological databases also inevitably contain noisy data, so consensus among databases helps confirm the reliability of their contents. We have developed WeSAT, which integrates distributed and heterogeneous biological databases and analysis tools, provided through Web Services protocols. In WeSAT, biologists retrieve specific entries from SWISS-PROT/EMBL, PDB, and KEGG, which hold annotated information about sequence, structure, and pathway. Further analysis is carried out by integrated services, for example homology search and multiple alignment. WeSAT makes it possible to retrieve up-to-date data and analyses from the scattered databases in a single platform through Web Services.
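
As an illustration of the kind of programmatic retrieval WeSAT wraps, the snippet below fetches a single KEGG entry over its public REST interface; the endpoint layout and the pathway identifier are assumptions here, and WeSAT's own Web Services interface is not described in the abstract.

```python
# Illustrative only: fetch one KEGG entry over HTTP, standing in for the
# kind of remote database retrieval the paper integrates.
import urllib.request

entry = "hsa00010"                          # hypothetical pathway identifier
url = f"https://rest.kegg.jp/get/{entry}"   # assumed endpoint layout
with urllib.request.urlopen(url, timeout=30) as resp:
    text = resp.read().decode("utf-8")
print(text.splitlines()[0])                 # first line of the flat-file record
```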

FDTD Modeling of the Korean Human Head using MRI Images (MRI 영상을 이용한 한국인 인체 두부의 FDTD 모델링)

  • 이재용;명노훈;최명선;오학태;홍수원;김기회
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.11 no.4 / pp.582-591 / 2000
  • In this paper, a Finite-Difference Time-Domain (FDTD) modeling method for the Korean human head is introduced to calculate the electromagnetic energy absorbed by the human head from mobile phones. After MRI scan data are obtained, two-dimensional (2D) segmentation is performed on the 2D MRI image data by a semi-automatic method. Then, dense 3D segmentation data with 1 mm × 1 mm × 1 mm resolution are constructed from the 2D segmentation data. Using the 3D segmentation data, coarse FDTD models of the human head are constructed that can be tilted arbitrarily to model the tilted usage of a mobile phone.
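
For readers unfamiliar with FDTD, the sketch below shows the leap-frog field updates at the heart of the method, reduced to one dimension in free space; it is not the paper's 1 mm-voxel head model, and the grid size, source, and boundary handling are illustrative.

```python
# Minimal 1-D FDTD sketch (normalized fields, free space), illustrative only.
import numpy as np

nz, nt = 400, 600                           # grid cells, time steps
c, dz = 3.0e8, 1.0e-3                       # m/s; 1 mm cell, as in the paper's voxel size
dt = dz / (2.0 * c)                         # time step satisfying the Courant condition
courant = c * dt / dz                       # = 0.5 for this choice

ez = np.zeros(nz)                           # normalized electric field
hy = np.zeros(nz)                           # normalized magnetic field

for n in range(nt):
    # Leap-frog updates: E from the spatial difference of H, then H from E.
    ez[1:] += courant * (hy[:-1] - hy[1:])
    ez[nz // 4] += np.exp(-0.5 * ((n - 40) / 12.0) ** 2)   # soft Gaussian source
    hy[:-1] += courant * (ez[:-1] - ez[1:])

print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3f}")
```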


Alternating Sunspot Area and Hilbert Transform Analysis

  • Kim, Bang-Yeop;Chang, Heon-Young
    • Journal of Astronomy and Space Sciences / v.28 no.4 / pp.261-265 / 2011
  • We investigate the sunspot area data spanning from solar cycles 1 (March 1755) to 23 (December 2010) in time domain. For this purpose, we employ the Hilbert transform analysis method, which is used in the field of information theory. One of the most important advantages of this method is that it enables the simultaneous study of associations between the amplitude and the phase in various timescales. In this pilot study, we adopt the alternating sunspot area as a function of time, known as Bracewell transformation. We first calculate the instantaneous amplitude and the instantaneous phase. As a result, we confirm a ~22-year periodic behavior in the instantaneous amplitude. We also find that a behavior of the instantaneous amplitude with longer periodicities than the ~22-year periodicity can also be seen, though it is not as straightforward as the obvious ~22-year periodic behavior revealed by the method currently proposed. In addition to these, we note that the phase difference apparently correlates with the instantaneous amplitude. On the other hand, however, we cannot see any obvious association of the instantaneous frequency and the instantaneous amplitude. We conclude by briefly discussing the current status of development of an algorithm for the solar activity forecast based on the method presented, as this work is a part of that larger project.
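
A minimal reproduction of the analysis steps named above (sign alternation followed by the analytic signal) is sketched below on a synthetic stand-in series; the cycle shape and modulation are assumptions, not the 1755-2010 sunspot area record.

```python
# Hilbert-transform analysis of an alternating, sunspot-like synthetic series.
import numpy as np
from scipy.signal import hilbert

t = np.arange(0, 250, 1 / 12)               # years, monthly sampling
area = (np.sin(np.pi * t / 11) ** 2) * (1 + 0.3 * np.sin(2 * np.pi * t / 90))

# Bracewell-style alternation: flip the sign of every other ~11-year cycle,
# turning the ~11-year activity cycle into a ~22-year oscillation.
sign = np.sign(np.sin(np.pi * t / 11))
alternating = sign * area

# Analytic signal -> instantaneous amplitude, phase, and frequency.
analytic = hilbert(alternating)
inst_amplitude = np.abs(analytic)
inst_phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(inst_phase) / (2 * np.pi * np.diff(t))   # cycles per year

print(f"mean instantaneous period ~ {1 / inst_freq.mean():.1f} years")
```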

Performance Improvement of Intrusion Detection System based on Hidden Markov Model through Privilege Flows Modeling (권한이동 모델링을 통한 은닉 마르코프 모델 기반 침입탐지 시스템의 성능 향상)

  • 박혁장;조성배
    • Journal of KIISE:Information Networking / v.29 no.6 / pp.674-684 / 2002
  • Anomaly detection techniques have been devised to address the limitations of the misuse detection approach to intrusion detection. An HMM is a useful tool for modeling sequence information whose generation mechanism is not observable, and it is an optimal modeling technique for minimizing false-positive errors and maximizing the detection rate. However, HMMs have the shortcoming of long training times. This paper proposes an effective HMM-based IDS that improves modeling time and performance by considering only the events of privilege flows, based on domain knowledge of attacks. Experimental results show that training with the proposed method is significantly faster than the conventional method trained with all data, with no loss of recognition performance.
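
The detection side of such an HMM-based IDS is essentially sequence scoring: compute the likelihood of an observed event sequence under a model trained on normal privilege flows and alert when it is too low. The toy sketch below implements that scoring with the scaled forward algorithm; the model parameters, event alphabet, and threshold are made up for illustration and are not the paper's.

```python
# Toy HMM sequence scoring for anomaly detection (illustrative parameters).
import numpy as np

# Hypothetical 2-state HMM over 3 symbolic events (e.g. ordinary call,
# setuid-type privilege change, privileged file access).
pi = np.array([0.8, 0.2])                   # initial state distribution
A = np.array([[0.9, 0.1],                   # state transition matrix
              [0.3, 0.7]])
B = np.array([[0.7, 0.2, 0.1],              # emission probabilities per state
              [0.1, 0.3, 0.6]])

def log_likelihood(obs):
    """Scaled forward algorithm; returns log P(obs | model)."""
    alpha = pi * B[:, obs[0]]
    logp = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        logp += np.log(alpha.sum())
        alpha /= alpha.sum()
    return logp

normal = [0, 0, 1, 0, 0, 2, 0, 0]            # looks like ordinary activity
suspicious = [2, 2, 1, 2, 2, 2, 1, 2]        # unusually many privileged events
threshold = -8.0                              # illustrative; set from normal training data
for name, seq in [("normal", normal), ("suspicious", suspicious)]:
    lp = log_likelihood(seq)
    print(f"{name}: logL = {lp:.2f} -> {'ALERT' if lp < threshold else 'ok'}")
```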

Ontology Versions Management Schemes using Change Set (변경 집합을 이용한 온톨로지 버전 관리 기법)

  • Yun, Hong-Won;Lee, Jung-Hwa;Kim, Jung-Won
    • Journal of Information Technology Applications and Management / v.12 no.3 / pp.27-39 / 2005
  • The Semantic Web has recently increased interest in ontologies. An ontology is an essential component of the Semantic Web and continues to change and evolve. We consider version management schemes for ontologies. We study changes based on domain changes, changes in conceptualization, metadata changes, and the temporal dimension. Our change specification is represented by a change set, which consists of instance data changes, structural changes, and identifier changes. To support queries over ontology versions, we consider a temporal dimension that includes valid time. Ontology versioning produces a massive number of versions to be stored and maintained. We present three version management schemes: 1) storing all the change sets, 2) storing aggregated change sets periodically, and 3) storing aggregated change sets using an adaptive criterion. We conduct a set of experiments to compare the performance of the three schemes. Scheme 1 has the lowest storage usage, but its average response time is extremely large; the response time of Scheme 3 is smaller than that of Scheme 2, and Scheme 3 shows relatively good performance overall.
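
A toy sketch of the change-set idea and of the periodic aggregation used in schemes 2 and 3 is given below; the class layout and field names are illustrative assumptions, not the paper's data model.

```python
# Toy change-set storage sketch: per-version deltas plus periodic aggregates.
from dataclasses import dataclass, field

@dataclass
class ChangeSet:
    version: int
    instance_changes: dict = field(default_factory=dict)    # instance id -> new value
    structural_changes: dict = field(default_factory=dict)  # concept -> change
    identifier_changes: dict = field(default_factory=dict)  # old id -> new id

def aggregate(change_sets):
    """Merge consecutive change sets into one (later changes win)."""
    merged = ChangeSet(version=change_sets[-1].version)
    for cs in change_sets:
        merged.instance_changes.update(cs.instance_changes)
        merged.structural_changes.update(cs.structural_changes)
        merged.identifier_changes.update(cs.identifier_changes)
    return merged

# Scheme 1: store every change set; rebuilding version N applies all of them.
history = [ChangeSet(i, instance_changes={f"x{i}": i}) for i in range(1, 9)]

# Scheme 2: additionally store an aggregate every k versions, so a query for
# version 8 applies one aggregate (v1..v4) plus the remaining deltas (v5..v8).
k = 4
checkpoints = [aggregate(history[i:i + k]) for i in range(0, len(history), k)]
print(len(history), "raw change sets,", len(checkpoints), "aggregated checkpoints")
```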


Auto Configuration Module for Logstash in Elasticsearch Ecosystem

  • Ahmed, Hammad;Park, Yoosang;Choi, Jongsun;Choi, Jaeyoung
    • Annual Conference of KIPS / 2018.10a / pp.39-42 / 2018
  • Log analysis and monitoring are of significant importance in most systems. Log management is especially important in distributed applications, cloud-based applications, and applications designed for big data. These applications produce a large number of log files which contain essential information, and this information can be used for log analytics to understand relevant patterns in varying log data. However, tools are needed for parsing, storing, and visualizing this log information. The Elasticsearch, Logstash, and Kibana (ELK) stack is one of the most popular tool sets for log management. For the ingestion of log files, configuration files are of key importance, as they cover all the services needed to input, process, and output the log files. However, creating configuration files is often complicated and time consuming, as it requires domain expertise and manual work. In this paper, an auto-configuration module for Logstash is proposed which aims to automatically generate Logstash configuration files. The primary purpose of this paper is to provide a mechanism that can generate the configuration files for the corresponding log files in less time. The proposed module aims to improve the overall efficiency of the log management system.
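
A minimal sketch of what such an auto-configuration module might emit is shown below: given a log path, a grok pattern, and an index name, it renders a Logstash pipeline with input, filter, and output sections. The template fields and defaults are assumptions, not the authors' module.

```python
# Illustrative generator for a simple Logstash pipeline configuration.
CONFIG_TEMPLATE = """\
input {{
  file {{
    path => "{path}"
    start_position => "beginning"
  }}
}}
filter {{
  grok {{
    match => {{ "message" => "{pattern}" }}
  }}
}}
output {{
  elasticsearch {{
    hosts => ["http://localhost:9200"]
    index => "{index}"
  }}
}}
"""

def generate_config(path: str, pattern: str, index: str) -> str:
    """Render a Logstash pipeline config from a few user-supplied fields."""
    return CONFIG_TEMPLATE.format(path=path, pattern=pattern, index=index)

if __name__ == "__main__":
    # Hypothetical Apache access log; %{COMBINEDAPACHELOG} is a stock grok pattern.
    print(generate_config("/var/log/apache2/access.log",
                          "%{COMBINEDAPACHELOG}",
                          "apache-logs"))
```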