• Title/Summary/Keyword: minimum approach range

Search Results: 73

An Adaptive Time Delay Estimation Method Based on Canonical Correlation Analysis (정준형 상관 분석을 이용한 적응 시간 지연 추정에 관한 연구)

  • Lim, Jun-Seok;Hong, Wooyoung
    • The Journal of the Acoustical Society of Korea / v.32 no.6 / pp.548-555 / 2013
  • The localization of sources has numerous applications. To estimate the position of a source, the relative delay of the direct signal between two or more received signals must be determined. Although the generalized cross-correlation method is the most popular technique, an approach based on eigenvalue decomposition (EVD), which utilizes the eigenvector of the minimum eigenvalue, is also popular. The performance of the EVD-based method degrades at low SNR and in correlated-noise environments, because it becomes difficult to select a single eigenvector for the minimum eigenvalue. In this paper, we propose a new adaptive algorithm based on Canonical Correlation Analysis (CCA) in order to extend the operating range to lower SNRs and correlated environments. The proposed algorithm uses the eigenvector corresponding to the maximum eigenvalue in the generalized eigenvalue decomposition (GEVD). The estimated eigenvector contains all the information needed for time delay estimation. We have performed simulations with uncorrelated and correlated noise at several SNRs, showing that the CCA-based algorithm estimates time delays more accurately than the adaptive EVD algorithm.
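As a point of reference for the problem this abstract addresses, the sketch below shows the classical cross-correlation delay estimator that the abstract cites as the most popular baseline (not the paper's CCA/GEVD algorithm); the signal, noise level, and delay are made-up illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, true_delay = 500, 25                      # samples
s = rng.standard_normal(n)                   # source signal
x1 = s + 0.1 * rng.standard_normal(n)        # sensor 1
x2 = np.roll(s, true_delay) + 0.1 * rng.standard_normal(n)  # sensor 2, delayed

corr = np.correlate(x2, x1, mode="full")     # cross-correlation over all lags
lags = np.arange(-n + 1, n)
estimated_delay = lags[np.argmax(corr)]      # peak lag = estimated delay
```

The EVD- and CCA-based methods the abstract compares instead recover the delay from an eigenvector of a (generalized) eigenproblem on the sensor covariance matrices, which is what extends operation to lower SNRs.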

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.59-83 / 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various deep learning models have been actively applied to English texts. In deep-learning sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being fed to the model. Here, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data; these have been widely used in sentiment analysis studies of reviews from fields such as restaurants, movies, laptops, and cameras. Unlike in English, the morpheme plays an essential role in sentiment analysis and sentence structure analysis in Korean, a typical agglutinative language with well-developed postpositions and endings. A morpheme is the smallest meaningful unit of a language, and a word consists of one or more morphemes; for example, the word '예쁘고' consists of the morphemes '예쁘' (adjective stem) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. This raises several questions. What is the desirable range of POS (part-of-speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model? Is it proper to apply a typical word vector model, which relies primarily on the form of words, to Korean with its high homonym ratio? Will text preprocessing such as correcting spelling or spacing errors affect classification accuracy, especially when drawing morpheme vectors from Korean product reviews containing many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which are likely to be encountered first when applying deep learning models to Korean texts, and summarize them as three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean with regard to the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can a satisfactory level of classification accuracy be reached when applying deep learning to Korean sentiment analysis? To address these questions, we generate various types of morpheme vectors reflecting the research questions and compare the classification accuracy through a non-static CNN (Convolutional Neural Network) model that takes the morpheme vectors as input. As training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used. To derive the morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million Naver Shopping cosmetics product reviews and 520,000 Naver News articles, roughly corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ in three respects. First, they come from two data sources: Naver News, with high grammatical correctness, and Naver Shopping cosmetics product reviews, with low grammatical correctness. Second, they differ in the degree of preprocessing, namely sentence splitting only, or additional spelling and spacing corrections after sentence separation. Third, they vary in the form of input fed into the word vector model: the morphemes themselves, or the morphemes with their POS tags attached. The morpheme vectors further vary in the range of POS tags considered, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived with the CBOW (Continuous Bag-Of-Words) model, with a context window of 5 and a vector dimension of 300. The results suggest that using same-domain text even with lower grammatical correctness, performing spelling and spacing corrections in addition to sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, lead to better classification accuracy. POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency threshold for a morpheme to be included do not appear to have any definite influence on classification accuracy.
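A minimal sketch of the input-form choice this study compares: feeding a word-vector model plain morphemes versus morphemes with POS tags attached (so homonyms with different tags receive distinct vectors). The analyzer output below is hand-written for illustration, using the tags from the abstract's own example; in practice it would come from a Korean morphological analyzer.

```python
# Hand-written analyzer output for the word '예쁘고' (adjective stem + connective ending)
analyzed = [("예쁘", "VA"), ("고", "EC")]

def to_tokens(morphs, attach_pos=True):
    """Build word2vec-style tokens, optionally attaching POS tags."""
    return [f"{m}/{t}" if attach_pos else m for m, t in morphs]

with_tags = to_tokens(analyzed)                       # tokens like '예쁘/VA'
without_tags = to_tokens(analyzed, attach_pos=False)  # plain morphemes
# Either token stream can then be fed to a CBOW word-vector model
# (the study uses a context window of 5 and a vector dimension of 300).
```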

Fundamental Properties of Low Strength Concrete Mixture with Blast Furnace Slag and Sewage Sludge (고로슬래그미분말 및 하수슬러지를 활용한 저강도 콘크리트의 기초적 물성)

  • Kwon, Chil Woo;Lim, Nam Gi
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.17 no.3 / pp.136-144 / 2013
  • In this study, in order to establish a plan that enables the safe use of renewable resources such as diverse industrial by-products and urban recycled materials, we conducted experiments focused on flow, bleeding, compressive strength, and environmental pollution evaluation to assess the material properties of low strength concrete using BFS and SS. For low strength concrete using BFS and SS, blending at least BFS 6000 within a 30% range, regardless of the type of sand used, was found to be the most effective approach for improving workability by securing the minimum unit water quantity, restraining the bleeding ratio, and establishing compressive strength, taking account of applicability at the work site. In particular, in view of the efficient use of SS, the optimal mixing condition was found to be BFS 8000 within the 30% range, not only for improving workability, restraining the bleeding ratio, and establishing compressive strength, but also for application at the work site. Further, the results of tests on hazardous substance content and of elution tests conducted on soil cement using SS indicated that all values satisfied the environmental standards, without any harmful effects on the surrounding environment.

Real-Time Location Identification of Indoor Rescuees at Accident Sites and Location-Based Rescue Response (사고 현장 실시간 실내 인명 위치확인 및 구조대응 연구)

  • Ko, Youngjoo;Shin, Yongbeom;Yoo, Sangwoo;Shin, Dongil
    • Journal of the Korean Institute of Gas / v.25 no.3 / pp.46-52 / 2021
  • In this study, an on-site location identification and response system was proposed that accurately determines the location of rescue requesters inside buildings using smartphone Wi-Fi APs. The location server was requested to measure the Wi-Fi AP signal strength at least 25 times at 8 different points in the building, and the positional accuracy and error range were checked by analyzing the coordinates of the received positions. In addition, response time was measured under three different location-information conditions to compare life-saving response times with and without location information. The minimum and maximum error values over the eight cases were 4.137 m and 14.037 m, respectively, with an average error of 9.525 m. Compared with the base transceiver station (BTS) based position error of 263 m, the error range could be reduced by up to 93%. When location information was given, it took 10 minutes and 50 seconds to save lives; without any location information, the rescue process took more than 45 minutes. These results indicate that acquiring the location of rescuees in a building via the smartphone Wi-Fi AP approach is effective in reducing life-saving time in on-site responses.
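For context, a common way to turn Wi-Fi AP signal strength into a distance estimate for indoor positioning is the log-distance path-loss model. This is a generic illustration, not the paper's pipeline, and the reference power and path-loss exponent below are hypothetical calibration values.

```python
def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.7):
    """Log-distance path-loss model: d = 10 ** ((P0 - RSSI) / (10 * n)).

    rssi_at_1m (P0) and path_loss_exp (n) are hypothetical; indoor
    exponents typically fall between about 2 and 4.
    """
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exp))

# A weaker reading implies a larger distance from the AP:
d_near = rssi_to_distance(-40.0)   # 1.0 m at the reference power
d_far = rssi_to_distance(-67.0)    # 10.0 m with these parameters
```

Averaging many such measurements per point, as the study's 25+ repeated measurements do, reduces the effect of RSSI fluctuation on the final position estimate.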

Partial Denoising Boundary Image Matching Based on Time-Series Data (시계열 데이터 기반의 부분 노이즈 제거 윤곽선 이미지 매칭)

  • Kim, Bum-Soo;Lee, Sanghoon;Moon, Yang-Sae
    • Journal of KIISE / v.41 no.11 / pp.943-957 / 2014
  • Removing noise, called denoising, is essential for more intuitive and more accurate results in boundary image matching. This paper deals with a partial denoising problem that allows a limited amount of partial noise embedded in boundary images. To solve this problem, we first define the partial denoising time-series that can be generated from an original image time-series by removing a variety of partial noises, and propose an efficient mechanism that quickly obtains those partial denoising time-series in the time-series domain rather than the image domain. We next present the partial denoising distance, the minimum distance from a query time-series to all possible partial denoising time-series generated from a data time-series, and use this distance as the similarity measure in boundary image matching. Using the partial denoising distance, however, incurs severe computational overhead, since a large number of partial denoising time-series must be considered. To solve this problem, we derive a tight lower bound for the partial denoising distance and formally prove its correctness. We also propose range and k-NN search algorithms exploiting the partial denoising distance in boundary image matching. Through extensive experiments, we finally show that our lower-bound-based approach improves search performance by up to an order of magnitude in partial denoising-based boundary image matching.
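The lower-bound pruning idea the abstract relies on can be sketched generically: compute a cheap bound first, and evaluate the expensive exact distance only when the bound cannot rule the candidate out. The sum-based bound below (valid by Cauchy-Schwarz) is a simple stand-in, not the paper's partial-denoising lower bound.

```python
import numpy as np

def euclidean(q, s):
    return float(np.linalg.norm(q - s))

def sum_lower_bound(q, s):
    # |sum(q) - sum(s)| / sqrt(n) <= ||q - s||  (Cauchy-Schwarz),
    # so pruning on this bound can never discard a true match.
    return abs(q.sum() - s.sum()) / np.sqrt(len(q))

def range_search(query, dataset, eps):
    """Return all series within eps of the query, pruning with the bound."""
    hits = []
    for ts in dataset:
        if sum_lower_bound(query, ts) > eps:  # skip the full distance
            continue
        if euclidean(query, ts) <= eps:
            hits.append(ts)
    return hits
```

Because the bound never exceeds the true distance, the pruned search returns exactly the same answer set as a brute-force scan, only faster when the bound filters most candidates.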

Fertility Evaluation of Tobacco Field by Quantitative and Qualitative Characteristics of Soils (토양의 정량적 및 정성적 특성을 이용한 연초 경작지의 비옥도 평가)

  • 홍순달;김기인;이윤환;정훈채;김용연
    • Journal of the Korean Society of Tobacco Science / v.22 no.2 / pp.123-132 / 2000
  • An evaluation method for soil fertility combining soil color characteristics and survey data from soil maps, as well as chemical properties, was investigated in a total of 35 field and pot experiments. Thirty-five tobacco fields, including 11 at Cheonweon county in Chungnam Province, 9 at Goesan county in Chungbuk Province, and 15 at Youngcheon county in Kyongbuk Province, were selected in 1984 to cover a wide range of landscapes and soil attributes. Yields of tobacco grown on plots of both the pot and field experiments to which no fertilizer was applied were taken as the basic fertility of the soil (BFS). The BFS was estimated from 32 independent variables: 15 chemical properties, 3 color characteristics, and 14 soil survey data from soil maps. Twenty-four independent variables, comprising 16 quantitative variables selected from the 24 quantitative variables by collinearity diagnostics and 8 qualitative variables, were classified and analyzed by multiple linear regression (MLR) using the REG and GLM procedures of SAS. Tobacco yield in the field experiment varied widely, with an eight-fold difference between minimum and maximum yield, indicating diverse soil fertility among the experimental fields. Evaluation of the BFS by MLR with quantitative variables was more reliable than evaluation by a single index, and the improvement in the coefficient of determination ($R^2$) was greater in the pot experiment than in the field experiment. Evaluation of the BFS by MLR in the field experiment was further improved by adding qualitative variables to the quantitative variables. The variability in the BFS of the field experiment was explained 43.2% by quantitative variables and 67.95% by both quantitative and qualitative variables, compared with 21.7% by simple regression on the NO$_3$-N content in soil. The best MLR evaluation of the BFS of the field experiment included NO$_3$-N content and the L and a values of soil color as quantitative variables, and available soil depth and topography as qualitative variables. Consequently, this MLR approach, including both quantitative and qualitative variables, appears usable as an evaluation model of soil fertility for tobacco fields.
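A toy version of the MLR-with-$R^2$ evaluation the abstract describes: fit a multiple linear regression by least squares and compute the coefficient of determination. The predictors and coefficients below are synthetic illustrations, not the study's soil data.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((35, 3))                 # stand-ins for e.g. NO3-N, L value, a value
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(35)

A = np.column_stack([np.ones(len(X)), X])        # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)     # ordinary least squares
pred = A @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
# Qualitative variables (e.g. topography classes) would enter the same
# regression after dummy (one-hot) coding.
```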

A Study on the Ultimate Point Resistance of Rock Socketed Drilled Shafts Using FLAC3D and UDEC (유한차분해석과 개별요소해석을 이용한 암반에 근입된 현장타설말뚝의 선단지지력 연구)

  • Lee, Jae-Hwan;Cho, Hoo-Yeon;You, Kwang-Ho;Jeong, Sang-Seom
    • Journal of the Korean Geotechnical Society / v.28 no.1 / pp.29-39 / 2012
  • The maximum unit point resistance ($q_{max}$) of rock socketed drilled shafts subjected to axial loads was investigated by numerical analysis. A 3D Finite Difference Method (FDM) analysis and a Distinct Element Method (DEM) analysis were performed with varying rock elastic modulus (E), discontinuity spacing ($S_j$), discontinuity dip angle ($i_j$), and pile diameter (D). Based on the results obtained, the ultimate point resistance ($q_{max}$) increased as the rock elastic modulus (E) and discontinuity spacing ($S_j$) increased, whereas $q_{max}$ decreased as the pile diameter (D) increased. As for the influence of the discontinuity dip angle ($i_j$), $q_{max}$ decreased by up to 50% of its maximum value within the range $0^{\circ}$ < $i_j$ < $60^{\circ}$ due to shear failure at the rock discontinuities. Furthermore, for $20^{\circ}{\leq}i_j{\leq}40^{\circ}$, the influence of $i_j$ should be taken into account, because $q_{max}$ tended to reach a minimum as $i_j$ approached the friction angle of the discontinuity (${\phi}_j$).

Evaluating the Efficiency of Personal Information Protection Activities in a Private Company: Using Stochastic Frontier Analysis (개인정보처리자의 개인정보보호 활동 효율성 분석: 확률변경분석을 활용하여)

  • Jang, Chul-Ho;Cha, Yun-Ho;Yang, Hyo-Jin
    • Informatization Policy / v.28 no.4 / pp.76-92 / 2021
  • The value of personal information is increasing with the digital transformation of the 4th Industrial Revolution. The purpose of this study is to analyze the efficiency of the personal information protection efforts of 2,000 private companies. It uses stochastic frontier analysis (SFA), a parametric estimation method that measures the absolute efficiency of protective activities. In particular, the personal information activity index is used as the output variable for the efficiency analysis, with the personal information protection budget and the number of personnel as input variables. The analysis finds efficiency ranging from a minimum of 0.466 to a maximum of 0.949, with an overall average of 0.818 (81.8%). The main causes of inefficiency include non-fulfillment of personal information management measures, the lack of a system for promoting personal information protection education, and non-fulfillment of obligations related to CCTV. Policy support is needed to implement safety measures and perform personal information encryption, especially customized support for small and medium-sized enterprises.
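In SFA, firm-level technical efficiency is usually reported as $TE_i = \exp(-u_i)$, where $u_i \ge 0$ is the one-sided inefficiency term of the frontier model $y_i = f(x_i) + v_i - u_i$. The sketch below only illustrates how such scores map into the (0, 1] range the abstract reports; the $u$ values are made up, not estimates from the study.

```python
import numpy as np

u = np.array([0.05, 0.35, 0.76])   # hypothetical one-sided inefficiency terms
te = np.exp(-u)                    # technical efficiency, each in (0, 1]
mean_te = te.mean()                # 1.0 would mean every firm sits on the frontier
```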

Comparing Physical and Thermal Environments Using UAV Imagery and ENVI-met (UAV 영상과 ENVI-met 활용 물리적 환경과 열적 환경 비교)

  • Seounghyeon KIM;Kyunghun PARK;Bonggeun SONG
    • Journal of the Korean Association of Geographic Information Studies / v.26 no.4 / pp.145-160 / 2023
  • The purpose of this study was to compare and analyze diurnal thermal environments using Unmanned Aerial Vehicle (UAV)-derived physical parameters (NDVI, SVF) and ENVI-met modeling. The findings revealed significant correlations, at the 1% significance level, between UAV-derived NDVI and SVF and thermal environment elements such as S↑, S↓, L↓, L↑, Land Surface Temperature (LST), and Tmrt. In particular, NDVI showed a strong negative correlation with S↑, reaching a minimum of -0.52** at 12:00, and a positive correlation of 0.53** or higher with L↓ at all times. A significant negative correlation of -0.61** with LST was observed at 13:00, suggesting the high relevance of NDVI to long-wave radiation. For SVF, the results showed a strong relationship with long-wave radiative flux, depending on the SVF range. These findings offer an integrated approach to evaluating thermal comfort and microclimates in urban areas, and can be applied to understand the impact of urban design and landscape characteristics on pedestrian thermal comfort.
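The NDVI used above is computed from the near-infrared and red bands of the UAV imagery with the standard formula; the band reflectance values in the example are hypothetical.

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """NDVI = (NIR - Red) / (NIR + Red), ranging from -1 to 1.

    eps guards against division by zero on dark pixels.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Dense vegetation reflects strongly in NIR, so its NDVI approaches 1:
veg = ndvi(0.5, 0.08)        # hypothetical vegetated pixel
builtup = ndvi(0.2, 0.18)    # hypothetical sparse / built-up pixel
```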

Patient Specific Quality Assurance of IMRT: Quantitative Approach Using Film Dosimetry and Optimization (강도변조방사선치료의 환자별 정도관리: 필름 선량계 및 최적화법을 이용한 정량적 접근)

  • Shin Kyung Hwan;Park Sung-Yong;Park Dong Hyun;Shin Dongho;Park Dahl;Kim Tae Hyun;Pyo Hongryull;Kim Joo-Young;Kim Dae Yong;Cho Kwan Ho;Huh Sun Nyung;Kim Il Han;Park Charn Il
    • Radiation Oncology Journal / v.23 no.3 / pp.176-185 / 2005
  • Purpose: Film dosimetry was performed as part of patient-specific intensity modulated radiation therapy quality assurance (IMRT QA) to develop a new optimization method for the film isocenter offset and to suggest new quantitative criteria for film dosimetry. Materials and Methods: Film dosimetry was performed on 14 IMRT patients with head and neck cancers. An optimization method for obtaining the local minimum was developed to adjust for the error in the film isocenter offset, which is the largest part of the systematic errors. Results: The adjusted film isocenter offset under optimization was 1 mm in 12 patients, while only two patients showed a 2 mm translation. The mean absolute average dose differences before and after optimization were 2.36 and $1.56\%$, respectively, and the mean ratios over a $5\%$ tolerance were 9.67 and $2.88\%$. After optimization, the dose differences decreased dramatically. A low dose range cutoff (L-Cutoff) has been suggested for clinical application. New quantitative criteria, a ratio over the $5\%$ tolerance of less than $10\%$ and an absolute average dose difference of less than $3\%$, have been suggested for the verification of film dosimetry. Conclusion: The new optimization method was effective in adjusting for the film dosimetry error, and the new quantitative criteria suggested in this research are believed to be sufficiently accurate and clinically useful.
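The idea behind the offset optimization can be sketched as a brute-force search: shift the film dose map by a few pixels in each direction and keep the offset that minimizes the mean absolute dose difference to the planned dose. This is a stand-in for the paper's local-minimum optimizer, and the dose maps in the test are synthetic.

```python
import numpy as np

def best_offset(film, plan, max_shift=2):
    """Return the (dx, dy) pixel shift of the film dose map that minimizes
    the mean absolute dose difference to the planned dose, and that error."""
    best, best_err = (0, 0), np.inf
    for dx in range(-max_shift, max_shift + 1):
        for dy in range(-max_shift, max_shift + 1):
            shifted = np.roll(film, (dx, dy), axis=(0, 1))
            err = np.abs(shifted - plan).mean()
            if err < best_err:
                best_err, best = err, (dx, dy)
    return best, best_err
```

With the offset corrected, the residual dose difference reflects delivery and dosimetry errors rather than setup error, which is why the abstract's difference metrics drop after optimization.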