• Title/Summary/Keyword: coefficient-based method


Feature Extraction using Discrete Wavelet Transform and Dynamic Time-Warped Algorithms in Wireless Sensor Networks for Barbed Wire Entanglements Surveillance (철조망 감시를 위한 무선 센서 네트워크에서 이산 웨이블릿 변환과 동적 시간 정합 알고리즘을 이용한 특징 추출)

  • Lee, Tae-Young;Cha, Dae-Hyun;Hong, Jin-Keun;Han, Kun-Hui;Hwang, Chan-Sik
    • Journal of the Korea Academia-Industrial cooperation Society / v.11 no.4 / pp.1342-1347 / 2010
  • Wireless sensor networks (WSNs) for barbed wire entanglement surveillance have been studied for applications such as industrial facilities, security areas, prisons, military installations, and airports. Current barbed wire entanglement surveillance is built on wired sensor networks, which guarantee high data transmission rates and can therefore apply the fast Fourier transform to the transmitted data to extract feature parameters. A wireless sensor network, however, has a far lower data transmission rate than a wired one, so the FFT-based feature extraction used in wired networks is not applicable. In this paper, the proposed method uses the level-1 approximation coefficients of the discrete wavelet transform (DWT) as input to a dynamic time-warping (DTW) algorithm to extract detection and classification feature parameters for barbed wire entanglement surveillance. The level-1 approximation coefficients retain both the time and the frequency information of the signal, so the DWT-based DTW algorithm improves target detection and classification compared with using the signal energy alone.
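
A minimal sketch of the abstract's pipeline, assuming PyWavelets for the DWT and a hand-rolled DTW; the wavelet choice ('db4') and the toy signals are illustrative assumptions, not the paper's data:

```python
# Compare sensor signals via DTW on their level-1 DWT approximation
# coefficients instead of on raw samples.
import numpy as np
import pywt  # PyWavelets

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) dynamic time-warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# Toy "intrusion" signatures, slightly shifted in time.
t = np.linspace(0, 1, 256)
reference = np.sin(2 * np.pi * 5 * t) * np.exp(-3 * t)
observed = np.sin(2 * np.pi * 5 * (t - 0.05)) * np.exp(-3 * t)

# Level-1 DWT: keep only the approximation coefficients (half-length,
# retaining coarse time and frequency content), which cuts the data a
# low-rate wireless node must transmit before DTW matching.
ref_approx, _ = pywt.dwt(reference, 'db4')
obs_approx, _ = pywt.dwt(observed, 'db4')

print('DTW on approximation coefficients:', dtw_distance(ref_approx, obs_approx))
```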

Analytical Study on Behavior Characteristic of Shear Friction on Reinforced Concrete Shear Wall-Foundation Interface using High-Strength Reinforcing Bar (고강도 전단철근을 사용한 철근콘크리트 전단벽체-기초계면에서의 전단마찰 거동특성에 대한 해석적 연구)

  • Cheon, Ju-Hyun;Lee, Ki-Ho;Baek, Jang-Woon;Park, Hong-Gun;Shin, Hyun-Mock
    • Journal of the Korea Concrete Institute / v.28 no.4 / pp.473-480 / 2016
  • The purpose of this study is to provide an analytical method that reasonably evaluates the complicated shear-friction failure behavior of reinforced concrete shear wall specimens using grade 500 MPa high-strength bars. A total of 16 test specimens covering a variety of variables, such as aspect ratio, interface friction coefficient at the construction joint, reinforcement details, reinforcement ratio in each direction, and material properties, were selected, and the analysis was performed with a non-linear finite element analysis program (RCAHEST) applying a modified shear-friction constitutive equation for the interface based on the concrete design code (KCI, 2012) and the CEB-FIP Model Code 2010. The mean and coefficient of variation of the ratio of experimental to predicted maximum load were 1.04 and 17%, respectively, and the analysis properly captured the failure mode and the overall behavior up to failure. Based on these results, the analysis program with the modified shear-friction constitutive equation is judged to give results of relatively high reliability.
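
The abstract's headline statistic is the mean and coefficient of variation of the experiment-to-analysis peak-load ratio. A tiny sketch of that computation, with placeholder ratios (the paper's 16-specimen data are not reproduced here):

```python
# Mean and coefficient of variation (CV) of test/prediction peak-load ratios.
import numpy as np

ratios = np.array([1.10, 0.95, 1.02, 1.08, 0.99, 1.12, 1.01, 0.97])  # P_test / P_RCAHEST, placeholders

mean_ratio = ratios.mean()
cv_percent = ratios.std(ddof=1) / mean_ratio * 100  # sample CV, in percent

print(f'mean = {mean_ratio:.2f}, CV = {cv_percent:.0f}%')  # paper reports 1.04 and 17%
```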

Development of a Tool to Measure Suffering in Patients with Cancer (암환자의 고통 측정도구 개발에 관한 연구)

  • 강경아
    • Journal of Korean Academy of Nursing / v.29 no.6 / pp.1365-1378 / 1999
  • This methodological study was conducted to develop an instrument to measure suffering in patients with cancer and to test the instrument's validity and reliability. The research procedure was as follows: 1) The first step was to develop a conceptual framework based on a comprehensive literature review and in-depth interviews with patients with cancer. This framework was organized into three dimensions (the intrapersonal dimension, the significant-other and context-related dimension, and the transcendental dimension), and 59 items were initially adopted. 2) These items were analyzed using the content validity index (CVI), and the 53 items with a CVI of at least 80% were selected. 3) A pretest was carried out with 87 patients with cancer. After the pretest results were examined by item analysis, 44 items were selected. A second content validity test was conducted, and 6 items falling below the 80% CVI were eliminated. 4) To test reliability and validity, data were collected from January 25 to February 26, 1999. The subjects were 160 patients with cancer and 185 healthy persons. Factor analysis, item analysis, and the multitrait-multimethod approach were used to examine validity. The findings are as follows: 1) Cronbach's alpha for internal consistency was .92 for the total 38 items and .79, .82, and .85 for the three dimensions, in that order. 2) Item analysis was based on the corrected item-to-total correlation coefficient (.30 or higher) and on the alpha estimate if the item were dropped from the scale. 3) In the initial factor analysis, using principal component analysis with varimax rotation, one item was deleted because of factor complexity (indiscriminate factor loadings). In the second factor analysis, 7 factors with eigenvalues greater than 1.0 were extracted, explaining 56 percent of the total variance. The seven factors were labeled 'family relationship', 'emotional condition', 'physical discomfort', 'meaning and goal of life', 'contextual stimuli', 'change of body image', and 'guilt feelings'. 4) Convergent validity between this instrument and a life satisfaction scale was supported by a significant positive correlation (r = .52, p = .00), and discriminant validity against the depression scale (CES-D) was supported by a significant negative correlation (r = -.50, p = .00). The instrument developed in this study for assessing the suffering of patients with cancer thus showed a high degree of reliability and validity, and can be utilized effectively in assessment when caring for patients with cancer.
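
A short sketch of the internal-consistency statistic the study reports, Cronbach's alpha over the 38 retained items; the 160-by-38 response matrix below is synthetic, not the study's data:

```python
# Cronbach's alpha for a k-item scale.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, k_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(160, 1))                           # shared "suffering" factor
responses = latent + rng.normal(scale=0.8, size=(160, 38))   # 38 items, as in the final scale

print(f'alpha = {cronbach_alpha(responses):.2f}')  # the paper reports .92 for 38 items
```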


Accuracy of simulation surgery of Le Fort I osteotomy using optoelectronic tracking navigation system (광학추적항법장치를 이용한 르포씨 제1형 골절단 가상 수술의 정확성에 대한 연구)

  • Bu, Yeon-Ji;Kim, Soung-Min;Kim, Ji-Youn;Park, Jung-Min;Myoung, Hoon;Lee, Jong-Ho;Kim, Myung-Jin
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons / v.37 no.2 / pp.114-121 / 2011
  • Introduction: The aim of this study was to demonstrate that simulation surgery on a rapid prototype (RP) model, based on three-dimensional computed tomography (3D CT) data taken before surgery, has the same accuracy as traditional orthognathic surgery with an intermediate splint, using an optoelectronic tracking navigation system. Materials and Methods: Simulation surgery with the same treatment plan as the Le Fort I osteotomy performed on the patient was done on RP models based on the 3D CT data of 12 patients who had undergone a Le Fort I osteotomy in the Department of Oral and Maxillofacial Surgery, Seoul National University Dental Hospital. The 12 distances between 4 points on the skull (both infraorbital foramina and both supraorbital foramina) and 3 points on the maxilla (the contact point of the maxillary central incisors and the mesiobuccal cusp tips of both maxillary first molars) were tracked with the optoelectronic tracking navigation system. The pre-operative distances were compared to evaluate the accuracy of the RP model, and the post-operative distance changes in the 3D CT images were compared with those of the RP models after simulation surgery. Results: A paired t-test revealed a significant difference between the pre-operative distances in the 3D CT image and the RP model (P<0.0001); on the other hand, Pearson's correlation coefficient of 0.995 revealed a significant positive correlation between them (P<0.0001). There was a significant difference between the pre- to post-operative distance changes of the 3D CT image and the RP model (P<0.05), with a Pearson's correlation coefficient of 0.13844 indicating a positive correlation (P<0.1). Conclusion: These results suggest that simulation surgery of a Le Fort I osteotomy using an optoelectronic tracking navigation system is relatively accurate when the pre- and post-operative 3D CT data are compared. Furthermore, the application of an optoelectronic tracking navigation system may be a predictable and efficient method in Le Fort I orthognathic surgery.
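
A hedged sketch of the two statistics the abstract reports, a paired t-test and Pearson's correlation over the 12 inter-landmark distances; the distance values are hypothetical placeholders, and SciPy is an assumed tool, not the authors':

```python
# Paired t-test and Pearson correlation between CT-derived and RP-model distances.
import numpy as np
from scipy import stats

ct_distances = np.array([62.1, 58.4, 71.0, 55.2, 64.8, 69.3,
                         60.0, 57.7, 66.5, 59.9, 63.2, 68.1])  # mm, hypothetical
rp_distances = ct_distances + np.random.default_rng(1).normal(0.6, 0.3, 12)  # small systematic offset

t_stat, p_paired = stats.ttest_rel(ct_distances, rp_distances)
r, p_corr = stats.pearsonr(ct_distances, rp_distances)

print(f'paired t-test: t={t_stat:.2f}, p={p_paired:.4g}')
print(f"Pearson's r = {r:.3f} (the paper reports 0.995 pre-operatively)")
```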

A Methodology for Rain Gauge Network Evaluation Considering the Altitude of Rain Gauge (강우관측소의 설치고도를 고려한 강우관측망 평가방안)

  • Lee, Ji Ho;Jun, Hwan Don
    • Journal of Wetlands Research / v.16 no.1 / pp.113-124 / 2014
  • Observed rainfall can differ with the altitude of the rain gauge, so the characteristics of rainfall events in urban and mountainous areas differ. Because of these mountainous effects, the uncertainty involved in rainfall observation grows with altitude, and the density of rain gauges there should be correspondingly higher. A methodology for rain gauge network evaluation that considers the altitude of the gauges can therefore account for mountainous effects and becomes an important step in forecasting flash floods and calibrating radar rainfall. For this reason, this study suggests a methodology for rain gauge network evaluation that takes the rain gauges' altitude into account. To explore the density of rain gauges at each level of altitude, two measures are designed: the Equal-Altitude-Ratio of gauge density, based on fixed elevation increments, and the Equal-Area-Ratio of gauge density, based on fixed increments of basin area. After both methods are applied to a real watershed, it is found that the Equal-Area-Ratio evaluates a rain gauge network with respect to gauge altitude better than the Equal-Altitude-Ratio does. In addition, to compare the soundness of rain gauge networks across watersheds, the coefficient of variation (CV) of the gauge density under the Equal-Area-Ratio serves as an index of how evenly the gauges are distributed over altitude. The suggested method is applied to the five large watersheds in Korea, and it is found that rain gauges installed in a watershed with a smaller CV are more evenly distributed than those in a watershed with a larger CV.
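
A minimal sketch of the Equal-Area-Ratio index as the abstract describes it: gauge density is computed per equal-area altitude band, and the coefficient of variation across bands serves as the evenness index. Band areas and gauge counts are hypothetical:

```python
# CV of rain-gauge density across equal-area altitude bands.
import numpy as np

band_area_km2 = 250.0                        # each band covers the same basin area
gauges_per_band = np.array([9, 7, 5, 3, 1])  # counts from low to high altitude, hypothetical

density = gauges_per_band / band_area_km2    # gauges per km^2 in each band
cv = density.std(ddof=1) / density.mean()    # lower CV => more even distribution over altitude

print(f'CV of gauge density = {cv:.2f}')
```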

Design of Optimized pRBFNNs-based Face Recognition Algorithm Using Two-dimensional Image and ASM Algorithm (최적 pRBFNNs 패턴분류기 기반 2차원 영상과 ASM 알고리즘을 이용한 얼굴인식 알고리즘 설계)

  • Oh, Sung-Kwun;Ma, Chang-Min;Yoo, Sung-Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.6 / pp.749-754 / 2011
  • In this study, we propose the design of an optimized pRBFNNs-based face recognition system using two-dimensional images and the ASM algorithm. Existing 2-dimensional face recognition methods are usually affected by changes in image scale, position variation, and image background; here, the face region information obtained from the detected face region is used to compensate for these defects. We use a CCD camera to obtain picture frames directly. Histogram equalization partially enhances images distorted by natural as well as artificial illumination. The AdaBoost algorithm is used to separate face regions from non-face regions. We build up a personal profile by extracting both the face contour and shape using the ASM (Active Shape Model) and then reduce the dimension of the image data using PCA. The proposed pRBFNNs consist of three functional modules: the condition part, the conclusion part, and the inference part. In the condition part of the fuzzy rules, the input space is partitioned with Fuzzy C-Means clustering. In the conclusion part of the rules, the connection weights of the RBFNNs are represented as three kinds of polynomials: constant, linear, and quadratic. The essential design parameters of the networks (including the learning rate, momentum coefficient, and fuzzification coefficient) are optimized by means of Differential Evolution. The proposed pRBFNNs are applied to a real-time face image database and evaluated in terms of output performance and recognition rate.
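
A preprocessing sketch covering two steps named in the abstract, histogram equalization and PCA dimension reduction; face detection (AdaBoost) and ASM alignment are assumed to have already produced the crops, and the image data, component count, and use of scikit-learn are illustrative assumptions:

```python
# Histogram-equalize face crops, then project them onto PCA components
# to form the low-dimensional inputs for a downstream classifier.
import numpy as np
from sklearn.decomposition import PCA

def equalize_histogram(img: np.ndarray) -> np.ndarray:
    """Histogram-equalize an 8-bit grayscale image (numpy-only)."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    return (cdf[img] * 255).astype(np.uint8)

rng = np.random.default_rng(42)
faces = rng.integers(0, 256, size=(100, 64, 64), dtype=np.uint8)  # 100 fake aligned crops

equalized = np.stack([equalize_histogram(f) for f in faces])
flat = equalized.reshape(len(faces), -1).astype(float)

pca = PCA(n_components=20)          # keep 20 components (count is an assumption)
features = pca.fit_transform(flat)  # inputs to the pRBFNNs classifier
print(features.shape)               # (100, 20)
```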

Typical Seismic Intensity Calculation for Each Region Using Site Response Analysis (부지응답해석을 이용한 지역별 대표 진도 산출 연구)

  • Ahn, Jae-Kwang;Son, Su-Won
    • Journal of the Korean GEO-environmental Society / v.21 no.1 / pp.5-12 / 2020
  • Vibration propagated from a seismic source attenuates with distance and is amplified or reduced differently from region to region according to topography and geological structure. The vibration propagated from the source to the bedrock is governed largely by attenuation with separation distance, which can be estimated simply through an attenuation equation. However, it is important to grasp the geological information at each location, because the estimate of the vibration transmitted to the surface is affected by the natural period of the soil above the bedrock. Geotechnical investigation data are needed to estimate seismic intensity from this geological information. Where no Vs profile is available, standard penetration tests are mainly used to determine the soil parameters. The Integrated DB Center of National Geotechnical Information manages the geotechnical survey data collected across Korea and holds standard penetration test records for 400,000 boreholes. In this study, the possibility of quantifying the amplification coefficient for each region was examined in order to calculate a physics-based interactive seismic intensity from geotechnical information. Shear-wave velocity columns were generated from SPT-N values, and ground response analyses were performed for the target area. The site coefficients for each zone and the resulting seismic intensity distribution differ significantly according to the analysis method and the regional setting.
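
A hedged sketch of turning an SPT-N column into a shear-wave-velocity profile and a site period, as the abstract describes. The paper does not state which N-to-Vs correlation it uses; the Imai and Tonouchi (1982) form below is one widely cited option, and the layer data are hypothetical:

```python
# Build a Vs column from SPT-N values and estimate the fundamental site period.
import numpy as np

layer_thickness_m = np.array([2.0, 3.0, 5.0, 10.0])  # soil layers above bedrock, hypothetical
spt_n = np.array([8, 15, 25, 40])                    # SPT blow counts per layer, hypothetical

vs = 97.0 * spt_n ** 0.314                   # m/s; empirical correlation (assumption)
H = layer_thickness_m.sum()
vs_avg = H / (layer_thickness_m / vs).sum()  # travel-time-averaged Vs over the column

T_site = 4 * H / vs_avg                      # fundamental site period, T = 4H / Vs
print(f'Vs profile (m/s): {np.round(vs, 1)}, site period = {T_site:.2f} s')
```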

Semantic Process Retrieval with Similarity Algorithms (유사도 알고리즘을 활용한 시맨틱 프로세스 검색방안)

  • Lee, Hong-Joo;Klein, Mark
    • Asia pacific journal of information systems / v.18 no.1 / pp.79-96 / 2008
  • One of the roles of Semantic Web services is to execute dynamic intra-organizational services, including the integration and interoperation of business processes. Since different organizations design their processes differently, retrieving similar semantic business processes is necessary to support inter-organizational collaboration. Most approaches for finding services that have certain features and support certain business processes have relied on some type of logical reasoning and exact matching. This paper presents our approach of using imprecise matching to expand the results of an exact matching engine when querying the OWL (Web Ontology Language) MIT Process Handbook. The MIT Process Handbook is an electronic repository of best-practice business processes, intended to help people (1) redesign organizational processes, (2) invent new processes, and (3) share ideas about organizational practices. In order to use the MIT Process Handbook for process retrieval experiments, we exported it into an OWL-based format: we model the Process Handbook meta-model in OWL and export the processes in the Handbook as instances of that meta-model. Next, we need a sizable number of queries and their corresponding correct answers in the Process Handbook. Many previous studies devised artificial datasets composed of randomly generated numbers without real meaning and used subjective ratings for correct answers and similarity values between processes. To generate a semantics-preserving test data set, we create 20 variants of each target process that are syntactically different but semantically equivalent, using mutation operators; these variants represent the correct answers for the target process. We devise diverse similarity algorithms based on the values of process attributes and the structures of business processes. We use simple text-retrieval similarity measures such as TF-IDF and Levenshtein edit distance, and we utilize a tree edit distance measure because semantic processes appear to have a graph structure. We also design similarity algorithms that consider the similarity of process structure, such as part processes, goals, and exceptions; since we can identify relationships between a semantic process and its subcomponents, this information can be utilized in calculating similarities between processes. Dice's coefficient and the Jaccard similarity measure are utilized to calculate the portion of overlap between processes in diverse ways. We perform retrieval experiments to compare the performance of the devised similarity algorithms, measuring retrieval performance in terms of precision, recall, and the F measure, the harmonic mean of precision and recall. The tree edit distance shows the poorest performance on all measures. TF-IDF and the method combining TF-IDF with Levenshtein edit distance perform better than the other devised methods; these two measures focus on the similarity of process names and descriptions. In addition, we calculate the rank correlation coefficient, Kendall's tau-b, between the number of process mutations and the ranking of similarity values within each mutation set. In this experiment, similarity measures based on process structure, such as Dice's, Jaccard, and their derivatives, show greater coefficients than measures based on the values of process attributes. However, the Lev-TFIDF-JaccardAll measure, which considers process structure and attribute values together, shows reasonably better performance across both experiments. For retrieving semantic processes, then, it is better to consider diverse aspects of process similarity, such as process structure and the values of process attributes. In summary, we generate semantic process data and a retrieval-experiment dataset from the MIT Process Handbook repository, suggest imprecise query algorithms that expand the retrieval results of an exact matching engine such as SPARQL, and compare the retrieval performance of the similarity algorithms. As limitations and future work, we need to perform experiments with datasets from other domains, and, since diverse measures yield many similarity values, we may find better ways to identify relevant processes by applying these values simultaneously.
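
A minimal sketch of three of the similarity measures combined in the paper: Levenshtein edit distance on process names, and Jaccard and Dice coefficients on sets of subcomponents (parts, goals, exceptions). The toy process data are assumptions:

```python
# Edit-distance and set-overlap similarity measures for process retrieval.
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance via dynamic programming (two-row variant)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def jaccard(x: set, y: set) -> float:
    return len(x & y) / len(x | y) if x | y else 1.0

def dice(x: set, y: set) -> float:
    return 2 * len(x & y) / (len(x) + len(y)) if x or y else 1.0

p1_parts = {'receive order', 'check stock', 'ship goods'}        # toy subcomponents
p2_parts = {'receive order', 'check stock', 'invoice customer'}

print(levenshtein('Sell product', 'Sell products'))  # 1
print(f'Jaccard={jaccard(p1_parts, p2_parts):.2f}, Dice={dice(p1_parts, p2_parts):.2f}')
```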

Re-Analysis of Clark Model Based on Drainage Structure of Basin (배수구조를 기반으로 한 Clark 모형의 재해석)

  • Park, Sang Hyun;Kim, Joo Cheol;Jeong, Dong Kug;Jung, Kwan Sue
    • KSCE Journal of Civil and Environmental Engineering Research / v.33 no.6 / pp.2255-2265 / 2013
  • This study presents a width function-based Clark model. To this end, a rescaled width function that distinguishes between hillslope and channel velocity is used as the time-area curve and then routed through linear storage, within the framework not of the finite difference scheme used in the original Clark model but of an analytical expression for linear storage routing. Three parameters are the focus of this study: the storage coefficient, the hillslope velocity, and the channel velocity. SCE-UA, one of the popular global optimization methods, is applied to estimate them. The shapes of the resulting IUHs are evaluated in terms of three statistical moments of hydrologic response functions: the mean, the variance, and the third moment about the center of the IUH. The correlation coefficients between the three statistical moments simulated in this study and those of the observed hydrographs were estimated at 0.995 for the mean, 0.993 for the variance, and 0.983 for the third moment about the center of the IUH. The resulting IUHs give satisfactory simulation results in terms of the mean and variance, but the third moment about the center of the IUH tends to be overestimated. The Clark model proposed in this study is superior to one accounting only for the mean and variance of the IUH with respect to the skewness, peak discharge, and peak time of the runoff hydrograph. From this result it is confirmed that the method suggested in this study is a useful tool for reflecting the heterogeneity of drainage paths and hydrodynamic parameters. The variation of the statistical moments of the IUH is mainly influenced by the storage coefficient, and the effect of the channel velocity is in turn greater than that of the hillslope velocity. Therefore the storage coefficient and the channel velocity are the crucial factors shaping the form of the IUH and should be considered carefully when applying the Clark model proposed in this study.
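
The paper's core operation is routing a rescaled width function (the time-area curve) through a linear reservoir using an analytical solution rather than Clark's finite-difference scheme. A sketch under that reading, with a hypothetical width function and storage coefficient:

```python
# Route a time-area curve through a linear reservoir, S = K * O, using the
# exact step solution O(t+dt) = I + (O(t) - I) * exp(-dt / K) for
# piecewise-constant inflow I over each step.
import numpy as np

dt = 0.5                                  # hours
time_area = np.array([0.0, 0.1, 0.25, 0.3, 0.2, 0.1, 0.05, 0.0])  # inflow ordinates, hypothetical
K = 3.0                                   # storage coefficient in hours, assumed

outflow = np.zeros_like(time_area)
decay = np.exp(-dt / K)
for i in range(1, len(time_area)):
    # exact linear-reservoir response to constant inflow over the step
    outflow[i] = time_area[i] + (outflow[i - 1] - time_area[i]) * decay

print(np.round(outflow, 3))  # the IUH ordinates (up to unit conversion)
```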

Derivation of the Synthetic Unit Hydrograph Based on the Watershed Characteristics (유역특성에 의한 합성단위도의 유도에 관한 연구)

  • 서승덕
    • Magazine of the Korean Society of Agricultural Engineers / v.17 no.1 / pp.3642-3654 / 1975
  • The purpose of this thesis is to derive a unit hydrograph that may be applied to ungauged watersheds from the relations between directly measurable unitgraph properties, such as the peak discharge ($q_p$), time to peak discharge ($T_p$), and lag time ($L_g$), and watershed characteristics, such as the river length $L$ (km) from the given station to the upstream limit of the watershed, the river length $L_{ca}$ (km) from the station to the centroid of the watershed, and the main stream slope $S$ (m per km). Another procedure, based on routing a time-area diagram through catchment storage and named the Instantaneous Unit Hydrograph (IUH), and a dimensionless unitgraph are also analyzed in brief. The basic data (1969 to 1973) used in these studies are 9 recording level gauges and rating curves, 41 rain gauges and pluviographs, and 40 observed unitgraphs from the 9 sub-watersheds of the Nakdong River basin. The results are summarized as follows: 1. The time in hours from the start of rise to the peak rate ($T_p$) generally occurred at 0.3$T_b$ (the time base of the hydrograph), with some indication of higher values for larger watersheds; the base flow is comparatively higher than in the other small watershed areas. 2. The losses from rainfall were divided into initial loss and continuing loss. Initial loss may be defined as the portion of storm rainfall intercepted by vegetation, held in depression storage, or infiltrated at a high rate early in the storm; continuing loss is the loss that continues at a constant rate throughout the duration of the storm after the initial loss has been satisfied. The continuing loss approximates the nearly constant rate of infiltration (the ${\Phi}$-index method). The loss rate from this analysis was estimated at approximately 50 percent of the rainfall while surface runoff occurred. 3. For the stream slope it seems appropriate, as usual, to consider the main stream only, without giving specific consideration to tributaries; it is desirable to develop a single measure of slope representative of the whole stream. The mean channel slopes were 1 meter per 200 meters at Gazang and 1 meter per 1,400 meters at Jindong. These slopes are slightly low in the light of other river studies, so the flood concentration rate in the Nakdong River basin may be slightly low. 4. It was found that the watershed lag ($L_g$, hrs) could be expressed by $L_g = 0.253\,(L \cdot L_{ca})^{0.4171}$, where the product $L \cdot L_{ca}$ is a measure of the size and shape of the watershed. For the logarithms, the correlation coefficient for $L_g$ was 0.97, showing that $L_g$ is closely related to the watershed characteristics $L$ and $L_{ca}$. 5. An expression for the basin containing the slope takes the form $L_g = 0.545\left(\frac{L \cdot L_{ca}}{\sqrt{S}}\right)^{0.346}$. For the logarithms, the correlation coefficient for $L_g$ was again 0.97, showing that $L_g$ is closely related to these basin characteristics as well; care is needed in analyses involving the mean slopes. 6. The peak discharge per unit area of the unitgraph for the standard duration $t_r$, in m³/sec/km², was given by $q_p = 10^{-0.52 - 0.0184 L_g}$, with lower values indicated for watersheds with higher lag times. For the logarithms, the correlation coefficient for $q_p$ was 0.998, which is highly significant. The peak discharge of the unitgraph for an area can therefore be expected to take the form $Q_p = q_p \cdot A$ (m³/sec). 7. Using the unitgraph parameter $L_g$, the base length of the unitgraph, in days, was adopted as $T_b = 0.73 + 2.073\,(L_g / 24)$, with a highly significant correlation coefficient of 0.92. The constants of this equation are fixed by the procedure used to separate base flow from direct runoff. 8. The width $W_{75}$ of the unitgraph at a discharge equal to 75 percent of the peak discharge, in hours, and the width $W_{50}$ at a discharge equal to 50 percent of the peak discharge, in hours, can be estimated from $W_{75} = 1.61 / q_p^{1.05}$ and $W_{50} = 2.5 / q_p^{1.05}$, respectively. These provide a supplementary guide for sketching the unitgraph. 9. The above equations define the three factors necessary to construct the unitgraph for the duration $t_r$. For a duration $t_R$, the lag is $L_{gR} = L_g + 0.2\,(t_R - t_r)$, and this modified lag $L_{gR}$ is used in $q_p$ and $T_b$; if $t_r$ happens to be equal or close to $t_R$, further assume $q_{pR} = q_p$. 10. The triangular hydrograph is a dimensionless unitgraph prepared from the 40 unitgraphs. The equation is $q_p = \frac{K \cdot A \cdot Q}{T_p}$, that is, $q_p = \frac{0.21\,A \cdot Q}{T_p}$, where the constant 0.21 is specific to the Nakdong River basin. 11. The base length of the time-area diagram for the IUH routing is $C = 0.9\left(\frac{L \cdot L_{ca}}{\sqrt{S}}\right)^{1/3}$; the correlation coefficient for $C$ was 0.983, which is highly significant. The base length of the time-area diagram was set equal to the time from the midpoint of rainfall excess to the point of contraflexure. The storage constant derived in these studies is $K = 8.32 + 0.0213\,\frac{L}{\sqrt{S}}$, with a correlation coefficient of 0.964. 12. In the light of the results of these studies, the average errors in the peak discharge of the synthetic unitgraph, the triangular unitgraph, and the IUH were estimated at 2.2, 7.7, and 6.4 percent, respectively, relative to the peak of the observed average unitgraph. Each ordinate of the synthetic unitgraph closely approached the observed one.
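
A hedged sketch evaluating the fitted equations from results 5 through 8 for a hypothetical ungauged basin; the inputs L, Lca, S, and A below are made up, and only the coefficients come from the abstract:

```python
# Synthetic unitgraph parameters from the thesis's regression equations.
import math

L, Lca = 45.0, 20.0     # km: stream length, length to centroid (hypothetical)
S = 4.0                 # m per km: mean main-stream slope (hypothetical)
A = 350.0               # km^2: drainage area (hypothetical)

Lg = 0.545 * ((L * Lca) / math.sqrt(S)) ** 0.346   # watershed lag, hours
qp = 10 ** (-0.52 - 0.0184 * Lg)                   # peak, m^3/s per km^2
Qp = qp * A                                        # unitgraph peak, m^3/s
Tb = 0.73 + 2.073 * (Lg / 24)                      # base length, days
W75 = 1.61 / qp ** 1.05                            # width at 75% of peak, hours
W50 = 2.50 / qp ** 1.05                            # width at 50% of peak, hours

print(f'Lg={Lg:.1f} h, Qp={Qp:.1f} m3/s, Tb={Tb:.2f} d, W75={W75:.1f} h, W50={W50:.1f} h')
```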
