• Title/Summary/Keyword: Similarity Criterion


Image Recognition Based on Nonlinear Equalization and Multidimensional Intensity Variation (비선형 평활화와 다차원의 명암변화에 기반을 둔 영상인식)

  • Cho, Yong-Hyun
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.5
    • /
    • pp.504-511
    • /
    • 2014
  • This paper presents a hybrid recognition method based on nonlinear histogram equalization and the multidimensional intensity variation of an image. The nonlinear histogram equalization, based on an adaptively modified function, is applied to improve image quality by adjusting the brightness of the image. The multidimensional intensity variation, which considers the extent of 4-step changes in brightness between adjacent pixels, is also applied to accurately reflect the attributes of the image. The statistical correlation measured by the normalized cross-correlation (NCC) coefficient is applied to comprehensively measure the similarity between images. The NCC is computed from the intensity variations of the image in each of two directions (x-axis and y-axis). The proposed method has been applied to the problem of recognizing 50 face images of 40*40 pixels. The experimental results show that the proposed method has superior recognition performance compared with the methods that omit histogram equalization or use linear histogram equalization, respectively.
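
The similarity measure named in this abstract, the normalized cross-correlation (NCC) coefficient, can be sketched as follows. This is a minimal illustration of NCC itself, not the authors' hybrid pipeline (the nonlinear equalization and 4-step intensity-variation stages are omitted):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient between two equally sized images.

    Both images are mean-centered, then their inner product is divided by the
    product of their norms, giving a value in [-1, 1].
    """
    a = a.astype(float) - a.mean()
    b = b.astype(float) - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

# Identical images give NCC = 1; an inverted image gives NCC = -1.
img = np.arange(16.0).reshape(4, 4)
print(round(ncc(img, img), 6))   # 1.0
print(round(ncc(img, -img), 6))  # -1.0
```

In the paper this coefficient is applied to the x- and y-direction intensity variations rather than to raw pixels, so a full reimplementation would call such a function once per direction and combine the results.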

Scaleup of Electrolytic Reactors in Pyroprocessing (Pyroprocessing 공정에 사용되는 전해반응장치의 규모 확대)

  • Yoo, Jae-Hyung;Kim, Jeong-Guk;Lee, Han-Soo
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.7 no.4
    • /
    • pp.237-242
    • /
    • 2009
  • In the pyroprocessing of spent nuclear fuels, fuel materials are recovered by electrochemical reactions on the surface of electrodes, as well as by stirring the electrolyte, in electrolytic cells such as the electrorefiner, electroreducer, and electrowinner. The system with this equipment must first be scaled up in order to commercialize pyroprocessing. In this study, therefore, the scale-up of those electrolytic cells was studied to design a large-scale system that can be employed in a commercial process in the future. Basically, the dimensions of both the electrolytic cells and the electrodes should be enlarged on the basis of geometrical similarity. The criterion of constant power input per unit volume, which characterizes the fluid behavior in the cells, was then introduced, and a calculation process based on a trial-and-error method was derived that makes it possible to find a proper agitation speed in the electrolytic cells. Consequently, examples of scale-up for an arbitrary small-scale system were shown when the criterion of constant power input per unit volume and another criterion of constant impeller tip speed were respectively applied.
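
The two scale-up criteria named in the abstract have standard closed forms in the fully turbulent mixing regime, assuming the textbook power correlation P ∝ N³D⁵ (so P/V ∝ N³D²). This is an illustrative sketch of those classical relations, not the paper's trial-and-error procedure:

```python
def scaleup_speed(n1, d1, d2, criterion="power_per_volume"):
    """Impeller speed N2 after geometric scale-up from diameter d1 to d2.

    Assumes turbulent mixing with power input P proportional to N^3 * D^5:
      - constant power input per unit volume: N2 = N1 * (D1/D2)**(2/3)
      - constant impeller tip speed:          N2 = N1 * (D1/D2)
    """
    if criterion == "power_per_volume":
        return n1 * (d1 / d2) ** (2.0 / 3.0)
    if criterion == "tip_speed":
        return n1 * (d1 / d2)
    raise ValueError(f"unknown criterion: {criterion}")

# Doubling the impeller diameter at 100 rpm:
print(round(scaleup_speed(100.0, 0.1, 0.2), 2))               # 63.0
print(round(scaleup_speed(100.0, 0.1, 0.2, "tip_speed"), 2))  # 50.0
```

As the example shows, the tip-speed criterion is the more conservative of the two, prescribing a lower agitation speed at the larger scale.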


Detection of Landmark Spots for Spot Matching in 2DGE (2차원 전기영동 영상의 스팟 정합을 위한 Landmark 스팟쌍의 검출)

  • Han, Chan-Myeong;Suk, Soo-Young;Yoon, Young-Woo
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.14 no.3
    • /
    • pp.105-111
    • /
    • 2011
  • Landmark spots in 2D gel electrophoresis (2DGE) are used in many methods of 2DGE spot matching. Landmark spots are obtained manually, which is a bottleneck in the entire protein analysis process. Automated landmark spot detection is therefore a crucial topic in processing massive amounts of 2DGE data. In this paper, automated landmark spot detection is proposed using point pattern matching and graph theory. Neighbor spots are defined using graph theory, and only a center spot and its neighbor spots are considered for spot matching. The normalized Hausdorff distance is introduced as a criterion for measuring the degree of similarity. In conclusion, the proposed method detects about 50% of the total spot pairs with an accuracy rate of almost 100%, which fully satisfies the requirements for landmark spots.
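
The similarity criterion named here, a normalized Hausdorff distance between two spot point sets, can be sketched as below. The normalization used (diagonal of the joint bounding box) is an illustrative assumption, since the abstract does not spell out the paper's normalization:

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two 2-D point sets (n x 2 arrays)."""
    # Pairwise Euclidean distances between every point of A and every point of B.
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    # Max over A of nearest-B distances, and vice versa; take the larger.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

def normalized_hausdorff(A, B):
    """Hausdorff distance scaled into [0, 1] by the joint bounding-box diagonal
    (this normalization is an assumption for illustration)."""
    pts = np.vstack([A, B])
    diag = np.linalg.norm(pts.max(axis=0) - pts.min(axis=0))
    return hausdorff(A, B) / diag if diag > 0 else 0.0

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 1.0]])
print(hausdorff(A, B))  # 1.0
```

In the paper's setting, A and B would be a center spot plus its graph-defined neighbor spots in each gel, with smaller normalized distances indicating candidate landmark pairs.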

Design Pattern Based Component Classification and Retrieval using E-SARM (설계 패턴 기반 컴포넌트 분류와 E-SARM을 이용한 검색)

  • Kim, Gui-Jung;Han, Jung-Soo;Song, Young-Jae
    • The KIPS Transactions:PartD
    • /
    • v.11D no.5
    • /
    • pp.1133-1142
    • /
    • 2004
  • This paper proposes a method to classify and retrieve components in a repository using the idea of domain orientation for the successful reuse of components. A design pattern was applied to existing systems, and a component classification method is suggested that compares the structural similarity between each component in the relevant domain and criterion patterns. Classifying reusable components by their functionality and then depicting their structures with a diagram can increase component reusability and portability between platforms. The efficiency of component reuse can be raised because the component most appropriate to a query and similar candidate components are provided in priority order by use of the E-SARM algorithm.

Enhanced Spectral Hole Substitution for Improving Speech Quality in Low Bit-Rate Audio Coding

  • Lee, Chang-Heon;Kang, Hong-Goo
    • The Journal of the Acoustical Society of Korea
    • /
    • v.29 no.3E
    • /
    • pp.131-139
    • /
    • 2010
  • This paper proposes a novel spectral hole substitution technique for low bit-rate audio coding. Spectral holes, which frequently occur in relatively weak energy bands due to zero-bit quantization, result in severe quality degradation, especially for harmonic signals such as speech vowels. The enhanced aacPlus (EAAC) audio codec artificially adjusts the minimum signal-to-mask ratio (SMR) to reduce the number of spectral holes, but it still produces noisy sound. The proposed method selectively predicts the spectral shapes of hole bands using either intra-band correlation, i.e. harmonically related coefficients nearby, or inter-band correlation, i.e. previous frames. For the bands that have low prediction gain, only the energy term is quantized, and the spectral shapes are replaced by pseudo-random values in the decoding stage. To minimize perceptual distortion caused by spectral mismatching, the criterion of the just noticeable level difference (JNLD) and the spectral similarity between original and predicted shapes are adopted for quantizing the energy term. Simulation results show that the proposed method, implemented into the EAAC baseline coder, significantly improves speech quality at low bit-rates while keeping equivalent quality for mixed and music contents.
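
The decoding-stage fallback described above, i.e. quantizing only the band energy and filling the spectral shape with pseudo-random values, can be sketched as follows. This is an illustrative sketch, not the EAAC codec's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def substitute_hole(band_energy, n_coeffs):
    """Fill a zero-quantized (hole) band with a pseudo-random spectral shape
    rescaled so its energy matches the transmitted band energy."""
    shape = rng.standard_normal(n_coeffs)       # random spectral shape
    shape *= np.sqrt(band_energy / np.sum(shape ** 2))  # match energy
    return shape

# Reconstruct an 8-coefficient hole band with transmitted energy 4.0.
band = substitute_hole(4.0, 8)
print(round(float(np.sum(band ** 2)), 6))  # 4.0
```

The paper's contribution lies in choosing *when* to do this (only for bands with low prediction gain) and in quantizing the energy term under the JNLD criterion, neither of which is captured by this fragment.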

Effect of Growth Conditions on Saponin Content and Ginsenoside Pattern of Panax ginseng

  • Lee, Mee-Hyoung;Park, Hoon;Lee, Chong-Hwa
    • Proceedings of the Ginseng society Conference
    • /
    • 1987.06a
    • /
    • pp.89-107
    • /
    • 1987
  • To elucidate the significance of saponin as a quality criterion of ginseng, ginsenoside content (GC) and ginsenoside pattern similarity (GPS) by simple correlation were investigated in relation to red ginseng quality factors, age, plant part, harvest season, mineral nutrition, soil physical characteristics, growth light and temperature, shading material, growth location, physiological disease, and crop stand, through surveys of ginseng plantations, field experiments, water culture, and phytotron experiments. The effect of tissue culture was also reviewed. GC was negatively correlated with good quality of red ginseng and positively with bad quality. Age did not show any consistency with GC, but GPS decreased as the age difference increased. GPS was low or not significant between the taproot, which is lowest in GC, and the epidermis, which is highest, and significant between leaf and taproot. The harvest season was marked by the lowest GC, and the pattern also differed. Nutrient imbalance and increases in soil nutrients and physical conditions hazardous to growth increased GC, but GPS differed little. The higher the growth light intensity and temperature, the higher the GC, but GPS changed little. Root rust increased GC, but root scab decreased it. Sponge-like and inside-cavity phenomena increased GC. The ginsenoside pattern of cultured tissues and rootlets showed great variation. These results strongly indicate that there are an optimum saponin content and ginsenoside pattern, and that these are attained under optimum growth conditions.


Research on Subjective-type Grading System Using Syntactic-Semantic Tree Comparator (구문의미트리 비교기를 이용한 주관식 문항 채점 시스템에 대한 연구)

  • Kang, WonSeog
    • The Journal of Korean Association of Computer Education
    • /
    • v.21 no.6
    • /
    • pp.83-92
    • /
    • 2018
  • Subjective questions are appropriate for evaluating deep thinking, but they are not easy to score. Since graders can produce different scores even under the same scoring criterion, an objective automatic evaluation system is needed. However, such a system faces the problems of Korean-language analysis and comparison. This paper suggests a Korean syntactic analysis and subjective-question grading system using a syntactic-semantic tree comparator. The system is a hybrid of word-based and syntactic-semantic-tree-based grading, and it grades answers to subjective questions using the syntactic-semantic comparator. The proposed system shows good results and can be utilized in Korean syntactic-semantic analysis, subjective question grading, and document classification.

The Adaptive Personalization Method According to Users Purchasing Index : Application to Beverage Purchasing Predictions (고객별 구매빈도에 동적으로 적응하는 개인화 시스템 : 음료수 구매 예측에의 적용)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.95-108
    • /
    • 2011
  • This is a study of a personalization method that intelligently adapts the level of clustering to the purchasing index of a customer. In the e-biz era, many companies gather customers' demographic and transactional information such as age, gender, purchasing date, and product category. They use this information to predict customers' preferences or purchasing patterns so that they can provide more customized services to their customers. The conventional Customer-Segmentation method provides customized services for each customer group. This method clusters the whole customer set into groups based on similarity and builds predictive models for the resulting groups. Thus, it can manage the number of predictive models and also provide more data for customers who do not have enough data to build a good predictive model, by using the data of other similar customers. However, this method often fails to provide highly personalized services to each customer, which is especially important for VIP customers. Furthermore, it clusters the customers who already have a considerable amount of data together with the customers who have only a small amount of data, which increases computational cost unnecessarily without significant performance improvement. The other conventional method, the 1-to-1 method, provides more customized services than the Customer-Segmentation method for each individual customer, since the predictive model is built using only the data of that individual customer. This method not only provides highly personalized services but also builds a relatively simple and less costly model that satisfies each customer. However, the 1-to-1 method has the limitation that it does not produce a good predictive model when a customer has only a small amount of data. In other words, if a customer has an insufficient number of transactional records, the performance of this method deteriorates.
In order to overcome the limitations of these two conventional methods, we suggest a new method called the Intelligent Customer Segmentation method, which provides adaptive personalized services according to the customer's purchasing index. The suggested method clusters customers according to their purchasing index, so that predictions for less frequent purchasers are based on the data of more intensively clustered groups, while VIP customers, who already have a considerable amount of data, are clustered to a much lesser extent or not clustered at all. The main idea of this method is to apply the clustering technique when the number of transactional records of the target customer is less than a predefined criterion data size. In order to find this criterion size, we suggest an algorithm called sliding window correlation analysis. The algorithm aims to find the transactional data size at which the performance of the 1-to-1 method radically decreases due to data sparsity. After finding this criterion data size, we apply the conventional 1-to-1 method to the customers who have more data than the criterion, and apply the clustering technique to those who have less, until they can use at least the predefined criterion amount of data in the model-building process. We apply the two conventional methods and the newly suggested method to Nielsen's beverage purchasing data to predict the purchasing amounts and purchasing categories of the customers. We use two data mining techniques (Support Vector Machine and Linear Regression) and two types of performance measures (MAE and RMSE) to predict the two dependent variables as aforementioned. The results show that the suggested Intelligent Customer Segmentation method outperforms the conventional 1-to-1 method in many cases and produces the same level of performance as the Customer-Segmentation method at much lower computational cost.
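
The routing rule at the core of the method, i.e. 1-to-1 modeling above the criterion data size and segment-level modeling below it, can be sketched as follows. The criterion value of 30 is a hypothetical placeholder; in the paper it is found empirically by the sliding window correlation analysis:

```python
def choose_model(n_transactions, criterion=30):
    """Route a customer to an individual (1-to-1) model or a segment-level
    (clustered) model based on purchasing frequency.

    `criterion` is the data size below which the 1-to-1 model is assumed to
    degrade due to data sparsity (placeholder value, found empirically in
    the paper).
    """
    return "1-to-1" if n_transactions >= criterion else "segment"

print(choose_model(50))  # 1-to-1  (VIP-like customer, enough data)
print(choose_model(5))   # segment (sparse customer, borrow segment data)
```

Customers routed to "segment" would then be clustered until each group pools at least the criterion amount of data for model building.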

Monte Carlo Algorithm-Based Dosimetric Comparison between Commissioning Beam Data across Two Elekta Linear Accelerators with AgilityTM MLC System

  • Geum Bong Yu;Chang Heon Choi;Jung-in Kim;Jin Dong Cho;Euntaek Yoon;Hyung Jin Choun;Jihye Choi;Soyeon Kim;Yongsik Kim;Do Hoon Oh;Hwajung Lee;Lee Yoo;Minsoo Chun
    • Progress in Medical Physics
    • /
    • v.33 no.4
    • /
    • pp.150-157
    • /
    • 2022
  • Purpose: An Elekta Synergy® was commissioned at the Seoul National University Veterinary Medical Teaching Hospital (VMTH). Recently, Chung-Ang University Gwang Myeong Hospital commissioned an Elekta Versa HDTM. The beam characteristics of both machines are similar because of the same AgilityTM MLC model. We compared the measured beam data with data calculated using the Elekta treatment planning system, Monaco®, for each institute. Methods: Beam data of the commissioned Elekta linear accelerators were measured at the two independent institutes. After installing the beam model based on the measured beam data into Monaco®, Monte Carlo (MC) simulation data were generated, mimicking the beam data in a virtual water phantom. The measured beam data were compared with the calculated data, and their similarity was quantitatively evaluated by gamma analysis. Results: We compared the percent depth dose (PDD) and off-axis profiles of 6 MV photon and 6 MeV electron beams with the MC calculation. With a 3%/3 mm gamma criterion, the photon PDD and profiles showed 100% gamma passing rates except for one inplane profile at 10 cm depth from the VMTH. Gamma analysis of the measured photon beam off-axis profiles between the two institutes showed 100% agreement. The electron beams also indicated 100% agreement in the PDD distributions. However, the gamma passing rates of the off-axis profiles were 91%-100% with a 3%/3 mm gamma criterion. Conclusions: The beams and their comparison with the MC calculation for each institute showed good performance. Although the measuring tools were orthogonal, no significant difference was found.
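
The 3%/3 mm gamma analysis used to compare the measured and MC-calculated profiles can be illustrated with a minimal 1-D global gamma computation. This is a simplified sketch of the standard gamma index, not the institutes' commissioning software:

```python
import numpy as np

def gamma_pass_rate(ref, meas, positions, dose_crit=0.03, dist_crit=3.0):
    """1-D global gamma analysis (default 3%/3 mm).

    For each measured point, the gamma value is the minimum over all
    reference points of sqrt(dose_term^2 + distance_term^2); a point
    passes if its gamma is <= 1. Returns the passing rate in percent.
    """
    ref = np.asarray(ref, float)
    meas = np.asarray(meas, float)
    pos = np.asarray(positions, float)
    norm = ref.max()  # global normalization to the reference maximum
    passed = 0
    for di, xi in zip(meas, pos):
        dd = (di - ref) / (norm * dose_crit)  # dose-difference term
        dx = (xi - pos) / dist_crit           # distance-to-agreement term
        passed += np.sqrt(dd ** 2 + dx ** 2).min() <= 1.0
    return 100.0 * passed / len(meas)

x = np.arange(0.0, 11.0)               # positions in mm
ref = np.exp(-((x - 5.0) ** 2) / 8.0)  # toy reference profile
print(gamma_pass_rate(ref, ref, x))    # identical profiles -> 100.0
```

A uniform dose deviation within 3% of the global maximum still passes everywhere, which is why near-identical commissioning beams yield 100% passing rates as reported above.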

Cluster Analysis of the 1000-hPa Height Field around the Korean Peninsula (한반도 주변 1000-hPa 고도장의 군집분석)

  • Jeong, Young-Kun
    • Journal of the Korean earth science society
    • /
    • v.33 no.4
    • /
    • pp.337-349
    • /
    • 2012
  • In this study, we classify the 1000 hPa geopotential height fields around the Korean peninsula through K-means cluster analysis and investigate the occurrence characteristics of each cluster pattern. Using the pattern correlation as the similarity among clusters and a cluster-similarity criterion of 0.8, 11 clusters are identified as the typical pressure patterns: three pressure patterns are associated with the extension of the Siberian air mass, another three with the latitudes of the longest symmetry axis of North Pacific highs, two with a trough largely under the Siberian or North Pacific air mass, and the remaining three, the migratory high patterns generally occurring in spring and autumn, are distinguished according to the direction of the longest symmetry axis of the highs. The occurrence rate of air masses affecting the Korean peninsula, estimated from the number of occurrence days of the 11 pressure patterns, is 55.4% Siberian, 29.3% North Pacific, 12.8% Yangtze-River, and 2.5% Okhotsk Sea; 68.2% of the total comes from the continental air masses. The wintertime pressure patterns around the Korean peninsula are nearly opposite to those in summertime, each dominated by highs extended from the stationary air masses over Central Siberia and the North Pacific Ocean. The migratory highs occur largely in spring and autumn while the circulation transitions from the wintertime patterns to the summertime patterns, or vice versa. Recently, the occurrence frequency of highs extended from the North Pacific has been decreasing; the wintertime pressure patterns occur frequently in spring and autumn, the occurrence frequency of pressure patterns with a trough is increasing, and the migratory highs occur in nearly all seasons.
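
The similarity used above, the pattern correlation between two gridded height fields compared against the 0.8 criterion, can be sketched as:

```python
import numpy as np

def pattern_correlation(f1, f2):
    """Pearson pattern correlation between two gridded fields of equal shape:
    correlation of the flattened, mean-centered fields."""
    a = f1.ravel() - f1.mean()
    b = f2.ravel() - f2.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

def same_pattern(f1, f2, criterion=0.8):
    """Two cluster-mean fields count as one pressure pattern when their
    pattern correlation reaches the similarity criterion (0.8 in the paper)."""
    return pattern_correlation(f1, f2) >= criterion

f = np.random.default_rng(1).standard_normal((5, 5))
print(same_pattern(f, f))  # True: a field trivially matches itself
```

In the clustering step, such a check would decide whether two K-means cluster means represent distinct typical pressure patterns or should be merged.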