• Title/Summary/Keyword: 사용자평가 (user evaluation)

Search Results: 4,628 (processing time: 0.039 seconds)

Major Class Recommendation System based on Deep learning using Network Analysis (네트워크 분석을 활용한 딥러닝 기반 전공과목 추천 시스템)

  • Lee, Jae Kyu;Park, Heesung;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.95-112
    • /
    • 2021
  • In university education, the choice of major classes plays an important role in students' careers. However, in line with changes in industry, the fields of major subjects offered by each department are diversifying and growing in number. As a result, students have difficulty choosing and taking classes that fit their career paths. In general, students choose classes based on experience, such as the choices of peers or advice from seniors. This has the advantage of reflecting the general situation, but it does not reflect individual tendencies or considerations about existing courses, and it leads to information inequality in which information is shared only among specific groups of students. In addition, as classes have recently been conducted non-face-to-face and exchanges between students have decreased, even such experience-based decisions have become harder to make. Therefore, this study proposes a recommendation system model that can recommend college major classes suited to individual characteristics based on data rather than experience. A recommendation system recommends information and content (music, movies, books, images, etc.) that a specific user may be interested in. It is already widely used in services where considering individual tendencies is important, such as YouTube and Facebook, and users encounter it routinely in the personalized services of content platforms such as over-the-top (OTT) media services. Taking classes is also a kind of content consumption, in the sense that classes suitable for an individual are selected from a fixed content list. Unlike other content consumption, however, it is characterized by the large influence of the selection results. For example, music and movies are usually consumed once, and the time required to consume them is short; the importance of each item is therefore relatively low, and little deliberation goes into selecting them.
Major classes, by contrast, have a long consumption time because they must be taken for a full semester, and each item has high importance and requires greater caution in choice, because the composition of the selected classes affects many things such as career paths and graduation requirements. Given these characteristics, a recommendation system in the education field, even though it covers a relatively small range of items, supports decision-making that reflects individual characteristics which cannot be captured by experience-based decisions. This study aims to realize personalized education and enhance students' educational satisfaction by presenting a recommendation model for university major classes. The model was built on class history data of undergraduate students at University from 2015 to 2017, with students and their major names used as metadata. Class history data is implicit feedback data that only indicates whether content was consumed and does not reflect preferences for classes. Consequently, embedding vectors derived directly from it to characterize students and classes have low expressive power. With these issues in mind, this study proposes a Net-NeuMF model that generates vectors for students and classes through network analysis and uses them as input values of the model. The model is based on the structure of NeuMF, a representative model for implicit feedback data that uses one-hot vectors. The input vectors of the model are instead generated through network analysis to represent the characteristics of students and classes. To generate a vector representing a student, each student is set as a node, and a weighted edge connects two students if they take the same class. Similarly, to generate a vector representing a class, each class is set as a node, and an edge connects two classes if any student has taken both.
We then utilize Node2Vec, a representation learning methodology that quantifies the characteristics of each node. For the evaluation of the model, we used four indicators commonly employed for recommendation systems, and experiments were conducted on three different embedding dimensions to analyze their impact on the model. The results show better performance on the evaluation metrics, regardless of dimension, than when one-hot vectors are used in the existing NeuMF structure. This work thus contributes a network of students (users) and classes (items) that increases expressiveness over existing one-hot embeddings, matches the characteristics of each structure that constitutes the model, and shows better performance on various kinds of evaluation metrics compared to existing methodologies.
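The graph construction and walk generation described in this abstract can be sketched in a few lines. The enrollment records below are illustrative assumptions, not the paper's data, and the walk is the simplest Node2Vec setting (p = q = 1, i.e. a plain weighted random walk):

```python
import random
from collections import defaultdict

# Hypothetical (student, class) enrollment records -- illustrative only.
enrollments = [
    ("s1", "DataStructures"), ("s1", "Algorithms"),
    ("s2", "DataStructures"), ("s2", "Databases"),
    ("s3", "Algorithms"), ("s3", "Databases"),
]

def build_student_graph(enrollments):
    """Edge weight = number of classes two students took in common."""
    by_class = defaultdict(set)
    for student, cls in enrollments:
        by_class[cls].add(student)
    graph = defaultdict(lambda: defaultdict(int))
    for students in by_class.values():
        for a in students:
            for b in students:
                if a != b:
                    graph[a][b] += 1
    return graph

def random_walk(graph, start, length, rng):
    """Weighted random walk (Node2Vec with p = q = 1)."""
    walk = [start]
    for _ in range(length - 1):
        nbrs = graph[walk[-1]]
        if not nbrs:
            break
        nodes, weights = zip(*nbrs.items())
        walk.append(rng.choices(nodes, weights=weights)[0])
    return walk

rng = random.Random(0)
g = build_student_graph(enrollments)
walks = [random_walk(g, s, 5, rng) for s in g for _ in range(10)]
# These walks would then be fed to a skip-gram model (e.g. gensim Word2Vec)
# to produce the embedding vectors used as Net-NeuMF inputs.
```

The class graph is built the same way with the roles of students and classes swapped.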

Color-related Query Processing for Intelligent E-Commerce Search (지능형 검색엔진을 위한 색상 질의 처리 방안)

  • Hong, Jung A;Koo, Kyo Jung;Cha, Ji Won;Seo, Ah Jeong;Yeo, Un Yeong;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.109-125
    • /
    • 2019
  • As interest in intelligent search engines increases, various studies have been conducted to extract and utilize product-related features intelligently. In particular, when users search for goods in e-commerce search engines, the 'color' of a product is an important feature that describes the product. Therefore, it is necessary to deal with synonyms of color terms in order to produce accurate results for users' color-related queries. Previous studies have suggested a dictionary-based approach to processing synonyms for color features. However, the dictionary-based approach has the limitation that it cannot handle color-related terms in user queries that are not registered in the dictionary. To overcome this limitation, this research proposes a model which extracts RGB values from an internet search engine in real time and outputs similar color names based on the designated color information. First, a color term dictionary was constructed which includes color names and the R, G, B values of each color, drawn from the Korean standard digital color palette program and the Wikipedia color list, for basic color search. The dictionary was made more robust by adding 138 color names transliterated into Korean from English color names, with their corresponding RGB values; the final color dictionary therefore includes a total of 671 color names and corresponding RGB values. The method proposed in this research starts by looking up the specific color a user searched for, checking whether it is present in the built-in color dictionary. If the color exists in the dictionary, its RGB values in the dictionary are used as the reference values of the retrieved color. If the searched color does not exist in the dictionary, the top-5 Google image search results for the searched color are crawled and average RGB values are extracted from a certain middle area of each image.
To extract the RGB values from the images, a variety of approaches were attempted, since simply averaging the RGB values of the center area of the images has limits. As a result, clustering the RGB values in a certain area of each image and taking the average value of the densest cluster as the reference values showed the best performance. Based on the reference RGB values of the searched color, the RGB values of all colors in the previously constructed color dictionary are compared, and a candidate list is created with colors within a range of ±50 for each of the R, G, and B values. Finally, using the Euclidean distance between these candidates and the reference RGB values of the searched color, the color with the highest similarity among up to five candidates becomes the final outcome. To evaluate the usefulness of the proposed method, we performed an experiment in which 300 color names and corresponding RGB values were obtained through questionnaires and used to compare the RGB values produced by four different methods, including the proposed method. The average Euclidean distance in CIE-Lab space using our method was about 13.85, a relatively low distance compared to 30.88 for the case using the synonym dictionary only and 30.38 for the case using the dictionary together with the Korean synonym website WordNet. The variant of the proposed method without clustering showed an average Euclidean distance of 13.88, which implies that the DBSCAN clustering of the proposed method reduces the Euclidean distance. This research suggests a new color synonym processing method based on RGB values that combines the dictionary method with real-time synonym processing for new color names, removing the limitation of the conventional dictionary-based synonym processing approach.
This research can contribute to improving the intelligence of e-commerce search systems, especially their color search feature.
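The clustering step the abstract credits for the improvement (take the densest cluster of sampled RGB values as the reference color, then match against the dictionary by Euclidean distance) can be sketched as follows. The pixel samples, eps/min_pts settings, and the toy dictionary are hypothetical; the paper uses DBSCAN, of which this is a minimal hand-rolled version:

```python
import math

# Hypothetical pixels (R, G, B) sampled from the centre of a crawled image.
pixels = [(200, 30, 40), (198, 32, 38), (202, 28, 42), (60, 60, 200), (10, 10, 10)]

def dbscan(points, eps=10.0, min_pts=2):
    """Minimal DBSCAN: returns one label per point (-1 = noise)."""
    labels = [None] * len(points)
    def neighbours(i):
        return [j for j in range(len(points))
                if math.dist(points[i], points[j]) <= eps]
    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbours(i)
        if len(nbrs) < min_pts:
            labels[i] = -1          # too isolated: noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = list(nbrs)
        while queue:                # expand the cluster from core points
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nbrs_j = neighbours(j)
            if len(nbrs_j) >= min_pts:
                queue.extend(nbrs_j)
    return labels

def reference_rgb(points, labels):
    """Average of the densest (largest) cluster -- the reference value."""
    best = max(set(l for l in labels if l >= 0), key=labels.count)
    members = [p for p, l in zip(points, labels) if l == best]
    return tuple(sum(ch) / len(members) for ch in zip(*members))

labels = dbscan(pixels)
ref = reference_rgb(pixels, labels)

# Toy dictionary lookup by Euclidean distance (the real one has 671 entries).
color_dict = {"crimson": (220, 20, 60), "navy": (0, 0, 128)}
closest = min(color_dict, key=lambda n: math.dist(color_dict[n], ref))
```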

The Effect of Recombinant Human Epidermal Growth Factor on Cisplatin and Radiotherapy Induced Oral Mucositis in Mice (마우스에서 Cisplatin과 방사선조사로 유발된 구내염에 대한 재조합 표피성장인자의 효과)

  • Na, Jae-Boem;Kim, Hye-Jung;Chai, Gyu-Young;Lee, Sang-Wook;Lee, Kang-Kyoo;Chang, Ki-Churl;Choi, Byung-Ock;Jang, Hong-Seok;Jeong, Bea-Keon;Kang, Ki-Mun
    • Radiation Oncology Journal
    • /
    • v.25 no.4
    • /
    • pp.242-248
    • /
    • 2007
  • Purpose: To study the effect of recombinant human epidermal growth factor (rhEGF) on oral mucositis induced by cisplatin and radiotherapy in a mouse model. Materials and Methods: Twenty-four ICR mice were divided into three groups: the normal control group, the no-rhEGF group (treatment with cisplatin and radiation), and the rhEGF group (treatment with cisplatin, radiation, and rhEGF). A model of mucositis induced by cisplatin and radiotherapy was established by injecting mice with cisplatin (10 mg/kg) on day 1 and exposing the head and neck to radiation (5 Gy/day) on days 1-5. rhEGF was administered subcutaneously on days -1 to 0 (1 mg/kg/day) and on days 3 to 5 (1 mg/kg/day). Evaluation included body weight, oral intake, and histology. Results: In the comparison of body weight change between the rhEGF group and the no-rhEGF group, a statistically significant difference was observed in the rhEGF group for the 5 days after day 3 of the experiment. Both the rhEGF group and the no-rhEGF group had reduced food intake until day 5 of the experiment, and the mice then demonstrated increased food intake after day 13. When the histological examination was conducted on day 7 after treatment with cisplatin and radiation, the rhEGF group showed a focal cellular reaction in the epidermal layer of the mucosa, while the no-rhEGF group did not show inflammation of the oral mucosa. Conclusion: These findings suggest that rhEGF has the potential to reduce the oral mucositis burden in mice after treatment with cisplatin and radiation. The optimal dose, number, and timing of rhEGF administration require further investigation.

A Study on the Establishment of Comparison System between the Statement of Military Reports and Related Laws (군(軍) 보고서 등장 문장과 관련 법령 간 비교 시스템 구축 방안 연구)

  • Jung, Jiin;Kim, Mintae;Kim, Wooju
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.3
    • /
    • pp.109-125
    • /
    • 2020
  • The Ministry of National Defense is pushing the Defense Acquisition Program to build strong defense capabilities, and it spends more than 10 trillion won annually on defense improvement. As the Defense Acquisition Program is directly related to the security of the nation as well as the lives and property of the people, it must be carried out very transparently and efficiently by experts. However, the excessive diversification of laws and regulations related to the Defense Acquisition Program has made it challenging for many working-level officials to carry it out smoothly. It is even known that many officials discover related regulations they were unaware of only after pushing ahead with their work. In addition, in statutory statements related to the Defense Acquisition Program, even a single wrong expression within a sentence can cause serious issues. Despite this, efforts to establish a sentence comparison system that corrects such issues in real time have been minimal. Therefore, this paper proposes an implementation plan for a "Comparison System between the Statement of Military Reports and Related Laws" that uses a Siamese Network-based artificial neural network, a model from the field of natural language processing (NLP), to measure the similarity between sentences likely to appear in Defense Acquisition Program-related documents and those from related statutory provisions, to determine and classify the risk of illegality, and to make users aware of the consequences. Various artificial neural network models (Bi-LSTM, Self-Attention, D_Bi-LSTM) were studied using 3,442 pairs of "Original Sentences" (described in actual statutes) and "Edited Sentences" (sentences derived by editing the "Original Sentences").
Among the many Defense Acquisition Program-related statutes, the DEFENSE ACQUISITION PROGRAM ACT, the ENFORCEMENT RULE OF THE DEFENSE ACQUISITION PROGRAM ACT, and the ENFORCEMENT DECREE OF THE DEFENSE ACQUISITION PROGRAM ACT were selected. The "Original Sentence" set consists of the 83 provisions that actually appear in these Acts, the main clauses most relevant to working-level officials in their work. For each clause ("Original Sentence"), the "Edited Sentence" set comprises 30 to 50 similar sentences that are likely to appear, in modified form, in military reports. During the creation of the edited sentences, the original sentences were modified using 12 predefined rules, and the edited sentences were produced in proportion to the number of rules applicable to each original sentence. After conducting 1:1 sentence similarity performance evaluation experiments, it was possible to classify each "Edited Sentence" as legal or illegal with considerable accuracy. The "Edited Sentence" dataset used to train the neural network models contains a variety of actual statutory statements ("Original Sentences"), characterized by the 12 rules. On the other hand, when trained only on the "Original Sentence" and "Edited Sentence" dataset, the models are not able to effectively classify other sentences that appear in actual military reports: the dataset is not ample enough for the models to recognize new incoming sentences. Hence, the performance of the models was reassessed with an additional 120 newly written sentences that better resemble those in actual military reports while remaining associated with the original sentences. We were thereby able to confirm that the models' performance surpassed a certain level even when they were trained merely with "Original Sentence" and "Edited Sentence" data.
If sufficient model learning is achieved through the improvement and expansion of the full training dataset with the addition of sentences as they actually appear in reports, the models will be able to better classify sentences from military reports as legal or illegal. Based on the experimental results, this study confirms the possibility and value of building a "Real-Time Automated Comparison System Between Military Documents and Related Laws". The approach developed in this experiment can identify which specific clause, among the several that appear in the related laws, is most similar to a sentence appearing in Defense Acquisition Program-related military reports, which helps determine whether the contents of the report sentences are at risk of illegality when compared with the law clauses.
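The Siamese comparison idea, encoding the report sentence and each statute sentence with the same encoder and then scoring similarity to flag the closest clause, can be sketched as below. The character-bigram embedding stands in for the paper's learned Bi-LSTM encoders, and the sentences and threshold are made up for illustration:

```python
import math
from collections import Counter

def embed(sentence):
    """Toy sentence embedding: bag of character bigrams.
    (The paper uses trained Bi-LSTM encoders; this is a stand-in.)"""
    s = sentence.lower()
    return Counter(s[i:i + 2] for i in range(len(s) - 1))

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(report_sentence, statute_sentences, threshold=0.5):
    """Siamese-style comparison: the SAME encoder on both sides,
    similarity score per statute clause, flag the closest one."""
    scored = [(cosine(embed(report_sentence), embed(s)), s)
              for s in statute_sentences]
    best_score, best_clause = max(scored)
    return best_clause, best_score, best_score >= threshold

statutes = [
    "The project manager shall report the results to the Minister.",
    "The contractor shall not disclose classified information.",
]
clause, score, similar = classify(
    "The project manager must report the results to the Minister.", statutes)
```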

Different Uptake of Tc-99m ECD and Tc-99m HMPAO in the Normal Brains: Analysis by Statistical Parametric Mapping (정상 뇌 혈류 영상에서 방사성의약품에 따라 혈류 분포에 차이가 있는가: 통계적 파라미터 지도를 사용한 분석)

  • Kim, Euy-Neyng;Jung, Yong-An;Sohn, Hyung-Sun;Kim, Sung-Hoon;Yoo, Ie-Ryung;Chung, Soo-Kyo
    • The Korean Journal of Nuclear Medicine
    • /
    • v.36 no.4
    • /
    • pp.244-254
    • /
    • 2002
  • Purpose: This study investigated the differences between technetium-99m ethyl cysteinate dimer (Tc-99m ECD) and technetium-99m hexamethylpropylene amine oxime (Tc-99m HMPAO) uptake in the normal brain by means of statistical parametric mapping (SPM) analysis. Materials and Methods: We retrospectively analyzed 53 age- and sex-matched cases of normal brain SPECT. Thirty-two cases were obtained with Tc-99m ECD and 21 cases with Tc-99m HMPAO. There were no abnormal findings on brain MRI. All of the SPECT images were spatially transformed to standard space, smoothed, and globally normalized. The differences between the Tc-99m ECD and Tc-99m HMPAO SPECT images were statistically analyzed using statistical parametric mapping (SPM'99) software. The differences between the two groups were considered significant at a threshold of corrected P values less than 0.05. Results: SPM analysis revealed significantly different uptake of Tc-99m ECD and Tc-99m HMPAO in the normal brain. On the Tc-99m ECD SPECT images, relatively higher uptake was observed in the frontal, parietal, and occipital lobes, in the basal ganglia and thalamus, and in the superior region of the cerebellum. On the Tc-99m HMPAO SPECT images, relatively higher uptake was observed in subcortical areas of the frontal region, the temporal lobe, and the posterior portion of the inferior cerebellum. Conclusion: Uptake of Tc-99m ECD and Tc-99m HMPAO in the normal-looking brain was significantly different on SPM analysis. The selective use of Tc-99m ECD or Tc-99m HMPAO in brain SPECT imaging appears especially valuable for the interpretation of cerebral perfusion. Further investigation is necessary to determine which tracer is more accurate for diagnosing different clinical conditions.

Application and Analysis of Ocean Remote-Sensing Reflectance Quality Assurance Algorithm for GOCI-II (천리안해양위성 2호(GOCI-II) 원격반사도 품질 검증 시스템 적용 및 결과)

  • Sujung Bae;Eunkyung Lee;Jianwei Wei;Kyeong-sang Lee;Minsang Kim;Jong-kuk Choi;Jae Hyun Ahn
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_2
    • /
    • pp.1565-1576
    • /
    • 2023
  • An atmospheric correction algorithm based on a radiative transfer model is required to obtain remote-sensing reflectance (Rrs) from the top-of-atmosphere radiance observed by the Geostationary Ocean Color Imager-II (GOCI-II). The Rrs derived from the atmospheric correction is used to estimate various marine environmental parameters such as chlorophyll-a concentration, total suspended material concentration, and absorption of dissolved organic matter. Atmospheric correction is therefore a fundamental algorithm, as it significantly impacts the reliability of all other ocean color products. However, in clear waters the atmospheric path radiance can be more than ten times higher than the water-leaving radiance at blue wavelengths. This makes atmospheric correction a highly error-sensitive process: a 1% error in estimating the atmospheric radiance can cause more than a 10% error in Rrs. Therefore, quality assessment of Rrs after atmospheric correction is essential for reliable ocean environment analysis using ocean color satellite data. In this study, a Quality Assurance (QA) algorithm based on in-situ Rrs data, archived in a database using the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) Bio-optical Archive and Storage System (SeaBASS), was applied and modified to account for the different spectral characteristics of GOCI-II. This method is officially employed in the National Oceanic and Atmospheric Administration (NOAA)'s ocean color satellite data processing system. It provides quality analysis scores for Rrs ranging from 0 to 1 and classifies water types into 23 categories. When the QA algorithm was applied to the initial phase of GOCI-II data with less calibration, the scores showed their highest frequency at a relatively low value of 0.625.
However, when the algorithm was applied to the improved GOCI-II atmospheric correction results with updated calibrations, the scores showed their highest frequency at a higher value of 0.875. The water type analysis using the QA algorithm indicated that parts of the East Sea, the South Sea, and the Northwest Pacific Ocean are primarily characterized as relatively clear case-I waters, while the coastal areas of the Yellow Sea and the East China Sea are mainly classified as highly turbid case-II waters. We expect the QA algorithm to support GOCI-II users not only in statistically identifying Rrs results with significant errors but also in achieving more reliable calibration with quality-assured data. The algorithm will be included in the level-2 flag data provided with the GOCI-II atmospheric correction.
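A minimal sketch of a QA scoring scheme of this kind (normalize the Rrs spectrum, assign the nearest reference water type, then score by the fraction of bands falling inside that type's tolerance envelope) is shown below. The reference library, band count, and tolerance values are illustrative assumptions, not the actual SeaBASS-derived lookup tables or the 23 real water types:

```python
import math

# Toy reference library: per water type, a normalized mean Rrs spectrum and a
# per-band tolerance envelope (values are illustrative only).
REFERENCE = {
    "clear":  {"mean": [0.55, 0.45, 0.25, 0.10], "tol": [0.10, 0.10, 0.10, 0.10]},
    "turbid": {"mean": [0.20, 0.30, 0.45, 0.55], "tol": [0.10, 0.10, 0.10, 0.10]},
}

def normalize(spectrum):
    """Scale the Rrs spectrum to unit Euclidean norm (shape only)."""
    n = math.sqrt(sum(v * v for v in spectrum))
    return [v / n for v in spectrum]

def qa_score(spectrum):
    """Assign the nearest reference water type, then score = fraction of
    bands inside that type's tolerance envelope (a value in [0, 1])."""
    s = normalize(spectrum)
    wtype = min(REFERENCE, key=lambda k: math.dist(s, REFERENCE[k]["mean"]))
    ref = REFERENCE[wtype]
    inside = sum(abs(a - m) <= t
                 for a, m, t in zip(s, ref["mean"], ref["tol"]))
    return wtype, inside / len(s)

wtype, score = qa_score([0.011, 0.009, 0.005, 0.002])  # blue-dominated spectrum
```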

Increasing Accuracy of Classifying Useful Reviews by Removing Neutral Terms (중립도 기반 선택적 단어 제거를 통한 유용 리뷰 분류 정확도 향상 방안)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.129-142
    • /
    • 2016
  • Customer product reviews have become one of the important factors in purchase decision making. Customers believe that reviews written by others who have already had experience with a product offer more reliable information than that provided by sellers. However, because there are so many products and reviews, the advantage of e-commerce can be overwhelmed by increasing search costs: reading all of the reviews to find out the pros and cons of a certain product can be exhausting. To help users find the most useful information about products without much difficulty, e-commerce companies provide various ways for customers to write and rate product reviews, and online stores have devised various ways to surface useful customer reviews to potential customers. Different methods have been developed to classify and recommend useful reviews, primarily using feedback provided by customers about the helpfulness of reviews. Most shopping websites provide customer reviews along with the following information: the average preference for a product, the number of customers who have participated in preference voting, and the preference distribution. Most information on the helpfulness of product reviews is collected through a voting system. Amazon.com asks customers whether a review of a certain product is helpful, and it places the most helpful favorable review and the most helpful critical review at the top of the list of product reviews. Some companies also predict the usefulness of a review based on attributes including its length, author(s), and the words used, publishing only reviews that are likely to be useful. Text mining approaches have been used to classify useful reviews in advance. To apply a text mining approach to all reviews of a product, we need to build a term-document matrix: all words are extracted from the reviews, and a matrix is built with the number of occurrences of each term in each review.
Since there are many reviews, the term-document matrix becomes very large, which makes it difficult to apply text mining algorithms. Researchers therefore delete some terms on the basis of sparsity, since sparse words have little effect on classification or prediction. The purpose of this study is to suggest a better way of building the term-document matrix by deleting terms that are useless for review classification. In this study, we propose a neutrality index to select the words to be deleted. Many words appear similarly in both classes, useful and not useful, and these words have little or even a negative effect on classification performance. We therefore define such words as neutral terms and delete the neutral terms that appear similarly in both classes. After deleting sparse words, we selected further words to delete in terms of neutrality. We tested our approach with Amazon.com review data from five product categories: Cellphones & Accessories, Movies & TV, Automotive, CDs & Vinyl, and Clothing, Shoes & Jewelry. We used reviews that received more than four votes from users, and a 60% ratio of useful votes among total votes was the threshold for classifying useful and not-useful reviews. We randomly selected 1,500 useful reviews and 1,500 not-useful reviews for each product category, then applied Information Gain and Support Vector Machine algorithms to classify the reviews and compared the classification performances in terms of precision, recall, and F-measure. Though the performances vary according to product category and data set, deleting terms by sparsity and neutrality showed the best performance in terms of F-measure for both classification algorithms. However, deleting terms by sparsity only showed the best performance in terms of recall for Information Gain, and using all terms showed the best performance in terms of precision for SVM.
Thus, care is needed when selecting term deletion methods and classification algorithms for a given data set.
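One plausible formalization of the neutrality index described above, measuring how evenly a term's document frequency is split between the useful and not-useful classes, can be sketched as follows. The labelled reviews and the deletion threshold are toy assumptions, not the paper's exact definition:

```python
from collections import Counter

# Toy labelled reviews: (tokens, is_useful) -- illustrative data only.
reviews = [
    (["the", "battery", "lasts"], True),
    (["the", "battery", "great"], True),
    (["the", "battery", "died"], False),
    (["the", "shipping", "slow"], False),
]

def neutrality(reviews):
    """Neutrality of a term: 1.0 if its document frequency is identical in
    both classes, 0.0 if it appears in only one class."""
    useful = Counter(t for toks, u in reviews if u for t in set(toks))
    not_useful = Counter(t for toks, u in reviews if not u for t in set(toks))
    n_u = sum(1 for _, u in reviews if u) or 1
    n_n = sum(1 for _, u in reviews if not u) or 1
    scores = {}
    for term in set(useful) | set(not_useful):
        pu = useful[term] / n_u       # doc frequency within useful class
        pn = not_useful[term] / n_n   # doc frequency within not-useful class
        scores[term] = 1.0 - abs(pu - pn) / max(pu, pn)
    return scores

scores = neutrality(reviews)
# Drop the most neutral terms before building the term-document matrix.
kept = [t for t, s in scores.items() if s < 0.9]
```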

A Study on the Emotional Reaction to the Interior Design - Focusing on the Worship Space in the Church Buildings - (실내공간 구성요소에 의한 감성반응 연구 - 기독교 예배공간 강단부를 중심으로 -)

  • Lee, Hyun-Jeong;Lee, Gyoo-Baek
    • Archives of design research
    • /
    • v.18 no.4 s.62
    • /
    • pp.257-266
    • /
    • 2005
  • The purpose of this study is to investigate the psychological reaction to the image of the worship space in church buildings, to quantify the contribution of the stimulation elements causing such reactions, and finally to suggest basic data for realizing an emotionally resonant worship space in church architecture. To this end, 143 Christians were surveyed to analyze the relationship between 23 emotional expressions extracted for the worship space and 32 images of worship spaces. The combined data were described as a two-dimensional dispersion using Quantification Theory III. The analysis found that 'simplicity-complexity' of the image constituted the horizontal axis (the x-axis) and 'creativity' of the image the vertical axis (the y-axis). In addition, to quantitatively extract the causal relationship between the value of the emotional reaction and its stimulation elements, the author identified four emotional word groups based on similarity by cluster analysis, such as 'simple' and 'sublime' for the x-axis and 'typical' and 'creative' for the y-axis. Quantification Theory I was then applied with the total value of the equivalent emotional words as the dependent variable and the emotional stimulation elements of the worship space as the independent variables. Nine emotional stimulation elements were selected, including the colors and shapes of the wall and ceiling, the shapes and finish of the floor materials, window shapes, and the use of symbolic elements, and 31 subcategories were chosen to analyze their contribution to the emotional reaction. As a result, the color and finish of the wall were found to be the most effective elements in the subjects' emotional reaction, while the symbolic elements were found to be the least effective. The present study should help increase the emotional satisfaction of users and support a spatial design approach that satisfies the types and purposes of the space.


Evaluation of Effective and Organ Dose Using PCXMC Program in DUKE Phantom and Added Filter for Computed Radiography System (CR 환경에서의 흉부촬영 시 Duke Phantom과 부가여과를 이용한 유효선량 및 장기선량 평가)

  • Kang, Byung-Sam;Park, Min-Joo;Kim, Seung-Chul
    • Journal of radiological science and technology
    • /
    • v.37 no.1
    • /
    • pp.7-14
    • /
    • 2014
  • This study used a chest phantom (DUKE phantom) to investigate dose reduction in diagnostic radiography, the largest source of artificial radiation exposure, focusing on chest examinations. Since radiological technologists are the main operators of X-ray equipment, we examined the dose-reduction effect of applying additional filtration to the X-ray generator, and used the PC-Based Monte Carlo Program (PCXMC) to estimate the organ doses and effective doses received by the patient. In the experiment, a composite filter of copper and aluminum (Al, atomic number 13) and a single filter of aluminum only were applied under fixed exposure conditions, and the number of copper discs of the DUKE phantom that could be resolved was measured. With the composite and single filtration matched so that the same number of copper discs was resolved, the combination of additional filtration giving the lowest absorbed dose was identified with the PCXMC 2.0 program, and the effective dose and organ doses were calculated. Although it depends on the mAs used, under the 80 kVp AP projection conditions the effective dose could be reduced by at least about 30% and up to a maximum of about 84%. Under the 120 kVp PA projection conditions, the effective dose could be reduced by at least about 41% and up to a maximum of about 71%. For organ doses, the dose reduction rate differed for each organ, but showed a decrease of at least 30% and up to 100%. Additional filtration was used for all imaging conditions throughout the study, and there was no change in image quality at the lower doses. The DUKE phantom and the PCXMC 2.0 program were found to be suitable for calculating the reduction in effective dose and organ dose.
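The effective dose that PCXMC reports is the standard ICRP tissue-weighted sum of organ equivalent doses, E = Σ w_T · H_T. A minimal sketch with a subset of the ICRP 103 tissue weighting factors and hypothetical organ doses (not the study's results):

```python
# A subset of the ICRP Publication 103 tissue weighting factors w_T.
ICRP103_WEIGHTS = {
    "lung": 0.12, "stomach": 0.12, "breast": 0.12,
    "thyroid": 0.04, "liver": 0.04, "skin": 0.01,
}

def effective_dose(organ_doses_mSv):
    """Weighted sum over the organs we have doses for (partial-body sketch)."""
    return sum(ICRP103_WEIGHTS[o] * d for o, d in organ_doses_mSv.items())

doses = {"lung": 0.10, "thyroid": 0.05, "skin": 0.02}   # mSv, hypothetical
e = effective_dose(doses)   # 0.12*0.10 + 0.04*0.05 + 0.01*0.02
```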

Development and Evaluation of Traffic Conflict Criteria at an intersection (교차로 교통상충기준 개발 및 평가에 관한 연구)

  • 하태준;박형규;박제진;박찬모
    • Journal of Korean Society of Transportation
    • /
    • v.20 no.2
    • /
    • pp.105-115
    • /
    • 2002
  • For many years, traffic accident statistics have been the most direct measure of safety for a signalized intersection. However, it takes 2 to 3 years to collect accident data of adequate sample size, and the accident data itself is unreliable because of differences between the accidents recorded and the accidents that actually occurred. It is therefore difficult to evaluate the safety of an intersection using accident data alone. For these reasons, the traffic conflict technique (TCT) was developed as a quick and accurate surrogate measure of safety for an intersection. However, collected conflict data are not always reliable because clear criteria for what constitutes a conflict are lacking. This study developed objective and accurate conflict criteria, shown below, based on traffic engineering theory. First, a rear-end conflict is recorded when the following vehicle takes an evasive maneuver against the lead vehicle within a certain distance, according to car-following theory. Second, a lane-change conflict is recorded when the following vehicle takes an evasive maneuver against a lead vehicle that is changing lanes within the minimum stopping distance of the following vehicle. Third, crossing and opposing-left-turn conflicts are recorded when a vehicle that has received a green signal takes an evasive maneuver against a vehicle that has lost its right-of-way while crossing the intersection. Correlation analysis between conflicts and accidents verified that the conflict criteria suggested in this study are applicable, and regression analyses performed between accidents and conflicts, and between EPDO accidents and conflicts, showed that safety evaluation of an intersection using conflict data is possible. Adopting the conflict criteria suggested in this study would be a quick and accurate method for diagnosing safety and operational deficiencies and for evaluating improvements at intersections.
Further research is required to refine the suggested conflict criteria and extend their application. In addition, it is necessary to develop other types of conflict criteria, not included in this study, in later work.
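The minimum-stopping-distance check underlying the rear-end and lane-change criteria can be sketched as below; the reaction time, friction coefficient, and speeds are illustrative assumptions, not the study's calibrated values:

```python
G = 9.8  # gravitational acceleration, m/s^2

def stopping_distance(speed_ms, reaction_time=1.0, friction=0.7):
    """Reaction distance + braking distance: v*t + v^2 / (2*f*g)."""
    return speed_ms * reaction_time + speed_ms ** 2 / (2 * friction * G)

def rear_end_conflict(gap_m, follower_speed_ms):
    """Flag a potential rear-end conflict when the gap to the lead vehicle
    is shorter than the follower's minimum stopping distance."""
    return gap_m < stopping_distance(follower_speed_ms)

conflict = rear_end_conflict(30.0, 60 / 3.6)  # 60 km/h follower, 30 m gap
```

At 60 km/h with these assumptions the stopping distance is roughly 37 m, so a 30 m gap is flagged while a 50 m gap is not.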