• Title/Summary/Keyword: F-Measure

Search Results: 1,401

Increasing Accuracy of Classifying Useful Reviews by Removing Neutral Terms (중립도 기반 선택적 단어 제거를 통한 유용 리뷰 분류 정확도 향상 방안)

  • Lee, Minsik;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.22 no.3 / pp.129-142 / 2016
  • Customer product reviews have become one of the important factors in purchase decision making. Customers believe that reviews written by others who have already had experience with a product offer more reliable information than that provided by sellers. However, because there are so many products and reviews, the advantage of e-commerce can be overwhelmed by increasing search costs, and reading all of the reviews to find out the pros and cons of a certain product can be exhausting. To help users find the most useful information about products without much difficulty, e-commerce companies try to provide various ways for customers to write and rate product reviews. To assist potential customers, online stores have devised various ways to provide useful customer reviews, and different methods have been developed to classify and recommend useful reviews, primarily using feedback provided by customers about the helpfulness of reviews. Most shopping websites provide customer reviews and offer the following information: the average preference for a product, the number of customers who have participated in preference voting, and the preference distribution. Most information on the helpfulness of product reviews is collected through a voting system. Amazon.com asks customers whether a review of a certain product is helpful, and it places the most helpful favorable and the most helpful critical reviews at the top of the list of product reviews. Some companies also predict the usefulness of a review based on certain attributes, including length, author(s), and the words used, publishing only reviews that are likely to be useful. Text mining approaches have been used to classify useful reviews in advance. To apply a text mining approach to all reviews for a product, we need to build a term-document matrix: we extract all words from the reviews and build a matrix of the number of occurrences of each term in each review. Because there are many reviews, the term-document matrix becomes very large, which makes it difficult to apply text mining algorithms. Researchers therefore delete some terms based on sparsity, since sparse words have little effect on classification or prediction. The purpose of this study is to suggest a better way of building the term-document matrix by deleting useless terms for review classification. In this study, we propose a neutrality index for selecting words to be deleted. Many words appear in both classes, useful and not useful, and these words have little or even a negative effect on classification performance. We therefore define such words as neutral terms and delete those that appear with similar frequency in both classes. After deleting sparse words, we selected additional words to delete based on their neutrality. We tested our approach with Amazon.com review data from five product categories: Cellphones & Accessories, Movies & TV, Automotive, CDs & Vinyl, and Clothing, Shoes & Jewelry. We used reviews that received more than four votes, and a 60% ratio of useful votes to total votes was the threshold for classifying reviews as useful or not useful. We randomly selected 1,500 useful and 1,500 not-useful reviews for each product category, then applied Information Gain and Support Vector Machine algorithms to classify the reviews and compared the classification performances in terms of precision, recall, and F-measure. Although performance varies across product categories and data sets, deleting terms by both sparsity and neutrality showed the best F-measure for both classification algorithms. However, deleting terms by sparsity alone showed the best recall for Information Gain, and using all terms showed the best precision for SVM. Thus, term deletion methods and classification algorithms should be selected carefully for each data set.
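
For illustration, the sketch below follows the same pipeline on a much smaller scale with scikit-learn: build a term-document matrix, drop sparse terms with a minimum document frequency, drop near-neutral terms, then train an SVM and report precision, recall, and F-measure. The neutrality formula, the cutoff values, and the library choices are assumptions for the sketch, not the paper's exact definitions.

```python
# Minimal sketch of the term-removal pipeline described above (scikit-learn).
# The neutrality score below is an illustrative assumption, not the paper's
# exact index: a term is "neutral" when its relative frequency is similar
# in useful and not-useful reviews.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support

def neutrality(tdm, labels):
    """Per-term neutrality: close to 1.0 when a term is equally frequent in both classes."""
    pos = np.asarray(tdm[labels == 1].sum(axis=0)).ravel() + 1.0   # add-one smoothing
    neg = np.asarray(tdm[labels == 0].sum(axis=0)).ravel() + 1.0
    pos, neg = pos / pos.sum(), neg / neg.sum()
    return 1.0 - np.abs(pos - neg) / (pos + neg)

def evaluate(texts, labels):
    """texts: review strings; labels: 1 = useful, 0 = not useful."""
    labels = np.asarray(labels)
    tr_txt, te_txt, ytr, yte = train_test_split(texts, labels, test_size=0.3, random_state=0)
    vec = CountVectorizer(min_df=5)                     # min_df drops sparse terms
    Xtr = vec.fit_transform(tr_txt)
    keep = np.flatnonzero(neutrality(Xtr, ytr) < 0.95)  # drop near-neutral terms
    Xtr, Xte = Xtr[:, keep], vec.transform(te_txt)[:, keep]
    pred = LinearSVC().fit(Xtr, ytr).predict(Xte)
    precision, recall, f_measure, _ = precision_recall_fscore_support(yte, pred, average="binary")
    return precision, recall, f_measure
```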

An Effective Block of Radioactive Gases for the Storage During the Synthesis of Radiopharmaceutical (방사성의약품 합성에서 발생하는 방사성기체의 효율적 차단)

  • Chi, Yong Gi;Kim, Dong Il;Kim, Si Hwal;Won, Moon Hee;Choe, Seong-Uk;Choi, Choon Ki;Seok, Jae Dong
    • The Korean Journal of Nuclear Medicine Technology / v.16 no.2 / pp.126-130 / 2012
  • Purpose: An effective blocking method was investigated to deal with the volatile radioactive gases, a short-lived radioactive waste, generated during the routine production of the radiopharmaceutical FDG (2-deoxy-2-[$^{18}F$]fluoro-D-glucose) and of $^{11}C$-labeled compounds. Materials and Methods: All components of the radiation stack monitoring and data management system for continuous radioactive gas detection in the air extraction system were purchased as a fixed noble gas monitor from Berthold. TEDLAR gas sampling bags were purchased from Dongbanghitech. The TEDLAR gas sampling bags (volume: 10 L) were connected via Paraflex or PTFE tubing and a Teflon three-way stopcock. Radioactive gas concentrations were compared with and without TEDLAR gas sampling bags installed inside the hot cell. The concentration of the radioactive gas $^{18}F$ was also compared according to whether an activated carbon filter was installed inside the hot cell, and the radiation emission concentrations of the FASTlab and TRACElab modules were compared. Results: With an activated carbon filter installed in the hot cell, the measured concentration of radioactive gas was 8 $Bq/m^3$; without the activated carbon filter, it was 300 $Bq/m^3$. Before installation of the Tedlar bag, the measured concentration of radioactive gases was 3,500 $Bq/m^3$, and the concentration measured during $^{11}C$ synthesis was 27,000 $Bq/m^3$. After installation of a Tedlar bag, the measured concentration of radioactive gases was 300 $Bq/m^3$, and that during $^{11}C$ synthesis was 1,000 $Bq/m^3$. Conclusion: The $^{11}C$ radioactive gas that would otherwise be ejected from the hot cell can be stored inside a Tedlar gas sampling bag. $^{11}C$ compounds are not adsorbed onto the activated carbon filter, but their release can be blocked by storing them in a Tedlar gas sampling bag. We were able to reduce the radiation exposure of workers through this efficient radiation protection.

Detection of Protein Subcellular Localization based on Syntactic Dependency Paths (구문 의존 경로에 기반한 단백질의 세포 내 위치 인식)

  • Kim, Mi-Young
    • The KIPS Transactions:PartB / v.15B no.4 / pp.375-382 / 2008
  • A protein's subcellular localization is considered an essential part of the description of its associated biomolecular phenomena. As the volume of biomolecular reports has increased, there has been a great deal of research on text mining to detect protein subcellular localization information in documents. It has been argued that linguistic information, especially syntactic information, is useful for identifying the subcellular localizations of proteins of interest. However, previous systems for detecting protein subcellular localization information used only shallow syntactic parsers and showed poor performance. Thus, there remains a need to use a full syntactic parser and to apply deep linguistic knowledge to the analysis of text for protein subcellular localization information. In addition, we have attempted to use semantic information from the WordNet thesaurus. To improve performance in detecting protein subcellular localization information, this paper proposes a three-step method based on a full syntactic dependency parser and the WordNet thesaurus. In the first step, we constructed syntactic dependency paths from each protein to its location candidate and converted the paths into dependency trees. In the second step, we retrieved the root information of the syntactic dependency trees. In the final step, we extracted syn-semantic patterns of the protein and location subtrees. From the root and subtree nodes, we extracted the syntactic category and syntactic direction as syntactic information, and the WordNet synset offset as semantic information. Using the root information and the syn-semantic patterns of subtrees from the training data, we extracted (protein, localization) pairs from the test sentences. Even with no biomolecular knowledge, our method showed reasonable performance in experiments on Medline abstract data, giving an F-measure of 74.53% on training data and 58.90% on test data and significantly outperforming previous methods by 12-25%.
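
As a rough illustration of the first two steps, the sketch below recovers the dependency path between a protein mention and a location candidate from a head-index parse and looks up a WordNet synset offset as a semantic feature. The head-index input format, the lowest-common-ancestor construction, and the use of NLTK's WordNet interface are assumptions for the sketch, not the paper's actual pipeline.

```python
# Illustrative sketch: dependency path between a protein and a location
# candidate, given a parse as {token_index: head_index} with 0 marking the
# root, plus a WordNet synset offset as a semantic feature (NLTK).
from nltk.corpus import wordnet as wn

def path_to_root(idx, heads):
    """Follow head links from a token up to the root (head index 0)."""
    path = [idx]
    while heads[idx] != 0:
        idx = heads[idx]
        path.append(idx)
    return path

def dependency_path(protein_idx, location_idx, heads):
    """Return (protein-side nodes, shared ancestor, location-side nodes)."""
    p_path = path_to_root(protein_idx, heads)
    l_path = path_to_root(location_idx, heads)
    shared = set(p_path) & set(l_path)
    ancestor = next(i for i in p_path if i in shared)   # lowest common ancestor
    return p_path[:p_path.index(ancestor)], ancestor, l_path[:l_path.index(ancestor)]

def synset_offset(word, pos="n"):
    """First WordNet synset offset of a word (noun by default)."""
    synsets = wn.synsets(word, pos=pos)
    return synsets[0].offset() if synsets else None
```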

Improved Sentence Boundary Detection Method for Web Documents (웹 문서를 위한 개선된 문장경계인식 방법)

  • Lee, Chung-Hee;Jang, Myung-Gil;Seo, Young-Hoon
    • Journal of KIISE:Software and Applications / v.37 no.6 / pp.455-463 / 2010
  • In this paper, we present an approach to sentence boundary detection for web documents that builds on statistical methods and uses rule-based correction. The proposed system uses a classification model learned offline from a training set of human-labeled web documents. Web documents contain many word-spacing errors and frequently lack the punctuation marks that indicate sentence boundaries. As sentence boundary candidates, the proposed method considers every ending eomi as well as punctuation marks. We optimize engine performance by selecting the best features, the best training data, and the best classification algorithm. For evaluation, we built two test sets: Set1, consisting of articles and blog documents, and Set2, consisting of web community documents. We use F-measure to compare results across tasks. Detecting only periods as sentence boundaries, our basis engine scored 96.5% on Set1 and 56.7% on Set2. We improved the basis engine by adapting the features and the boundary search algorithm. For the final evaluation, we compared our adaptation engine with our basis engine on Set2. As a result, the adaptation engine improved on the basis engine by 39.6%. These results demonstrate the effectiveness of the proposed method for sentence boundary detection.
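
The candidate-plus-classifier idea can be sketched as follows: boundary candidates are generated at punctuation marks and ending eomis, simple context features are extracted, and a statistical model is trained on labeled boundaries. The eomi list, the feature set, and the choice of logistic regression are illustrative assumptions rather than the engine's actual configuration.

```python
# A minimal sketch of the candidate-plus-classifier approach. The eomi list
# and the features are illustrative assumptions, not the engine's feature set.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

PUNCT = {".", "!", "?"}
ENDING_EOMIS = ("다", "요", "죠", "까")   # illustrative sentence-final endings

def boundary_candidates(text):
    """Yield character positions that may end a sentence."""
    for i, ch in enumerate(text):
        if ch in PUNCT or ch in ENDING_EOMIS:
            yield i

def features(text, i):
    """Simple context features around a candidate position."""
    return {
        "char": text[i],
        "prev_char": text[i - 1] if i > 0 else "<s>",
        "next_char": text[i + 1] if i + 1 < len(text) else "</s>",
        "next_is_space": i + 1 == len(text) or text[i + 1].isspace(),
    }

def train(texts, gold_boundaries):
    """gold_boundaries: one set of boundary character indices per text."""
    X, y = [], []
    for text, gold in zip(texts, gold_boundaries):
        for i in boundary_candidates(text):
            X.append(features(text, i))
            y.append(i in gold)
    vec = DictVectorizer()
    clf = LogisticRegression(max_iter=1000).fit(vec.fit_transform(X), y)
    return vec, clf
```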

The big data method for flash flood warning (돌발홍수 예보를 위한 빅데이터 분석방법)

  • Park, Dain;Yoon, Sanghoo
    • Journal of Digital Convergence / v.15 no.11 / pp.245-250 / 2017
  • A flash flood is defined as flooding caused by intense rainfall over a relatively small area that flows through rivers and valleys rapidly in a short time with no advance warning, so it can cause property damage and casualties. This study establishes a flash-flood warning system using 38 accident records reported to the National Disaster Information Center and a land surface model (TOPLATS) between 2009 and 2012. Three variables from the land surface model were used: precipitation, soil moisture, and surface runoff. The values of these three variables for the 6 hours preceding each flash flood were reduced to three factors through factor analysis. Decision tree, random forest, naive Bayes, support vector machine, and logistic regression models were considered as big data methods. Prediction performance was evaluated by comparing accuracy, kappa, TP rate, FP rate, and F-measure. The best method was suggested based on a reproducibility evaluation at each point of flash flood occurrence and a comparison of predicted versus actual counts using four years of data.
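
A comparison of this kind might look like the sketch below, which assumes a feature matrix X of factor scores and binary flash-flood labels y; the model list and the metrics follow the abstract, while the cross-validation setup and default hyperparameters are assumptions.

```python
# Sketch of the model comparison: X holds the three factor scores, y the
# binary flash-flood labels. Default hyperparameters and 5-fold CV are
# assumptions; the model list and metrics follow the abstract.
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import accuracy_score, cohen_kappa_score, f1_score, confusion_matrix

MODELS = {
    "decision tree": DecisionTreeClassifier(),
    "random forest": RandomForestClassifier(),
    "naive Bayes": GaussianNB(),
    "SVM": SVC(),
    "logistic regression": LogisticRegression(max_iter=1000),
}

def compare(X, y, cv=5):
    rows = []
    for name, model in MODELS.items():
        pred = cross_val_predict(model, X, y, cv=cv)
        tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
        rows.append({
            "model": name,
            "accuracy": accuracy_score(y, pred),
            "kappa": cohen_kappa_score(y, pred),
            "tp_rate": tp / (tp + fn),
            "fp_rate": fp / (fp + tn),
            "f_measure": f1_score(y, pred),
        })
    return rows
```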

Dose Distribution in Solid Phantom by TLD with a Metal Plate of Various Thicknesses (다양한 두께의 금속판을 얹은 TLD를 이용하여 구한, 고체 팬텀 내에서의 선량분포)

  • Kim, Sookil
    • Progress in Medical Physics / v.10 no.2 / pp.83-88 / 1999
  • Purpose: TLD experiments were set up to measure the dose distribution and to analyze the influence of a thin metal plate and a solid water phantom on dose measurement. The aim of the present study was to investigate the build-up effect of a metal plate loaded on a TLD chip and the depth dose in the controlled environment of phantom measurements. Materials and Methods: Measurements were made using LiF TLD-100 chips loaded with a thin metal plate of the same surface area (3.2$\times$3.2 $\textrm{mm}^2$) as the TLD chip. TLD chips loaded with one plate of three different metals (tin, copper, gold) in different thicknesses (0.1, 0.15, 0.2, 0.3 mm) were used to measure radiation dose. Using the TLD loaded with one metal plate, the surface dose and the depth dose in the build-up maximum region were investigated. Results: Using a metal plate on the TLD chip increased the surface dose. The surface dose curve shows dose build-up against the water-equivalent thickness of the metal. The TL readings obtained using a metal plate at the depth of build-up maximum are about 8% to 13% lower than those obtained with a normal TLD chip. Conclusion: The metal-plate technique used for TLD dosimetry could provide clinical information about the build-up of dose up to 4.2 mm depth in addition to a depth dose distribution. The results of TLD measurements with a metal plate may help with decisions to boost or bolus certain areas of the skin.

Spam-Mail Filtering System Using Weighted Bayesian Classifier (가중치가 부여된 베이지안 분류자를 이용한 스팸 메일 필터링 시스템)

  • 김현준;정재은;조근식
    • Journal of KIISE:Software and Applications / v.31 no.8 / pp.1092-1100 / 2004
  • E-mail is regarded as one of the most popular methods of exchanging information because of its ease of use and low cost. Meanwhile, the exponential growth of unwanted mail in users' mailboxes has become a major problem. Recognizing this issue, the Korean government established a law to prevent e-mail abuse. In this paper we propose a hybrid spam mail filtering system using a weighted Bayesian classifier, which extends the naive Bayesian classifier with preprocessing and intelligent agents. This system can classify spam mails automatically using training data, without manual definition of message rules. In particular, we improved filtering efficiency by imposing weights on certain features extracted from spam mails. Finally, we compare efficiency across the following cases: the naive Bayesian classifier, weighting the e-mail header, weighting HTML tags, weighting hyperlinks, and combining all of the weighting schemes. Compared with the naive Bayesian classifier, the proposed system showed a 5.7% decrease in precision, while its recall and F-measure increased by 33.3% and 31.2%, respectively.
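
One way to realize the field-weighting idea is sketched below: tokens from the header, HTML tags, and hyperlinks are counted with extra weight before a multinomial naive Bayes model is trained. The weight values and the field layout are illustrative assumptions, not the configuration used in the paper.

```python
# Sketch of field weighting before naive Bayes: tokens from the header,
# HTML tags, and hyperlinks are counted with extra weight. The weights are
# illustrative assumptions, not the values used in the paper.
from collections import Counter
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import MultinomialNB

FIELD_WEIGHTS = {"header": 3.0, "html_tag": 2.0, "hyperlink": 2.0, "body": 1.0}

def weighted_counts(mail):
    """mail: dict mapping field name -> list of tokens."""
    counts = Counter()
    for field, tokens in mail.items():
        weight = FIELD_WEIGHTS.get(field, 1.0)
        for token in tokens:
            counts[token] += weight
    return counts

def train(mails, labels):
    """mails: list of field->token dicts; labels: 1 = spam, 0 = legitimate."""
    vec = DictVectorizer()
    X = vec.fit_transform([weighted_counts(m) for m in mails])
    return vec, MultinomialNB().fit(X, labels)   # accepts fractional counts
```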

Prediction Model for Popularity of Online Articles based on Analysis of Hit Count (온라인 게시글의 조회수 분석을 통한 인기도 예측)

  • Kim, Su-Do;Cho, Hwan-Gue
    • The Journal of the Korea Contents Association / v.12 no.4 / pp.40-51 / 2012
  • Online discussion bulletins in Korea are not only places where users exchange opinions but also a public sphere through which users discuss and form public opinion. Sometimes there is a heated debate on a topic, and an article becomes a political or sociological issue. In this paper, we analyze the popularity of articles by collecting article information from two well-known discussion forums, AGORA and SEOPRISE, and we propose a prediction model for article popularity based on the characteristics of the subject articles. Our experiments show that the popularity of 87.52% of articles in AGORA saturates within a day after submission, whereas the popularity of 39% of articles in SEOPRISE is still growing after four days. We also observed a low correlation between the period of popularity and the hit count: a steady increase in an article's hit count does not necessarily imply that its final hit count at the saturation point is high. In this paper, we also propose a new prediction model called 'baseline'. We evaluated the predictability of popular articles using three models (SVM, similar matching, and baseline). The performance evaluation showed that the SVM model is the best in F-measure and precision, while baseline is the best in running time.
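
A hedged sketch of the SVM variant of such an evaluation is shown below, assuming that early cumulative hit counts serve as features and that a fixed hit-count threshold defines popularity; both assumptions, as well as the timing measurement, are illustrative rather than the paper's setup.

```python
# Sketch: predict whether an article becomes popular from its early hit-count
# trajectory using an SVM, and report precision, F-measure, and running time.
# The feature window and popularity threshold are illustrative assumptions.
import time
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, f1_score

def evaluate_svm(trajectories, final_hits, popular_threshold=10000, early_steps=6):
    """trajectories: array (n_articles, n_time_steps) of cumulative hit counts."""
    X = np.asarray(trajectories)[:, :early_steps]        # early part of the curve
    y = np.asarray(final_hits) >= popular_threshold      # popular vs. not popular
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
    start = time.perf_counter()
    pred = SVC().fit(Xtr, ytr).predict(Xte)
    elapsed = time.perf_counter() - start
    return {"precision": precision_score(yte, pred),
            "f_measure": f1_score(yte, pred),
            "seconds": elapsed}
```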

A Small-area Hardware Implementation of EGML-based Moving Object Detection Processor (EGML 기반 이동객체 검출 프로세서의 저면적 하드웨어 구현)

  • Sung, Mi-ji;Shin, Kyung-wook
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.12 / pp.2213-2220 / 2017
  • This paper proposes an efficient approach for the hardware implementation of a moving object detection (MOD) processor using the effective Gaussian mixture learning (EGML)-based background subtraction method. The arithmetic units used in background generation were implemented with LUT-based approximation to reduce hardware complexity, and hardware resources were shared between background subtraction and Gaussian probability density calculation. The MOD processor was verified by FPGA-in-the-loop simulation using MATLAB/Simulink. MOD performance was evaluated using six types of video defined in the IEEE CDW-2014 dataset, which yielded an average recall of 0.7700, an average precision of 0.7170, and an average F-measure of 0.7293. The MOD processor was implemented with 882 slices and $146{\times}36$ kbits of block RAM on a Virtex5 FPGA, a 60% hardware reduction compared with a conventional EGML-based design. It was estimated that the MOD processor could operate at a 75 MHz clock, enabling real-time processing of $800{\times}600$ video at a frame rate of 39 fps.
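
For reference, a simplified software model of this kind of background subtraction is sketched below; it keeps a single running Gaussian per pixel rather than a full EGML mixture, and the learning rate and threshold are illustrative assumptions, not the parameters of the hardware design.

```python
# Simplified software reference for background subtraction: one running
# Gaussian per pixel instead of a full EGML mixture. The learning rate and
# threshold are illustrative assumptions, not the hardware parameters.
import numpy as np

class RunningGaussianBackground:
    def __init__(self, first_frame, alpha=0.01, k=2.5):
        self.mean = first_frame.astype(np.float32)
        self.var = np.full_like(self.mean, 15.0 ** 2)   # initial per-pixel variance
        self.alpha, self.k = alpha, k

    def apply(self, frame):
        """Return a boolean foreground mask and update the background model."""
        frame = frame.astype(np.float32)
        diff = frame - self.mean
        foreground = diff ** 2 > (self.k ** 2) * self.var
        a = np.where(foreground, 0.0, self.alpha)       # update only background pixels
        self.mean += a * diff
        self.var += a * (diff ** 2 - self.var)
        return foreground
```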

Query Expansion and Term Weighting Method for Document Filtering (문서필터링을 위한 질의어 확장과 가중치 부여 기법)

  • Shin, Seung-Eun;Kang, Yu-Hwan;Oh, Hyo-Jung;Jang, Myung-Gil;Park, Sang-Kyu;Lee, Jae-Sung;Seo, Young-Hoon
    • The KIPS Transactions:PartB / v.10B no.7 / pp.743-750 / 2003
  • In this paper, we propose a query expansion and term weighting method for document filtering to increase the precision of Web search engine results. Query expansion for document filtering uses ConceptNet, an encyclopedia, and the top 10% most similar documents, and a term weighting method is used to calculate query-document similarity. In the first step, we expand the initial query into the first expanded query using ConceptNet and the encyclopedia, weight the first expanded query, and calculate the similarity between the first expanded query and the documents. Next, we create the second expanded query using the top 10% most similar documents and calculate the similarity between the second expanded query and the documents. We then combine the two similarities from the first and second steps, re-rank the documents according to the combined similarity, and filter out non-relevant documents whose similarity is below a threshold. Our experiments showed that our document filtering method yields a notable improvement in retrieval effectiveness when measured using both precision-recall and F-measure.
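
The combination and filtering steps can be sketched as follows, assuming TF-IDF vectors, cosine similarity, and a simple linear combination of the two similarities; the expansion of the queries themselves (via ConceptNet and the encyclopedia) is outside the scope of the snippet.

```python
# Sketch of combining the two query-document similarities, re-ranking, and
# thresholding. TF-IDF, cosine similarity, and the combination weight are
# illustrative assumptions; query expansion itself is not shown.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def filter_documents(query1, query2, documents, weight=0.5, threshold=0.1):
    """query1/query2: the first and second expanded queries as strings."""
    vec = TfidfVectorizer()
    doc_vecs = vec.fit_transform(documents)
    sim1 = cosine_similarity(vec.transform([query1]), doc_vecs).ravel()
    sim2 = cosine_similarity(vec.transform([query2]), doc_vecs).ravel()
    combined = weight * sim1 + (1.0 - weight) * sim2
    order = np.argsort(-combined)                 # re-rank by combined similarity
    return [(documents[i], combined[i]) for i in order if combined[i] >= threshold]
```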