• Title/Abstract/Keyword: Consuming System

Search results: 1,074

Establishing Optimal Conditions for LED-Based Speed Breeding System in Soybean [Glycine max (L.) Merr.] (LED 기반 콩[Glycine max (L.) Merr.] 세대단축 시스템 구축을 위한 조건 설정)

  • Gyu Tae Park;Ji-Hyun Bae;Ju Seok Lee;Soo-Kwon Park;Dool-Yi Kim;Jung-Kyung Moon;Mi-Suk Seo
    • KOREAN JOURNAL OF CROP SCIENCE
    • /
    • v.68 no.4
    • /
    • pp.304-312
    • /
    • 2023
  • Plant breeding is a time-consuming process, mainly because only a limited number of generations can be advanced each year. Speed breeding systems using LED light sources have been applied to accelerate generational progression in various crops, but detailed protocols applicable to soybean are still insufficient. In this study, we report optimized protocols for a speed breeding system, evaluated with 12 soybean varieties of various maturity ecotypes. We investigated the effects of two light qualities (RGB ratios), three levels of light intensity (PPFD), and two soil conditions on the flowering time and development of soybeans. Our results showed that a higher proportion of red light in the spectrum delayed flowering. Furthermore, as light intensity increased, flowering time, average internode length, and plant height decreased, while the numbers of nodes, branches, and pods increased. Compared to agronomic soil, horticultural soil increased the numbers of nodes, branches, and pods by more than 50%. Consequently, the optimal conditions were determined as follows: a 10-hour short-day photoperiod, an equal RGB ratio (1:1:1), light intensity exceeding 1,300 µmol·m⁻²·s⁻¹ PPFD, and the use of horticultural soil. Under these conditions, the average flowering time was 27.3 ± 2.48 days, with an average seed yield of 7.9 ± 2.67. The speed breeding system thus reduced flowering time by more than 40 days compared to the average flowering time of Korean soybean resources (approximately 70 days). By using a controlled growth chamber unaffected by external environmental conditions, up to six generations can be achieved per year, and the use of LED illumination and streamlined facilities further contributes to cost savings. This study highlights the substantial potential of integrating modern crop breeding techniques, such as digital breeding and genome editing, with generation-shortening systems to accelerate crop improvement.

A New Approach to Automatic Keyword Generation Using Inverse Vector Space Model (키워드 자동 생성에 대한 새로운 접근법: 역 벡터공간모델을 이용한 키워드 할당 방법)

  • Cho, Won-Chin;Rho, Sang-Kyu;Yun, Ji-Young Agnes;Park, Jin-Soo
    • Asia pacific journal of information systems
    • /
    • v.21 no.1
    • /
    • pp.103-122
    • /
    • 2011
  • Recently, numerous documents have become available electronically. Internet search engines and digital libraries commonly return query results containing hundreds or even thousands of documents, so it is virtually impossible for users to examine complete documents to determine whether they might be useful. For this reason, some online documents are accompanied by a list of keywords specified by the authors to guide users and facilitate filtering. A set of keywords is thus often considered a condensed version of the whole document and plays an important role in document retrieval, Web page retrieval, document clustering, summarization, text mining, and so on. Since many academic journals ask authors to provide five or six keywords on the first page of an article, keywords are most familiar in the context of journal articles. However, many other types of documents that could benefit from keywords lack them, including Web pages, email messages, news reports, magazine articles, and business papers. Although the potential benefit is large, implementation is the obstacle: manually assigning keywords to all documents is a daunting, even impractical task, as it is extremely tedious and time-consuming and requires a certain level of domain knowledge. It is therefore highly desirable to automate the keyword generation process. There are two main approaches to this aim: keyword assignment and keyword extraction. Both use machine learning methods and require, for training purposes, a set of documents with keywords already attached. In the former, a vocabulary is given in advance and the aim is to match its terms to the texts; that is, keyword assignment selects the words from a controlled vocabulary that best describe a document. Although this approach is domain-dependent and not easy to transfer and expand, it can generate implicit keywords that do not appear in a document. In the latter, the aim is to extract keywords based on their relevance in the text, without a prior vocabulary. Here automatic keyword generation is treated as a classification task, and keywords are commonly extracted with supervised learning techniques: keyword extraction algorithms classify candidate keywords in a document into positive or negative examples. Several systems, such as Extractor and Kea, were developed using the keyword extraction approach. The most indicative words in a document are selected as its keywords; as a result, keyword extraction is limited to terms that appear in the document and cannot generate implicit keywords. According to Turney's experimental results, about 64% to 90% of author-assigned keywords can be found in the full text of an article. Conversely, this means that 10% to 36% of author-assigned keywords do not appear in the article and cannot be generated by keyword extraction algorithms. Our preliminary experiment also shows that 37% of author-assigned keywords are not included in the full text. This is why we adopted the keyword assignment approach. In this paper, we propose a new approach for automatic keyword assignment named IVSM (Inverse Vector Space Model). The model is based on the vector space model, a conventional information retrieval model that represents documents and queries as vectors in a multidimensional space. IVSM generates an appropriate keyword set for a specific document by measuring the distance between the document and the keyword sets. The keyword assignment process of IVSM is as follows: (1) calculate the vector length of each keyword set based on each keyword weight; (2) preprocess and parse a target document that does not have keywords; (3) calculate the vector length of the target document based on term frequency; (4) measure the cosine similarity between each keyword set and the target document; and (5) generate the keywords with high similarity scores. Two keyword generation systems were implemented using IVSM: one for a Web-based community service and one stand-alone. The first is embedded in a community service for sharing knowledge and opinions on current topics such as fashion, movies, social problems, and health information. The stand-alone IVSM system is dedicated to generating keywords for academic papers and has been tested on a number of papers, including those published by the Korean Association of Shipping and Logistics, the Korea Research Academy of Distribution Information, the Korea Logistics Society, the Korea Logistics Research Association, and the Korea Port Economic Association. We measured the performance of IVSM by the number of matches between the IVSM-generated keywords and the author-assigned keywords. According to our experiment, the precision of IVSM applied to the Web-based community service and to academic journals was 0.75 and 0.71, respectively. Both systems perform much better than baseline systems that generate keywords based on simple probability, and IVSM shows performance comparable to Extractor, a representative keyword extraction system developed by Turney. As the number of electronic documents grows, we expect that the IVSM proposed in this paper can be applied to many electronic documents in Web-based communities and digital libraries.
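A minimal sketch of IVSM assignment steps (2)-(5) as enumerated above; the parsing, term weights, and keyword-set vectors here are illustrative stand-ins, not the authors' implementation:

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two sparse vectors given as dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def assign_keywords(doc_text, keyword_sets, top_k=5):
    """Steps (2)-(5): parse the target document, build its term-frequency
    vector, and rank candidate keyword sets by cosine similarity."""
    doc_vec = Counter(doc_text.lower().split())  # naive preprocessing/parsing
    scored = [(kw, cosine(vec, doc_vec)) for kw, vec in keyword_sets.items()]
    scored.sort(key=lambda x: x[1], reverse=True)
    return scored[:top_k]

# Hypothetical keyword-set vectors (keyword -> {term: weight}), e.g. built in
# step (1) from training documents already tagged with that keyword.
keyword_sets = {
    "logistics":   {"port": 2.0, "shipping": 3.0, "cargo": 1.5},
    "text mining": {"keyword": 2.5, "document": 2.0, "corpus": 1.0},
}
print(assign_keywords("shipping cost analysis of port cargo flows", keyword_sets))
```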

A Simple Method for Evaluation of Pepper Powder Color Using Vis/NIR Hyperspectral System (Vis/NIR 초분광 분석을 이용한 고춧가루 색도 간이 측정법 개발)

  • Han, Koeun;Lee, Hoonsoo;Kang, Jin-Ho;Choi, Eunah;Oh, Se-Jeong;Lee, Yong-Jik;Cho, Byoung-Kwan;Kang, Byoung-Cheorl
    • Horticultural Science & Technology
    • /
    • v.33 no.3
    • /
    • pp.403-408
    • /
    • 2015
  • Color is one of the quality-determining factors for pepper powder. To measure the color of pepper powder, several methods, including high-performance liquid chromatography (HPLC), thin-layer chromatography (TLC), and ASTA-20, have been used. Among these, the ASTA-20 method is the most widely used for color measurement of large numbers of samples because of its simplicity and accuracy. However, it requires time-consuming preprocessing steps and generates chemical waste containing acetone. As an alternative, we developed a fast and simple method based on a visible/near-infrared (Vis/NIR) hyperspectral method to measure the color of pepper powder. To evaluate the correlation between the ASTA-20 and Vis/NIR hyperspectral methods, we first measured the color of a total of 488 pepper powder samples using both methods. Then, a partial least squares (PLS) model was built using the color values of 366 randomly selected samples to predict the ASTA values of unknown samples. When the ASTA values predicted by the PLS model were compared with those of the ASTA-20 method for the 122 samples not used for model development, there was a very high correlation between the two methods (R² = 0.88), demonstrating the reliability of the Vis/NIR hyperspectral method. We believe that this simple and fast method is suitable for high-throughput screening of large numbers of samples because it does not require the preprocessing steps of the ASTA-20 method and takes less than 30 min to measure the color of pepper powder.
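A minimal sketch of the calibration workflow described above, assuming spectra are already reduced to a sample-by-wavelength matrix X and reference ASTA-20 values to a vector y; the placeholder data, component count, and split seed are illustrative, only the 366/122 split follows the abstract:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical inputs: X = (488, n_wavelengths) mean Vis/NIR spectra,
# y = (488,) ASTA color values from the ASTA-20 reference method.
rng = np.random.default_rng(0)
X = rng.random((488, 200))                  # placeholder spectra
y = X[:, 50] * 120 + rng.normal(0, 2, 488)  # placeholder ASTA values

# 366 calibration / 122 validation samples, as in the paper.
X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=122, random_state=1)

pls = PLSRegression(n_components=10)        # component count is an assumption
pls.fit(X_cal, y_cal)
y_pred = pls.predict(X_val).ravel()
print(f"R^2 on held-out samples: {r2_score(y_val, y_pred):.2f}")
```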

Comparison of an Automated Most-Probable-Number Technique TEMPO® TVC with the Traditional Plating Method Petrifilm™ for Estimating Populations of Total Aerobic Bacteria in Livestock Products (축산물가공품에서 건조필름법과 TEMPO®TVC 검사법의 총세균수 비교분석)

  • Kim, Young-Jo;Wee, Sung-Hwan;Yoon, Ha-Chung;Heo, Eun-Jeong;Park, Hyun-Jeong;Kim, Ji-Ho;Moon, Jin-San
    • Journal of Food Hygiene and Safety
    • /
    • v.27 no.1
    • /
    • pp.103-107
    • /
    • 2012
  • We compared an automated most-probable-number technique, TEMPO® TVC, with the traditional plating method Petrifilm™ for estimating populations of total aerobic bacteria in various livestock products. A total of 257 samples randomly selected from local retail stores and 87 samples inoculated with E. coli ATCC 25922 and Staphylococcus aureus ATCC 12868 were tested in this study. The degree of agreement was estimated according to CCFRA (Campden and Chorleywood Food Research Association Group) Guideline 29, where agreement means the difference between the two methods is less than 1 log₁₀. The samples of hams, jerky products, ground meat products, milks, ice creams, infant formulas, and heat-formed egg products showed above 95% agreement between the methods. In contrast, the proportions of agreement for meat extract products, cheeses, and sausages were 93.1%, 92.1%, and 89.1%, respectively. One pressed ham and five sausages containing spices and seasoning, two pork cutlets containing spices and bread crumbs, two meat extract products, two natural cheeses and one processed cheese with high fat content, and one ice cream containing chocolate showed discrepancies. Our results suggest that the TEMPO® TVC system is an efficient alternative to the time-consuming and laborious manual method for enumerating total aerobic bacteria, except for livestock products near the limit of detection.
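A minimal sketch of the CCFRA-style agreement criterion described above: two paired counts agree when their log₁₀ values differ by less than 1 (the paired counts below are illustrative, not the study's data):

```python
import math

def proportion_of_agreement(counts_tempo, counts_petrifilm, threshold=1.0):
    """Fraction of paired samples whose TEMPO and Petrifilm counts
    (CFU/g) differ by less than `threshold` log10 units."""
    agree = sum(
        abs(math.log10(a) - math.log10(b)) < threshold
        for a, b in zip(counts_tempo, counts_petrifilm)
    )
    return agree / len(counts_tempo)

# Illustrative paired counts for one product category (CFU/g).
tempo =     [1.2e4, 3.4e3, 5.6e5, 7.8e2]
petrifilm = [1.0e4, 2.9e3, 4.0e4, 6.5e2]   # third pair disagrees (>1 log10)
print(f"{proportion_of_agreement(tempo, petrifilm):.1%}")
```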

Text Mining-Based Emerging Trend Analysis for the Aviation Industry (항공산업 미래유망분야 선정을 위한 텍스트 마이닝 기반의 트렌드 분석)

  • Kim, Hyun-Jung;Jo, Nam-Ok;Shin, Kyung-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.65-82
    • /
    • 2015
  • Recently, there has been a surge of interest in finding core issues and analyzing emerging trends for the future, reflecting efforts to devise national strategies and policies based on the selection of promising areas that can create economic and social added value. Existing studies, including those dedicated to the discovery of future promising fields, have mostly depended on qualitative research methods such as literature review and expert judgement, and deriving results from large amounts of information under this approach is both costly and time-consuming. Efforts have been made to compensate for the weaknesses of this conventional qualitative approach to discovering future core issues and analyzing emerging trends across various areas of academic research. A paradigm shift is needed toward implementing qualitative research methods together with quantitative methods like text mining in a mutually complementary manner, to ensure objective and practical emerging-trend analysis based on large amounts of data. Even such studies, however, have had shortcomings related to their dependence on simple keywords for analysis, which makes it difficult to derive meaning from the data. Moreover, no study has so far derived core issues and analyzed emerging trends in specialized domains like the aviation industry, although such approaches have recently been applied in areas such as the steel industry, the information and communications technology industry, and the construction industry in architectural engineering. This study focused on retrieving aviation-related core issues and emerging trends from research papers pertaining to aviation through text mining, one of the big data analysis techniques. In this manner, promising future areas for the air transport industry are selected based on objective data from aviation-related research papers. To compensate for the difficulty of grasping meaning from single words in keyword-level trend analysis, this study adopts topic analysis, a technique for discovering general themes latent in sets of text documents. The analysis extracts topics, which represent keyword sets, thereby discovering core issues and supporting emerging-trend analysis. Based on these issues, the study identified aviation-related research trends and selected promising areas for the future. Research on core-issue retrieval and emerging-trend analysis for the aviation industry based on big data analysis is still in its incipient stages, so the analysis targets for this study are restricted to data from aviation-related research papers. The study is nonetheless significant in that it prepared a quantitative analysis model for continuously monitoring the derived core issues and indicating directions for promising future areas. In the future, the scope is slated to expand to cover relevant domestic and international news articles and bidding information as well, increasing the reliability of the analysis results. On the basis of the topic analysis results, core issues for the aviation industry are determined; emerging-trend analysis for these issues is then implemented by year to identify their changes in time series. Through these procedures, this study aims to prepare a system for identifying key promising areas for the future aviation industry and for ensuring rapid response. Additionally, the promising areas selected from these results and from the analysis of pertinent policy research reports will be compared with the areas in which actual government investments are made. The results of this comparative analysis are expected to provide useful reference material for future policy development and budgeting.
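A minimal sketch of topic analysis over paper abstracts of the kind described above. The abstract names topic analysis but not a specific model; LDA is used here as one common choice, and the corpus, topic count, and vectorizer settings are illustrative, not the authors' configuration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical corpus: abstracts of aviation-related research papers.
abstracts = [
    "unmanned aerial vehicle traffic management and airspace safety",
    "airport runway capacity optimization under weather uncertainty",
    "jet engine fuel efficiency and emission reduction technology",
    "air traffic control communication safety procedures",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)  # topic count is an assumption
lda.fit(X)

# Each topic is a keyword set: its top terms become the unit of trend analysis.
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```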

Monitoring of a Time-series of Land Subsidence in Mexico City Using Space-based Synthetic Aperture Radar Observations (인공위성 영상레이더를 이용한 멕시코시티 시계열 지반침하 관측)

  • Ju, Jeongheon;Hong, Sang-Hoon
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.6_1
    • /
    • pp.1657-1667
    • /
    • 2021
  • Anthropogenic activities and natural processes cause land subsidence, the sudden sinking or gradual settlement of the earth's solid surface. Mexico City, the capital of Mexico, is one of the areas most severely affected by land subsidence, which results from excessive groundwater extraction; groundwater is the primary water resource, accounting for almost 70% of total water usage in the city. Traditional terrestrial observations like the Global Navigation Satellite System (GNSS) or leveling surveys have been preferred for measuring land subsidence accurately. Although GNSS observations provide highly accurate displacement information with very high temporal resolution, they are often limited by sparse spatial coverage, time-consuming acquisition, and high cost. Space-based synthetic aperture radar (SAR) interferometry, by contrast, has been widely used as a powerful tool to monitor surface displacement with high spatial resolution and high accuracy, from mm to cm scale, regardless of day or night and weather conditions. In this paper, advanced interferometric approaches were applied to obtain a time series of land subsidence in Mexico City using twenty ALOS PALSAR L-band observations spanning four years, acquired from February 11, 2007 to February 22, 2011. We utilized the persistent scatterer interferometry (PSI) and small baseline subset (SBAS) techniques to suppress atmospheric artifacts and topographic errors. The results show that the maximum subsidence rates from the PSI and SBAS methods were -29.5 cm/year and -27.0 cm/year, respectively. In addition, we discuss the differing subsidence rates across three districts of the study area with distinctive geotechnical characteristics. The most significant subsidence occurred in the lacustrine sediments, which have higher compressibility than the harder bedrock.
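The abstract names the SBAS technique; the sketch below shows only its core least-squares idea for a single pixel, assuming interferogram phases already unwrapped and converted to displacement. The acquisition dates, pair list, and values are illustrative, and real processing adds atmospheric and orbital corrections:

```python
import numpy as np

# Hypothetical acquisition dates (days) and small-baseline pairs (i, j), i < j.
dates = np.array([0, 46, 92, 138, 184])
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (2, 4)]

# Each interferogram observes the displacement between its two dates (cm).
d_obs = np.array([-1.1, -1.0, -2.2, -1.2, -0.9, -2.0])

# Design matrix: each row sums the velocities of the date intervals it spans.
dt = np.diff(dates)                       # interval lengths (days)
B = np.zeros((len(pairs), len(dt)))
for r, (i, j) in enumerate(pairs):
    B[r, i:j] = dt[i:j]

# Least-squares velocity per interval, then the cumulative displacement series.
v, *_ = np.linalg.lstsq(B, d_obs, rcond=None)
disp = np.concatenate(([0.0], np.cumsum(v * dt)))
print("cumulative displacement (cm):", np.round(disp, 2))
```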

The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae;Oh, Wonseok;Lim, Geunwon;Cha, Eunwoo;Shin, Minyoung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.1-23
    • /
    • 2018
  • Since the beginning of the 21st century, various high-quality services have emerged with the growth of the Internet and information and communication technologies. In particular, the e-commerce industry, in which Amazon and eBay stand out, has grown explosively. As e-commerce grows and more products are registered at online shopping malls, customers can easily compare products and find what they want to buy. However, a problem has arisen with this growth: with so many products registered, it has become difficult for customers to find what they really need in the flood of products. When customers search with a generalized keyword, too many products come up; conversely, few products are found if customers type in product details, because concrete product attributes are rarely registered as text. In this situation, automatically recognizing text in images can be a solution. Because the bulk of product details are presented in catalog images, most product information cannot be found by the current text-based search systems. If the information in these images can be converted to text, customers can search by product details, making shopping more convenient. Various existing OCR (Optical Character Recognition) programs can recognize text in images, but they are hard to apply to catalogs because they fail under certain conditions, such as when the text is too small or the fonts are inconsistent. Therefore, this research suggests a way to recognize keywords in catalogs with deep learning, the state of the art in image recognition since the 2010s. The Single Shot MultiBox Detector (SSD), a well-regarded object detection model, can be used with its structure redesigned to account for the differences between text and generic objects. However, because deep learning models are trained by supervised learning, the SSD model needs a large amount of labeled training data. To collect such data, one could manually label the location and class of each text region in catalogs, but manual collection raises many problems: some keywords would be missed because humans make mistakes while labeling, collection becomes too time-consuming given the scale of data needed, and hiring many workers to shorten the time is costly. Furthermore, if specific keywords need to be trained, finding images that contain those words is also difficult. To solve this data issue, this research developed a program that creates training data automatically. The program generates catalog-like images containing various keywords and pictures and saves the location information of the keywords at the same time. With this program, not only can data be collected efficiently, but the performance of the SSD model also improves: the model recorded a recognition rate of 81.99% with 20,000 images created by the program. Moreover, this research tested the efficiency of the SSD model under different data conditions to analyze which features of the data influence text recognition performance. The results show that the number of labeled keywords, the addition of overlapping keyword labels, the presence of unlabeled keywords, the spacing between keywords, and differences in background images are all related to the performance of the SSD model. These findings can guide performance improvements for the SSD model, or for other deep-learning-based text recognizers, through higher-quality data. The SSD model redesigned to recognize text in images and the program developed for creating training data are expected to contribute to improving search systems in e-commerce: suppliers can spend less time registering keywords for products, and customers can search for products using the details written in catalogs.
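A minimal sketch of the kind of automatic training-data generator described above, assuming Pillow is available: it renders keywords at random positions on a blank background and records their bounding boxes. The keyword list, image size, and default font are illustrative, not the authors' program:

```python
import json
import random
from PIL import Image, ImageDraw

KEYWORDS = ["waterproof", "cotton 100%", "free shipping", "slim fit"]

def make_sample(path, size=(600, 800), n_words=3):
    """Render a catalog-like image and return SSD-style labels:
    one (keyword, [x0, y0, x1, y1]) entry per rendered word."""
    img = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(img)
    labels = []
    for _ in range(n_words):
        word = random.choice(KEYWORDS)
        x = random.randint(0, size[0] - 200)
        y = random.randint(0, size[1] - 30)
        draw.text((x, y), word, fill="black")   # default bitmap font
        x0, y0, x1, y1 = draw.textbbox((x, y), word)
        labels.append({"keyword": word, "bbox": [x0, y0, x1, y1]})
    img.save(path)
    return labels

# Generate a few labeled images plus a JSON annotation file for training.
annotations = [{"image": f"train_{i}.png",
                "objects": make_sample(f"train_{i}.png")} for i in range(5)]
with open("annotations.json", "w") as f:
    json.dump(annotations, f, indent=2)
```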

Measuring the Public Service Quality Using Process Mining: Focusing on N City's Building Licensing Complaint Service (프로세스 마이닝을 이용한 공공서비스의 품질 측정: N시의 건축 인허가 민원 서비스를 중심으로)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.35-52
    • /
    • 2019
  • As public services are provided in various forms, including e-government, public demand for service quality is increasing. Although the quality of public services must be measured and improved continuously, traditional surveys are costly and time-consuming and thus have limitations. An analytical technique is therefore needed that can measure the quality of public services quickly and accurately at any time, based on the data generated by those services. In this study, we analyzed the quality of public services from data, using process mining techniques on the building licensing complaint service of N city; this service was chosen because it can provide the data necessary for analysis, and the approach can spread to other institutions through public service quality management. We conducted process mining on a total of 3,678 building license complaint cases in N city over two years from January 2014, and identified the process map and the departments with high frequency and long processing times. The analysis showed that some departments were overloaded at certain points in time while others handled relatively few cases, and there were reasonable grounds to suspect that increases in the number of complaints lengthen completion times. Completion times varied from the same day to one year and 146 days. The cumulative frequency of the top four departments (the Sewage Treatment Division, the Waterworks Division, the Urban Design Division, and the Green Growth Division) exceeded 50%, and that of the top nine departments exceeded 70%; the load was concentrated in a small number of departments and highly unbalanced. Most complaint cases follow a variety of different process patterns. The analysis shows that the number of 'complement' (document supplementation) decisions has the greatest impact on the duration of a complaint: a 'complement' decision requires a physical period during which the complainant revises and resubmits the documents, lengthening the time until the whole complaint is completed. Disclosing the causes of and remedies for 'complement' decisions as public information in the system would help complainants prepare documents thoroughly before filing and give them confidence that documents prepared accordingly will pass, which could drastically reduce overall processing time and make the complaint process transparent and predictable. Documents prepared from pre-disclosed information are likely to be processed without problems, which not only shortens the processing period but also improves work efficiency on the processor's side by removing the need for reconsultation and duplicated work. The results of this study can be used to find the departments carrying heavy complaint loads at particular points in time and to manage workforce allocation between departments flexibly. In addition, the analysis of which departments participate in consultations for which types of complaints can be used for automation or recommendation when routing consultation requests. Furthermore, by applying machine learning techniques to the various data generated during the complaint process, the patterns of the process can be discovered; such algorithms, applied to the system, can support the automation and intelligence of civil complaint processing. This study is expected to inform future improvements in public service quality through process mining analysis of civil services.
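A minimal sketch of the frequency and processing-time profiling described above, assuming a hypothetical event log with one row per department activity; the column names and data are illustrative, and a dedicated library such as pm4py would additionally discover full process maps:

```python
import pandas as pd

# Hypothetical event log: one row per consultation activity on a complaint.
log = pd.DataFrame({
    "case_id":    ["c1", "c1", "c2", "c2", "c2", "c3"],
    "department": ["Waterworks", "Sewage", "Sewage",
                   "Urban Design", "Sewage", "Waterworks"],
    "start":      pd.to_datetime(["2014-01-02", "2014-01-10", "2014-02-01",
                                  "2014-02-05", "2014-02-20", "2014-03-01"]),
    "end":        pd.to_datetime(["2014-01-09", "2014-01-20", "2014-02-04",
                                  "2014-02-18", "2014-03-02", "2014-03-03"]),
})
log["days"] = (log["end"] - log["start"]).dt.days

# Department workload: visit frequency and mean processing time.
profile = (log.groupby("department")["days"]
              .agg(frequency="count", mean_days="mean")
              .sort_values("frequency", ascending=False))
print(profile)

# End-to-end completion time per complaint case.
case_span = log.groupby("case_id").agg(start=("start", "min"), end=("end", "max"))
print((case_span["end"] - case_span["start"]).dt.days.describe())
```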

Analysis of Amperometric Response to Cholesterol according to Enzyme-Immobilization Methods (효소고정화 방법에 따른 콜레스테롤 검출용 바이오센서의 전류 감응도 분석)

  • Ji, Jung-Youn;Kim, Mee-Ra
    • Journal of the East Asian Society of Dietary Life
    • /
    • v.21 no.5
    • /
    • pp.731-738
    • /
    • 2011
  • Cholesterol is the precursor of various steroid hormones, bile acids, and vitamin D, with functions related to the regulation of membrane permeability and fluidity. However, excess blood cholesterol may lead to arteriosclerosis and hypertension, and dietary cholesterol may affect blood cholesterol levels. Cholesterol determination is generally performed by spectrophotometric or chromatographic methods, but these are time-consuming and costly and require complicated pretreatment. Thus, the development of a rapid and simple method for measuring cholesterol concentration in food is needed. Multi-walled carbon nanotube (MWCNT) was functionalized to MWCNT-NH₂ via MWCNT-COOH to obtain high sensitivity to H₂O₂. The fabricated MWCNT-NH₂ was attached to a glassy carbon electrode (GCE), after which Prussian blue (PB) was coated onto MWCNT-NH₂/GCE; MWCNT-NH₂/PB/GCE was used as the working electrode. An Ag/AgCl electrode and Pt wire were used as the reference and counter electrodes, respectively. The sensitivity of the modified working electrode was determined from the current measured as a function of H₂O₂ concentration. The response increased with H₂O₂ concentration in the range of 0.5~500 µM (r² = 0.96), with a detection limit of 0.1 µM. Cholesterol oxidase was immobilized on aminopropyl glass beads, CNBr-activated Sepharose, Na-alginate, and Toyopearl beads. The immobilized enzyme reactors with aminopropyl glass beads and CNBr-activated Sepharose showed linearity in the range of 1~100 µM cholesterol; Na-alginate and Toyopearl beads showed linearity in the ranges of 5~50 and 1~50 µM cholesterol, respectively. The detection limit of all immobilized enzyme reactors was 1 µM. These enzyme reactors showed high sensitivity; in particular, the reactors with CNBr-activated Sepharose and Na-alginate showed high coupling efficiency and sensitivity, making them the most suitable for a cholesterol biosensor system.
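A minimal sketch of the calibration-curve analysis behind such amperometric measurements: fit current against concentration over the linear range and read unknowns off the fit. The data points below are illustrative, not the paper's measurements:

```python
import numpy as np

# Illustrative calibration data: cholesterol concentration (uM) vs
# steady-state current response (uA) over a linear range.
conc = np.array([1, 5, 10, 25, 50, 100], dtype=float)
current = np.array([0.08, 0.41, 0.79, 2.02, 3.95, 8.10])

# Linear least-squares fit: current = slope * conc + intercept.
slope, intercept = np.polyfit(conc, current, 1)
r2 = np.corrcoef(conc, current)[0, 1] ** 2
print(f"sensitivity = {slope:.3f} uA/uM, r^2 = {r2:.3f}")

# Estimate an unknown sample's concentration from its measured current.
i_sample = 1.50
print(f"estimated concentration = {(i_sample - intercept) / slope:.1f} uM")
```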

Latent topics-based product reputation mining (잠재 토픽 기반의 제품 평판 마이닝)

  • Park, Sang-Min;On, Byung-Won
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.39-70
    • /
    • 2017
  • Data-driven analytics techniques have recently been applied to public surveys. Instead of simply gathering survey results or expert opinions to research the preference for a recently launched product, enterprises need a way to collect and analyze various types of online data and then accurately figure out customer preferences. In existing data-based survey methods, the sentiment lexicon for a particular domain is first constructed by domain experts, who judge the positive, neutral, or negative meanings of the words used frequently in the collected text documents. To research the preference for a particular product, the existing approach (1) collects review posts related to the product from several product-review web sites; (2) extracts sentences (or phrases) from the collection after pre-processing steps such as stemming and stop-word removal; (3) classifies the polarity (positive or negative) of each sentence (or phrase) based on the sentiment lexicon; and (4) estimates the positive and negative ratios of the product by dividing the numbers of positive and negative sentences (or phrases) by the total number of sentences (or phrases) in the collection, as sketched below. Furthermore, the existing approach automatically finds important sentences (or phrases) carrying positive and negative meaning for or against the product. As a motivating example, given a product like the Sonata made by Hyundai Motors, customers often want a summary note of the positive points in the 'car design' aspect as well as the negative points in the same aspect; they also want similar information on other aspects such as 'car quality', 'car performance', and 'car service'. Such information enables customers to make good choices when purchasing new vehicles, and automobile makers can figure out the preferences and positive/negative points for new models on the market; weak points of the models can then be improved based on the sentiment analysis. For this, the existing approach computes the sentiment score of each sentence (or phrase) and selects the top-k sentences (or phrases) with the highest positive and negative scores. However, the existing approach has several shortcomings that limit its use in real applications. Its main disadvantages are as follows. (1) The main aspects of a product (e.g., design, quality, performance, and service for a Hyundai Sonata) are not considered: without aspects, only a corpus-wide summary (the product's overall positive and negative ratios and the top-k sentences or phrases with the highest sentiment scores) is reported to customers and car makers, which is not enough; the main aspects of the target product need to be considered in the sentiment analysis. (2) Since the same word has different meanings in different domains, a sentiment lexicon proper to each domain must be constructed, and an efficient way to construct it is required because lexicon construction is labor-intensive and time-consuming.
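A minimal sketch of the lexicon-based procedure in steps (1)-(4) above, with a toy sentiment lexicon and review sentences standing in for the scraped and pre-processed data:

```python
# Toy domain lexicon: word -> polarity weight (>0 positive, <0 negative).
LEXICON = {"quiet": 1.0, "comfortable": 1.0, "sleek": 1.0,
           "noisy": -1.0, "cramped": -1.0, "faulty": -1.5}

# Step (2) output stand-in: pre-processed review sentences for one product.
sentences = [
    "sleek design and comfortable seats",
    "engine gets noisy on the highway",
    "infotainment system felt faulty",
]

def polarity(sentence):
    """Step (3): sum the lexicon weights of the words in a sentence."""
    return sum(LEXICON.get(w, 0.0) for w in sentence.split())

pos = sum(polarity(s) > 0 for s in sentences)
neg = sum(polarity(s) < 0 for s in sentences)

# Step (4): positive/negative ratios over all sentences.
n = len(sentences)
print(f"positive ratio: {pos / n:.2f}, negative ratio: {neg / n:.2f}")
```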
To address these problems, in this article we propose a novel product reputation mining algorithm that (1) extracts topics hidden in review documents written by customers; (2) mines main aspects based on the extracted topics; (3) measures the positive and negative ratios of the product using those aspects; and (4) presents a digest in which a few important sentences with positive and negative meanings are listed per aspect. Unlike the existing approach, using hidden topics lets experts construct the sentiment lexicon easily and quickly, and by reinforcing topic semantics we can improve the accuracy of product reputation mining substantially over the existing approach. In the experiments, we collected a large set of review documents on domestic vehicles such as the K5, SM5, and Avante; measured the positive and negative ratios of the three cars; presented top-k positive and negative summaries per aspect; and conducted statistical analysis. Our experimental results clearly show the effectiveness of the proposed method compared with the existing method.