• Title/Summary/Keyword: Case Addition


The Consideration of nuclear medicine technologist's occupational dose from patient who are undergoing 18F-FDG Whole body PET/CT : Aspect of specific characteristic of patient and contact time with patient (18F-FDG Whole Body PET/CT 수검자의 거리별 선량 변화에 따른 방사선 작업종사자의 유효선량 고찰: 환자 고유특성 및 응대시간 측면)

  • Kim, Sunghwan; Ryu, Jaekwang; Ko, Hyunsoo
    • The Korean Journal of Nuclear Medicine Technology / v.22 no.1 / pp.67-75 / 2018
  • Purpose: The purpose of this study is to investigate and analyze the external dose rates of 18F-FDG whole-body PET/CT patients by distance, and to identify the main factors that contribute to reducing the radiation dose by checking the cumulative doses of nuclear medicine technologists (NMTs). Materials and Methods: After completion of the 18F-FDG whole-body PET/CT scan (75.4 ± 3.3 min), the external dose rates of 106 patients were measured at distances of 0, 10, 30, 50, and 100 cm from the chest. Gender, age, BMI (body mass index), fasting time, diabetes mellitus, radiopharmaceutical injection information, and creatinine value were collected to analyze the individual factors that could affect external dose rates from the patient's side. From the NMT's side, personal pocket dosimeters were worn on the chest to record the accumulated doses of NMTs who performed the injection tasks (T1, T2, and T3) and the scan tasks (T4, T5, and T6). In addition, patient contact time with the NMT was measured and analyzed. Results: External dose rates from the patient at each distance were 246.9 ± 37.6, 129.9 ± 16.7, 61.2 ± 9.1, 34.4 ± 5.9, and 13.1 ± 2.4 μSv/hr, respectively. On the patient side, there were significant differences at close distances by gender, BMI, injection dose, and creatinine value, but the differences decreased as the distance increased. In the case of dialysis patients, the external dose rates at each distance were exceptionally higher than for other patients. On the NMT side, the doses received from patients were 0.70, 1.09, and 0.55 μSv per patient for the injection tasks (T1, T2, and T3), and 1.25, 0.82, and 1.23 μSv per patient for the scan tasks (T4, T5, and T6). Conclusion: We found that maintaining a proper distance from the patient and reducing contact time with the patient had a significant effect on accumulated doses. Considering these points, measures such as sufficient water intake and encouragement of urination, maintaining a proper distance between the NMT and the patient (at least 100 cm), and reducing contact time should be taken to reduce the dose not only to the patient but also to the NMT.
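
As a purely illustrative aid to the distance-and-contact-time reasoning above, the following Python sketch (not from the paper) estimates a technologist's per-contact dose as dose rate × contact time, using the mean post-scan dose rates quoted in the abstract; the contact durations are hypothetical.

```python
# Minimal sketch (not from the paper): estimating a technologist's per-contact dose
# as dose_rate(distance) x contact_time, using the mean post-scan dose rates
# reported in the abstract. Contact durations below are hypothetical placeholders.

# Mean external dose rates measured from the patient's chest (uSv/hr), by distance (cm)
DOSE_RATE_USV_PER_HR = {0: 246.9, 10: 129.9, 30: 61.2, 50: 34.4, 100: 13.1}

def contact_dose(distance_cm: int, contact_seconds: float) -> float:
    """Approximate dose (uSv) received during one patient contact."""
    rate = DOSE_RATE_USV_PER_HR[distance_cm]
    return rate * (contact_seconds / 3600.0)

# Hypothetical example: 30 s of contact at 50 cm vs. the same contact at 100 cm
print(contact_dose(50, 30))   # ~0.29 uSv
print(contact_dose(100, 30))  # ~0.11 uSv
```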

Adsorption of Arsenic onto Two-Line Ferrihydrite (비소의 Two-Line Ferrihydrite에 대한 흡착반응)

  • Jung, Young-Il; Lee, Woo-Chun; Cho, Hyen-Goo; Yun, Seong-Taek; Kim, Soon-Oh
    • Journal of the Mineralogical Society of Korea / v.21 no.3 / pp.227-237 / 2008
  • Arsenic has recently become one of the most serious environmental concerns, and the worldwide regulation of arsenic in drinking water has been reinforced. Arsenic-contaminated groundwater and soil have been frequently revealed as well, and arsenic contamination and its treatment have been raised domestically as one of the most important environmental issues. Arsenic behavior in the geo-environment is principally affected by oxides and clay minerals, and iron (oxy)hydroxides in particular are well known to be most effective in controlling arsenic. Among the iron (oxy)hydroxides, 2-line ferrihydrite was therefore selected in this study to investigate its effect on arsenic behavior. Adsorption onto 2-line ferrihydrite was characterized and compared between As(III) and As(V), which are known to be the most ubiquitous arsenic species in the natural environment. The two-line ferrihydrite synthesized in the laboratory as the arsenic adsorbent had a diameter of 10-200 nm, a specific surface area of 247 m²/g, and a point of zero charge at pH 8.2, and these representative properties appeared to make it highly suitable as an adsorbent of arsenic. The equilibrium adsorption results indicate that As(III) showed much stronger adsorption affinity onto 2-line ferrihydrite than As(V). In addition, the maximum adsorption of As(III) and As(V) was observed at pH 7.0 and 2.0, respectively. In particular, As(III) adsorption did not show any difference among pH conditions, except at pH 12.2. On the contrary, As(V) adsorption decreased remarkably with increasing pH. The detailed experiments on the pH effect show that As(III) adsorption increased up to pH 8.0 and decreased dramatically above pH 9.2, whereas As(V) adsorption decreased steadily with increasing pH. The reason the adsorption characteristics differ so much between the arsenic species is that the chemical speciation of arsenic and the surface charge of 2-line ferrihydrite are both strongly affected by pH, and these combined phenomena are thought to cause the difference in adsorption between As(III) and As(V). From the viewpoint of adsorption kinetics, adsorption of both arsenic species onto 2-line ferrihydrite was mostly completed within 2 hours. Among the kinetic models proposed so far, the power-function and Elovich models were evaluated to be the most suitable for simulating the adsorption kinetics of the two arsenic species onto 2-line ferrihydrite.
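
As background for the kinetic modelling mentioned at the end of this abstract, the two models are commonly written in the following standard textbook forms (not equations reproduced from the paper), where q_t is the amount adsorbed at time t and a, b, α, β are empirically fitted constants.

```latex
% Standard textbook forms of the two kinetic models (not reproduced from the paper)
q_t = a\,t^{\,b}                                                  % power-function model
q_t = \frac{1}{\beta}\,\ln(\alpha\beta) + \frac{1}{\beta}\,\ln t  % simplified Elovich model
```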

A Ranking Algorithm for Semantic Web Resources: A Class-oriented Approach (시맨틱 웹 자원의 랭킹을 위한 알고리즘: 클래스중심 접근방법)

  • Rho, Sang-Kyu; Park, Hyun-Jung; Park, Jin-Soo
    • Asia Pacific Journal of Information Systems / v.17 no.4 / pp.31-59 / 2007
  • We frequently use search engines to find relevant information on the Web but still end up with too much information. In order to solve this problem of information overload, ranking algorithms have been applied to various domains. As more information becomes available in the future, effectively and efficiently ranking search results will become more critical. In this paper, we propose a ranking algorithm for Semantic Web resources, specifically RDF resources. Traditionally, the importance of a particular Web page is estimated based on the number of keywords found in the page, which is subject to manipulation. In contrast, link-analysis methods such as Google's PageRank capitalize on the information inherent in the link structure of the Web graph. PageRank considers a page highly important if it is referred to by many other pages, and the degree of importance also increases if the importance of the referring pages is high. Kleinberg's algorithm is another link-structure-based ranking algorithm for Web pages. Unlike PageRank, Kleinberg's algorithm uses two kinds of scores: the authority score and the hub score. If a page has a high authority score, it is an authority on a given topic and many pages refer to it; a page with a high hub score links to many authoritative pages. As mentioned above, link-structure-based ranking has played an essential role in the World Wide Web (WWW), and many people now recognize its effectiveness and efficiency. On the other hand, as the Resource Description Framework (RDF) data model forms the foundation of the Semantic Web, any information in the Semantic Web can be expressed as an RDF graph, making ranking algorithms for RDF knowledge bases greatly important. The RDF graph consists of nodes and directional links, similar to the Web graph, so link-structure-based ranking seems highly applicable to ranking Semantic Web resources. However, the information space of the Semantic Web is more complex than that of the WWW. For instance, the WWW can be considered one huge class, i.e., a collection of Web pages, with only a single recursive property, a 'refers to' property corresponding to the hyperlinks. The Semantic Web, however, encompasses various kinds of classes and properties, and consequently ranking methods used in the WWW should be modified to reflect the complexity of its information space. Previous research addressed the ranking problem for query results retrieved from RDF knowledge bases. Mukherjea and Bamba modified Kleinberg's algorithm in order to rank Semantic Web resources. They defined the objectivity score and the subjectivity score of a resource, which correspond to Kleinberg's authority score and hub score, respectively. They concentrated on the diversity of properties and introduced property weights to control the influence of one resource on another depending on the characteristic of the property linking the two resources. A node with a high objectivity score is the object of many RDF triples, and a node with a high subjectivity score is the subject of many RDF triples. They developed several Semantic Web systems to validate their technique and reported experimental results verifying the applicability of their method to the Semantic Web. Despite their efforts, however, some limitations remained, which they reported in their paper.
First, their algorithm is useful only when a Semantic Web system represents most of the knowledge pertaining to a certain domain; in other words, the ratio of links to nodes should be high, or the resources should be described in sufficient detail, for their algorithm to work properly. Second, the Tightly-Knit Community (TKC) effect, the phenomenon that pages which are less important but densely connected receive higher scores than pages that are more important but sparsely connected, remains problematic. Third, a resource may receive a high score not because it is actually important but simply because it is very common and consequently has many links pointing to it. In this paper, we examine such ranking problems from a novel perspective and propose a new algorithm which can solve the problems of the previous studies. Our proposed method is based on a class-oriented approach. In contrast to the predicate-oriented approach taken by the previous research, under our approach a user determines the weight of a property by comparing its relative significance to the other properties when evaluating the importance of resources in a specific class. This approach stems from the idea that most queries are intended to find resources belonging to the same class in the Semantic Web, which consists of many heterogeneous classes in RDF Schema. This approach closely reflects the way people evaluate things in the real world, and it turns out to be superior to the predicate-oriented approach for the Semantic Web. Our proposed algorithm can resolve the TKC effect and can further shed light on the other limitations posed by the previous research. In addition, we propose two ways to incorporate data-type properties, which have not been employed previously even when they have some bearing on resource importance. We designed an experiment to show the effectiveness of our proposed algorithm and the validity of the ranking results, which had not been attempted in previous research. We also conducted a comprehensive mathematical analysis, which was overlooked in previous research; this analysis enabled us to simplify the calculation procedure. Finally, we summarize our experimental results and discuss further research issues.
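
To make the link-analysis idea concrete, here is a minimal, hypothetical Python sketch of a property-weighted power-iteration ranking over RDF triples. It illustrates the general technique discussed above, not the authors' class-oriented algorithm; the toy triples, property weights, and damping factor are assumptions.

```python
# Minimal illustrative sketch (not the authors' algorithm): a property-weighted
# power-iteration ranking over RDF triples. Property weights and the damping
# factor are assumptions chosen for illustration.

from collections import defaultdict

triples = [                     # (subject, property, object) toy data
    ("paperA", "cites", "paperB"),
    ("paperC", "cites", "paperB"),
    ("paperB", "authoredBy", "alice"),
]
property_weight = {"cites": 1.0, "authoredBy": 0.5}  # user-assigned weights

def rank(triples, weights, damping=0.85, iters=50):
    nodes = {s for s, _, _ in triples} | {o for _, _, o in triples}
    out = defaultdict(list)                      # outgoing weighted edges per node
    for s, p, o in triples:
        out[s].append((o, weights.get(p, 0.0)))
    score = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for s, edges in out.items():
            total_w = sum(w for _, w in edges)
            if total_w == 0:
                continue
            for o, w in edges:                   # distribute s's score by property weight
                new[o] += damping * score[s] * (w / total_w)
        score = new
    return score

print(sorted(rank(triples, property_weight).items(), key=lambda kv: -kv[1]))
```

In the class-oriented approach described in the abstract, such property weights would be chosen by the user per class of interest rather than set globally.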

The Comparative Study of on Pump CABG during Pulsatile (T-PLS™) and Nonpulsatile (Bio-pump™) Perfusion (관상동맥우회술 시 사용된 박동성펌프(T-PLS™)와 비박동성펌프(Bio-pump™)의 비교연구)

  • Park, Young-Woo; Her, Keun; Lim, Jae-Ung; Shin, Hwa-Kyun; Won, Yong-Soon
    • Journal of Chest Surgery / v.39 no.5 s.262 / pp.354-358 / 2006
  • Background: Pulsatile pumps for extracorporeal circulation have been known to provide better tissue perfusion than non-pulsatile pumps but to be more detrimental to blood cells. This study is intended to examine the risks and benefits of T-PLS™ through a comparison of the clinical effects of T-PLS™ (pulsatile pump) and Bio-pump™ (non-pulsatile pump) used for coronary bypass surgery. Material and Method: The comparison was made on 40 patients who had coronary bypass using T-PLS™ or Bio-pump™ (20 patients for each) from April 2003 to June 2005. All of the surgeries were on-pump beating-heart coronary artery bypass grafts using cardiopulmonary extracorporeal circulation. Preoperative risk factors, intraoperative conditions, and the results were compared. Result: There was no significant difference between the two groups in age, gender ratio, or preoperative risk factors such as history of diabetes, hypertension, smoking, obstructive pulmonary disease, myocardial infarction, and renal failure. Surgery duration, hours of heart-lung machine operation, shunts used, and grafted coronary branches differed little between the two groups. The two groups had similar systolic, diastolic, and mean arterial pressures, but pulse pressure was higher in the T-PLS™ group (46 ± 15 mmHg for T-PLS™ vs 35 ± 13 mmHg for Bio-pump™, p < 0.05). The T-PLS™-operated patients tended to produce more urine during surgery, but the difference was not statistically significant (9.7 ± 3.9 cc/min for T-PLS™ vs 8.9 ± 3.6 cc/min for Bio-pump™, p = 0.20). There was no significant difference in mean duration of respirator use or 24-hour blood loss after surgery between the two groups. Plasma free Hb was lower in the T-PLS™ group (24.5 ± 21.7 mg/dL for T-PLS™ versus 46.8 ± 23.0 mg/dL for Bio-pump™, p < 0.05). There was no significant difference in myocardial infarction, arrhythmia, renal failure, or the morbidity rate of cerebrovascular disease. There was one death after surgery (a death rate of 5%) in the T-PLS™ group, but the difference in death rate was not statistically significant. Conclusion: Coronary bypass was performed with T-PLS™ (a pulsatile flow pump) using a heart-lung machine. There was no unexpected event caused by mechanical error during surgery, and the clinical course was the same as for surgery using Bio-pump™. In addition, surgery using T-PLS™ was found to be less detrimental to blood cells than pulsatile flow has been reputed to be. The authors of this study could confirm the safety of T-PLS™.

Evaluating efficiency of Split VMAT plan for prostate cancer radiotherapy involving pelvic lymph nodes (골반 림프선을 포함한 전립선암 치료 시 Split VMAT plan의 유용성 평가)

  • Mun, Jun Ki; Son, Sang Jun; Kim, Dae Ho; Seo, Seok Jin
    • The Journal of Korean Society for Radiation Therapy / v.27 no.2 / pp.145-156 / 2015
  • Purpose: The purpose of this study is to evaluate the efficiency of Split VMAT planning (contouring the rectum divided into an upper and a lower part to reduce rectal dose) compared to Conventional VMAT planning (contouring the whole rectum) for prostate cancer radiotherapy involving the pelvic lymph nodes. Materials and Methods: A total of 9 cases were enrolled. Each case received radiotherapy with a Split VMAT plan to the prostate and pelvic lymph nodes. Treatment was delivered using a TrueBeam STX (Varian Medical Systems, USA) and planned on Eclipse (Ver. 10.0.42, Varian, USA) with PRO3 (Progressive Resolution Optimizer 10.0.28) and AAA (Anisotropic Analytic Algorithm Ver. 10.0.28). The lower rectum contour was defined as starting 1 cm superior and ending 1 cm inferior to the prostate PTV; the upper rectum is the remainder of the whole rectum excluding the lower rectum. Split VMAT plan parameters consisted of 10 MV coplanar 360° arcs, with collimator angles of 30° and 30°, respectively. An SIB (simultaneous integrated boost) prescription was employed, delivering 50.4 Gy to the pelvic lymph nodes and 63-70 Gy to the prostate in 28 fractions. The mean dose (D_mean) of the whole rectum in the Split VMAT plan was applied as the DVC (dose-volume constraint) for the whole rectum in the Conventional VMAT plan, and all other parameters were set identically to the existing treatment plans. To minimize dose differences that appear randomly during optimization, all plans were optimized and calculated twice using a 0.2 cm grid. All plans were normalized to prostate PTV V100% = 90% or 95%. D_mean of the whole rectum, upper rectum, lower rectum, and bladder, V50% of the upper rectum, total MU, and the H.I. (homogeneity index) and C.I. (conformity index) of the PTV were compared for technique evaluation. All Split VMAT plans were verified by gamma test with portal dosimetry using the EPID. Results: DVH analysis demonstrated differences between the Conventional and Split VMAT plans. The Split VMAT plan was better in D_mean of the whole rectum (maximum difference 134.4 cGy, minimum 43.5 cGy, average 75.6 cGy), in D_mean of the upper rectum (maximum 1113.5 cGy, minimum 87.2 cGy, average 550.5 cGy), in D_mean of the lower rectum (maximum 100.5 cGy, minimum -34.6 cGy, average 34.3 cGy), in D_mean of the bladder (maximum 271 cGy, minimum -55.5 cGy, average 117.8 cGy), and in V50% of the upper rectum (maximum 63.4%, minimum 3.2%, average 23.2%). There was no significant difference in the H.I. and C.I. of the PTV between the two plans. The Split VMAT plan required on average 77 MU more than the Conventional plan. All IMRT verification gamma tests for the Split VMAT plans passed with rates over 90.0% at 2 mm/2%. Conclusion: The Split VMAT plan appeared to be more favorable in most cases than the Conventional VMAT plan for prostate cancer radiotherapy involving the pelvic lymph nodes. By using the Split VMAT planning technique it was possible to reduce the upper rectal dose, and thus the whole rectal dose, compared to Conventional VMAT planning, while also increasing treatment efficiency.
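
Since the abstract evaluates plans with H.I. and C.I., the sketch below shows one commonly used convention for computing such indices from DVH-derived quantities; these are generic definitions and hypothetical numbers, not necessarily the formulas or data used in this paper.

```python
# Minimal sketch (one common convention, not necessarily the one used in this paper):
# homogeneity index (H.I.) and a simple conformity index (C.I.) from DVH quantities.
# All numeric values below are hypothetical placeholders.

def homogeneity_index(d2, d98, d50):
    """ICRU 83 style H.I. = (D2% - D98%) / D50%; closer to 0 is more homogeneous."""
    return (d2 - d98) / d50

def conformity_index(prescription_isodose_volume_cc, ptv_volume_cc):
    """RTOG style C.I. = prescription isodose volume / PTV volume; closer to 1 is more conformal."""
    return prescription_isodose_volume_cc / ptv_volume_cc

# Hypothetical example values (Gy and cc)
print(homogeneity_index(d2=72.1, d98=66.5, d50=70.0))           # ~0.08
print(conformity_index(prescription_isodose_volume_cc=410.0,
                       ptv_volume_cc=395.0))                    # ~1.04
```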


A Study on the Distribution Status and Management Measures of Naturalized Plants Growing in Seongeup Folk Village, Jeju Island (제주 성읍민속마을의 귀화식물 분포현황 및 관리방안)

  • Rho, Jae-Hyun; Oh, Hyun-Kyung; Han, Yun-Hee; Choi, Yung-Hyun; Byun, Mu-Sup; Kim, Young-Suk; Lee, Won-Ho
    • Journal of the Korean Institute of Traditional Landscape Architecture / v.32 no.1 / pp.107-119 / 2014
  • The purpose of this study is to examine the current status of vascular plants and naturalized plants growing in Seongeup Folk Village in Jeju and to compare their distribution patterns and the characteristics of naturalized-plant occurrence with those of other folk villages and of Jeju as a whole, thereby exploring measures to manage the naturalized plants properly. The results of this study are as follows. The vascular plants growing in Seongeup Folk Village total 354 taxa, comprising 93 families, 260 genera, 298 species, 44 varieties, and 12 forms. Among them, the naturalized plants total 55 taxa, including 22 families, 46 genera, 53 species, and 2 varieties, which accounts for 21.7% of the 254 naturalized taxa identified over the whole of Jeju. The naturalization rate in Seongeup Folk Village is 15.5%, which is far higher than the rates in Hahoi Village in Andong, Yangdong Village in Gyeongju, Hangae Village in Seongju, Wanggok Village in Goseong, and Oeam Village in Asan. Among the naturalized plants identified within the target village, the number of those growing in Jeju is 9 taxa, including Silene gallica, Modiola caroliniana, Oenothera laciniata, Oenothera stricta, Apium leptophyllum, Gnaphalium purpureum, Gnaphalium calviceps, Paspalum dilatatum, and Sisyrinchium angustifolium. It is suggested that appropriate management measures considering the introduction pathway and the place of origin of the naturalized plants are necessary. Meanwhile, 3 additional taxa that had not been included in the reference list for Jeju were identified for the first time in Seongeup Folk Village: Bromus sterilis, Cannabis sativa, and Veronica hederaefolia. The naturalized plants identified within the gardens of individual cultural properties number 20 taxa, among which Cerastium glomeratum shows the highest prevalence at 62.5%. On the other hand, the plant communities that require landscape management are Brassica napus and other naturalized plants, including Cosmos bipinnatus, Trifolium repens, Medicago lupulina, Oenothera stricta, O. laciniata, Lotus corniculatus, Lolium perenne, Silene gallica, Hypochaeris radicata, Plantago virginica, Bromus catharticus, and Cerastium glomeratum. As a short-term measure for managing the naturalized plants in Seongeup Folk Village, it is important to identify the current status of Cosmos bipinnatus and Brassica napus, which have been planted for landscape agriculture, and to explore how to use their flowers during the blooming season. It is suggested that Ambrosia artemisiifolia and Hypochaeris radicata, designated as invasive alien plants by the Ministry of Health and Welfare, should be eradicated first, followed by regular monitoring against further invasion, spread, or expansion. As for Hypochaeris radicata in particular, physical control measures need to be explored, such as identifying the habitat density and eradicating the plant. In addition, it is urgent to remove plants growing wild around the high Jeongyi town walls that are visually conspicuous in their greenness, such as Sonchus oleraceus, Houttuynia cordata, Crassocephalum crepidioides, Erigeron annuus, and Lamium purpureum. At the same time, as the distribution and dominance values of the naturalized plants growing in deserted or empty houses are high, it is necessary to find measures to preserve and manage these houses and to use them as lodging places.

Improved Social Network Analysis Method in SNS (SNS에서의 개선된 소셜 네트워크 분석 방법)

  • Sohn, Jong-Soo; Cho, Soo-Whan; Kwon, Kyung-Lag; Chung, In-Jeong
    • Journal of Intelligence and Information Systems / v.18 no.4 / pp.117-127 / 2012
  • Due to the recent expansion of Web 2.0-based services, along with the widespread adoption of smartphones, online social network services are being popularized among users. Online social network services are online community services which enable users to communicate with each other, share information, and expand their human relationships. In social network services, the relations between users are represented by a graph consisting of nodes and links. As the number of users of online social network services increases rapidly, SNS are actively utilized in enterprise marketing, analysis of social phenomena, and so on. Social network analysis (SNA) is a systematic way to analyze social relationships among the members of a social network using network theory. In general, a social network consists of nodes and arcs and is often depicted in a social network diagram, in which nodes represent individual actors within the network and arcs represent relationships between the nodes. With SNA, we can measure relationships among people, such as the degree of intimacy and the intensity of connections, and classify groups. Ever since social networking services (SNS) drew increasing attention from millions of users, numerous studies have been made to analyze their user relationships and messages. Typical representative SNA methods include degree centrality, betweenness centrality, and closeness centrality. Degree centrality analysis does not consider the shortest path between nodes; the shortest path is, however, a crucial factor in betweenness centrality, closeness centrality, and other SNA methods. In previous SNA research, computation time was not too expensive since the analyzed social networks were small. Unfortunately, most SNA methods require significant time to process the relevant data, which makes it difficult to apply them to the ever-increasing SNS data in social network studies. For instance, if the number of nodes in an online social network is n, the maximum number of links is n(n-1)/2; this makes the analysis very expensive, for example, if the number of nodes is 10,000 the number of links can be 49,995,000. Therefore, we propose a heuristic-based method for finding the shortest path among users in the SNS user graph. Using this shortest-path finding method, we show how efficient our proposed approach can be by conducting betweenness centrality analysis and closeness centrality analysis, both of which are widely used in social network studies. Moreover, we devised an enhanced method that adds a best-first search and a preprocessing step to reduce computation time and rapidly find shortest paths in a huge online social network. Best-first search finds the shortest path heuristically, generalizing human experience. As a large number of links is shared by only a few nodes in online social networks, most nodes have relatively few connections; as a result, a node with multiple connections functions as a hub node. When searching for a particular node, looking first at users with numerous links, instead of searching all users indiscriminately, has a better chance of finding the desired node quickly. In this paper, we employ the degree of a user node vn as the heuristic evaluation function in a graph G = (N, E), where N is the set of vertices and E is the set of links between two different nodes.
As the heuristic evaluation function is used, the worst case can occur when the target node is situated at the bottom of a skewed tree; to handle such cases, a preprocessing step is conducted. We then find the shortest path between two nodes in the social network efficiently and analyze the network. For verification of the proposed method, we crawled data on 160,000 people online, constructed a social network, and compared the proposed method with previous methods, best-first search and breadth-first search, in terms of search and analysis time. The suggested method takes 240 seconds to search for nodes, whereas the breadth-first-search-based method takes 1,781 seconds; the suggested method is thus 7.4 times faster. Moreover, for social network analysis, the suggested method is 6.8 times faster for betweenness centrality analysis and 1.8 times faster for closeness centrality analysis. The method proposed in this paper shows the possibility of analyzing a large social network with better time performance. As a result, our method would improve the efficiency of social network analysis, making it particularly useful in studying social trends or phenomena.
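
As an illustration of the degree-guided best-first search described in this abstract, the following minimal Python sketch (not the authors' implementation; the toy graph is an assumption) expands high-degree "hub" users first when searching for a target node.

```python
# Minimal sketch (not the authors' implementation) of a degree-guided best-first
# search: neighbors with many connections (hubs) are expanded first. The toy
# adjacency list below is an illustrative assumption.

import heapq

graph = {                      # adjacency list: user -> friends
    "a": ["b", "c"],
    "b": ["a", "c", "d", "e"],
    "c": ["a", "b"],
    "d": ["b", "f"],
    "e": ["b"],
    "f": ["d"],
}

def best_first_path(graph, start, goal):
    """Greedy best-first search; priority = -degree, so hub nodes are expanded first."""
    frontier = [(-len(graph[start]), start, [start])]
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        for nb in graph[node]:
            if nb not in visited:
                visited.add(nb)
                heapq.heappush(frontier, (-len(graph[nb]), nb, path + [nb]))
    return None

print(best_first_path(graph, "a", "f"))   # e.g. ['a', 'b', 'd', 'f']
```

Because the expansion order is greedy, the returned path is a heuristic estimate rather than a guaranteed shortest path, which is the trade-off the abstract accepts in exchange for speed.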

Product Evaluation Criteria Extraction through Online Review Analysis: Using LDA and k-Nearest Neighbor Approach (온라인 리뷰 분석을 통한 상품 평가 기준 추출: LDA 및 k-최근접 이웃 접근법을 활용하여)

  • Lee, Ji Hyeon; Jung, Sang Hyung; Kim, Jun Ho; Min, Eun Joo; Yeo, Un Yeong; Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.26 no.1 / pp.97-117 / 2020
  • Product evaluation criteria are indicators describing the attributes or values of products, which enable users or manufacturers to measure and understand the products. When companies analyze their products or compare them with competitors, appropriate criteria must be selected for objective evaluation. The criteria should reflect the features of products that consumers considered when they purchased, used, and evaluated the products. However, current evaluation criteria do not reflect the different consumer opinions from product to product. Previous studies tried to use online reviews from e-commerce sites, which reflect consumer opinions, to extract product features and topics and use them as evaluation criteria. However, these approaches still produce criteria that are irrelevant to the products because extracted or improper words are not refined. To overcome this limitation, this research suggests an LDA-k-NN model which extracts possible criteria words from online reviews using LDA and refines them with a k-nearest-neighbor classifier. The proposed approach starts with a preparation phase consisting of 6 steps. First, it collects review data from e-commerce websites. Most e-commerce websites classify the items they sell into high-level, middle-level, and low-level categories; review data for the preparation phase are gathered from each middle-level category and later collapsed to represent a single high-level category. Next, nouns, adjectives, adverbs, and verbs are extracted from the reviews by obtaining part-of-speech information from a morpheme analysis module. After preprocessing, the words for each topic in the reviews are obtained with LDA, and only the nouns among the topic words are chosen as potential criteria words. Then, words are tagged based on their possibility of being criteria for each middle-level category. Next, every tagged word is vectorized by a pre-trained word embedding model. Finally, a k-nearest-neighbor, case-based approach is used to classify each word with its tag. After the preparation phase is set up, the criteria extraction phase is conducted on low-level categories. This phase starts with crawling reviews in the corresponding low-level category. The same preprocessing as in the preparation phase is conducted using the morpheme analysis module and LDA. Possible criteria words are extracted by taking the nouns from the data and vectorizing them with the pre-trained word embedding model. Finally, evaluation criteria are extracted by refining the possible criteria words using the k-nearest-neighbor approach and the reference proportion of each word in the word set. To evaluate the performance of the proposed model, an experiment was conducted with reviews from '11st', one of the biggest e-commerce companies in Korea. Review data came from the 'Electronics/Digital' section, one of the high-level categories in 11st. For the performance evaluation, three other models were compared with the suggested model: the actual criteria of 11st, a model that extracts nouns with the morpheme analysis module and refines them according to word frequency, and a model that extracts nouns from LDA topics and refines them by word frequency. The performance evaluation was set up to predict the evaluation criteria of 10 low-level categories with the suggested model and the 3 models above. Criteria words extracted from each model were combined into a single word set that was used for survey questionnaires. In the survey, respondents chose every item they considered an appropriate criterion for each category, and each model received a score when the chosen words had been extracted by that model.
The suggested model had higher scores than the other models in 8 out of 10 low-level categories. By conducting paired t-tests on the scores of each model, we confirmed that the suggested model shows better performance in 26 tests out of 30. In addition, the suggested model was the best model in terms of accuracy. This research proposes an evaluation-criteria extraction method that combines topic extraction using LDA with refinement using a k-nearest-neighbor approach. This method overcomes the limits of previous dictionary-based models and frequency-based refinement models. This study can contribute to improving review analysis for deriving business insights in the e-commerce market.
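
To show how an LDA-plus-k-NN pipeline of the kind described above can be wired together, here is a minimal, hypothetical Python sketch using scikit-learn; the reviews, seed labels, and the character-n-gram vectors standing in for a pre-trained word embedding are all illustrative assumptions, not the authors' data or models.

```python
# Minimal sketch (not the authors' pipeline): extract candidate criteria words with
# LDA, then refine them with a k-NN classifier over word vectors. The reviews,
# tagged seed words, and the character n-gram vectors (a stand-in for a pre-trained
# word embedding) are illustrative assumptions.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.neighbors import KNeighborsClassifier

reviews = [
    "battery life is great and the screen is bright",
    "screen resolution is sharp, battery drains fast",
    "fast delivery, nice packaging, friendly seller",
]

# 1) LDA over the reviews; top words per topic become candidate criteria words.
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(reviews)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
vocab = vec.get_feature_names_out()
candidates = {vocab[i] for topic in lda.components_ for i in topic.argsort()[-5:]}

# 2) k-NN refinement: classify each candidate as criterion / non-criterion using
#    labeled seed words vectorized by the stand-in embedding.
seed_words = ["battery", "screen", "resolution", "seller", "packaging", "nice"]
seed_labels = [1, 1, 1, 0, 0, 0]          # 1 = evaluation criterion, 0 = not
embed = CountVectorizer(analyzer="char_wb", ngram_range=(2, 3)).fit(seed_words + list(candidates))
knn = KNeighborsClassifier(n_neighbors=3).fit(embed.transform(seed_words).toarray(), seed_labels)

criteria = [w for w in candidates if knn.predict(embed.transform([w]).toarray())[0] == 1]
print(sorted(criteria))
```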

Herbicidal Phytotoxicity under Adverse Environments and Countermeasures (불량환경하(不良環境下)에서의 제초제(除草劑) 약해(藥害)와 경감기술(輕減技術))

  • Kwon, Y.W.; Hwang, H.S.; Kang, B.H.
    • Korean Journal of Weed Science / v.13 no.4 / pp.210-233 / 1993
  • Herbicides have become as indispensable as nitrogen fertilizer in Korean agriculture since 1970. It is estimated that in 1991 more than 40 herbicides were registered for the rice crop and treated to an area 1.41 times the rice acreage; more than 30 herbicides were registered for field crops and treated to 89% of the crop area; and the treated acreage of 3 non-selective foliar-applied herbicides reached 2,555 thousand hectares. During the last 25 years herbicides have benefited Korean farmers substantially in labor, cost, and time of farming. Any herbicide which causes crop injury in ordinary use is not allowed to be registered in most countries. Herbicides, however, can cause crop injury when they are misused, abused, or used under adverse environments. Herbicide use exceeding 100% of the crop acreage implies an increased probability that herbicides are used wrongly or under adverse conditions. This is evidenced by the fact that, in the authors' nationwide surveys of 1992 and 1993, about 25% of farmers had experienced herbicide-caused crop injury more than once during the previous 10 years; one-half of the injury incidences involved a crop yield loss greater than 10%. Crop injury caused by herbicides had not occurred to a serious extent in the 1960s, when fewer than 5 herbicides were used by farmers on less than 12% of the total acreage. Farmers ascribed about 53% of the herbicide injury incidences in their fields to their own misuses, such as overdose, careless or improper application, off-time application, or the wrong choice of herbicide, while 47% of the incidences were mainly due to adverse natural conditions. Such misuses can be reduced to a minimum through enhanced education and extension services for correct use and, although undesirable, through farmers' increased experience with phytotoxicity. The most difficult primary problem arises from the lack of countermeasures for farmers to cope with various adverse environmental conditions. At present almost all herbicides carry "Do not use!" instructions on the label to avoid crop injury under adverse environments. These "Do not use!" situations include sandy, highly percolating, or infertile soils, paddies fed by gushing cool water, poorly draining paddies, terraced paddies, too wet or dry soils, days of abnormally cool or high air temperature, etc. Meanwhile, the cultivated lands are in poor condition: the average organic matter content ranges from 2.5 to 2.8% in paddy soil and from 2.0 to 2.6% in upland soil; the cation exchange capacity ranges from 8 to 12 m.e.; approximately 43% of paddies and 56% of uplands have sandy to sandy-gravel soil; and only 42% of paddy and 16% of upland fields are on flat land. The present situation means that about 40 to 50% of soil-applied herbicides are used on fields where the label instructs "Do not use!". Yet no positive effort has been made for 25 long years by the government or by companies to develop countermeasures. It is a truly complicated social problem. In the 1960s and 1970s a subsidy program to incorporate hillside red clayish soil into sandy paddies, as well as a campaign for increased application of compost to the fields, had been operating; yet the majority of the sandy soils remain sandy, and the program and campaign have been discontinued. With regard to this sandy soil problem, the authors have developed a method of split application of a herbicide onto sandy soil fields; a model case study has been carried out with success and is introduced with its key procedure in this paper. Climate is variable by nature.
Among the climatic components, a sudden fall or rise in temperature is hardly avoidable for a crop plant. Korean spring air temperatures fluctuate considerably; for example, the daily mean air temperature of Inchon city on April 20, an early seeding time for crops, varied from 6.31 to 16.81°C within the ±2 SD range of 30-year records. Seeding early in the season means an increased liability to phytotoxicity, and this will be more evident in direct water-seeding of rice. About 20% of farmers depend on cold groundwater pumped for rice irrigation; if the well is deeper than 70 m, the fresh water may be as cold as about 10°C. The water should be warmed to about 20°C before irrigation, but this is not well practiced by farmers. In addition to the aforementioned adverse conditions, many other aspects need to be amended. Among them, the worst for liquid-spray herbicides is an almost total lack of proper knowledge of nozzle types and of concern for even spraying among administrators, rural extension officers, companies, and farmers. Nozzles and sprayers appropriate for herbicide spraying are not even available on the market. Most people perceive all pesticide sprayers as the same and are concerned mainly with the speed and ease of spraying, not with correct spraying. There are many points to be improved to minimize herbicidal phytotoxicity in Korea and many ways to achieve that goal. First of all, it is suggested that 1) the present evaluation of a new herbicide at standard and double doses in registration trials be extended to standard, double, and triple doses so that the response slope can be exploited when approving the herbicide and recommending different doses for different situations on the label; 2) the government recognize the facts and nature of the present problem, correct the present misperceptions, and develop an appropriate national program for the improvement of soil conditions, spray equipment, and extension manpower and services; 3) researchers enhance research on countermeasures; and 4) herbicide makers and dealers correct their misperceptions and sales policies, develop databases on the detailed use conditions of individual consumers, and serve consumers with direct counsel based on those databases.


The Concentration of Economic Power in Korea (경제력집중(經濟力集中) : 기본시각(基本視角)과 정책방향(政策方向))

  • Lee, Kyu-uck
    • KDI Journal of Economic Policy / v.12 no.1 / pp.31-68 / 1990
  • The concentration of economic power takes the form of one or a few firms controlling a substantial portion of the economic resources and means in a certain economic area. At the same time, to the extent that these firms are owned by a few individuals, resource allocation can be manipulated by them rather than by the impersonal market mechanism. This will impair allocative efficiency, run counter to a decentralized market system and hamper the equitable distribution of wealth. Viewed from the historical evolution of Western capitalism in general, the concentration of economic power is a paradox in that it is a product of the free market system itself. The economic principle of natural discrimination works so that a few big firms preempt scarce resources and market opportunities. Prominent historical examples include trusts in America, Konzern in Germany and Zaibatsu in Japan in the early twentieth century. In other words, the concentration of economic power is the outcome as well as the antithesis of free competition. As long as judgment of the economic system at large depends upon the value systems of individuals, therefore, the issue of how to evaluate the concentration of economic power will inevitably be tinged with ideology. We have witnessed several different approaches to this problem such as communism, fascism and revised capitalism, and the last one seems to be the only surviving alternative. The concentration of economic power in Korea can be summarily represented by the "jaebol," namely, the conglomerate business group, the majority of whose member firms are monopolistic or oligopolistic in their respective markets and are owned by particular individuals. The jaebol has many dimensions in its size, but to sketch its magnitude, the share of the jaebol in the manufacturing sector reached 37.3% in shipment and 17.6% in employment as of 1989. The concentration of economic power can be ascribed to a number of causes. In the early stages of economic development, when the market system is immature, entrepreneurship must fill the gap inherent in the market in addition to performing its customary managerial function. Entrepreneurship of this sort is a scarce resource and becomes even more valuable as the target rate of economic growth gets higher. Entrepreneurship can neither be readily obtained in the market nor exhausted despite repeated use. Because of these peculiarities, economic power is bound to be concentrated in the hands of a few entrepreneurs and their business groups. It goes without saying, however, that the issue of whether the full exercise of money-making entrepreneurship is compatible with social mores is a different matter entirely. The rapidity of the concentration of economic power can also be traced to the diversification of business groups. The transplantation of advanced technology oriented toward mass production tends to saturate the small domestic market quite early and allows a firm to expand into new markets by making use of excess capacity and of monopoly profits. One of the reasons why the jaebol issue has become so acute in Korea lies in the nature of the government-business relationship. The Korean government has set economic development as its foremost national goal and, since then, has intervened profoundly in the private sector. 
Since most strategic industries promoted by the government required huge capacities in technology, capital, and manpower, big firms were favored over smaller firms, and the benefits of industrial policy naturally accrued to large business groups. The concentration of economic power which occurred along the way was, therefore, not necessarily a product of the market system. At the same time, the concentration of ownership in business groups has been left largely intact, as they have customarily met capital requirements by means of debt. The real advantage enjoyed by large business groups lies in the synergy due to multiplant and multiproduct production. Even these effects, however, cannot always be considered socially optimal, as they create disadvantages for other independent firms, for example by foreclosing their markets. Moreover, their fictitious or artificial advantages only aggravate the popular perception that most business groups have accumulated their wealth at the expense of the general public and at the behest of the government. Since Korea now stands at the threshold of establishing a full-fledged market economy along with political democracy, the phenomenon called the concentration of economic power must be correctly understood and the roles of business groups must be accordingly redefined. In doing so, we would do well to take a closer look at Japan, which has experienced the demise of the family-controlled Zaibatsu and success with business groups (Kigyoshudan) whose ownership is dispersed among many firms and ultimately among the general public. The Japanese case cannot be an ideal model, but at least it gives us a good point of departure in that the issue of ownership is at the heart of the matter. In setting the basic direction of public policy aimed at controlling the concentration of economic power, one must harmonize efficiency and equity. Firm size in itself is not a problem if it is dictated by efficiency considerations and if the firm behaves competitively in the market. As long as entrepreneurship is required for continuous economic growth and there is a discrepancy in entrepreneurial capacity among individuals, a concentration of economic power is bound to take place to some degree. Hence, the most effective way of reducing the inefficiency of business groups may be to impose competitive pressure on their activities. Concurrently, unless the concentration of ownership in business groups is scaled down, the seed of social discontent will remain. Nevertheless, the dispersion of ownership requires a number of preconditions, and consequently we must make consistent, long-term efforts on many fronts. We can suggest a long list of policy measures specifically designed to control the concentration of economic power. Whatever the policy may be, however, its intended effects will not be fully realized unless business groups abide by the moral code expected of socially responsible entrepreneurs. This is especially true since the root of the problem of the excessive concentration of economic power lies outside the issue of efficiency, in problems concerning distribution, equity, and social justice.
