
Pharmacokinetic Profiles of Isoniazid and Rifampicin in Korean Tuberculosis Patients (한국인 결핵환자에서 Isoniazid와 Rifampicin의 약동학)

  • Ahn, Seok-Jin;Park, Sang-Joon;Kang, Kyeong-Woo;Suh, Gee-Young;Chung, Man-Pyo;Kim, Ho-Joong;Kwon, O-Jung;Rhee, Chong-H.;Cha, Hee-Soo;Kim, Myoung-Min;Choi, Kyung-Eob
    • Tuberculosis and Respiratory Diseases
    • /
    • v.47 no.4
    • /
    • pp.442-450
    • /
    • 1999
  • Background : Isoniazid (INH) and rifampicin (RFP) are the most effective anti-tuberculosis drugs, making short-course chemotherapy possible. Although the prescribed dosages of INH and RFP in Korea differ from those recommended by the American Thoracic Society, there have been few studies on the pharmacokinetic profiles of INH and RFP in Korean patients who receive INH, RFP, ethambutol (EMB) and pyrazinamide (PZA) simultaneously. Methods : Among the patients with active tuberculosis from Dec. 1997 to July 1998, we selected 17 patients. After an overnight fast, patients were given INH 300mg, RFP 450mg, EMB 800mg and PZA 1500mg daily. Blood samples for the measurement of plasma INH (n=15) and RFP (n=17) levels were drawn at 0, 0.5, 1, 1.5, 2, 4, 6, 8 and 12 hrs, and urine was also collected. INH and RFP levels in the plasma and urine were measured by high-performance liquid chromatography (HPLC). Pharmacokinetic parameters such as peak serum concentration (Cmax), time to reach peak serum concentration (Tmax), half-life, elimination rate constant (Ke), total body clearance (CLtot), nonrenal clearance (CLnr), and renal clearance (CLr) were calculated. Results : 1) Pharmacokinetic parameters of INH were as follows: Cmax; $7.63{\pm}3.20{\mu}g/ml$, Tmax; $0.73{\pm}0.22hr$, half-life; $2.12{\pm}0.84hrs$, Ke; $0.83{\pm}0.15hrs^{-1}$, CLtot; $17.54{\pm}8.89L/hr$, CLnr; $14.74{\pm}8.35L/hr$, CLr; $2.79{\pm}1.31L/hr$. 2) Pharmacokinetic parameters of RFP were as follows: Cmax; $8.93{\pm}3.98{\mu}g/ml$, Tmax; $1.76{\pm}1.13hrs$, half-life; $2.27{\pm}0.54hrs$, Ke; $0.32{\pm}0.08hrs^{-1}$, CLtot; $14.63{\pm}6.60L/hr$, CLr; $1.04{\pm}0.55L/hr$, CLnr; $13.59{\pm}6.21L/hr$. 3) While the correlation between body weight and Cmax of INH was not statistically significant (r=-0.514, p>0.05), Cmax of RFP was significantly affected by the body weight of the patients (r=-0.662, p<0.01).
Conclusion : In Korean patients with tuberculosis, 300mg of INH should be sufficient to reach the ideal peak blood level even in patients over 50kg of body weight. However, 450mg of RFP may not be an adequate dose in patients weighing over 50~60kg.
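The parameters listed above follow from standard noncompartmental formulas (Ke from the terminal log-linear slope, half-life = ln2/Ke, clearance = dose/AUC). A minimal Python sketch; the concentration values, dose, and urinary recovery below are illustrative assumptions, not the study's data:

```python
import math

# Hypothetical plasma concentrations (ug/mL) at post-dose sampling times (hr);
# values are illustrative, not the study's measurements.
times = [0.5, 1, 1.5, 2, 4, 6, 8, 12]
conc  = [5.1, 7.6, 6.9, 5.8, 3.0, 1.5, 0.8, 0.2]

dose_ug = 300 * 1000          # INH 300 mg
urine_amount_ug = 40 * 1000   # hypothetical amount excreted unchanged in urine

# Cmax / Tmax: read directly from the observed profile
cmax = max(conc)
tmax = times[conc.index(cmax)]

# Ke: negative slope of ln(C) vs t over the terminal phase (last 4 points)
t_term, c_term = times[-4:], conc[-4:]
lnc = [math.log(c) for c in c_term]
n = len(t_term)
mean_t, mean_l = sum(t_term) / n, sum(lnc) / n
slope = sum((t - mean_t) * (l - mean_l) for t, l in zip(t_term, lnc)) / \
        sum((t - mean_t) ** 2 for t in t_term)
ke = -slope                     # hr^-1
half_life = math.log(2) / ke    # hr

# AUC(0-inf): linear trapezoid plus extrapolated tail C_last/Ke
auc = sum((c1 + c2) / 2 * (t2 - t1) for (t1, c1), (t2, c2)
          in zip(zip(times, conc), zip(times[1:], conc[1:])))
auc += conc[-1] / ke            # ug*hr/mL

cl_tot = dose_ug / auc / 1000          # L/hr
cl_r   = urine_amount_ug / auc / 1000  # renal clearance, L/hr
cl_nr  = cl_tot - cl_r                 # nonrenal clearance, L/hr
```

With these invented inputs the terminal half-life comes out near 2 hr, the same order as the INH value reported above.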


Differential Diagnosis By Analysis of Pleural Effusion (흉수분석에 의한 질병의 감별진단)

  • Ko, Won-Ki;Lee, Jun-Gu;Jung, Jae-Ho;Park, Mu-Suk;Jeong, Nak-Yeong;Kim, Young-Sam;Yang, Dong-Gyoo;Yoo, Nae-Choon;Ahn, Chul-Min;Kim, Sung-Kyu
    • Tuberculosis and Respiratory Diseases
    • /
    • v.51 no.6
    • /
    • pp.559-569
    • /
    • 2001
  • Background : Pleural effusion is one of the most common clinical manifestations associated with a variety of pulmonary diseases such as malignancy, tuberculosis, and pneumonia. However, there are no laboratory tests that reliably determine the specific cause of a pleural effusion. Therefore, an attempt was made to analyze the various types of pleural effusion and to search for useful laboratory tests to differentiate between the diseases, especially between a malignant and a non-malignant pleural effusion. Methods : 93 patients with a pleural effusion, who visited Severance Hospital from January 1998 to August 1999, were enrolled in this study. Ultrasound-guided thoracentesis was done and a confirmatory diagnosis was made by a Gram stain, bacterial culture, Ziehl-Neelsen stain, a mycobacterial culture, a pleural biopsy and cytology. Results : The male to female ratio was 56 : 37 and the average age was $47.1{\pm}21.8$ years. There were 16 cases with a malignant effusion, 12 cases with a para-malignant effusion, 36 cases with tuberculosis, 22 cases with a para-pneumonic effusion, and 7 cases with transudate. The LDH2 fraction was significantly higher in the para-malignant effusion group than in the para-pneumonic effusion group [$30.6{\pm}6.4%$ and $20.2{\pm}7.5%$, respectively (p<0.05)], and both the LDH1 and LDH2 fractions were significantly higher in the para-malignant effusion group than in the tuberculosis group [$16.4{\pm}7.2%$ vs. $7.6{\pm}4.7%$, and $30.6{\pm}6.4%$ vs. $17.6{\pm}6.3%$, respectively (p<0.05)]. The pleural effusion/serum LDH4 fraction ratio was significantly lower in the malignant effusion group than in the tuberculosis group [$1.5{\pm}0.8$ vs. $2.1{\pm}0.6$, respectively (p<0.05)]. The LDH4 fraction and the pleural effusion/serum LDH4 fraction ratio were significantly lower in the para-malignant effusion group than in the tuberculosis group [$17.0{\pm}5.8%$ vs. $23.5{\pm}4.6%$ and $1.3{\pm}0.4$ vs. $2.1{\pm}0.6$, respectively (p<0.05)]. Conclusion : These results suggest that the LDH isoenzyme profile was the only useful biochemical test for a differential diagnosis of the various diseases. In particular, the most useful index was the pleural effusion/serum LDH4 fraction ratio for distinguishing a para-malignant effusion from a tuberculous effusion.
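The discriminating index above is simply a ratio of isoenzyme fractions. A small sketch; the serum LDH4 values are hypothetical, chosen only so that the ratios land near the reported group means:

```python
def ldh4_ratio(pleural_ldh4_pct, serum_ldh4_pct):
    """Pleural-effusion/serum LDH4 fraction ratio (dimensionless)."""
    return pleural_ldh4_pct / serum_ldh4_pct

# Group means reported above: para-malignant ~1.3, tuberculous ~2.1.
# The serum denominators here are invented for illustration.
para_malignant = ldh4_ratio(17.0, 13.1)
tuberculous    = ldh4_ratio(23.5, 11.2)
```

A lower ratio thus points toward a para-malignant effusion, a higher one toward a tuberculous effusion, per the conclusion above.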


Diagnostic Efficacy of FDG-PET Imaging in Solitary Pulmonary Nodule (고립성폐결절의 진단시 FDG-PET의 임상적 유용성에 관한 연구)

  • Cheon, Eun Mee;Kim, Byung-Tae;Kwon, O. Jung;Kim, Hojoong;Chung, Man Pyo;Rhee, Chong H.;Han, Yong Chol;Lee, Kyung Soo;Shim, Young Mog;Kim, Jhingook;Han, Jungho
    • Tuberculosis and Respiratory Diseases
    • /
    • v.43 no.6
    • /
    • pp.882-893
    • /
    • 1996
  • Background : Over one-third of solitary pulmonary nodules are malignant, but most malignant SPNs are in the early stages at diagnosis and can be cured by surgical removal. Therefore, early diagnosis of a malignant SPN is essential for saving the patient's life. The incidence of pulmonary tuberculosis in Korea is somewhat higher than that of other countries, and a large number of SPNs are found to be tuberculomas. Most primary physicians tend to regard a newly detected solitary pulmonary nodule as a tuberculoma based only on noninvasive imaging such as CT, and they prefer clinical observation without further invasive procedures if the findings suggest benignancy. Many kinds of noninvasive procedures for confirmatory diagnosis have been introduced to differentiate malignant SPNs from benign ones, but none of them has been satisfactory. FDG-PET is a unique tool for imaging and quantifying the status of glucose metabolism. On the basis that glucose metabolism is increased in malignant transformed cells compared with normal cells, FDG-PET is considered a promising noninvasive procedure for differentiating malignant SPNs from benign SPNs. Therefore, we performed FDG-PET in patients with a solitary pulmonary nodule and evaluated its diagnostic accuracy in the diagnosis of malignant SPNs. Method : 34 patients with a solitary pulmonary nodule less than 6 cm in diameter who visited Samsung Medical Center from September, 1994 to September, 1995 were evaluated prospectively. Simple chest roentgenography, chest computed tomography, and FDG-PET scans were performed for all patients. The results of FDG-PET were evaluated against the final diagnosis confirmed by sputum study, PCNA, fiberoptic bronchoscopy, or thoracotomy. Results : (1) There was no significant difference in nodule size between malignant ($3.1{\pm}1.5cm$) and benign nodules ($2.8{\pm}1.0cm$) (p>0.05).
(2) Peak SUV (standardized uptake value) of malignant nodules ($6.9{\pm}3.7$) was significantly higher than that of benign nodules ($2.7{\pm}1.7$), and time-activity curves showed a continuous increase in malignant nodules. (3) Three false negative cases were found among the eighteen malignant nodules by the FDG-PET imaging study, and all three were nonmucinous bronchioloalveolar carcinomas less than 2 cm in diameter. (4) FDG-PET imaging resulted in 83% sensitivity, 100% specificity, 100% positive predictive value and 84% negative predictive value. Conclusion : FDG-PET imaging is a new noninvasive diagnostic method for the solitary pulmonary nodule that has a high accuracy of differential diagnosis between malignant and benign nodules. FDG-PET imaging could be used for the differential diagnosis of an SPN that is not properly diagnosed with conventional methods before thoracotomy. Considering its high accuracy, this procedure may play an important role in making the decision to perform thoracotomy in difficult cases.
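The four reported indices follow directly from the confusion counts given in the abstract (18 malignant nodules with 3 false negatives; the remaining 16 nodules benign and, given the 100% specificity, all read negative). A sketch of the arithmetic:

```python
# Confusion counts reconstructed from the abstract's figures.
tp, fn = 18 - 3, 3   # malignant nodules correctly / wrongly classified
tn, fp = 16, 0       # benign nodules (specificity was 100%, so fp = 0)

sensitivity = tp / (tp + fn)   # 15/18
specificity = tn / (tn + fp)   # 16/16
ppv = tp / (tp + fp)           # positive predictive value, 15/15
npv = tn / (tn + fn)           # negative predictive value, 16/19
```

Rounding these gives the 83% / 100% / 100% / 84% quoted in result (4).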


Effects of Molecular Weight of Polyethylene Glycol on the Dimensional Stabilization of Wood (Polyethylene Glycol의 분자량(分子量)이 목재(木材)의 치수 안정화(安定化)에 미치는 영향(影響))

  • Cheon, Cheol;Oh, Joung Soo
    • Journal of Korean Society of Forest Science
    • /
    • v.71 no.1
    • /
    • pp.14-21
    • /
    • 1985
  • This study was carried out to prevent the devaluation of wood and wood products caused by anisotropy, hygroscopicity, shrinkage and swelling - properties peculiar to wood - and to improve the utility of wood by emphasizing its naturally beautiful figure, by developing dimensional stabilization techniques with PEG, which is cheap, non-toxic, and easy to impregnate. The effects of PEG molecular weight (200, 400, 600, 1000, 1500, 2000, 4000, 6000) and species (Pinus densiflora S. et Z., Larix leptolepis Gordon., Cryptomeria japonica D. Don., Cornus controversa Hemsl., Quercus variabilis Blume., Prunus sargentii Rehder.) were examined. The results were as follows; 1) PEG loading showed its maximum value (137.22%, Pinus densiflora, in PEG 400), while the others showed a relatively slow decrease. The lower the specific gravity, the greater the polymer loading. 2) Bulking coefficient showed no particular correlation with specific gravity and, for the most part, reached its maximum in PEG 600, except that the bulking coefficient of Quercus variabilis was distributed in the range of 12-18% for PEG 400-2000. In general, the bulking coefficient of hardwood was higher than that of softwood. 3) Although there were some exceptions according to species, volumetric swelling reduction was greatest in PEG 400. Its value for Cryptomeria japonica was the greatest at 95.0%, and the others indicated more than 80% except for Prunus sargentii, while volumetric swelling reduction decreased to less than 70% as the molecular weight increased beyond 1000. 4) The relative effectiveness of hardwood with high specific gravity was outstandingly higher than that of softwood.
In general, the relative effectiveness of low molecular weight PEG was superior to that of high molecular weight PEG, except that Quercus variabilis showed more than 1.6 over the whole molecular weight range, while there was no significant difference once the molecular weight exceeded 4000. 5) According to the analysis of the results mentioned above, the dimensional stabilization of hardwood was more effective than that of softwood. Although volumetric swelling reduction was greatest at a molecular weight of 400, in view of polymer loading, bulking coefficient, reduction of swelling and relative effectiveness, it is desirable to use a mixture of PEGs with molecular weights in the range of 200-1500. For practical use, further study on the effects of the mixing ratio on the bulking coefficient, reduction of swelling and relative effectiveness is recommended.
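Volumetric swelling reduction (often called anti-swelling efficiency) compares the swelling of treated and untreated specimens. A sketch; the 95.0% figure for Cryptomeria japonica is from the abstract, but the raw swelling inputs are invented to match it:

```python
def swelling_reduction(s_untreated, s_treated):
    """Anti-swelling efficiency (%): how much the PEG treatment reduces
    volumetric swelling relative to an untreated control specimen."""
    return (s_untreated - s_treated) / s_untreated * 100

# Hypothetical volumetric swelling values (%): untreated wood swelling 10.0%
# vs the same wood treated with PEG 400 swelling only 0.5%.
best_case = swelling_reduction(10.0, 0.5)   # reproduces the paper's 95.0%
```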


Studies on a Plan for Afforestation at Tong-ri Beach Resort(II) -Analyses of Crown Amounts and Soil Properties in the Disaster-damage Prevention Forests of Pinus thunbergii PARL., the Valuation on Soil Properties for Planting and Planning for Afforestation- (통리(桶里) 해수욕장(海水浴場) 녹지대(綠地帶) 조성(造成)에 관(關)한 연구(硏究)(II) -곰솔 해안방재림(海岸防災林)의 수관량(樹冠量) 및 토양분석(土壤分析), 식재기반평가(植栽基盤評價) 및 녹지대계획(綠地帶計劃)-)

  • Cho, Hi Doo
    • Journal of Korean Society of Forest Science
    • /
    • v.77 no.3
    • /
    • pp.303-314
    • /
    • 1988
  • Tong-ri beach does not have enough vegetation either to be enjoyed by sea bathers or to prevent disaster damage, though the mixed forest near the beach can perform these functions and the old forest of Pinus thunbergii PARL. near the beach does so to a small degree. Therefore it is very urgent to plant more trees near the beach for bathers and for disaster-damage prevention. This study was carried out to plan an afforestation, reporting on the crown amounts and soil properties of the disaster-damage prevention forests of P. thunbergii PARL. planted on the coastal sand dunes in 1970 and 1976, and on the evaluation of the soil properties of the lands near the beach in order to select the afforestation site. The results are as follows : 1. In the disaster-damage prevention forests, crown surface area and crown volume became increasingly greater in proportion to the height. With respect to D.B.H., crown volume also became increasingly greater, but crown surface area was directly proportional. 2. In comparison with the soil characteristics of the sand dune, those of the forests were larger in OM, T-N and avail. $SiO_2$, almost the same in avail. $P_2O_5$, but smaller in exchangeable cations : K, Ca, Mg and Na. 3. EC, Cl and pH showed small values in the forest soils, but CEC showed a large value in those soils. 4. The above facts showed that the forests fulfill their functions of preventing disaster damage and improve their soil properties. 5. The forests have naturally been thinned up to 34% in 17 years and 39% in 11 years, and one can easily pass through the forest (planted in 1970) because of its sufficient clear length (2.71m) and its space to pass. 6. A plan for afforestation was drawn up after judging several sites by the evaluation of the soil properties and considering the best relaxation and the prevention of the various disaster damages reported in the previous issue. 7. The afforestation should maintain an appropriate density for best relaxation and disaster-damage prevention.


If This Brand Were a Person, or Anthropomorphism of Brands Through Packaging Stories (가설품패시인(假设品牌是人), 혹통과고사포장장품패의인화(或通过故事包装将品牌拟人化))

  • Kniazeva, Maria;Belk, Russell W.
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.3
    • /
    • pp.231-238
    • /
    • 2010
  • The anthropomorphism of brands, defined as seeing human beings in brands (Puzakova, Kwak, and Rocereto, 2008), is the focus of this study. Specifically, the research objective is to understand the ways in which brands are rendered humanlike. By analyzing consumer readings of stories found on food product packages we intend to show how marketers and consumers humanize a spectrum of brands and create meanings. Our research question considers the possibility that a single brand may host multiple or single meanings, associations, and personalities for different consumers. We start by highlighting the theoretical and practical significance of our research, explain why we turn our attention to packages as vehicles of brand meaning transfer, then describe our qualitative methodology, discuss findings, and conclude with a discussion of managerial implications and directions for future studies. The study was designed to directly expose consumers to potential vehicles of brand meaning transfer and then engage these consumers in free verbal reflections on their perceived meanings. Specifically, we asked participants to read non-nutritional stories on selected branded food packages, in order to elicit data about received meanings. Packaging has yet to receive due attention in consumer research (Hine, 1995). Until now, attention has focused solely on its utilitarian function and has generated a body of research that has explored the impact of nutritional information and claims on consumer perceptions of products (e.g., Loureiro, McCluskey and Mittelhammer, 2002; Mazis and Raymond, 1997; Nayga, Lipinski and Savur, 1998; Wansink, 2003). An exception is a recent study that turns its attention to non-nutritional packaging narratives and treats them as cultural productions and vehicles for mythologizing the brand (Kniazeva and Belk, 2007).
The next step in this stream of research is to explore how such mythologizing activity affects brand personality perception and how these perceptions relate to consumers. These are the questions that our study aimed to address. We used in-depth interviews to help overcome the limitations of quantitative studies. Our convenience sample was formed with the objective of providing demographic and psychographic diversity in order to elicit variations in consumer reflections to food packaging stories. Our informants represent middle-class residents of the US and do not exhibit extreme alternative lifestyles described by Thompson as "cultural creatives" (2004). Nine people were individually interviewed on their food consumption preferences and behavior. Participants were asked to have a look at the twelve displayed food product packages and read all the textual information on the package, after which we continued with questions that focused on the consumer interpretations of the reading material (Scott and Batra, 2003). On average, each participant reflected on 4-5 packages. Our in-depth interviews lasted one to one and a half hours each. The interviews were tape recorded and transcribed, providing 140 pages of text. The products came from local grocery stores on the West Coast of the US and represented a basic range of food product categories, including snacks, canned foods, cereals, baby foods, and tea. The data were analyzed using procedures for developing grounded theory delineated by Strauss and Corbin (1998). As a result, our study does not support the notion of one brand/one personality as assumed by prior work. Thus, we reveal multiple brand personalities peacefully cohabiting in the same brand as seen by different consumers, despite marketer attempts to create more singular brand personalities. We extend Fournier's (1998) proposition, that one's life projects shape the intensity and nature of brand relationships. 
We find that these life projects also affect perceived brand personifications and meanings. While Fournier provides a conceptual framework that links together consumers’ life themes (Mick and Buhl, 1992) and relational roles assigned to anthropomorphized brands, we find that consumer life projects mold both the ways in which brands are rendered humanlike and the ways in which brands connect to consumers' existential concerns. We find two modes through which brands are anthropomorphized by our participants. First, brand personalities are created by seeing them through perceived demographic, psychographic, and social characteristics that are to some degree shared by consumers. Second, brands in our study further relate to consumers' existential concerns by either being blended with consumer personalities in order to connect to them (the brand as a friend, a family member, a next door neighbor) or by distancing themselves from the brand personalities and estranging them (the brand as a used car salesman, a "bunch of executives.") By focusing on food product packages, we illuminate a very specific, widely-used, but little-researched vehicle of marketing communication: brand storytelling. Recent work that has approached packages as mythmakers, finds it increasingly challenging for marketers to produce textual stories that link the personalities of products to the personalities of those consuming them, and suggests that "a multiplicity of building material for creating desired consumer myths is what a postmodern consumer arguably needs" (Kniazeva and Belk, 2007). Used as vehicles for storytelling, food packages can exploit both rational and emotional approaches, offering consumers either a "lecture" or "drama" (Randazzo, 2006), myths (Kniazeva and Belk, 2007; Holt, 2004; Thompson, 2004), or meanings (McCracken, 2005) as necessary building blocks for anthropomorphizing their brands. 
The craft of giving birth to brand personalities is in the hands of writers/marketers and in the minds of readers/consumers who individually and sometimes idiosyncratically put a meaningful human face on a brand.

An Intelligent Decision Support System for Selecting Promising Technologies for R&D based on Time-series Patent Analysis (R&D 기술 선정을 위한 시계열 특허 분석 기반 지능형 의사결정지원시스템)

  • Lee, Choongseok;Lee, Suk Joo;Choi, Byounggu
    • Journal of Intelligence and Information Systems
    • /
    • v.18 no.3
    • /
    • pp.79-96
    • /
    • 2012
  • As the pace of competition dramatically accelerates and the complexity of change grows, a variety of studies have been conducted to improve firms' short-term performance and to enhance firms' long-term survival. In particular, researchers and practitioners have paid attention to identifying promising technologies that bring competitive advantage to a firm. Discovery of a promising technology depends on how a firm evaluates the value of technologies, and thus many evaluation methods have been proposed. Experts' opinion based approaches have been widely accepted for predicting the value of technologies. While this approach provides in-depth analysis and ensures the validity of analysis results, it is usually cost- and time-ineffective and is limited to qualitative evaluation. Considerable studies attempt to forecast the value of technology by using patent information to overcome the limitations of the experts' opinion based approach. Patent based technology evaluation has served as a valuable assessment approach for technological forecasting because a patent contains a full and practical description of technology with uniform structure. Furthermore, it provides information that is not divulged in any other source. Although the patent information based approach has contributed to our understanding of the prediction of promising technologies, it has some limitations because prediction has been made based on past patent information, and the interpretations of patent analyses are not consistent. In order to fill this gap, this study proposes a technology forecasting methodology that integrates the patent information approach and an artificial intelligence method. The methodology consists of three modules : evaluation of technological promise, implementation of a technology value prediction model, and recommendation of promising technologies. In the first module, technological promise is evaluated from three different and complementary dimensions; impact, fusion, and diffusion perspectives.
The impact of technologies refers to their influence on future technology development and improvement, and is also clearly associated with their monetary value. The fusion of technologies denotes the extent to which a technology fuses different technologies, and represents the breadth of search underlying the technology. The fusion of technologies can be calculated based on technology or patent, thus this study measures two types of fusion index; fusion index per technology and fusion index per patent. Finally, the diffusion of technologies denotes their degree of applicability across scientific and technological fields. In the same vein, diffusion index per technology and diffusion index per patent are considered respectively. In the second module, the technology value prediction model is implemented using an artificial intelligence method. This study uses the values of the five indexes (i.e., impact index, fusion index per technology, fusion index per patent, diffusion index per technology and diffusion index per patent) at different times (e.g., t-n, t-n-1, t-n-2, ${\cdots}$) as input variables. The output variables are the values of the five indexes at time t, which are used for learning. The learning method adopted in this study is the backpropagation algorithm. In the third module, this study recommends final promising technologies based on the analytic hierarchy process (AHP). AHP provides the relative importance of each index, leading to a final promising index for each technology. The applicability of the proposed methodology is tested using U.S. patents in international patent class G06F (i.e., electronic digital data processing) from 2000 to 2008. The results show that the mean absolute error for predictions produced by the proposed methodology is lower than that produced by multiple regression analysis for the fusion indexes. However, for the other indexes, the mean absolute error of the proposed methodology is slightly higher than that of multiple regression analysis.
These unexpected results may be explained, in part, by the small number of patents. Since this study only uses patent data in class G06F, the number of sample patents is relatively small, leading to learning that is too incomplete to satisfy the complex artificial intelligence structure. In addition, fusion index per technology and impact index are found to be important criteria for predicting promising technologies. This study attempts to extend the existing knowledge by proposing a new methodology for predicting technology value that integrates patent information analysis and an artificial intelligence network. It helps managers who want to plan technology development, and policy makers who want to implement technology policy, by providing a quantitative prediction methodology. In addition, this study could help other researchers by providing a deeper understanding of the complex technological forecasting field.
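The third module's AHP step can be sketched as follows: a pairwise-comparison matrix over the five indexes yields importance weights (here approximated by power iteration toward the principal eigenvector), and a weighted sum of a technology's predicted index values gives its final promising score. The comparison matrix and the two technologies' index values below are hypothetical, not from the paper:

```python
def ahp_weights(pairwise, iters=100):
    """Approximate the principal eigenvector of a positive pairwise-comparison
    matrix by power iteration, normalized so the weights sum to 1."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w

# Hypothetical expert judgments among the five indexes, in the order:
# impact, fusion/technology, fusion/patent, diffusion/technology, diffusion/patent
pairwise = [
    [1,   3,   3,   5,   5],
    [1/3, 1,   1,   3,   3],
    [1/3, 1,   1,   3,   3],
    [1/5, 1/3, 1/3, 1,   1],
    [1/5, 1/3, 1/3, 1,   1],
]
weights = ahp_weights(pairwise)

def promising_score(index_values, weights):
    """Final promising index: weighted sum of a technology's five indexes."""
    return sum(v * w for v, w in zip(index_values, weights))

# Two hypothetical technologies' predicted index values at time t (0..1 scale).
tech_a = [0.9, 0.4, 0.5, 0.3, 0.2]   # high impact
tech_b = [0.2, 0.8, 0.7, 0.6, 0.5]   # high fusion/diffusion
ranked = sorted([("A", promising_score(tech_a, weights)),
                 ("B", promising_score(tech_b, weights))],
                key=lambda x: -x[1])
```

With this matrix the impact index dominates, so technology A ranks first; a different set of expert judgments would of course shift the ranking.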

Context Sharing Framework Based on Time Dependent Metadata for Social News Service (소셜 뉴스를 위한 시간 종속적인 메타데이터 기반의 컨텍스트 공유 프레임워크)

  • Ga, Myung-Hyun;Oh, Kyeong-Jin;Hong, Myung-Duk;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.39-53
    • /
    • 2013
  • The emergence of internet technology and SNS has increased the information flow and has changed the way people communicate from one-way to two-way communication. Users not only consume and share information, they can also create and share it among their friends across a social network service. This has made social media one of the most important communication tools, a category that also includes Social TV. Social TV is a form in which people can watch a TV program and at the same time share any information or its content with friends through social media. Social news is getting popular and is also known as participatory social media: it creates influence on user interest through the Internet by representing society's issues, and builds news credibility based on users' reputations. However, conventional platforms in news services focus only on the news recommendation domain. Recent developments in SNS have changed this landscape by allowing users to share and disseminate the news, but conventional platforms do not provide any special way for news to be shared. Currently, social news services only allow users to access the entire news item; they cannot access the part of the content related to their interest. For example, if a user is interested in only part of a news item and wants to share that content, it is still hard to do so. In the worst case, users might understand the news in a different context. To solve this, a social news service must provide a method to supply additional information. For example, Yovisto, an academic video search service, provides time-dependent metadata from videos. Users can search and watch parts of the video content according to the time-dependent metadata, and can also share content with a friend in social media. Yovisto applies a method that divides or synchronizes a video whenever the slide presentation changes to another page.
However, this method cannot be employed on news video, since news video does not incorporate any slide presentation. A segmentation method is therefore required to separate the news video and to create the time-dependent metadata. In this paper, a time-dependent metadata-based framework is proposed to segment news contents and to provide time-dependent metadata so that users can use context information to communicate with their friends. The transcript of the news is divided by using the proposed story segmentation method. We provide a tag to represent the entire content of the news, and a sub-tag to indicate each segmented part of the news, which includes the starting time of that segment. The time-dependent metadata helps users track the news information. It also allows them to leave a comment on each segment of the news. Users may also share the news based on the time metadata, either as segmented news or as a whole; this helps the recipient understand the shared news. To demonstrate the performance, we evaluate the story segmentation accuracy and also the tag generation. For this purpose, we measured the accuracy of the story segmentation through semantic similarity and compared it to a benchmark algorithm. Experimental results show that the proposed method outperforms the benchmark algorithms in terms of the accuracy of story segmentation. It is important to note that sub-tag accuracy is the most important part of the proposed framework, since sub-tags are what allow a specific news context to be shared with others. To extract more accurate sub-tags, we created a stop-word list of terms not related to the content of the news, such as the name of the anchor or reporter, and applied it to the framework. We have analyzed the accuracy of the tags and sub-tags which represent the context of the news. From the analysis, the proposed framework appears helpful for users sharing their opinions with context information in social media and social news.
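The story-segmentation idea can be sketched as follows. This is a simplified stand-in, using lexical Jaccard similarity between adjacent transcript sentences rather than the paper's semantic-similarity method, with an invented threshold and a toy transcript; each resulting segment carries its start time, as a sub-tag would:

```python
def tokens(sentence):
    # Naive tokenization; a real system would also apply the stop-word list.
    return set(sentence.lower().split())

def jaccard(a, b):
    """Lexical overlap between two token sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def segment(transcript, threshold=0.1):
    """transcript: list of (start_time_sec, sentence) pairs.
    Start a new segment wherever similarity to the previous sentence drops
    below the threshold (i.e., the story appears to have changed)."""
    segments = [[transcript[0]]]
    for prev, cur in zip(transcript, transcript[1:]):
        if jaccard(tokens(prev[1]), tokens(cur[1])) < threshold:
            segments.append([cur])        # similarity dropped: new story
        else:
            segments[-1].append(cur)      # same story continues
    return [{"start": seg[0][0], "sentences": [s for _, s in seg]}
            for seg in segments]

# Toy news transcript: two weather sentences, then a sports story.
news = [
    (0,  "the storm moved across the coast overnight"),
    (12, "the storm damaged homes along the coast"),
    (25, "in sports the home team won the championship game"),
]
segs = segment(news)
```

Here the two storm sentences share enough vocabulary to stay in one segment, while the sports sentence starts a second segment beginning at 25 seconds.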

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.123-132
    • /
    • 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyroscope, ambient light sensor, proximity sensor, and so on, there have been many research works on making use of these sensors to create valuable applications. Human activity recognition is one such application that is motivated by various welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One of the challenges faced when using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors used is restricted, it is difficult to realize a highly accurate activity recognizer or classifier because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty gets especially severe when the number of different activity classes to be distinguished is very large. In this paper, we show that a fairly accurate classifier can be built that can distinguish ten different activities by using only a single sensor, i.e., the smartphone accelerometer. The approach that we take to dealing with this ten-class problem is to use the ensemble of nested dichotomies (END) method that transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all the classes is split into two subsets of classes by a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we can obtain a binary tree where each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions.
Depending on how the set of classes is split into two subsets at each node, the final tree can differ. Since some classes may be correlated, a particular tree may perform better than the others; however, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we used another ensemble classifier, the random forest. A random forest is built by repeatedly generating decision trees, each time with a different random subset of features on a bootstrap sample. By combining bagging with random feature-subset selection, a random forest enjoys more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that can handle a multi-class problem with high accuracy. The ten activity classes distinguished in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, the minimum, and the standard deviation of the vector magnitude within a time window covering the last 2 seconds. For experiments comparing the performance of END with that of other methods, accelerometer data were collected every 0.1 seconds for 2 minutes for each activity from 5 volunteers. 
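The window-level features named in the abstract (per-sample vector magnitude, plus the maximum, minimum, and standard deviation of magnitude over the last 2 seconds) are simple to compute. A minimal sketch, assuming 20 samples per window at the stated 0.1 s sampling interval; the function name and the stationary-phone example values are mine, not the paper's:

```python
import math
import statistics

def window_features(samples):
    """Compute window-level features from a list of (x, y, z)
    accelerometer readings covering the last 2 seconds
    (20 samples at a 0.1 s sampling interval).
    Returns (current magnitude, max, min, std of magnitudes)."""
    mags = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in samples]
    return (mags[-1], max(mags), min(mags), statistics.pstdev(mags))

# A phone lying still reports roughly 1 g (about 9.8 m/s^2) on one axis,
# so all magnitudes are equal and the window's std is zero.
readings = [(0.0, 0.0, 9.8)] * 20
cur, hi, lo, sd = window_features(readings)
```

The spread statistics are what let the classifier separate, say, 'Standing' (near-zero std) from 'Hobbling' (irregular, high-variance magnitudes) even though their mean magnitudes are similar.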
Among the 5,900 ($=5{\times}(60{\times}2-2)/0.1$) samples collected for each activity (the data for the first 2 seconds are discarded because they lack a full time window), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with other similar activities, END was found to classify all ten activities with a fairly high accuracy of 98.4%. By comparison, the accuracies achieved by a decision tree, a k-nearest-neighbor classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.

Evaluating Reverse Logistics Networks with Centralized Centers : Hybrid Genetic Algorithm Approach (집중형센터를 가진 역물류네트워크 평가 : 혼합형 유전알고리즘 접근법)

  • Yun, YoungSu
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.55-79
    • /
    • 2013
  • In this paper, we propose a hybrid genetic algorithm (HGA) approach to effectively solve the reverse logistics network with centralized centers (RLNCC). In the proposed HGA approach, a genetic algorithm (GA) is used as the main algorithm. For implementing the GA, a new bit-string representation scheme using 0 and 1 values is suggested, which makes it easy to generate the initial population. As genetic operators, the elitist strategy in enlarged sampling space developed by Gen and Chang (1997), a new two-point crossover operator, and a new random mutation operator are used for selection, crossover, and mutation, respectively. For the hybrid concept of the GA, an iterative hill climbing method (IHCM) developed by Michalewicz (1994) is inserted into the HGA search loop. The IHCM is a local search technique that precisely explores the space to which the GA search has converged. The RLNCC is composed of collection centers, remanufacturing centers, redistribution centers, and secondary markets in a reverse logistics network. Of these, exactly one collection center, one remanufacturing center, one redistribution center, and one secondary market should be opened. Some assumptions are made for effectively implementing the RLNCC. The RLNCC is represented by a mixed integer programming (MIP) model using indexes, parameters, and decision variables. The objective function of the MIP model is to minimize the total cost, which consists of transportation cost, fixed cost, and handling cost. The transportation cost arises from transporting the returned products between the centers and secondary markets. The fixed cost is determined by the opening or closing decision at each center and secondary market. That is, if there are three collection centers (with opening costs of 10.5, 12.1, and 8.9 for collection centers 1, 2, and 3, respectively), and collection center 1 is opened while the others are closed, then the fixed cost is 10.5. 
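The bit-string encoding and the fixed-cost calculation in the example above can be sketched directly. The opening costs 10.5, 12.1, and 8.9 are the abstract's own example; the function names and the penalty-free feasibility check are illustrative, not the paper's implementation:

```python
# Opening costs for the three candidate collection centers
# in the abstract's example.
collection_costs = [10.5, 12.1, 8.9]

def fixed_cost(bits, costs):
    """Fixed cost of an open/close decision encoded as a 0/1
    bit-string: sum the opening cost of every facility whose
    bit is set to 1."""
    return sum(c for b, c in zip(bits, costs) if b == 1)

def feasible(bits):
    """The RLNCC requires exactly one facility open per stage."""
    return sum(bits) == 1

chromosome = [1, 0, 0]  # open collection center 1, close the others
assert feasible(chromosome)
cost = fixed_cost(chromosome, collection_costs)  # 10.5, as in the abstract
```

The same 0/1 encoding extends naturally across the remanufacturing, redistribution, and secondary-market stages by concatenating one such segment per stage.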
The handling cost is the cost of treating the products returned from customers at each center and secondary market opened at each RLNCC stage. The RLNCC is solved by the proposed HGA approach. In numerical experiments, the proposed HGA and a conventional competing approach are compared using various measures of performance. As the conventional competing approach, the GA approach of Yun (2013) is used; unlike the proposed HGA approach, it has no local search technique such as the IHCM. As measures of performance, CPU time, optimal solution, and optimal setting are used. Two types of the RLNCC with different numbers of customers, collection centers, remanufacturing centers, redistribution centers, and secondary markets are presented for comparing the performance of the HGA and GA approaches. The MIP models for the two types of the RLNCC are programmed in Visual Basic Version 6.0, and the computing environment is an IBM-compatible PC with a 3.06 GHz CPU and 1 GB RAM running Windows XP. The parameters used in the HGA and GA approaches are a total of 10,000 generations, a population size of 20, a crossover rate of 0.5, a mutation rate of 0.1, and a search range of 2.0 for the IHCM. A total of 20 runs are made to eliminate the randomness of the HGA and GA searches. With performance comparisons, network representations by opening/closing decisions, and convergence processes for the two types of the RLNCC, the experimental results show that the HGA achieves significantly better optimal solutions than the GA, though the GA is slightly quicker in terms of CPU time. Finally, the proposed HGA approach proves more efficient than the conventional GA approach on both types of the RLNCC, since the former has both a GA search process and a local search process as an additional search scheme, while the latter has a GA search process alone. 
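The role of the local search inside the HGA loop can be illustrated with a simple bit-flip hill climber. This is a simplified stand-in for Michalewicz's IHCM, not a reproduction of it; the toy objective, penalty weight, and iteration count are all my own assumptions chosen to match the bit-string encoding and the "exactly one facility open per stage" constraint described above:

```python
import random

def hill_climb(solution, objective, rng, iters=100):
    """Minimal bit-flip hill climbing, a simplified stand-in for the
    IHCM local search: repeatedly flip one random bit and keep the
    move only if it lowers the objective (here, total cost)."""
    best = list(solution)
    best_cost = objective(best)
    for _ in range(iters):
        cand = list(best)
        i = rng.randrange(len(cand))
        cand[i] = 1 - cand[i]
        c = objective(cand)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost

# Toy objective: opening costs of the facilities opened, plus a
# penalty whenever the "exactly one open per stage" rule is violated.
costs = [10.5, 12.1, 8.9]
def total_cost(bits):
    penalty = 100.0 * abs(sum(bits) - 1)
    return sum(c for b, c in zip(bits, costs) if b) + penalty

rng = random.Random(1)
# Start from an infeasible GA individual with all three centers open;
# the local search repairs it into a feasible single-open solution.
best, best_cost = hill_climb([1, 1, 1], total_cost, rng)
```

In the full HGA, this refinement step would be applied within the GA loop to individuals the population has converged toward, which is why the HGA finds better solutions than the GA alone at some extra CPU cost.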
For future study, much larger RLNCCs will be tested to verify the robustness of our approach.