• Title/Summary/Keyword: case tool


Visualizing the Results of Opinion Mining from Social Media Contents: Case Study of a Noodle Company (소셜미디어 콘텐츠의 오피니언 마이닝결과 시각화: N라면 사례 분석 연구)

  • Kim, Yoosin;Kwon, Do Young;Jeong, Seung Ryul
    • Journal of Intelligence and Information Systems, v.20 no.4, pp.89-105, 2014
  • After the emergence of the Internet, social media built on highly interactive Web 2.0 applications has provided a very user-friendly means for consumers and companies to communicate with each other. Users routinely publish content expressing their opinions and interests in social media such as blogs, forums, chat rooms, and discussion boards, and that content is released on the Internet in real time. For that reason, many researchers and marketers regard social media content as a source of information for business analytics, and many studies have reported results on mining business intelligence from it. In particular, opinion mining and sentiment analysis, techniques to extract, classify, understand, and assess the opinions implicit in text, are frequently applied to social media content analysis because they emphasize determining sentiment polarity and extracting authors' opinions. A number of frameworks, methods, techniques, and tools have been presented by these researchers. However, we found weaknesses in these methods, which are often technically complicated and not sufficiently user-friendly for supporting business decisions and planning. In this study, we formulate a more comprehensive and practical approach to opinion mining with visual deliverables. First, we describe the entire cycle of practical opinion mining on social media content, from the initial data-gathering stage to the final presentation session. Our proposed approach consists of four phases: collecting, qualifying, analyzing, and visualizing. In the first phase, analysts choose the target social media; each target medium requires a different means of access, such as open APIs, search tools, DB2DB interfaces, content purchasing, and so on. The second phase is pre-processing, which generates useful materials for meaningful analysis.
If we do not remove garbage data, the results of social media analysis will not provide meaningful and useful business insights. To clean social media data, natural language processing techniques should be applied. The next step is the opinion mining phase, where the cleansed social media content is analyzed. The qualified data set includes not only user-generated content but also identifying information such as creation date, author name, user ID, content ID, hit counts, review or reply status, favorites, etc. Depending on the purpose of the analysis, researchers or data analysts can select a suitable mining tool. Topic extraction and buzz analysis are usually related to market trend analysis, while sentiment analysis is utilized for reputation analysis. There are also various applications, such as stock prediction, product recommendation, sales forecasting, and so on. The last phase is visualization and presentation of the analysis results. The major purpose of this phase is to explain the results and help users comprehend their meaning. Therefore, to the extent possible, deliverables from this phase should be simple, clear, and easy to understand rather than complex and flashy. To illustrate our approach, we conducted a case study on a leading Korean instant noodle company, NS Food, which holds a 66.5% market share and has kept the No. 1 position in the Korean ramen business for several decades. We collected a total of 11,869 pieces of content, including blog posts, forum posts, and news articles. After collecting the social media content, we generated instant-noodle-specific language resources for data manipulation and analysis using natural language processing. In addition, we classified the content into more detailed categories such as marketing features, environment, reputation, etc.
In this phase, we used free software, including the tm, KoNLP, ggplot2, and plyr packages from the R project. As a result, we produced several useful visualization outputs, such as domain-specific lexicons, volume and sentiment graphs, topic word clouds, heat maps, valence tree maps, and other visualized images, providing vivid, full-color examples built with open-source R packages. Business actors can detect at a glance which areas are weak, strong, positive, negative, quiet, or loud. The heat map shows the movement of sentiment or volume in a category-by-time matrix, where color density indicates activity over time periods. The valence tree map, one of the most comprehensive and holistic visualization models, should be very helpful for analysts and decision makers to quickly grasp the "big picture" of the business situation, since its hierarchical structure can present buzz volume and sentiment together for a certain period. This case study offers real-world business insights from market sensing and demonstrates to practical-minded business users how they can use these kinds of results for timely decision making in response to ongoing changes in the market. We believe our approach can provide a practical and reliable guide to opinion mining with immediately useful visualized results, not just in the food industry but in other industries as well.
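The collect-qualify-analyze-visualize cycle described above can be sketched in miniature. The paper itself works in R (tm, KoNLP, ggplot2, plyr); the Python sketch below only illustrates the analysis step, and the lexicon, categories, and posts are hypothetical placeholders, not the authors' data.

```python
from collections import Counter

# Hypothetical domain-specific sentiment lexicon; the paper builds a real
# one for the instant-noodle domain from the collected posts.
POSITIVE = {"delicious", "tasty", "love"}
NEGATIVE = {"salty", "expensive", "bland"}

def score(text):
    """Sentiment polarity of one post: (+) positive, (-) negative, 0 neutral."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def buzz_and_sentiment(posts):
    """Aggregate volume (buzz) and net sentiment per category, i.e. the two
    quantities a heat map or valence tree map would display."""
    volume, sentiment = Counter(), Counter()
    for category, text in posts:
        volume[category] += 1
        sentiment[category] += score(text)
    return volume, sentiment

# Toy posts tagged with the kind of categories the study uses.
posts = [("taste", "so delicious and tasty"),
         ("price", "too expensive"),
         ("taste", "a bit bland")]
vol, sent = buzz_and_sentiment(posts)
```

In the real pipeline the lexicon comes from the qualifying phase and the aggregates feed the visualization phase, but the buzz/sentiment bookkeeping is essentially this simple.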

Analysis of Twitter for 2012 South Korea Presidential Election by Text Mining Techniques (텍스트 마이닝을 이용한 2012년 한국대선 관련 트위터 분석)

  • Bae, Jung-Hwan;Son, Ji-Eun;Song, Min
    • Journal of Intelligence and Information Systems, v.19 no.3, pp.141-156, 2013
  • Social media is a representative form of Web 2.0 that reshapes users' information behavior by allowing them to produce their own content without expert skills. In particular, as a new communication medium, it has a profound impact on social change by enabling users to communicate their opinions and thoughts to the masses and to acquaintances. Social media data plays a significant role in the emerging Big Data arena, and a variety of research areas, such as social network analysis and opinion mining, have therefore turned to discovering meaningful information from the vast amounts of data buried in social media. Social media has recently become a main focus of Information Retrieval and Text Mining because it not only produces massive unstructured textual data in real time but also serves as an influential channel for opinion leading. However, most previous studies have adopted broad-brush and limited approaches, which make it difficult to find and analyze new information. To overcome these limitations, we developed a real-time Twitter trend mining system that captures trends by processing big streaming datasets from Twitter. The system offers term co-occurrence retrieval, visualization of Twitter users by query, similarity calculation between two users, topic modeling to track changes in topical trends, and mention-based user network analysis. In addition, we conducted a case study on the 2012 Korean presidential election. We collected 1,737,969 tweets containing candidates' names and election-related terms on Twitter in Korea (http://www.twitter.com/) for one month in 2012 (October 1 to October 31). The case study shows that the system provides useful information and detects societal trends effectively. The system also retrieves the list of terms co-occurring with given query terms.
We compared the results of term co-occurrence retrieval using the influential candidates' names 'Geun Hae Park', 'Jae In Moon', and 'Chul Su Ahn' as query terms. General terms related to the presidential election, such as 'Presidential Election', 'Proclamation in Support', and 'Public opinion poll', appear frequently. The results also show specific terms that differentiate each candidate: 'Park Jung Hee' and 'Yuk Young Su' for the query 'Geun Hae Park'; 'a single candidacy agreement' and 'Time of voting extension' for the query 'Jae In Moon'; and 'a single candidacy agreement' and 'down contract' for the query 'Chul Su Ahn'. Our system not only extracts 10 topics along with related terms but also shows topics' dynamic changes over time by employing the multinomial Latent Dirichlet Allocation technique. Each topic can show one of two patterns, a rising tendency or a falling tendency, depending on the change of its probability distribution. To determine the relationship between topic trends on Twitter and social issues in the real world, we compared topic trends with related news articles and found that Twitter can track issues faster than other media such as newspapers. The user network on Twitter differs from those of other social media because of Twitter's distinctive way of forming relationships: users build relationships by exchanging mentions. We visualized and analyzed mention-based networks of 136,754 users, using the three candidates' names 'Geun Hae Park', 'Jae In Moon', and 'Chul Su Ahn' as query terms. The results show that Twitter users mention all candidates' names regardless of their political tendencies. This case study discloses that Twitter can be an effective tool to detect and predict dynamic changes in social issues, and that mention-based user networks, a network type unique to Twitter, can reveal different aspects of user behavior.
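Term co-occurrence retrieval of the kind the system performs can be sketched as follows. The tweets and query below are toy stand-ins for illustration, not the study's data.

```python
from collections import Counter

def cooccurring_terms(tweets, query, top_n=3):
    """Rank terms that co-occur with `query` within the same tweet."""
    counts = Counter()
    for tweet in tweets:
        terms = set(tweet.lower().split())
        if query in terms:
            counts.update(terms - {query})
    return [term for term, _ in counts.most_common(top_n)]

# Toy corpus: only tweets containing the query contribute counts.
tweets = ["park election poll",
          "park election debate",
          "moon election poll"]
top = cooccurring_terms(tweets, "park")
```

With real tweets the same counting, restricted to a candidate's name as the query, surfaces both the shared election vocabulary and the candidate-specific terms the study reports.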

An Ontology Model for Public Service Export Platform (공공 서비스 수출 플랫폼을 위한 온톨로지 모형)

  • Lee, Gang-Won;Park, Sei-Kwon;Ryu, Seung-Wan;Shin, Dong-Cheon
    • Journal of Intelligence and Information Systems, v.20 no.1, pp.149-161, 2014
  • The export of domestic public services to overseas markets faces many potential obstacles stemming from differing export procedures, target services, and socio-economic environments. To alleviate these problems, a business incubation platform, as an open business ecosystem, can be a powerful instrument to support the decisions taken by participants and stakeholders. In this paper, we propose an ontology model and its implementation process for a business incubation platform with an open and pervasive architecture to support public service exports. For the conceptual model of the platform ontology, export case studies are used for requirements analysis. The conceptual model shows the basic structure, with vocabulary and its meaning, the relationships between ontologies, and key attributes. For the implementation and test of the ontology model, the logical structure is edited using the Protégé editor. The core engine of the business incubation platform is the simulator module, where the various contexts of export businesses are captured, defined, and shared with other modules through ontologies. It is well known that an ontology, in which concepts and their relationships are represented using a shared vocabulary, is an efficient and effective tool for organizing meta-information to develop structural frameworks in a particular domain. The proposed model consists of five ontologies derived from a requirements survey of major stakeholders and their operational scenarios: service, requirements, environment, enterprise, and country. The service ontology contains several components that can find and categorize public services through a case analysis of public service exports. The key attributes of the service ontology comprise the objective, requirements, activity, and service categories.
The objective category, which has sub-attributes including operational body (organization) and user, acts as a reference for searching and classifying public services. The requirements category relates to the functional needs at a particular phase of system (service) design or operation; its sub-attributes are user, application, platform, architecture, and social overhead. The activity category represents business processes during the operation and maintenance phase and has sub-attributes including facility, software, and project unit. The service category, with sub-attributes such as target, time, and place, acts as a reference for sorting and classifying public services. The requirements ontology is derived from the basic and common components of public services and target countries. Its key attributes are business, technology, and constraints: business requirements represent the needs of processes and activities for public service export; technology represents the technological requirements for operating public services; and constraints represent the business laws, regulations, or cultural characteristics of the target country. The environment ontology is derived from case studies of target countries for public service operation. Its key attributes are user, requirements, and activity. The user attribute covers stakeholders in public services, from citizens to operators and managers; the requirements attribute represents the managerial and physical needs during operation; and the activity attribute represents business processes in detail. The enterprise ontology is adopted from a previous study, and its attributes are activity, organization, strategy, marketing, and time.
The country ontology is derived from the demographic and geopolitical analysis of the target country, and its key attributes are economy, social infrastructure, law, regulation, customs, population, location, and development strategies. The priority list of target services for a certain country, and/or the priority list of target countries for a certain public service, is generated by a matching algorithm. These lists are used as input seeds to simulate consortium partners and government policies and programs. In the simulation, the environmental differences between Korea and the target country can be customized through a gap analysis and work-flow optimization process. When the process gap between Korea and the target country is too large for a single corporation to cover, a consortium is considered as an alternative, and various alternatives are derived from the capability index of enterprises. For financial packages, a mix of various foreign aid funds can be simulated during this stage. We expect that the proposed ontology model and the business incubation platform can be used by various participants in the public service export market. It could be especially beneficial to small and medium businesses that have relatively fewer resources and less experience with public service export. We also expect that the open and pervasive service architecture in a digital business ecosystem will help stakeholders find new opportunities through information sharing and collaboration on business processes.
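As a rough illustration of how a matching algorithm might rank target countries from ontology attributes, here is a toy Python sketch. The class fields, names, and coverage score are invented simplifications for illustration only, not the paper's actual ontology or algorithm.

```python
from dataclasses import dataclass, field

# Hypothetical, heavily simplified rendering of two of the five ontologies.
@dataclass
class Service:
    name: str
    requirements: set = field(default_factory=set)  # needed capabilities

@dataclass
class Country:
    name: str
    infrastructure: set = field(default_factory=set)  # available capabilities

def priority_list(service, countries):
    """Rank target countries by the fraction of the service's requirements
    already covered by the country's infrastructure (a toy matching score)."""
    def coverage(country):
        return len(service.requirements & country.infrastructure) / len(service.requirements)
    return sorted(countries, key=coverage, reverse=True)

svc = Service("water metering", {"grid", "billing", "sensors"})
ranked = priority_list(svc, [Country("A", {"grid"}),
                             Country("B", {"grid", "billing"})])
```

A real matcher would weigh all five ontologies (including constraints and gap analysis), but the output shape, a priority list fed into the simulator, is the same.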

The Precision Test Based on States of Bone Mineral Density (골밀도 상태에 따른 검사자의 재현성 평가)

  • Yoo, Jae-Sook;Kim, Eun-Hye;Kim, Ho-Seong;Shin, Sang-Ki;Cho, Si-Man
    • The Korean Journal of Nuclear Medicine Technology, v.13 no.1, pp.67-72, 2009
  • Purpose: The ISCD (International Society for Clinical Densitometry) requires users to perform a precision test to ensure quality, although it makes no recommendation about patient selection for the test. We therefore investigated the effect of bone density status on the precision test by measuring reproducibility in three bone density groups (normal, osteopenia, osteoporosis). Materials and Methods: Four users performed precision tests on 420 patients (age: 57.8±9.02 years) undergoing BMD measurement at Asan Medical Center (January 2008 to June 2008). In the first group (A), the four users each selected 30 patients regardless of bone density status and measured two sites (L-spine, femur) twice. In the second group (B), the four users each measured the bone density of 10 patients in the same manner as group A, but divided the patients into the three stages (normal, osteopenia, osteoporosis). In the third group (C), two users each measured 30 patients in the same manner as group A, taking bone density status into account. We used a GE Lunar Prodigy Advance (Encore V11.4) and analyzed the results by comparing %CV to the LSC using the precision tool from the ISCD. Verification was done using SPSS. Results: In group A, the %CV values calculated by the four users (a, b, c, d) were 1.16, 1.01, 1.19, and 0.65 in the L-spine and 0.69, 0.58, 0.97, and 0.47 in the femur. In group B, the %CV values were 1.01, 1.19, 0.83, and 1.37 in the L-spine and 1.03, 0.54, 0.69, and 0.58 in the femur. Comparing groups A and B, we found no considerable differences. In group C, user 1's %CV values for normal, osteopenia, and osteoporosis were 1.26, 0.94, and 0.94 in the L-spine and 0.94, 0.79, and 1.01 in the femur, while user 2's were 0.97, 0.83, and 0.72 in the L-spine and 0.65, 0.65, and 1.05 in the femur. Analyzing these results, we found almost no difference in reproducibility, though differences in several of the two users' values affected total reproducibility.
Conclusions: The precision test is an important factor in bone density follow-up. When machine and user reproducibility improve, the low range of deviation makes the test more useful clinically. Users should check the machine's reproducibility before testing and maintain consistency when performing BMD tests on patients. In precision testing, differences in measured values usually arise from ROI changes caused by patient positioning. For osteoporosis patients, it is harder to set the initial ROI accurately than for normal or osteopenia patients because of poor bone recognition, even though the ROI is drawn automatically by the software. The initial ROI is nonetheless very important, and users must draw coherent ROIs because the ROI Copy function is used at follow-up. In this study, we performed precision tests taking bone density status into account and found that the LSC stayed within 3%, with no considerable difference between groups. Thus, patients for the precision test can be selected regardless of bone density status.
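The %CV and LSC quantities used above can be computed directly from duplicate scans. This sketch follows the standard ISCD root-mean-square approach (precision error times 2.77 for the 95% least significant change); the BMD pairs are invented example values, not the study's data.

```python
import math

def rms_cv(pairs):
    """Precision (%CV) from duplicate scans: per-patient SD of a pair is
    |x1 - x2| / sqrt(2); %CV is the RMS of (SD / mean) across patients, x100."""
    squared = []
    for x1, x2 in pairs:
        mean = (x1 + x2) / 2
        sd = abs(x1 - x2) / math.sqrt(2)
        squared.append((sd / mean) ** 2)
    return 100 * math.sqrt(sum(squared) / len(squared))

def lsc(precision_cv):
    """Least significant change at 95% confidence (ISCD factor 2.77)."""
    return 2.77 * precision_cv

# Invented duplicate BMD measurements (g/cm^2) for three patients.
pairs = [(1.00, 1.02), (0.90, 0.91), (1.10, 1.13)]
cv = rms_cv(pairs)
```

A follow-up change smaller than the LSC cannot be distinguished from measurement noise, which is why the abstract compares each user's %CV against it.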


Chronic HBV Infection in Children: The histopathologic classification and its correlation with clinical findings (소아의 만성 B형 간염: 새로운 병리조직학적 분류와 임상 소견의 상관 분석)

  • Lee, Seon-Young;Ko, Jae-Sung;Kim, Chong-Jai;Jang, Ja-June;Seo, Jeong-Kee
    • Pediatric Gastroenterology, Hepatology & Nutrition, v.1 no.1, pp.56-78, 1998
  • Objective: Chronic hepatitis B virus (HBV) infection occurs in 6% to 10% of the population in Korea. In ethnic communities where the prevalence of chronic infection is high, such as Korea, transmission of hepatitis B is either vertical (i.e., by perinatal infection) or by close family contact (usually with mothers or siblings) during the first 5 years of life. The development of chronic hepatitis B infection is increasingly common the earlier a person is exposed to the virus, particularly in fetal and neonatal life, and it can progress to cirrhosis and hepatocellular carcinoma, especially with severe liver damage and perinatal infection. The histopathology of chronic HBV infection is important when evaluating final outcomes. A numerical scoring system, a semiquantitative, objective, and reproducible classification of chronic viral hepatitis, is a valuable tool for statistical analysis when predicting outcomes and evaluating antiviral and other therapies. In this study, a numerical scoring system (the Ludwig system) was applied and compared with the conventional histological classification of De Groote, and the clinical findings, family history, serology, and liver function tests were analyzed against the histopathological findings in children with chronic hepatitis B. Methods: Ninety-nine patients (mean age 9 years; range 17 months to 16 years) with clinical, biochemical, serological, and histological patterns of chronic HBV infection were included in this study; five of these children had hepatocellular carcinoma. There were 83 boys and 16 girls. All underwent liver biopsy, and histologic evaluation was performed by one pathologist. The biopsy specimens were classified according to the standard criteria of De Groote as follows: normal, chronic lobular hepatitis (CLH), chronic persistent hepatitis (CPH), mild to severe chronic active hepatitis (CAH), active cirrhosis, inactive cirrhosis, or hepatocellular carcinoma (HCC).
The biopsy specimens were also assessed and scored semiquantitatively with the numerical Ludwig system. Serum HBsAg, anti-HBs, HBeAg, anti-HBe, anti-HBc (IgG, IgM), and HDV were measured by radioimmunoassay. Results: Males predominated, at a ratio of 5.2:1 overall. Of the 99 patients, 2 had normal histology, 2 had CLH, 22 had CPH, 40 had mild CAH, 19 had moderate CAH, 1 had severe CAH, 7 had active cirrhosis, 1 had inactive cirrhosis, and 5 had HCC. Mean age, sex distribution, symptoms, signs, and family history did not differ statistically among the histologic groups. The numerical scoring system correlated well with the conventional histological classification. The histological activity evaluated by both the conventional classification and the scoring system was more severe when serum aminotransferase levels were higher; however, aminotransferase levels were not useful for predicting the degree of histologic activity because of their widely overlapping ranges. When the histological activity was more severe, and especially as cirrhosis progressed, the prothrombin time was more prolonged. Histological severity was inversely related to the duration of HBeAg seroconversion. Conclusions: Histological activity could not be accurately predicted from clinical and biochemical findings, but only by proper histological classification of the biopsy specimen using the numerical scoring system. The numerical scoring system correlated well with the conventional histological classification and appears to be a valuable tool for statistical analysis when predicting outcomes and evaluating the effects of antiviral and other therapies in chronic hepatitis B in children.


A Study on Public Interest-based Technology Valuation Models in Water Resources Field (수자원 분야 공익형 기술가치평가 시스템에 대한 연구)

  • Ryu, Seung-Mi;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems, v.24 no.3, pp.177-198, 2018
  • Recently, as water resources have increasingly taken on the character of public goods, it has become necessary to establish and apply a framework for measuring and managing the performance of water resource technologies as economic assets. To date, the evaluation of water technology has been carried out through feasibility studies or technology assessments based on net present value (NPV) or benefit-to-cost (B/C) ratios; however, no systematic valuation model yet exists to objectively assess the economic value of technology-based businesses so that research outcomes can be diffused and fed back. K-water, a government-supported public company in Korea, therefore saw the need to establish a technology valuation framework suited to the technical characteristics of the water resources field it manages and to verify it on an example case. The valuation approach applied in this study can serve as a tool to measure and manage the value and achievements that public-interest goods contribute to society. By calculating the value that a technology contributes to society as a public resource, K-water can use it as basic information for publicizing the benefits achieved or the necessity of cost inputs, and thereby secure legitimacy for large-scale R&D investment, given the characteristics of public technology. Hence K-water, a public corporation dealing with the public good of water resources, will be able to establish a commercialization strategy for its business operations and prepare a basis for calculating the performance of R&D investment. In this study, K-water developed a web-based technology valuation model for public-interest water resources technologies, built on a technology evaluation system suited to the characteristics of the water resources field.
In particular, by utilizing the evaluation methodology of Japan's National Institute of Advanced Industrial Science and Technology (AIST) to match expense items to expense accounts based on the related benefit items, we propose the so-called K-water proprietary model, which combines the cost-benefit approach with free cash flow (FCF); we ultimately built a pipeline into the K-water research performance management system and verified it on a practical case of a desalination technology. We analyze the embedded design logic and evaluation process of the web-based valuation system, which reflects the characteristics of water resources technology, along with the reference information and database (D/B) logic each model uses to calculate public-interest-based and profit-based technology values in the integrated technology management system. We review the hybrid evaluation module, which quantifies qualitative evaluation indices reflecting the unique characteristics of water resources, and the visualized user interface (UI) of the actual web-based evaluation; both are added to existing web-based technology valuation systems in other fields for calculating business value from financial data. K-water's technology valuation model distinguishes between public-interest and profit-oriented water technologies. First, the evaluation modules of the profit-oriented valuation model are designed around the profitability of the technology; for example, K-water's technology inventory holds a number of profit-oriented technologies such as water treatment membranes. The public-interest valuation model, on the other hand, is designed to evaluate public-interest-oriented technologies such as dams, reflecting the characteristics of public benefits and costs. To examine the appropriateness of the cost-benefit-based public utility valuation model (i.e.
the K-water-specific technology valuation model) presented in this study, we applied it to practical cases, computing a benefit-to-cost analysis for a water resource technology with a 20-year lifetime. In future work, we will further verify the K-water public-utility valuation model against individual business models reflecting various business environments.
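The NPV and B/C quantities mentioned above reduce to simple discounted sums. The sketch below uses invented cash flows and a 5% discount rate purely for illustration; it is not K-water's model, only the textbook arithmetic underlying it.

```python
def npv(cash_flows, rate):
    """Net present value of yearly cash flows (year 0 first)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def bc_ratio(benefits, costs, rate):
    """Benefit-to-cost ratio: PV(benefits) / PV(costs)."""
    return npv(benefits, rate) / npv(costs, rate)

# Toy 20-year public-interest technology: up-front cost, then steady
# yearly benefits and small operating costs (all values invented).
years = 20
benefits = [0.0] + [10.0] * years
costs = [100.0] + [1.0] * years
ratio = bc_ratio(benefits, costs, rate=0.05)
```

A ratio above 1.0 means discounted benefits exceed discounted costs, the usual feasibility threshold in B/C analysis.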

Development of an Adjustable Head Holder Couch in H&N Cancer Radiation Therapy (두경부암 방사선 치료 시 Set-Up 조정 Head Holder 장치의 개발)

  • Shim, JaeGoo;Song, KiWon;Kim, JinMan;Park, MyoungHwan
    • The Journal of Korean Society for Radiation Therapy, v.26 no.1, pp.43-50, 2014
  • For all patients who receive radiation therapy, a treatment plan is established and every step of treatment is carried out under the same geometric conditions. Head and neck cancer patients who undergo simulation via computed tomography (CT) are fixed onto a table for planning but are laid on top of the treatment table in the radiation therapy room. This study designed and fabricated an adjustable head holder for head and neck cancer patients to fix the patient's position and correct geometric discrepancies during radiation therapy, and compared the errors before and after position adjustment under different weights to evaluate the correlation between weight and error range. A computed tomography system (High Advantage, GE, USA) was used to image a phantom held in the supine position at the IMRT treatment site. 4 MV X-rays from a LINAC (21EX, Varian, USA) were used for IMRT, and a treatment planning system (Pinnacle ver. 9.1h, Philips, Madison, USA) was used. To assess setup accuracy, each measurement was repeated five times for each weight (0, 15, and 30 kg), and CBCT was performed 30 times to find the mean and standard deviation of errors before and after adjustment at each weight. The SPSS ver. 19.0 (SPSS Inc., Chicago, IL, USA) statistics program was used to perform the Wilcoxon rank test for significance, and Spearman analysis was used to assess the significance of the correlation with weight. Measuring the error values from CBCT before and after position adjustment for each weight, the X, Y, and Z axis errors before adjustment at 0 kg were 0.4±0.8 mm, 0.8±0.4 mm, and 0. At 15 kg, the errors before adjustment were 0.2±0.8 mm, 1.2±0.4 mm, and 2.0±0.4 mm on the X, Y, and Z axes.
After adjustment they were 0.2±0.4 mm, 0.4±0.5 mm, and 0.4±0.5 mm. At 30 kg, the errors before adjustment were 0.8±0.4 mm, 2.4±0.5 mm, and 4.4±0.8 mm on the X, Y, and Z axes; after adjustment they were 0.6±0.5 mm, 1.0±0 mm, and 0.6±0.5 mm. When the head and neck holder was used to adjust the above errors, the CBCT error values were 0.2±0.8 mm for the X axis, 0.40±0.54 mm for the Y axis, and 0 for the Z axis. Statistical analysis of the values before and after adjustment showed significance at the Z axis with 15 kg of weight (p<0.034) and at the Y and Z axes with 30 kg of weight (p<0.038 and p<0.041, respectively). There was a significant difference (p<0.008) when the adjusted values of the three weight groups were compared by Kruskal-Wallis analysis. Because the device reduces these errors, patient setup reproducibility can be improved for more precise and accurate radiation therapy. The development of an adjustable device for head and neck cancer patients is significant because it improves the reproducibility of existing equipment by reducing errors in patient position.
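The axis-wise mean ± SD errors reported above, together with the overall 3D displacement they imply, can be summarized as in this small sketch. The five repeated measurements per axis are invented values that merely echo the scale of the 30 kg pre-adjustment errors.

```python
import math
import statistics

def setup_error(dxs, dys, dzs):
    """Summarize repeated couch-shift measurements (mm) per axis as
    (mean, SD), plus the mean 3D displacement magnitude."""
    per_axis = {axis: (statistics.mean(vals), statistics.pstdev(vals))
                for axis, vals in (("X", dxs), ("Y", dys), ("Z", dzs))}
    r3d = statistics.mean(math.sqrt(x * x + y * y + z * z)
                          for x, y, z in zip(dxs, dys, dzs))
    return per_axis, r3d

# Invented repeats, roughly on the scale of the 30 kg pre-adjustment errors.
axes, r3d = setup_error([0.8, 0.4, 1.2, 0.8, 0.8],
                        [2.4, 2.0, 2.8, 2.4, 2.4],
                        [4.4, 3.6, 5.2, 4.4, 4.4])
```

The 3D magnitude makes the clinical point of the abstract visible: a couple of millimeters per axis can add up to a displacement several millimeters long until the holder corrects it.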

A Study on Standard Uptake Value (SUV) Changes According to Time Input Differences in PET/CT Scans (PET/CT 검사에서 시간 입력 차이에 따른 표준화섭취계수(SUV) 변화에 대한 고찰)

  • Kim, Kyung-Sik;Lee, Ju-Young;Jung, Woo-Young;Kim, Jung-Sun
    • The Korean Journal of Nuclear Medicine Technology, v.15 no.2, pp.21-25, 2011
  • Purpose: An ¹⁸F-FDG whole-body PET scan is performed approximately 1 hour after injecting ¹⁸F-FDG. At injection, a wall clock or computer clock may be used as the time reference, while the workstation clock is used at scan time. If the clocks used at injection and at scanning are not synchronized, an error in the recorded injection-to-scan time can occur. This time error may affect the Standardized Uptake Value (SUV) used for quantitative assessment. We therefore analyzed the change in SUV with time-entry error and the necessity of time synchronization. Materials and Methods: We analyzed 30 patients (54.8±15.5 years old) who underwent ¹⁸F-FDG whole-body PET scans in the Department of Nuclear Medicine, Asan Medical Center, from December 2009 to February 2010. To observe the change in SUV with time-entry differences, images of the same patients were reconstructed with artificial time offsets of 1, 2, 3, 5, 10, and 20 minutes relative to the baseline of 60 minutes. Results: The SUVs of the images reconstructed with the 1-, 2-, 3-, 5-, 10-, and 20-minute offsets were compared with the image using the original time via paired t-tests. Relative to baseline, the mean SUV of the aorta changed by 0.3, 1.1, 1.4, 3.2, 4.7, and 12.5%, respectively; there was no statistically significant difference at 1 and 2 minutes (p>0.05), but there were significant differences at 3, 5, 10, and 20 minutes (p<0.05). The changes in the liver SUVavg were 1.6, 2.5, 3.0, 4.2, 6.6, and 12.8% at 1, 2, 3, 5, 10, and 20 minutes, respectively, and the changes in the SUVmax of the primary lesion were 1.0, 1.5, 2.2, 3.5, 6.6, and 12.8%, respectively (p<0.05). Conclusion: Errors may occur in measuring or recording variables that affect SUV, such as the patient's height and weight, the ¹⁸F-FDG dose, and the emission scan start time,
and as these errors are more, the accurate assessment of SUV is interfered. Therefore, in order to assess SUV more accurately, it is thought that efforts to minimize these errors should be made. Of these efforts, time synchronization will be a cornerstone for accurate scanning.

  • PDF
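The sensitivity described in this abstract follows directly from the decay correction applied to the injected dose. The sketch below is not the authors' software; it is a minimal illustration assuming body-weight SUV and simple exponential decay of $^{18}F$ (physical half-life 109.77 min), showing how an error in the entered uptake time propagates into a percentage SUV error:

```python
import math

F18_HALF_LIFE_MIN = 109.77  # physical half-life of 18F in minutes


def suv(activity_conc_bq_ml, injected_dose_bq, weight_g, minutes_since_injection):
    """Body-weight SUV, with the injected dose decay-corrected to scan time."""
    decayed_dose = injected_dose_bq * 0.5 ** (minutes_since_injection / F18_HALF_LIFE_MIN)
    return activity_conc_bq_ml / (decayed_dose / weight_g)


def suv_error_pct(time_error_min):
    """Relative SUV change caused by overstating the uptake time by time_error_min.

    Overstating the interval makes the decay correction shrink the dose further,
    inflating SUV by the factor 2**(dt / T_half).
    """
    return (0.5 ** (-time_error_min / F18_HALF_LIFE_MIN) - 1.0) * 100.0


if __name__ == "__main__":
    for dt in (1, 2, 3, 5, 10, 20):
        print(f"{dt:2d} min offset -> {suv_error_pct(dt):5.2f}% SUV change")
```

The predicted errors (roughly 0.6% at 1 min up to about 13% at 20 min) are of the same order as the measured changes reported in the abstract, which is why clock synchronization matters at the few-minute level.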

Comparison of Natural Flow Estimates for the Han River Basin Using TANK and SWAT Models (TANK 모형과 SWAT 모형을 이용한 한강유역의 자연유출량 산정 비교)

  • Kim, Chul-Gyum;Kim, Nam-Won
    • Journal of Korea Water Resources Association
    • /
    • v.45 no.3
    • /
    • pp.301-316
    • /
    • 2012
  • Two models, TANK and SWAT (Soil and Water Assessment Tool), were compared for simulating natural flows in the area upstream of Paldang Dam in the Han River basin, in order to understand the limitations of TANK and to review the applicability and capability of SWAT. Simulation results from previous research were used for the comparison. For the calibrated watersheds (Chungju Dam and Soyanggang Dam), both models predicted daily flows well, with Nash-Sutcliffe model efficiencies of around 0.8. TANK matched observations better than SWAT during some peak flood seasons, but performed poorly during dry seasons; in particular, its simulated flows never fell below a certain floor value, which can be explained by TANK having been calibrated against relatively large flows rather than small ones. SWAT showed relatively good agreement with observed flows except for some flood peaks, and its simulated inflows at Paldang Dam, accounting for discharges from upstream dams, matched observations with a model efficiency of around 0.9. This supports the applicability of SWAT for accurately predicting natural flows in the absence of dam operation or artificial water use, and for assessing flow variations before and after dam development. The two models were also compared for other watersheds (Pyeongchang-A, Dalcheon-B, Seomgang-B, Inbuk-A, Hangang-D, and Hongcheon-A) to which the calibrated TANK parameters were transferred. The results were similar to those for the calibrated watersheds: TANK reproduced small flows poorly except for some flood flows, and again remained above a floor value during dry seasons. This indicates that TANK applications may carry serious uncertainty in estimating low flows, which are an important index in water resources planning and management. Therefore, to reflect the genuinely complex physical characteristics of Korean watersheds, and to manage water resources efficiently under future land-use and water-use changes driven by urbanization or climate change, it is necessary to use a physically based watershed model such as SWAT rather than an existing conceptual lumped model such as TANK.
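The efficiency scores quoted above (around 0.8 and 0.9) are Nash-Sutcliffe model efficiencies, a standard goodness-of-fit statistic for hydrologic simulations. A minimal sketch of the computation (not the authors' code; the example series are invented for illustration):

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations.

    1.0 is a perfect fit; 0.0 means the model is no better than simply
    predicting the observed mean; negative values are worse than the mean.
    """
    obs_mean = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - obs_mean) ** 2 for o in observed)
    return 1.0 - sse / ss_tot


if __name__ == "__main__":
    # hypothetical daily flows (m^3/s): a model with a dry-season floor, as
    # described for TANK, mismatches exactly the low-flow observations
    obs = [120.0, 80.0, 15.0, 6.0, 4.0, 3.0]
    sim_floored = [118.0, 82.0, 20.0, 20.0, 20.0, 20.0]
    print(f"NSE = {nash_sutcliffe(obs, sim_floored):.3f}")
```

Because NSE squares the residuals, large flood flows dominate the score, which is one reason a model can report NSE near 0.8 overall while still failing badly on the low flows that matter for water-resources planning.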

GPU Based Feature Profile Simulation for Deep Contact Hole Etching in Fluorocarbon Plasma

  • Im, Yeon-Ho;Chang, Won-Seok;Choi, Kwang-Sung;Yu, Dong-Hun;Cho, Deog-Gyun;Yook, Yeong-Geun;Chun, Poo-Reum;Lee, Se-A;Kim, Jin-Tae;Kwon, Deuk-Chul;Yoon, Jung-Sik;Kim, Dae-Woong;You, Shin-Jae
    • Proceedings of the Korean Vacuum Society Conference
    • /
    • 2012.08a
    • /
    • pp.80-81
    • /
    • 2012
  • Recently, one of the critical issues in etching processes for nanoscale devices has been achieving ultra-high aspect ratio contact (UHARC) profiles without anomalous behaviors such as sidewall bowing and twisting. To this end, fluorocarbon plasmas, whose major advantage is sidewall passivation, have commonly been used with numerous additives to obtain ideal etch profiles. However, they still face formidable challenges, such as tight limits on sidewall bowing and control of randomly distorted features in nanoscale etch profiles. Furthermore, the absence of suitable plasma simulation tools has made it difficult to develop new technologies that overcome these process limitations, including novel plasma chemistries and plasma sources. As an effort to address these issues, we performed fluorocarbon surface kinetic modeling based on experimental plasma diagnostic data for a silicon dioxide etching process in inductively coupled C4F6/Ar/O2 plasmas. The SiO2 etch rates were investigated with bulk plasma diagnostic tools such as a Langmuir probe, a cutoff probe, and a quadrupole mass spectrometer (QMS). The surface chemistry of the etched samples was measured by X-ray photoelectron spectroscopy. To measure plasma parameters in the polymer-depositing environment, a self-cleaned RF Langmuir probe was used to keep the probe tip clean, and the results were double-checked with the cutoff probe, which is known to be a precise diagnostic for electron density. In addition, neutral and ion fluxes from the bulk plasma were monitored with appearance methods using the QMS signal. Based on these experimental data, we proposed a phenomenological, realistic two-layer surface reaction model of the SiO2 etch process under the overlying polymer passivation layer, considering the material balance of deposition and etching through a steady-state fluorocarbon layer. 
The predicted surface reaction modeling results showed good agreement with the experimental data. Building on this surface reaction study, we developed a 3D topography simulator using a multi-layer level-set algorithm and a new memory-saving technique suitable for 3D UHARC etch simulation. Ballistic transport of neutral and ion species inside the feature profile was treated by deterministic and Monte Carlo methods, respectively. For ultra-high aspect ratio contact hole etching, it is well known that realistic treatment of this ballistic transport imposes a huge computational burden. To address this, the relevant computational codes were parallelized for GPU (Graphics Processing Unit) computing, improving total computation time by more than a few hundred times compared to the serial version. Finally, the 3D topography simulator was integrated with the ballistic transport module and the etch reaction model, and realistic etch-profile simulations accounting for the sidewall polymer passivation layer were demonstrated.

  • PDF
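The Monte Carlo treatment of ballistic neutral transport mentioned above can be illustrated with a toy model (this is not the authors' simulator; it is a single-flight sketch assuming a cylindrical hole of unit radius, a cosine-distributed neutral flux at the aperture, and a sidewall sticking coefficient of 1, so any particle that grazes the sidewall is lost):

```python
import math
import random


def bottom_flux_fraction(aspect_ratio, n_samples=50_000, seed=1):
    """Monte Carlo estimate of the fraction of cosine-distributed neutrals
    entering a cylindrical hole (radius 1, depth = 2 * aspect_ratio, since
    AR = depth / diameter) that reach the bottom in one ballistic flight."""
    rng = random.Random(seed)
    depth = 2.0 * aspect_ratio
    hits = 0
    for _ in range(n_samples):
        # uniform entry point on the circular aperture
        r = math.sqrt(rng.random())
        phi = 2.0 * math.pi * rng.random()
        x, y = r * math.cos(phi), r * math.sin(phi)
        # Lambertian (cosine) polar angle: sin^2(theta) is uniform on [0, 1)
        sin_t = math.sqrt(rng.random())
        cos_t = math.sqrt(1.0 - sin_t * sin_t)
        psi = 2.0 * math.pi * rng.random()
        # straight flight to the bottom plane; check it stays inside the wall
        lateral = depth * sin_t / cos_t
        xe = x + lateral * math.cos(psi)
        ye = y + lateral * math.sin(psi)
        if xe * xe + ye * ye <= 1.0:
            hits += 1
    return hits / n_samples


if __name__ == "__main__":
    for ar in (1, 5, 20):
        print(f"AR {ar:2d}: bottom flux fraction = {bottom_flux_fraction(ar):.4f}")
```

Even this crude sketch reproduces the core difficulty the abstract alludes to: the neutral flux reaching the bottom collapses rapidly with aspect ratio, so resolving a UHARC profile requires enormous numbers of particle flights per surface cell, which is why GPU parallelization yields such large speedups.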