• Title/Summary/Keyword: Applicability Study (적용 가능성 연구)


Discovering Promising Convergence Technologies Using Network Analysis of Maturity and Dependency of Technology (기술 성숙도 및 의존도의 네트워크 분석을 통한 유망 융합 기술 발굴 방법론)

  • Choi, Hochang;Kwahk, Kee-Young;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.24 no.1, pp.101-124, 2018
  • Most technologies today develop either through the advancement of a single technology or through interaction with other technologies, and many recent technologies are convergent in nature, arising from the interaction of two or more techniques. Efforts to respond to technological change in advance, by forecasting the promising convergence technologies that will emerge in the near future, are also increasing, and many researchers are attempting such forecasts. Because a convergence technology inherits characteristics from several constituent technologies, forecasting promising convergence technologies is considerably harder than forecasting general technologies with high growth potential. Nevertheless, some progress has been made using big data analysis and social network analysis, and studies that discover new convergence technologies and analyze their trends now provide far more information than in the past. Existing approaches, however, have several limitations. First, most studies analyze data through predefined technology classifications. Recent technologies tend to be convergent and to span multiple fields, so a new convergence technology may not belong to any predefined class; fixed classifications therefore fail to reflect the dynamic nature of convergence. Second, most existing methods forecast promising convergence technologies using general-purpose indicators, which do not exploit the specific character of convergence. A new convergence technology depends heavily on the existing technologies from which it originates, and it may grow into an independent field or disappear rapidly as those technologies change. Traditional indicators do not capture this principle, namely that new technologies emerge from two or more mature technologies and that grown technologies in turn affect the creation of further technologies. Third, previous studies do not provide objective methods for evaluating forecasting accuracy. Because the field is complex, relatively little work has addressed forecasting promising convergence technologies, and methods for verifying model accuracy are hard to find; establishing such evaluation methods is essential for the field to advance.
To overcome these limitations, we propose a new method for analyzing convergence technologies. First, through topic modeling we derive a technology classification from the textual content itself, reflecting dynamic changes in the actual technology market rather than a fixed classification standard. We then identify influence relationships between technologies from the topic correspondence weights of each document and structure them into a network. On this network we devise a centrality indicator, potential growth centrality (PGC), that forecasts the future growth of a technology from its centrality information, reflecting technology maturity and the interdependence between technologies. We also propose a method for evaluating forecasting accuracy by measuring the growth rate of promising technologies, based on the variation of potential growth centrality across periods. We evaluate the performance and practical applicability of the proposed method in experiments on 13,477 patent documents. The results confirm that a forecasting model based on the proposed centrality indicator achieves up to about 2.88 times higher accuracy than models based on currently used network indicators. A minimal sketch of the centrality idea is given below.
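
The abstract does not define PGC formally, so the following is only a hedged illustration: it assumes the influence network is a directed topic graph built with networkx, that each topic has a maturity score in [0, 1], and that a PGC-like indicator can be approximated by blending a topic's in-degree centrality with the average maturity of the technologies it depends on. The function name, the maturity dictionary, and the mixing weight alpha are all hypothetical.

```python
# Hypothetical sketch of a "potential growth centrality"-style indicator.
# Assumptions (not from the paper): nodes are topics, a directed edge A -> C
# means topic C depends on A, and maturity is a per-topic score in [0, 1].
import networkx as nx

def potential_growth_centrality(G, maturity, alpha=0.5):
    """Blend a topic's centrality with the maturity of the topics it depends on."""
    base = nx.in_degree_centrality(G)          # normalized count of technologies feeding into each topic
    pgc = {}
    for node in G.nodes:
        preds = list(G.predecessors(node))     # technologies this topic originates from
        dep_maturity = sum(maturity.get(p, 0.0) for p in preds) / len(preds) if preds else 0.0
        pgc[node] = alpha * base[node] + (1 - alpha) * dep_maturity
    return pgc

# Toy usage: topic "C" emerges from the two mature topics "A" and "B".
G = nx.DiGraph()
G.add_edges_from([("A", "C"), ("B", "C"), ("A", "B")])
print(potential_growth_centrality(G, maturity={"A": 0.9, "B": 0.8, "C": 0.1}))
```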

Development and Research into Functional Foods from Hydrolyzed Whey Protein Powder with Sialic Acid as Its Index Component - I. Repeated 90-day Oral Administration Toxicity Test using Rats Administered Hydrolyzed Whey Protein Powder containing Normal Concentration of Sialic Acid (7%) with Enzyme Separation Method - (Sialic Acid를 지표성분으로 하는 유청가수분해단백분말의 기능성식품 개발연구 - I. 효소분리로 7% Sialic Acid가 표준적으로 함유된 유청가수분해단백분말(7%)의 랫드를 이용한 90일 반복경구투여 독성시험 평가 연구 -)

  • Noh, Hye-Ji;Cho, Hyang-Hyun;Kim, Hee-Kyong
    • Journal of Dairy Science and Biotechnology, v.34 no.2, pp.99-116, 2016
  • We performed an animal safety assessment in accordance with Good Laboratory Practice (GLP) regulations with the aim of developing sialic acid from glycomacropeptide (hereafter "GMP") as an index ingredient and functional component of functional foods. GMP is a type of whey protein derived from milk and a safe food with multiple functions, such as antiviral activity. A test substance containing 7% (w/w) sialic acid and mostly hydrolyzed whey protein (hereafter "7%-GNANA") was produced by enzymatic treatment of the substrate GMP. The maximum test dose was set at 5,000 mg/kg/day, the male NOEL (no-observed-effect level) and female NOAEL (no-observed-adverse-effect level) determined in a previous dose-range-finding (DRF) test of the same test substance (GLP Center of Catholic University of Daegu, Report No. 15-NREO-001). To evaluate the toxicity of repeated oral doses in connection with the previous DRF study, 1,250, 2,500, and 5,000 mg/kg of the substance were administered by gavage to 6-week-old SPF Sprague-Dawley rats for 90 days; each test group consisted of 10 male and 10 female rats. To determine the toxicity index, all parameters, including observation of clinical signs; measurement of body weight and food consumption; ophthalmic examination; urinalysis; electrolyte, hematological, and serum biochemical examinations; organ weights at necropsy; and gross and histopathological examinations, were assessed according to GLP standards. Based on the toxicity assessment criteria, the NOAEL of the test substance, 7%-GNANA, was determined to be 5,000 mg/kg/day for both male and female rats. No animal died in any group, including the control group, during the study period, and compared with the control group there were no significant test substance-related differences in general symptoms, body weight changes, food consumption, ophthalmic examination, urinalysis, hematological and serum biochemical examinations, or electrolyte and blood coagulation tests during the administration period (P<0.05). Regarding organ weights, food consumption, necropsy, and histopathological safety, kidney weight in males increased by up to 20% in the high-dose group (5,000 mg/kg/day) relative to the control group, but this test substance-related effect was judged minor. In female rats, reduced food consumption, increased kidney weight, and decreased thymus weight were observed in the high-dose group: kidney weight increased by 10.2% (left) and 8.9% (right) with slight dose dependency, and thymus weight decreased by 25.3%, but these were likewise judged minor test substance-related effects. At necropsy, a botryoid tumor was found on the ribs of one animal in the high-dose group, but it was concluded to have arisen spontaneously rather than from the test substance. Histopathological examination revealed lesions in the kidney, liver, spleen, and other organs in the low-dose group.
Because these lesions were considered incidental, spontaneous, or age-related, we examined whether any target organ showed clear effects attributable to the test substance. In conclusion, after feeding rats different concentrations of the test substance, only minor test substance-related effects were observed in the high-dose (5,000 mg/kg/day) group of both sexes, with no other significant test substance-related effects. The NOAEL of 7%-GNANA (product name: Helicobactrol) in male and female rats was therefore 5,000 mg/kg/day, and the substance was judged safe for ultimate use as an ingredient of health functional foods.

Prognostic Value of TNM Staging in Small Cell Lung Cancer (소세포폐암의 TNM 병기에 따른 예후)

  • Park, Jae-Yong;Kim, Kwan-Young;Chae, Sang-Cheol;Kim, Jeong-Seok;Kim, Kwon-Yeop;Park, Ki-Su;Cha, Seung-Ik;Kim, Chang-Ho;Kam, Sin;Jung, Tae-Hoon
    • Tuberculosis and Respiratory Diseases, v.45 no.2, pp.322-332, 1998
  • Background: Accurate staging is important for determining treatment modalities and predicting prognosis in patients with lung cancer. The simple two-stage system of the Veterans Administration Lung Cancer Study Group has been used for staging small cell lung cancer (SCLC) because treatment usually consists of chemotherapy with or without radiotherapy. However, this system does not accurately segregate patients into homogeneous prognostic groups. A variety of new staging systems have therefore been proposed as more intensive treatments, including intensive radiotherapy or surgery, enter clinical trials. We evaluated the prognostic importance of TNM staging, which has the advantage of providing a uniform, detailed classification of tumor spread, in patients with SCLC. Methods: The medical records of 166 patients diagnosed with SCLC between January 1989 and December 1996 were reviewed retrospectively. The influence of TNM stage on survival was analyzed in the 147 of these patients who had complete TNM staging data. Results: Three patients were classified as stage I/II, 15 as stage IIIa, 78 as stage IIIb, and 48 as stage IV. Survival rates at 1 and 2 years were as follows: stage I/II, 75% and 37.5%; stage IIIa, 46.7% and 25.0%; stage IIIb, 34.3% and 11.3%; and stage IV, 2.6% and 0%. The 2-year survival rates for the 84 patients who received chemotherapy (more than 2 cycles) with or without radiotherapy were: stage I/II, 37.5%; stage IIIa, 31.3%; stage IIIb, 13.5%; and stage IV, 0%. Overall outcome differed significantly according to TNM stage, whether or not patients received treatment. However, there was no significant difference between stage IIIa and stage IIIb, although median survival and the 2-year survival rate were higher in stage IIIa than in stage IIIb. Conclusion: These results suggest that the TNM staging system may be helpful for predicting the prognosis of patients with SCLC. (A hedged sketch of a stage-stratified survival analysis appears below.)
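
The survival figures above come from a stage-stratified analysis; as a hedged illustration only (not the authors' code or data), the sketch below shows how stage-wise survival estimates at 12 and 24 months can be produced with the lifelines library on a fabricated toy dataset.

```python
# Hedged illustration: stage-stratified Kaplan-Meier survival estimates of the
# kind summarized in the abstract. All patient records below are made up.
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical per-patient records: survival time (months), death indicator, TNM stage.
df = pd.DataFrame({
    "months": [30, 14, 9, 26, 11, 7, 4, 2, 18, 5],
    "death":  [1,  1,  1, 0,  1,  1, 1, 1, 0,  1],
    "stage":  ["I/II", "IIIa", "IIIb", "I/II", "IIIa",
               "IIIb", "IV", "IV", "IIIa", "IIIb"],
})

kmf = KaplanMeierFitter()
for stage, grp in df.groupby("stage"):
    kmf.fit(grp["months"], event_observed=grp["death"], label=stage)
    # Estimated survival probability at 12 and 24 months for this stage.
    print(stage, kmf.survival_function_at_times([12, 24]).round(3).tolist())
```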

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang;Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.23 no.3, pp.69-94, 2017
  • Growing demand for big data analysis has been driving the vigorous development of related technologies and tools, while advances in IT and the increasing penetration of smart devices produce ever larger amounts of data. Data analysis technology is consequently becoming popular, attempts to obtain insight through data analysis continue to increase, and big data analysis will only become more important across industries in the foreseeable future. Big data analysis has generally been performed by a small number of experts and delivered to those who request it. However, growing interest in big data analysis has stimulated programming education and the development of many analysis tools, so the entry barriers are gradually lowering and analysis is increasingly expected to be performed by the demanders themselves. At the same time, interest in unstructured data, and especially in text data, keeps increasing. New web platforms and techniques produce text data in large volumes, text analysis is actively attempted, and its results are used in many fields. Text mining embraces the various theories and techniques for text analysis; among them, topic modeling is one of the most widely used and studied. Topic modeling extracts the major issues from a large set of documents, identifies the documents corresponding to each issue, and provides them as clusters; it is regarded as very useful because it reflects the semantic content of documents. Traditional topic modeling is based on the distribution of key terms across the entire corpus, so the entire corpus must be analyzed at once to identify the topic of each document. This makes the analysis slow when topic modeling is applied to many documents and creates a scalability problem, an exponential increase in processing time with the number of analysis objects, which is especially noticeable when the documents are distributed across multiple systems or regions. A divide-and-conquer approach can mitigate these problems: a large collection is divided into sub-units and topics are derived by repeatedly applying topic modeling to each unit. This allows topic modeling on large collections with limited system resources, improves processing speed, and can significantly reduce analysis time and cost because documents can be analyzed where they reside, without first being combined. Despite these advantages, the approach has two major problems. First, the relationship between the local topics derived from each unit and the global topics of the entire collection is unclear: local topics can be identified within each unit, but global topics cannot. Second, a method for measuring the accuracy of such a procedure is needed; assuming the global topics are the ideal answer, the deviation of the local topics from the global topics must be measured.
These difficulties mean the approach has been studied less than other topic modeling methods. In this paper, we propose a topic modeling approach that addresses the two problems above. First, we divide the entire document collection (the global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by checking whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. An additional experiment confirms that the proposed methodology produces results similar to topic modeling over the entire collection, and we also propose a reasonable method for comparing the results of the two approaches. A minimal sketch of the local-to-global topic mapping idea follows.
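
The paper's exact RGS construction and topic-mapping procedure are not given in the abstract; the sketch below is only an assumption-laden illustration in which local topics are matched to the topics of a smaller "global" model by cosine similarity of their topic-word distributions, using scikit-learn's LDA on a toy corpus.

```python
# Assumption-only sketch: local topics mapped to "global" (RGS-like) topics by
# cosine similarity of topic-word distributions; corpus, topic counts, and the
# delegate-document step are simplified away.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

docs = ["stock market price rises", "market investors buy stock shares",
        "team wins football match", "football season match schedule"]

vec = CountVectorizer()
X = vec.fit_transform(docs)                    # shared vocabulary for every model

def topic_word_matrix(row_indices, n_topics=2):
    """Fit LDA on a subset of documents and return row-normalized topic-word weights."""
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(X[row_indices])
    return lda.components_ / lda.components_.sum(axis=1, keepdims=True)

global_topics = topic_word_matrix([0, 1, 2, 3])   # stands in for the reduced global set
local_topics = topic_word_matrix([0, 1])          # one local sub-cluster

# Map each local topic to its most similar global topic.
sim = cosine_similarity(local_topics, global_topics)
print({f"local_{i}": int(sim[i].argmax()) for i in range(sim.shape[0])})
```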

Development of a Traffic Accident Prediction Model and Determination of the Risk Level at Signalized Intersection (신호교차로에서의 사고예측모형개발 및 위험수준결정 연구)

  • 홍정열;도철웅
    • Journal of Korean Society of Transportation, v.20 no.7, pp.155-166, 2002
  • Since the 1990s, the number of traffic accidents at intersections has been increasing, which calls for more urgent measures to ensure intersection safety. This study set out to analyze the road conditions, traffic conditions, and traffic operation conditions at signalized intersections, to identify the elements that impair safety, and to develop a traffic accident prediction model that evaluates the safety of an intersection using the correlation between those elements and accidents. In developing a traffic accident prediction model for a signalized intersection, the focus was on suggesting appropriate traffic safety policies by addressing hazardous elements in advance and on enhancing intersection safety. The data were collected at intersections in Wonju city from January to December 2001 and consisted of the number of accidents and the road, traffic, and traffic operation conditions at each intersection. The collected data were first analyzed statistically, and the results identified the elements closely correlated with accidents: the area pattern, land use, bus stopping activity, parking and stopping on the road, total volume, turning volume, number of lanes, road width, intersection area, signal cycle, sight distance, and turning radius. These elements were used in a second correlation analysis; the significance level was 95% or higher in all cases, and there were few correlations between independent variables. The variables that affected the accident rate were the number of lanes, the turning radius, the sight distance, and the signal cycle, and these were used to develop a traffic accident prediction model formula considering their distribution. The model formula was compared with a general linear regression model for accuracy. In addition, domestic accident statistics were examined to analyze the distribution of accidents and classify intersections by risk level, and the Spearman rank correlation coefficient was applied to check whether the model was appropriate. The coefficient of determination was highly significant at 0.985, and the rank ordering of intersections by risk level was also appropriate. When the actual and predicted numbers of accidents were compared in terms of risk level, they agreed for about 80% of the intersections. A minimal sketch of this kind of model check appears below.
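
As a hedged illustration of the model-checking step described above (not the authors' model), the sketch below fits a simple linear regression of accident counts on the four significant variables named in the abstract and checks the predicted risk ranking with Spearman's rank correlation; all values are fabricated.

```python
# Illustrative only: accident-count regression plus a Spearman rank check of
# the predicted intersection risk ranking. Data are made up.
import numpy as np
from sklearn.linear_model import LinearRegression
from scipy.stats import spearmanr

# Hypothetical per-intersection data: lanes, turning radius (m), sight distance (m), cycle (s).
X = np.array([[4, 12, 80, 120], [6, 15, 60, 150], [2, 10, 100, 90],
              [8, 20, 50, 160], [4, 14, 70, 130], [6, 18, 65, 140]])
accidents = np.array([5, 9, 2, 14, 6, 10])       # observed yearly accident counts

model = LinearRegression().fit(X, accidents)
predicted = model.predict(X)

# Do the predicted and observed risk rankings of the intersections agree?
rho, p_value = spearmanr(accidents, predicted)
print(f"Spearman rho = {rho:.3f} (p = {p_value:.3f})")
```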

The Prediction of Export Credit Guarantee Accident using Machine Learning (기계학습을 이용한 수출신용보증 사고예측)

  • Cho, Jaeyoung;Joo, Jihwan;Han, Ingoo
    • Journal of Intelligence and Information Systems, v.27 no.1, pp.83-102, 2021
  • The government recently announced various policies for developing the big data and artificial intelligence fields, providing the public with a great opportunity through the disclosure of high-quality data held by public institutions. KSURE (Korea Trade Insurance Corporation) is a major public institution for financial policy in Korea and is strongly committed to backing export companies through various systems; nevertheless, there are still few realized business models based on big data analysis. In this situation, this paper aims to develop a new business model for ex-ante prediction of the likelihood of credit guarantee insurance accidents. We utilize internal data from KSURE, which supports export companies in Korea, apply machine learning models, and compare the performance of the predictive models Logistic Regression, Random Forest, XGBoost, LightGBM, and DNN (Deep Neural Network). For decades, researchers have tried to find better models for predicting bankruptcy, since ex-ante prediction is crucial for corporate managers, investors, creditors, and other stakeholders. Prediction of financial distress or bankruptcy originated with Smith (1930), Fitzpatrick (1932), and Merwin (1942). One of the most famous models is Altman's Z-score model (Altman, 1968), based on multiple discriminant analysis and still widely used in both research and practice; it uses five key financial ratios to predict the probability of bankruptcy in the next two years. Ohlson (1980) introduced a logit model to complement some limitations of the earlier models, and Elmer and Borowski (1988) developed and examined a rule-based automated system for the financial analysis of savings and loans. Since the 1980s, researchers in Korea have also examined prediction of financial distress or bankruptcy: Kim (1987) analyzed financial ratios and developed a prediction model; Han et al. (1995, 1996, 1997, 2003, 2005, 2006) constructed prediction models using various techniques, including artificial neural networks; Yang (1996) introduced multiple discriminant analysis and a logit model; and Kim and Kim (2001) used artificial neural networks for ex-ante prediction of insolvent enterprises. Since then, many scholars have tried to predict financial distress or bankruptcy more precisely with models such as Random Forest or SVM. One major distinction of our research from previous work is that we focus on the predicted probability of default for each case, not only on the overall classification accuracy of each model. Most of the predictive models in this paper achieve about 70% classification accuracy on the entire sample; specifically, the LightGBM model shows the highest accuracy of 71.1% and the Logit model the lowest of 69%. However, these results are open to multiple interpretations. In a business context, more emphasis must be placed on minimizing type 2 errors, which cause more harmful operating losses for the guaranty company. We therefore also compare classification accuracy after splitting the predicted probability of default into ten equal intervals, as sketched in the example after this abstract.
Examining the classification accuracy of each interval, the Logit model has the highest accuracy, 100%, for predicted default probabilities of 0~10%, but a relatively low accuracy of 61.5% for probabilities of 90~100%. In contrast, Random Forest, XGBoost, LightGBM, and DNN show more desirable results: higher accuracy for both the 0~10% and 90~100% intervals, but lower accuracy around the 50% interval. Regarding the distribution of samples across intervals, both LightGBM and XGBoost place a relatively large number of samples in the 0~10% and 90~100% intervals. Although Random Forest has an advantage in classification accuracy for a small number of cases, LightGBM or XGBoost may be more desirable because they classify a large number of cases into the two extreme intervals, even allowing for their relatively low classification accuracy elsewhere. Considering the importance of type 2 errors and total prediction accuracy, XGBoost and DNN show superior performance, followed by Random Forest and LightGBM, while logistic regression performs worst. Each predictive model nevertheless has a comparative advantage under particular evaluation standards; for instance, Random Forest shows almost 100% accuracy for samples expected to have a high probability of default. Collectively, a more comprehensive ensemble that combines multiple classification models through majority voting could maximize overall performance.
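
As a hedged illustration of the interval-wise comparison described above (not the paper's data or tuned models), the sketch below trains a LightGBM classifier on synthetic data, bins the predicted default probabilities into ten equal intervals, and reports accuracy and sample counts per interval.

```python
# Illustrative only: per-interval accuracy of predicted default probabilities.
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from lightgbm import LGBMClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = LGBMClassifier(random_state=0).fit(X_tr, y_tr)
proba = model.predict_proba(X_te)[:, 1]            # predicted probability of "default"

df = pd.DataFrame({"proba": proba, "true": y_te})
df["pred"] = (df["proba"] >= 0.5).astype(int)
df["correct"] = (df["pred"] == df["true"]).astype(int)
df["interval"] = pd.cut(df["proba"], bins=np.linspace(0, 1, 11), include_lowest=True)

# Accuracy and number of samples within each predicted-probability interval.
print(df.groupby("interval", observed=False)["correct"].agg(["mean", "size"]))
```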

Development of Porcine Pericardial Heterograft for Clinical Application (Microscopic Analysis of Various Fixation Methods) (돼지의 심낭, 판막을 이용한 이종이식 보철편의 개발(고정 방법에 따른 조직학적 분석))

  • Kim, Kwan-Chang;Choi, Chang-Hyu;Lee, Chang-Ha;Lee, Chul;Oh, Sam-Sae;Park, Seong-Sik;Kim, Woong-Han;Kim, Kyung-Hwan;Kim, Yong-Jiin
    • Journal of Chest Surgery, v.41 no.3, pp.295-304, 2008
  • Background: Because the supply of autologous and homograft tissue for treating cardiac disease is limited, various experimental trials for the development of bioprosthetic devices are actively underway. In this study, porcine bioprostheses treated with glutaraldehyde (GA), ethanol, or sodium dodecyl sulfate (SDS) were examined with light microscopy and transmission electron microscopy for mechanical and physical imperfections before implantation. Material and Method: 1) Porcine pericardium, aortic valve, and pulmonary valve were examined using light microscopy and JEM-100CX II transmission electron microscopy and compared with human pericardium and commercially produced heterografts. 2) Sections from six treatment groups (GA-Ethanol, Ethanol-GA, SDS only, SDS-GA, Ethanol-SDS-GA, and SDS-Ethanol-GA) were observed using the same methods. Result: 1) Porcine pericardium was composed of a serosal layer, fibrosa, and epicardial connective tissue. Treatment with GA, ethanol, or SDS had little influence on the collagen skeleton of porcine pericardium, except in the case of SDS pretreatment, and there was no alteration in the collagen skeleton compared with commercially produced heterografts. 2) Porcine aortic valve was composed of lamina fibrosa, lamina spongiosa, and lamina ventricularis. Treatment with GA, ethanol, or SDS had little influence on these three layers and the collagen skeleton of the porcine aortic valve, except in the case of SDS pretreatment, and there were no alterations in the three layers or the collagen skeleton compared with commercially produced heterografts. Conclusion: Little physical or mechanical damage to porcine bioprosthesis structures occurred during the various glutaraldehyde fixation processes combined with anti-calcification or decellularization treatments. However, SDS treatment preceding GA fixation changed the collagen fibers into a slightly condensed form that appeared degraded on transmission electron micrographs. The methods and conditions for sodium dodecyl sulfate (SDS) treatment therefore need to be modified.

Extension Method of Association Rules Using Social Network Analysis (사회연결망 분석을 활용한 연관규칙 확장기법)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems, v.23 no.4, pp.111-126, 2017
  • Recommender systems based on association rule mining contribute significantly to sellers' sales by reducing the time consumers spend searching for the products they want. Recommendations based on the frequency of transactions such as orders can effectively identify the products that are statistically marketable among many products. A product with high sales potential, however, can be omitted from the recommendations if it records an insufficient number of transactions at the beginning of its sale. Products missing from the associated recommendations lose exposure to consumers, which leads to fewer transactions; diminished transactions in turn create a vicious circle of lost opportunities to be recommended, so initial sales are likely to remain stagnant for a certain period. Products susceptible to fashion or seasonality, such as clothing, may be greatly affected. This study aimed to expand association rules so that the recommendation list also includes products whose initial transaction frequency is low despite their high sales potential. The particular purpose is to predict the strength of the direct connection between two unconnected items from the properties of the paths located between them. An association between two items revealed in transactions can be interpreted as an interaction between them, which can be expressed as a link in a social network whose nodes are items. The first step calculates the centralities of the nodes lying on the paths that indirectly connect two nodes with no direct connection. The next step identifies the number of such paths and the shortest among them. These extracted measures are used as independent variables in a regression analysis to predict the future connection strength between the nodes; the strength of the connection between the two nodes, defined by the number of nodes between them, is measured after a certain period of time. The regression results confirm that the number of paths between two products, the length of the shortest path, and the number of neighboring items connected to the products are significantly related to their potential connection strength. This study used actual order transaction data collected over three months, from February to April 2016, from an online commerce company. To reduce the complexity of the analysis as the network grows, the analysis was performed only on miscellaneous goods. Two consecutively purchased items were chosen from each customer's transactions to obtain an antecedent-consequent pair, which provides the link needed to constitute a social network, with the direction of the link determined by the order in which the goods were purchased. Excluding the last ten days of the data collection period, the social network of associated items was built for the extraction of independent variables, and the model predicts the links to be connected in the following ten days from these explanatory variables. Of the 5,711 previously unconnected links, 611 were newly connected during the last ten days. In experiments, the proposed model demonstrated excellent predictions: of the 571 links it predicted, 269 were confirmed to have been connected, 4.4 times more than the average of 61 that would be found without any prediction model.
This study is expected to be useful for industries whose new products launch quickly with short life cycles, since their exposure time is critical. It can also be used to detect diseases that are rarely found in the early stages of medical treatment because of their low incidence. Since the complexity of social network analysis is sensitive to the number of nodes and links in the network, this study was conducted on a single category of miscellaneous goods; future research should consider that this condition may limit the opportunity to detect unexpected associations between products belonging to different categories. A minimal sketch of the path-based features used for prediction appears below.
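
As a hedged illustration of the feature-extraction step described above (the toy network, the cutoff, and the feature names are assumptions, not the paper's), the sketch below computes path-based features for an unconnected item pair with networkx.

```python
# Illustrative only: path-based features (number of connecting paths, shortest
# path length, neighbor counts) for an item pair with no direct link.
import networkx as nx

# Toy item network: a directed link A -> B means B was bought right after A.
G = nx.DiGraph()
G.add_edges_from([("socks", "belt"), ("belt", "wallet"), ("socks", "gloves"),
                  ("gloves", "wallet"), ("wallet", "umbrella")])

def pair_features(G, u, v, cutoff=4):
    """Features for predicting whether the currently unconnected link u -> v will appear."""
    paths = list(nx.all_simple_paths(G, u, v, cutoff=cutoff))
    shortest = nx.shortest_path_length(G, u, v) if nx.has_path(G, u, v) else None
    return {
        "n_paths": len(paths),                 # number of indirect paths u -> v
        "shortest_path": shortest,             # length of the shortest such path
        "out_neighbors_u": G.out_degree(u),    # items bought right after u
        "in_neighbors_v": G.in_degree(v),      # items bought right before v
    }

print(pair_features(G, "socks", "wallet"))     # "socks" and "wallet" are not directly linked
```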

Risk Factor Analysis for Operative Death and Brain Injury after Surgery of Stanford Type A Aortic Dissection (스탠포드 A형 대동맥 박리증 수술 후 수술 사망과 뇌손상의 위험인자 분석)

  • Kim Jae-Hyun;Oh Sam-Sae;Lee Chang-Ha;Baek Man-Jong;Hwang Seong-Wook;Lee Cheul;Lim Hong-Gook;Na Chan-Young
    • Journal of Chest Surgery, v.39 no.4 s.261, pp.289-297, 2006
  • Background: Surgery for Stanford type A aortic dissection shows a high operative mortality rate and frequent postoperative brain injury. This study was designed to identify the risk factors for operative mortality and brain injury after surgical repair in patients with type A aortic dissection. Material and Method: One hundred and eleven patients with type A aortic dissection who underwent surgical repair between February 1995 and January 2005 were reviewed retrospectively; there were 99 acute dissections and 12 chronic dissections. Univariate and multivariate analyses were performed to identify risk factors for operative mortality and brain injury. Result: Hospital mortality occurred in 6 patients (5.4%). Permanent neurologic deficit occurred in 8 patients (7.2%) and transient neurologic deficit in 4 (3.6%). Overall 1-, 5-, and 7-year survival rates were 94.4%, 86.3%, and 81.5%, respectively. Univariate analysis revealed four statistically significant predictors of mortality: previous chronic type III dissection, emergency operation, intimal tear in the aortic arch, and deep hypothermic circulatory arrest (DHCA) for more than 45 minutes. Multivariate analysis identified previous chronic type III aortic dissection (odds ratio (OR) 52.2) and DHCA for more than 45 minutes (OR 12.0) as risk factors for operative mortality, and pathological obesity (OR 12.9) and total arch replacement (OR 8.5) as statistically significant risk factors for brain injury. Conclusion: The results of surgical repair for Stanford type A aortic dissection were good in terms of the mortality rate, the incidence of neurologic injury, and long-term survival. Surgery for type A aortic dissection in patients with a history of chronic type III dissection may carry an increased risk of operative mortality; special care should be taken, and efforts to reduce the hypothermic circulatory arrest time should always be kept in mind. Surgeons planning to operate on patients with pathological obesity, or planning total arch replacement, should seriously consider the higher risk of brain injury. (A brief illustrative sketch of odds ratio estimation by multivariate logistic regression is shown below.)
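
As a hedged illustration only (the data are fabricated and do not reproduce the study's results), the sketch below shows how adjusted odds ratios of the kind reported above can be obtained from a multivariate logistic regression with statsmodels.

```python
# Illustrative only: adjusted odds ratios from a multivariate logistic regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "chronic_type3": rng.integers(0, 2, n),   # history of chronic type III dissection
    "dhca_over_45":  rng.integers(0, 2, n),   # DHCA longer than 45 minutes
})
# Fabricated outcome loosely driven by the two risk factors.
logit_p = -3 + 2.5 * df["chronic_type3"] + 1.5 * df["dhca_over_45"]
df["mortality"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["chronic_type3", "dhca_over_45"]])
fit = sm.Logit(df["mortality"], X).fit(disp=0)

# Exponentiated coefficients are the adjusted odds ratios.
print(np.exp(fit.params).round(2))
```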

An Empirical Study on Statistical Optimization Model for the Portfolio Construction of Sponsored Search Advertising(SSA) (키워드검색광고 포트폴리오 구성을 위한 통계적 최적화 모델에 대한 실증분석)

  • Yang, Hognkyu;Hong, Juneseok;Kim, Wooju
    • Journal of Intelligence and Information Systems, v.25 no.2, pp.167-194, 2019
  • This research starts from four basic issues confronted when making keyword bidding decisions: incentive incompatibility, limited information, myopia, and the choice of decision variable. To make these concepts concrete, four framework approaches are designed: a strategic approach for incentive incompatibility, a statistical approach for limited information, alternative optimization for myopia, and a new model approach for the decision variable. The purpose of this research is to propose, through empirical tests, a statistical optimization model for constructing a Sponsored Search Advertising (SSA) portfolio from the sponsor's perspective that can be used in portfolio decision making. Previous research formulates CTR estimation models using CPC, Rank, Impressions, CVR, and so on, individually or collectively, as independent variables. However, many of these variables are not controllable in keyword bidding; only CPC and Rank can be used as decision variables in the bidding system. The classical SSA model is designed on the assumption that CPC is the decision variable and CTR is the response variable, but it faces many hurdles in CTR estimation. The main problem is the uncertainty between CPC and Rank: in keyword bidding, CPC fluctuates continuously even at the same Rank. This uncertainty raises questions about the credibility of CTR estimates and creates practical management problems. Sponsors make keyword bidding decisions under limited information, so a strategic portfolio approach based on statistical models is necessary. To solve the problems of the classical SSA model, the new SSA model framework is designed on the assumption that Rank is the decision variable. Rank has been proposed in many papers as the best decision variable for predicting CTR, and most search engine platforms provide options and algorithms that make it possible to bid by Rank, so sponsors can participate in keyword bidding with Rank. This paper therefore tests the validity of the new SSA model and its applicability to constructing an optimal keyword bidding portfolio. The research process is as follows: to perform the optimization analysis for constructing the keyword portfolio under the new SSA model, this study proposes criteria for categorizing keywords, selects representative keywords for each category, shows the non-linear relationship, screens scenarios for CTR and CPC estimation, selects the best-fit model through a goodness-of-fit (GOF) test, formulates the optimization models, confirms spillover effects, and suggests a modified optimization model reflecting spillover along with some strategic recommendations. Optimization models using these CTR/CPC estimation models are tested empirically with the objectives of (1) maximizing CTR (the CTR optimization model) and (2) maximizing expected profit reflecting CVR (the CVR optimization model). Both the CTR and CVR optimization tests show significant improvements, confirming that the suggested SSA model is valid for constructing a keyword portfolio using the CTR/CPC estimation models proposed in this study (a minimal sketch of a Rank-based portfolio optimization appears after this abstract). However, one critical problem is found in the CVR optimization model: important keywords are excluded from the keyword portfolio because of the myopia of their immediately low profit at present.
To solve this problem, a Markov chain analysis is carried out and the concepts of Core Transit Keyword (CTK) and Expected Opportunity Profit (EOP) are introduced. A revised CVR optimization model is proposed, tested, and shown to be valid for constructing the portfolio. The strategic guidelines and insights are as follows: brand keywords are usually dominant in almost every respect, including CTR, CVR, and expected profit, but generic keywords turn out to be the CTKs with spillover potential that may increase consumer awareness and lead consumers to brand keywords, which is why generic keywords should be the focus of keyword bidding. The contributions of the thesis are to propose a novel SSA model based on Rank as the decision variable, to propose managing the keyword portfolio by categories according to keyword characteristics, to propose statistical modeling and management based on Rank in constructing the keyword portfolio, to perform empirical tests and propose new strategic guidelines focusing on the CTK, and to propose a modified CVR optimization objective function reflecting the spillover effect instead of the previous expected profit models.
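
As a hedged illustration only, the sketch below shows a toy Rank-based portfolio optimization in the spirit of the model described above: the per-keyword CTR(Rank) and CPC(Rank) estimates, impression counts, and budget are all assumed values, and the brute-force search over rank combinations is not the paper's formulation.

```python
# Illustrative only: choose a Rank per keyword to maximize expected clicks
# subject to a budget, given hypothetical CTR(rank) and CPC(rank) estimates.
from itertools import product

# Hypothetical per-keyword estimates: rank -> (CTR, CPC in currency units).
keywords = {
    "brand_kw":   {1: (0.080, 900), 2: (0.055, 700), 3: (0.035, 500)},
    "generic_kw": {1: (0.030, 600), 2: (0.022, 450), 3: (0.015, 300)},
}
impressions = {"brand_kw": 10_000, "generic_kw": 25_000}
budget = 1_500_000

best = None
for choice in product(*[kw.items() for kw in keywords.values()]):
    # "choice" holds one (rank, (ctr, cpc)) selection per keyword.
    clicks = cost = 0.0
    for name, (rank, (ctr, cpc)) in zip(keywords, choice):
        kw_clicks = impressions[name] * ctr
        clicks += kw_clicks
        cost += kw_clicks * cpc
    if cost <= budget and (best is None or clicks > best[0]):
        best = (clicks, cost, {name: r for name, (r, _) in zip(keywords, choice)})

print(best)   # (expected clicks, expected cost, chosen rank per keyword)
```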