• Title/Summary/Keyword: Reference Value


Trend and Forecast of the Medical Care Utilization Rate, the Medical Expense per Case and the Treatment Days per Case in Medical Insurance Program for Employees by ARIMA Model (ARIMA모델에 의한 피용자(被傭者) 의료보험(醫療保險) 수진율(受診率), 건당진료비(件當診療費) 및 건당진료일수(件當診療日數)의 추이(推移)와 예측(豫測))

  • Jang, Kyu-Pyo;Kam, Sin;Park, Jae-Yong
    • Journal of Preventive Medicine and Public Health
    • /
    • v.24 no.3 s.35
    • /
    • pp.441-458
    • /
    • 1991
  • The objective of this study was to provide basic reference data for a stabilization scheme of medical insurance benefits by forecasting the medical care utilization rate, the medical expense per case, and the treatment days per case in the medical insurance programs for government employees & private school teachers and for industrial workers. To this end, the study applied Box-Jenkins time series analysis (ARIMA models) to monthly statistical data from January 1979 to December 1989 for both insurance programs. The results are as follows. The ARIMA model of the medical care utilization rate in the program for government employees & private school teachers was ARIMA(1, 1, 1), as was that for outpatients in the program for industrial workers, while that for inpatients in the program for industrial workers was ARIMA(1, 0, 1). The ARIMA models of the medical expense per case for government employees & private school teachers and for outpatients in the program for industrial workers were ARIMA(1, 1, 0), while that for inpatients in the program for industrial workers was ARIMA(1, 0, 1). The ARIMA models of the treatment days per case for both programs were ARIMA(1, 1, 1). The forecast medical care utilization rate for inpatients in the program for government employees & private school teachers was 0.0061 at Dec. 1989 and 0.0066 at Dec. 1994, and for outpatients 0.280 at Dec. 1989 and 0.294 at Dec. 1994; for industrial workers it was 0.0052 at Dec. 1989 and 0.0056 at Dec. 1994 for inpatients, and 0.203 at Dec. 1989 and 0.215 at Dec. 1994 for outpatients. The forecast medical expense per case for inpatients in the program for government employees & private school teachers was 332,751 at Dec. 1989 and 354,511 at Dec. 1994, and for outpatients 11,925 at Dec. 1989 and 12,904 at Dec. 1994; for industrial workers it was 281,835 at Dec. 1989 and 293,973 at Dec. 1994 for inpatients, and 11,599 at Dec. 1989 and 11,585 at Dec. 1994 for outpatients. The forecast treatment days per case for inpatients in the program for government employees & private school teachers was 13.79 at Dec. 1989 and 13.85 at Dec. 1994, and for outpatients 5.03 at Dec. 1989 and 5.00 at Dec. 1994; for industrial workers it was 12.23 at Dec. 1989 and 12.85 at Dec. 1994 for inpatients, and 4.61 at Dec. 1989 and 4.60 at Dec. 1994 for outpatients.
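
As an illustration of the Box-Jenkins procedure summarized above, the following minimal Python sketch fits an ARIMA(1, 1, 1) model to a monthly series and produces a five-year forecast with statsmodels. The series is synthetic and the workflow is an assumption for illustration only, not the study's data or code.

```python
# Minimal sketch: fit an ARIMA(1, 1, 1) to a monthly utilization-rate series and
# forecast five years ahead, in the spirit of the Box-Jenkins approach described above.
# The series below is synthetic, not the paper's insurance data.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Illustrative monthly series, Jan 1979 - Dec 1989 (132 points)
idx = pd.date_range("1979-01", periods=132, freq="MS")
rng = np.random.default_rng(0)
y = pd.Series(0.20 + 0.0006 * np.arange(132) + rng.normal(0, 0.005, 132), index=idx)

model = ARIMA(y, order=(1, 1, 1))     # (p, d, q) = (1, 1, 1), one of the orders identified in the study
fit = model.fit()
print(fit.summary())

# Forecast 60 months (through Dec 1994) with 95% confidence intervals
forecast = fit.get_forecast(steps=60)
print(forecast.predicted_mean.tail())
print(forecast.conf_int().tail())
```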


A Study on the Prediction of the Korean NPL Market Return (한국 NPL시장 수익률 예측에 관한 연구)

  • Lee, Hyeon Su;Jeong, Seung Hwan;Oh, Kyong Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.123-139
    • /
    • 2019
  • The Korean NPL market was formed by the government and foreign capital shortly after the 1997 IMF crisis. However, the market has a short history, as bad debt began to increase again only after the global financial crisis of 2009 due to the recession in the real economy. NPL has become a major investment in recent years, as investment capital from the domestic capital market began to enter the NPL market in earnest. Although the domestic NPL market has received considerable attention due to its recent overheating, research on it remains scarce because the history of capital market investment in the domestic NPL market is short. In addition, decision-making based on more scientific and systematic analysis is required because of declining profitability and price fluctuations driven by swings in the real estate business. In this study, we propose a prediction model that can determine whether the benchmark yield will be achieved, using NPL market data in line with market demand. To build the model, we used Korean NPL data covering about four years, from December 2013 to December 2017. The total number of properties in the data was 2,291. As independent variables, only variables related to the dependent variable were selected from the 11 variables describing the characteristics of the real estate. For variable selection, one-to-one t-tests, stepwise logistic regression, and decision trees were applied, yielding seven independent variables: purchase year, SPC (Special Purpose Company), municipality, appraisal value, purchase cost, OPB (Outstanding Principal Balance), and HP (Holding Period). The dependent variable is a binary variable indicating whether the benchmark rate of return is reached. This is because a model predicting a binary variable is more accurate than one predicting a continuous variable, and this accuracy is directly related to the effectiveness of the model. Moreover, for a special purpose company the main concern is whether or not to purchase a property, so knowing whether a certain level of return will be achieved is sufficient for the decision. For the dependent variable, we constructed and compared predictive models using different threshold values to ascertain whether 12%, the standard rate of return used in the industry, is a meaningful reference value. As a result, the average hit ratio of the predictive model built with the dependent variable defined by the 12% standard rate of return was the best, at 64.60%. To propose an optimal prediction model based on the chosen dependent variable and the 7 independent variables, we built prediction models with five methodologies, namely discriminant analysis, logistic regression, decision tree, artificial neural network, and a genetic algorithm linear model, and compared them. To do this, 10 sets of training and testing data were extracted using 10-fold validation. After building the models on these data, the hit ratio of each set was averaged and performance was compared. As a result, the average hit ratios of the prediction models built with discriminant analysis, logistic regression, decision tree, artificial neural network, and the genetic algorithm linear model were 64.40%, 65.12%, 63.54%, 67.40%, and 60.51%, respectively.
It was confirmed that the model using the artificial neural network is the best. This study shows that it is effective to utilize the 7 independent variables and an artificial neural network prediction model in the future NPL market. The proposed model predicts in advance whether the 12% return on new properties will be achieved, which will help special purpose companies make investment decisions. Furthermore, we anticipate that the NPL market will become more liquid as transactions proceed at appropriate prices.
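
The comparison workflow described in this abstract could be sketched roughly as below: 10-fold cross-validation of several classifiers on a binary "12% benchmark reached" target with scikit-learn. The file name and feature columns are hypothetical placeholders, only three of the five methodologies are shown, and the sketch is not the authors' implementation.

```python
# Sketch of a 10-fold cross-validated comparison of classifiers on a binary target
# ("12% benchmark return reached or not"). Column and file names are hypothetical.
import pandas as pd
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

FEATURES = ["purchase_year", "spc", "municipality", "appraisal_value",
            "purchase_cost", "opb", "holding_period"]           # hypothetical column names
df = pd.read_csv("npl_data.csv")                                # hypothetical file
X = df[FEATURES]
y = (df["realized_return"] >= 0.12).astype(int)                 # 12% industry benchmark as the cut-off

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(max_depth=5),
    "neural network": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)                 # scale features, then classify
    scores = cross_val_score(pipe, X, y, cv=cv, scoring="accuracy")
    print(f"{name}: mean hit ratio = {scores.mean():.4f}")
```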

Analysis of HBeAg and HBV DNA Detection in Hepatitis B Patients Treated with Antiviral Therapy (항 바이러스 치료중인 B형 간염환자에서 HBeAg 및 HBV DNA 검출에 관한 분석)

  • Cheon, Jun Hong;Chae, Hong Ju;Park, Mi Sun;Lim, Soo Yeon;Yoo, Seon Hee;Lee, Sun Ho
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.23 no.1
    • /
    • pp.35-39
    • /
    • 2019
  • Purpose: Hepatitis B virus (HBV) infection is a major worldwide public health problem and is known as a major cause of chronic hepatitis, liver cirrhosis, and liver cancer. Serologic tests for hepatitis B virus are essential for diagnosing and treating these diseases. In addition, with the development of molecular diagnostics, the detection of HBV DNA in serum diagnoses HBV infection and is recognized as an important indicator for assessing the response to antiviral treatment. We performed HBeAg assays using an immunoradiometric assay (IRMA) and a chemiluminescent microparticle immunoassay (CMIA) in hepatitis B patients treated with antiviral agents, and measured and compared the detection rate of HBV DNA in serum by real-time polymerase chain reaction (RT-PCR). Materials and Methods: The HBeAg serologic test and HBV DNA quantification were conducted on 270 hepatitis B patients undergoing antiviral treatment after a diagnosis of hepatitis B virus infection. Two serologic tests (IRMA, CMIA) with different detection principles were applied for the HBeAg test. Serum HBV DNA was quantitatively measured by real-time PCR using the Abbott m2000 System. Results: The detection rate of HBeAg was 24.1% (65/270) for IRMA and 82.2% (222/270) for CMIA. The detection rate of serum HBV DNA by real-time RT-PCR was 29.3% (79/270). The measured serum HBV DNA concentration was 4.8×10^7 ± 1.9×10^8 IU/mL (mean ± SD), with a minimum of 16 IU/mL and a maximum of 1.0×10^9 IU/mL; the reference value for the quantitative detection limit is 15 IU/mL. The detection rates and concentrations of HBV DNA by group, according to the HBeAg serologic (IRMA, CMIA) results, were as follows: 1) Group I (IRMA negative, CMIA positive, N=169): HBV DNA detection rate 17.7% (30/169), 6.8×10^5 ± 1.9×10^6 IU/mL; 2) Group II (IRMA positive, CMIA positive, N=53): HBV DNA detection rate 62.3% (33/53), 1.1×10^8 ± 2.8×10^8 IU/mL; 3) Group III (IRMA negative, CMIA negative, N=36): HBV DNA detection rate 36.1% (13/36), 3.0×10^5 ± 1.1×10^6 IU/mL; 4) Group IV (IRMA positive, CMIA negative, N=12): HBV DNA detection rate 25% (3/12), 1.3×10^3 ± 1.1×10^3 IU/mL. Conclusion: The HBeAg detection rate differed greatly between the two serologic tests. This difference is attributed to several factors, such as the characteristics of the antibodies and epitopes used in each assay kit and the HBV genotype. The detection rates and concentrations of HBV DNA by serologically classified group confirmed the highest detection rate and concentration in Group II (IRMA positive, CMIA positive, N=53).
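
The group-wise tabulation above could be reproduced with a small pandas sketch: classify patients by their IRMA and CMIA results and summarize the HBV DNA detection rate and concentration per group. The file and column names are hypothetical; only the 15 IU/mL detection limit and the group definitions come from the abstract.

```python
# Sketch of the group-wise summary described above, assuming a table with hypothetical
# boolean columns irma_pos / cmia_pos and a numeric column hbv_dna_iu_ml per patient.
import pandas as pd

df = pd.read_csv("hbv_patients.csv")            # hypothetical file and columns

def group_label(row):
    if not row.irma_pos and row.cmia_pos:
        return "I"                              # Group I : IRMA negative, CMIA positive
    if row.irma_pos and row.cmia_pos:
        return "II"                             # Group II : IRMA positive, CMIA positive
    if not row.irma_pos and not row.cmia_pos:
        return "III"                            # Group III: IRMA negative, CMIA negative
    return "IV"                                 # Group IV : IRMA positive, CMIA negative

df["group"] = df.apply(group_label, axis=1)
df["dna_detected"] = df["hbv_dna_iu_ml"] >= 15  # quantitative detection limit from the text

summary = df.groupby("group").agg(
    n=("dna_detected", "size"),
    detection_rate=("dna_detected", "mean"),
    mean_dna=("hbv_dna_iu_ml", "mean"),
    sd_dna=("hbv_dna_iu_ml", "std"),
)
print(summary)
```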

A Study on People Counting in Public Metro Service using Hybrid CNN-LSTM Algorithm (Hybrid CNN-LSTM 알고리즘을 활용한 도시철도 내 피플 카운팅 연구)

  • Choi, Ji-Hye;Kim, Min-Seung;Lee, Chan-Ho;Choi, Jung-Hwan;Lee, Jeong-Hee;Sung, Tae-Eung
    • Journal of Intelligence and Information Systems
    • /
    • v.26 no.2
    • /
    • pp.131-145
    • /
    • 2020
  • In line with the trend of industrial innovation, IoT technology, utilized in a variety of fields, is emerging as a key element in the creation of new business models and the provision of user-friendly services through its combination with big data. The data accumulated from Internet-of-Things (IoT) devices is being used in many ways to build convenience-based smart systems, as it can provide customized intelligent services through analysis of user environments and patterns. Recently it has been applied to innovation in the public domain, for example in smart cities and smart transportation, such as solving traffic and crime problems using CCTV. In particular, when planning underground services or establishing a passenger-flow control information system to enhance the convenience of citizens and commuters amid the congestion of public transportation such as subways and urban railways, it is necessary to comprehensively consider both the ease of securing real-time service data and the stability of security. However, previous studies that utilize image data face limitations from privacy issues and from degraded object-detection performance under abnormal conditions. The IoT device-based sensor data used in this study is free from privacy issues because it does not identify individuals, and it can be effectively utilized to build intelligent public services for unspecified people. We utilize IoT-based infrared sensor devices for an intelligent pedestrian tracking system in a metro service which many people use on a daily basis, and the temperature data measured by the sensors are transmitted in real time. The experimental environment for collecting data detected in real time from the sensors was established at the equally spaced midpoints of a 4×4 grid in the ceiling of subway entrances where actual passenger flow is high, and it measured the temperature change for objects entering and leaving the detection spots. The measured data went through preprocessing in which reference values for the 16 areas are set and the differences between the temperatures in the 16 distinct areas and their reference values per unit of time are calculated. This corresponds to a methodology that maximizes the signal of movement within the detection area. In addition, the values were scaled up by a factor of 10 in order to reflect temperature differences between areas more sensitively; for example, if the temperature collected from a sensor at a given time was 28.5℃, the analysis used the value 285. As described above, the data collected from the sensors have the characteristics of both time series data and image data with 4×4 resolution. Reflecting the characteristics of the measured and preprocessed data, we propose a hybrid algorithm that combines CNN, with its superior performance for image classification, and LSTM, which is especially suitable for analyzing time series data, referred to as CNN-LSTM (Convolutional Neural Network-Long Short Term Memory). In this study, the CNN-LSTM algorithm is used to predict the number of people passing through one of the 4×4 detection areas.
We verified the validity of the proposed model through a performance comparison with other artificial intelligence algorithms, namely Multi-Layer Perceptron (MLP), Long Short Term Memory (LSTM), and RNN-LSTM (Recurrent Neural Network-Long Short Term Memory). As a result of the experiment, the proposed CNN-LSTM hybrid model showed the best predictive performance compared to MLP, LSTM, and RNN-LSTM. By utilizing the proposed devices and models, it is expected that various metro services, such as real-time monitoring of public transport facilities and congestion-based emergency response services, can be provided without legal issues concerning personal information. However, the data were collected from only one side of the entrances and over a short period of time, so there remains the limitation that application in other environments needs to be verified. In the future, the reliability of the proposed model is expected to improve if experimental data are collected in more varied environments or if the training data are extended with measurements from other sensors.
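
A minimal sketch of the preprocessing and of a CNN-LSTM hybrid of the kind described above is given below, using Keras. The sequence length, layer sizes, and variable names are assumptions for illustration; only the 4×4 frame shape, the reference-value subtraction, and the ×10 scaling follow the abstract.

```python
# Sketch of the preprocessing (scaled difference from per-cell reference values) and a
# TimeDistributed Conv2D -> LSTM hybrid for sequences of 4x4 temperature frames.
# Hyperparameters are illustrative assumptions, not the authors' settings.
import tensorflow as tf

TIMESTEPS = 20                                   # assumed sequence length

def preprocess(frames, reference):
    """frames: (n, 4, 4) raw temperatures; reference: (4, 4) per-cell baseline.
    Returns the difference from the reference with values scaled by 10 (e.g. 28.5 -> 285)."""
    return frames * 10.0 - reference * 10.0

model = tf.keras.Sequential([
    tf.keras.Input(shape=(TIMESTEPS, 4, 4, 1)),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Conv2D(16, (2, 2), activation="relu")),
    tf.keras.layers.TimeDistributed(tf.keras.layers.Flatten()),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1, activation="relu"),  # predicted passenger count in one detection area
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```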

Analysis of Quantitative Indices in Tl-201 Myocardial Perfusion SPECT: Comparison of 4DM, QPS, and ECT Program (Tl-201 심근 관류 SPECT에서 4DM, QPS, ECT 프로그램의 정량적 지표 비교 분석)

  • Lee, Dong-Hun;Shim, Dong-Oh;Yoo, Hee-Jae
    • The Korean Journal of Nuclear Medicine Technology
    • /
    • v.13 no.3
    • /
    • pp.67-75
    • /
    • 2009
  • Purpose: It has been reported that the various programs used for the quantitative assessment of Tl-201 myocardial perfusion SPECT differ in their data analysis methods. Therefore, discrepancies in measured values between programs can be expected even when data from the same patient are analyzed quantitatively. Using quantitative indices that represent the degree of myocardial perfusion defect, we aimed to determine the correlation among three myocardial perfusion analysis programs, 4DM (4DMSPECT), QPS (Quantitative Perfusion SPECT), and ECT (Emory Cardiac Toolbox), which are commonly used in departments of nuclear medicine. Materials and Methods: We analyzed 145 patients who underwent Tl-201 gated myocardial perfusion SPECT in the department of nuclear medicine at Asan Medical Center from December 1, 2008 to February 14, 2008, sorted into a normal group and an abnormal group. The normal group consisted of 80 patients (male/female = 38/42, age 65.1 ± 9.9) with a low probability of cardiovascular disease, and the abnormal group consisted of 65 patients (male/female = 45/20, age 63.0 ± 8.7) who were diagnosed with cardiovascular disease showing a reversible or fixed perfusion defect on myocardial perfusion SPECT. Using the 4DM, QPS, and ECT programs, the total defect extent (TDE) for the LAD, LCX, and RCA territories and the summed stress score (SSS) were analyzed for their correlations and compared statistically with the paired t-test for the quantitative indices from each group. Results: For all 145 patients, the correlations of 4DM:QPS, QPS:ECT, and ECT:4DM were 0.84, 0.86, and 0.82 for SSS and 0.87, 0.84, and 0.87 for TDE, and both indices showed good correlation. The paired t-test and Bland-Altman analysis showed no statistically significant difference for the QPS:ECT comparison of mean SSS and TDE, whereas the 4DM:QPS and ECT:4DM comparisons showed statistically significant differences for both SSS and TDE. In the abnormal group (65 patients), the correlations of 4DM:QPS, QPS:ECT, and ECT:4DM were 0.72, 0.72, and 0.70 for SSS and 0.77, 0.70, and 0.77 for TDE, again showing good correlation. In this group, the paired t-test and Bland-Altman analysis showed no statistically significant difference for the QPS:ECT comparison of SSS (p=0.89) and TDE (p=0.23), while the 4DM:QPS and ECT:4DM comparisons showed statistically significant differences for SSS and TDE (p<0.01). In the normal group (80 patients), the paired t-test and Bland-Altman analysis showed no statistically significant difference for the QPS:ECT comparison of SSS (p=0.95) and TDE (p=0.73), and the 4DM:QPS and ECT:4DM comparisons showed statistically significant differences for SSS and TDE (p<0.01). Conclusions: The perfusion defects on Tl-201 myocardial perfusion SPECT were analyzed not only in patients with cardiovascular disease but also in patients with a low probability of cardiovascular disease. Across all groups, the mean TDE and SSS from 4DM were lower than those from the QPS and ECT programs. The programs showed good correlations with one another, but the comparisons revealed statistically significant differences; given the large differences between programs in the Bland-Altman analysis despite the good correlations, the analysis values from the different programs cannot be used interchangeably in clinical practice.
However, these results are expected to serve as useful reference material for clinical interpretation.
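
The pairwise comparison statistics described above could be computed as in the following sketch: Pearson correlation, paired t-test, and Bland-Altman limits of agreement for one index (e.g. SSS) measured by two programs. The per-patient arrays here are synthetic placeholders, not the study's data.

```python
# Sketch of a pairwise program comparison: correlation, paired t-test, and Bland-Altman
# limits of agreement for one quantitative index measured by two programs.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sss_qps = rng.normal(10, 5, 65).clip(0)          # synthetic SSS values labeled "QPS"
sss_ect = (sss_qps + rng.normal(0, 2, 65)).clip(0)   # synthetic SSS values labeled "ECT"

r, _ = stats.pearsonr(sss_qps, sss_ect)          # correlation between programs
t, p = stats.ttest_rel(sss_qps, sss_ect)         # paired t-test on the same patients

diff = sss_qps - sss_ect
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                    # Bland-Altman 95% limits of agreement
print(f"r = {r:.2f}, paired t-test p = {p:.3f}")
print(f"Bland-Altman bias = {bias:.2f}, limits = [{bias - loa:.2f}, {bias + loa:.2f}]")
```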


Effect of Changing the Lower Limits of Normal and the Interpretative Strategies for Lung Function Tests (폐기능검사 해석에 정상하한치 변화와 새 해석흐름도가 미치는 영향)

  • Ra, Seung Won;Oh, Ji Seon;Hong, Sang-Bum;Shim, Tae Sun;Lim, Chae Man;Koh, Youn Suck;Lee, Sang Do;Kim, Woo Sung;Kim, Dong-Soon;Kim, Won Dong;Oh, Yeon-Mok
    • Tuberculosis and Respiratory Diseases
    • /
    • v.61 no.2
    • /
    • pp.129-136
    • /
    • 2006
  • Background: To interpret lung function tests, it is necessary to determine the lower limits of normal (LLN) and to reach a consensus on the interpretative algorithm. A fixed cut-off of 0.7 for the FEV1/FVC ratio was suggested by the COPD international guideline (GOLD) for defining obstructive disease, and a consensus on a new interpretative algorithm was reached by the ATS/ERS in 2005. We evaluated the accuracy of the fixed 0.7 FEV1/FVC cut-off for diagnosing obstructive diseases, and we also determined the effect of the new algorithm on diagnosing ventilatory defects. Methods: We obtained the age, gender, height, weight, FEV1, FVC, and FEV1/FVC of 7,362 subjects who underwent spirometry in 2005 at the Asan Medical Center, Korea. For diagnosing obstructive diseases, the accuracy of the fixed 0.7 FEV1/FVC cut-off was evaluated against the 5th-percentile LLN as the reference. By applying the new algorithm, we determined how many more subjects should have lung volume testing performed. Evaluation of 1,611 patients who had lung volume testing as well as spirometry during the period showed how many more subjects were diagnosed with obstructive diseases under the new algorithm. Results: 1) The sensitivity of the fixed 0.7 FEV1/FVC cut-off for diagnosing obstructive diseases increased with age, but the specificity decreased with age; the positive predictive value decreased, while the negative predictive value increased. 2) Under the new algorithm, 34.5% (2,540/7,362) more subjects should have lung volume testing performed. 3) Under the new algorithm, 13% (205/1,611) more subjects were diagnosed with obstructive diseases; these subjects corresponded to 30% (205/681) of the subjects who had been diagnosed with restrictive diseases by the old interpretative algorithm. Conclusion: The sensitivity and specificity of the fixed 0.7 FEV1/FVC cut-off for diagnosing obstructive diseases change with age. Applying the new interpretative algorithm means that more subjects should have lung volume testing performed, and there is a higher probability of being diagnosed with obstructive disease.
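
A rough sketch of the accuracy calculation described above follows: the fixed FEV1/FVC cut-off of 0.7 is scored against the 5th-percentile LLN as the reference standard, by age group. The lln_fev1_fvc() function, file, and column names are placeholders; a real analysis would use published reference equations for the study population.

```python
# Sketch: sensitivity and specificity of the fixed 0.7 FEV1/FVC cut-off, with the
# 5th-percentile LLN as the reference, computed per age group. Names are hypothetical.
import pandas as pd

def lln_fev1_fvc(age, sex, height):
    """Placeholder for a reference equation returning the 5th-percentile LLN of FEV1/FVC."""
    return 0.78 - 0.002 * (age - 40)             # illustrative only

df = pd.read_csv("spirometry.csv")               # hypothetical columns: age, sex, height, fev1, fvc
ratio = df["fev1"] / df["fvc"]
obstructive_ref = ratio < df.apply(lambda r: lln_fev1_fvc(r.age, r.sex, r.height), axis=1)
obstructive_fixed = ratio < 0.70                 # GOLD fixed cut-off

df["age_group"] = pd.cut(df["age"], bins=[0, 40, 60, 120], labels=["<40", "40-59", ">=60"])
for grp, sub in df.groupby("age_group", observed=True):
    ref, fixed = obstructive_ref[sub.index], obstructive_fixed[sub.index]
    sens = (fixed & ref).sum() / ref.sum()       # true positives / reference positives
    spec = (~fixed & ~ref).sum() / (~ref).sum()  # true negatives / reference negatives
    print(f"{grp}: sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```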

Service Quality, Customer Satisfaction and Customer Loyalty of Mobile Communication Industry in China (중국이동통신산업중적복무질량(中国移动通信产业中的服务质量), 고객만의도화고객충성도(顾客满意度和顾客忠诚度))

  • Zhang, Ruijin;Li, Xiangyang;Zhang, Yunchang
    • Journal of Global Scholars of Marketing Science
    • /
    • v.20 no.3
    • /
    • pp.269-277
    • /
    • 2010
  • Previous studies have shown that the most important factor affecting customer loyalty in the service industry is service quality. However, scholars' views differ on whether service quality has a direct or indirect effect on customer loyalty. Some studies suggest that service quality has a direct and fundamental influence on customer loyalty (Bai and Liu, 2002), while others have shown that service quality not only directly affects customer loyalty but also has an indirect impact on it by influencing customer satisfaction and perceived value (Cronin, Brady, and Hult, 2000). Currently, few domestic articles specifically address the relationship between service quality and customer loyalty in the mobile communication industry, and existing research has treated customer loyalty as a single variable rather than breaking it down into multiple dimensions. Based on this analysis, this paper summarizes previous results, establishes a model of the effect mechanism among service quality, customer satisfaction, and customer loyalty in the mobile communication industry, and statistically tests the model's hypotheses using customer survey data from Heilongjiang Mobile Company. It provides theoretical guidance for mobile service management based on the discussion of the hypothesis test results. For data collection, the sample comprised mobile users in Harbin city, surveyed by random sampling. Out of 300 questionnaires, 276 (92.9%) were recovered; after excluding invalid questionnaires, 249 remained, for an effective rate of 82.6%. Cronbach's α coefficient was used to assess scale reliability, and validity was tested from three aspects: content validity, construct validity, and convergent validity. Goodness of fit was tested mainly with absolute and relative fit indexes. Overall, four hypotheses were not supported by the hypothesis testing results. The final relationships among service quality, customer satisfaction, and customer loyalty are demonstrated in Figure 2. On the whole, service quality in the communication industry not only has a direct, significant positive effect on customer loyalty but also an indirect, significant positive effect on customer loyalty through customer satisfaction; the mechanism and extent of the effect on customer loyalty differ across the dimensions of service quality. This study used questionnaires from the existing domestic and foreign literature and tested them in empirical research, with all questions adapted to seven-point Likert scales. Taking the SERVQUAL scale of Parasuraman, Zeithaml, and Berry (1988), or PZB, as a reference point, service quality was divided into five dimensions (tangibility, reliability, responsiveness, assurance, and empathy) and the questions were simplified to nineteen. The measurement of customer satisfaction was based mainly on Fornell (1992) and Wang and Han (2003), resulting in four questions. Three indicators (price tolerance, first choice, and complaint reaction) were used to measure attitudinal loyalty, while repurchase intention, recommendation, and reputation measured behavioral loyalty.
The collection and collation of the literature produced a model of the relationship among service quality, customer satisfaction, and customer loyalty in mobile communications, and China Mobile in the city of Harbin, Heilongjiang province, was used for an empirical test of the model, yielding several useful conclusions. First, service quality in mobile communication is formed by the five factors mentioned earlier: tangibility, reliability, responsiveness, assurance, and empathy. On the basis of the PZB SERVQUAL scale, the study designed a measurement scale of service quality for the mobile communications industry and obtained these five factors through exploratory factor analysis; the factors fit the five elements well, supporting a five-element concept of service quality for the industry. Second, service quality in mobile communications has both direct and indirect positive effects on attitudinal loyalty, with the indirect effect produced through the intermediary variable, customer satisfaction. It also has both direct and indirect positive effects on behavioral loyalty, with the indirect effect produced through two intermediary variables: customer satisfaction and attitudinal loyalty. This shows that better service quality and higher customer satisfaction make customers more attitudinally attached to service providers and more likely to show loyalty to them. In addition, the effect mechanism of each dimension of service quality on each dimension of customer loyalty is different. Third, customer satisfaction plays a significant intermediary role between service quality and both attitudinal and behavioral loyalty, indicating that improving service quality can boost customer satisfaction and make it easier for satisfied customers to become loyal customers. Moreover, attitudinal loyalty plays a significant intermediary role between service quality and behavioral loyalty, indicating that only attitudinally and behaviorally loyal customers are truly loyal customers. The research conclusions offer some guidance for Chinese telecom operators and others to upgrade their service quality. Two limitations of the study are also mentioned. First, all data were collected in the Heilongjiang area, so there might be a common method bias that skews the results. Second, the discussion addresses the relationship between service quality and customer loyalty with customer satisfaction as mediator, but does not consider other factors, like customer value and consumer features. This research will be continued in the future.
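
The scale reliability check mentioned above (Cronbach's α) can be computed directly from a respondents-by-items matrix, as in the following sketch. The responses are synthetic and the item count merely mirrors the four satisfaction items described in the abstract.

```python
# Sketch of a Cronbach's alpha calculation on a respondents x items matrix of
# 7-point Likert responses. Data are synthetic, not the study's survey.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array. alpha = k/(k-1) * (1 - sum(item var)/total var)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(7)
latent = rng.normal(4, 1, size=(249, 1))                         # 249 valid questionnaires
responses = np.clip(np.round(latent + rng.normal(0, 0.7, size=(249, 4))), 1, 7)
print(f"Cronbach's alpha (4 satisfaction items): {cronbach_alpha(responses):.2f}")
```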

A Study on Recent Research Trend in Management of Technology Using Keywords Network Analysis (키워드 네트워크 분석을 통해 살펴본 기술경영의 최근 연구동향)

  • Kho, Jaechang;Cho, Kuentae;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.2
    • /
    • pp.101-123
    • /
    • 2013
  • Recently, due to the advancements of science and information technology, socio-economic business areas are shifting from an industrial economy to a knowledge economy. Furthermore, companies need to create new value through continuous innovation, the development of core competencies and technologies, and technological convergence. Therefore, identifying major trends in technology research and making interdisciplinary, knowledge-based predictions of integrated and promising technologies are required for firms to gain and sustain competitive advantage and future growth engines. The aim of this paper is to understand the recent research trend in management of technology (MOT) and to foresee promising technologies with deep knowledge of both technology and business. Furthermore, this study intends to offer a clear way to find new technical value for constant innovation and to capture core technologies and technology convergence. Bibliometrics is a metrical analysis for understanding the characteristics of a literature. Traditional bibliometrics, focused on quantitative indices such as citation frequency, is limited in that it cannot capture the relationship between trends in technology management and the technologies themselves. To overcome this issue, network-focused bibliometrics, which mainly uses co-citation and co-word analysis, has been used instead. In this study, a keyword network analysis, a form of social network analysis, is performed to analyze recent research trends in MOT. For the analysis, we collected keywords from research papers published in international journals related to MOT between 2002 and 2011, constructed a keyword network, and then conducted the keyword network analysis. Over the past 40 years, studies of social networks have attempted to understand social interactions through the network structure represented by connection patterns; social network analysis has been used to explain the structures and behaviors of various social formations such as teams, organizations, and industries. In general, social network analysis uses data in matrix form. In our context, the matrix relates rows (papers) to columns (keywords) with binary entries: each cell is 1 if the paper includes the keyword and 0 otherwise. Even though there are no direct relations between published papers, relations can be derived from the paper-keyword matrix; for example, a keyword network can be configured by connecting the papers that share one or more keywords. After constructing the keyword network, we analyzed keyword frequency, the structural characteristics of the network, preferential attachment and the growth of new keywords, components, and centrality. The results of this study are as follows. First, a paper has 4.574 keywords on average; 90% of keywords were used three or fewer times over the past 10 years, and about 75% of keywords appeared only once. Second, the keyword network in MOT is a small-world network and a scale-free network in which a small number of keywords tend to dominate. Third, the gap between the rich nodes (with more edges) and the poor nodes (with fewer edges) in the network grows over time. Fourth, most newly entering keywords become poor nodes within about 2~3 years.
Finally, the keywords with high degree centrality, betweenness centrality, and closeness centrality are "Innovation," "R&D," "Patent," "Forecast," "Technology transfer," "Technology," and "SME." We hope these results will help MOT researchers identify major trends in technology research, serve as useful reference information when they seek consilience with other fields of study, and guide the selection of new research topics.
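
A keyword co-occurrence network of the kind described above can be built and scored with networkx, as in the sketch below. The keyword lists are toy examples standing in for the paper corpus, and the weighting scheme is an assumption.

```python
# Sketch: build a keyword co-occurrence network (keywords appearing in the same paper
# are connected) and compute degree, betweenness, and closeness centrality.
from itertools import combinations
import networkx as nx

papers = [                                         # toy keyword lists, not the study's corpus
    ["innovation", "R&D", "patent"],
    ["technology transfer", "R&D", "SME"],
    ["forecast", "patent", "innovation"],
]

G = nx.Graph()
for keywords in papers:
    for u, v in combinations(sorted(set(keywords)), 2):   # co-occurrence within one paper
        w = G.get_edge_data(u, v, {}).get("weight", 0)
        G.add_edge(u, v, weight=w + 1)                    # accumulate co-occurrence counts

for name, cent in [("degree", nx.degree_centrality(G)),
                   ("betweenness", nx.betweenness_centrality(G)),
                   ("closeness", nx.closeness_centrality(G))]:
    top = sorted(cent.items(), key=lambda kv: kv[1], reverse=True)[:3]
    print(name, top)
```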

A Study on the Useful Trend of Plants Related to Landscape and How to Plant and Cultivate Through 'ImwonGyeongjaeji(林園經濟志)' ('임원경제지'를 통해 본 식물의 이용경향과 종예법(種藝法))

  • Shin, Sang-Sup
    • Korean Journal of Heritage: History & Science
    • /
    • v.45 no.4
    • /
    • pp.140-157
    • /
    • 2012
  • A study of the useful trend of plants related to landscape and of planting and cultivation methods through the 'ImwonGyeongjaeji Manhakji' of Seo Yu-gu gives the following results. First, 'ImwonGyeongjaeji Manhakji', composed of five volumes in total (general matters; fruit trees; vegetables and creepers; trees; others), is a representative landscape-related literature that systematically describes the names of plants and their varieties, soil conditions, how to plant and cultivate, grafting, how to prevent insect damage, and so on. Second, the author recorded tree planting as Jongjae(種栽) or Jaesik(栽植), the period for planting trees as Jaesusihoo(栽樹時候), transplanting as Yijae(移栽), making a fence as Jakwonri(作園籬), the names of varieties as Myeongpoom(名品), suitable soil as Toeui(土宜), planting and cultivation as Jongye(種藝), treatment as Euichi(醫治), protection and breeding as Hoyang(護養), a garden as Jeongwon(庭園) or Wonpo(園圃), and a garden manager as Poja(圃者) or Wonjeong(園丁). Third, the appearance frequency of plants was, in order, flowers, fruits, trees, and creepers, and the proportion of deciduous trees was 3.7 times higher than that of evergreen trees. The preference for flowering trees, fruit trees, deciduous trees, and broad-leaved trees reflects (1) the use of naturally growing species that harmonize with the natural environment, (2) aesthetic value that allows the beauty of the seasons to be enjoyed, (3) a practical orientation toward harvesting flowers and fruits, and (4) the use of symbolic elements based on the values of Neo-Confucianism. Fourth, he suggested January as the optimal planting period(上時) and emphasized transplanting with plenty of fertile soil, covering the roots with soil to the depth at which the tree originally grew, keeping its original growing orientation, and protecting it with a support. Considering that he described the optimal planting period as January by the lunar calendar, this offers hints for judging planting periods today. For sowing seeds, he recommended a depth of 1 chi(寸 : approx. 3.3cm), and for cuttings, he recommended planting a finger-thick branch to a depth of 5 chi(approx. 16.5cm) between January and February. For grafting fruit trees, he wrote that using a branch stretching to the south yields much fruit, and that if the branches are cut in January, the fruit becomes tastier and bigger. Fifth, a hedge(fence of trees) is made by sowing jujube(Zizyphus jujuba var. inermis) seeds densely in autumn, transplanting the seedlings at intervals of 1 ja(尺 : approx. 30cm) in a row the next autumn, and then binding them to a height of 7 ja(approx. 210cm) in the spring of the following year. If an elm(Ulmus davidiana var. japonica) and a willow(Salix koreensis) are planted together, a hedge whose branches and leaves are as unique and beautiful as a lattice can be made. For the hedge he recommended trifoliate orange(Poncirus trifoliata), rose of Sharon(Hibiscus syriacus), willow(Salix koreensis), spindle tree(Euonymus japonica), cherry tree(Prunus tomentosa), Acanthopanax(Acanthopanax sessiliflorus), Japanese apricot(Prunus mume), Chinese wolfberry(Lycium chinense), Cornelian cherry(Cornus officinalis), gardenia(Gardenia jasminoides for. grandiflora), mulberry(Morus alba), wild rose(Rosa multiflora), etc.

A Study on the Overall Economic Risks of a Hypothetical Severe Accident in Nuclear Power Plant Using the Delphi Method (델파이 기법을 이용한 원전사고의 종합적인 경제적 리스크 평가)

  • Jang, Han-Ki;Kim, Joo-Yeon;Lee, Jai-Ki
    • Journal of Radiation Protection and Research
    • /
    • v.33 no.4
    • /
    • pp.127-134
    • /
    • 2008
  • The potential economic impact of a hypothetical severe accident at a nuclear power plant (Uljin units 3/4) was estimated by applying the Delphi method, which is based on expert judgements and opinions, to quantify the uncertain factors. For the purpose of this study, it was assumed that the radioactive plume moves inland. Since the economic risk can be divided into direct costs and indirect effects, and more uncertainties are involved in the latter, the direct costs were estimated first and the indirect effects were then estimated by applying a weighting factor to the direct cost. The Delphi method, however, is subject to a risk of distortion or discrimination of variables because of human behavior patterns, so a mathematical approach based on Bayesian inference was employed for data processing to improve the Delphi results, and a data-processing model was developed for this task. One-dimensional Monte Carlo analysis was applied to obtain a distribution of values of the weighting factor. The mean and median values of the weighting factor for the indirect effects were 2.59 and 2.08, respectively; these values are higher than the value of 1.25 suggested by OECD/NEA. Factors such as the small national territory and a public attitude sensitive to radiation could have affected the panel's judgement. The parameters of the model for estimating the direct costs were then classified as U- and V-types, and two-dimensional Monte Carlo analysis was applied to quantify the overall economic risk. The resulting median of the overall economic risk was about 3.9% of the gross domestic product (GDP) of Korea in 2006. When the cost of lost electricity generation, the largest direct cost, was excluded, the overall economic risk was reduced to 2.2% of GDP. This assessment can be used as a reference for justifying radiological emergency planning and preparedness.
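
The Monte Carlo treatment described above can be illustrated with a rough sketch: sample a distribution for the indirect-effect weighting factor, combine it with sampled direct costs, and summarize the overall economic risk relative to GDP. The distributions and figures below are illustrative assumptions, not the study's elicited Delphi inputs or its U/V-type parameter structure.

```python
# Sketch: sample a weighting factor for indirect effects and a direct-cost fraction of GDP,
# then summarize the overall economic risk. All distributions are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2008)
N = 100_000

# Weighting factor for indirect effects (shape assumed; the study's processed Delphi
# result had a mean of about 2.59 and a median of about 2.08).
w = rng.lognormal(mean=np.log(2.1), sigma=0.45, size=N)

# Direct cost expressed as a fraction of GDP (purely illustrative uncertainty range).
direct_cost_gdp = rng.triangular(0.005, 0.011, 0.020, size=N)

overall_risk_gdp = direct_cost_gdp * (1.0 + w)      # direct cost plus weighted indirect effects
print(f"weighting factor: mean = {w.mean():.2f}, median = {np.median(w):.2f}")
print(f"overall economic risk, median = {np.median(overall_risk_gdp) * 100:.1f}% of GDP")
```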