• Title/Summary/Keyword: paper evaluation (논문평가)


Image Quality Evaluation of CsI:Tl and Gd2O2S Detectors in the Indirect-Conversion DR System (간접변환방식 DR장비에서 CsI:Tl과 Gd2O2S의 검출기 화질 평가)

  • Kong, Changgi;Choi, Namgil;Jung, Myoyoung;Song, Jongnam;Kim, Wook;Han, Jaebok
    • Journal of the Korean Society of Radiology / v.11 no.1 / pp.27-35 / 2017
  • The purpose of this study was to investigate the characteristics of CsI:Tl and Gd2O2S detectors with an indirect conversion method in the DR (digital radiography) system by obtaining images of a thick chest phantom, a medium-thickness thigh phantom, and a thin hand phantom and by analyzing the SNR and CNR. When the SNR and CNR were measured according to the thickness of the subject, both were higher in the CsI:Tl detector than in the Gd2O2S detector for the medium-thickness thigh phantom and the thin hand phantom. However, when the thick chest phantom was used, the SNR at 80~125 kVp and the CNR at 80~110 kVp were higher in the Gd2O2S detector than in the CsI:Tl detector, and both the SNR and CNR increased as the tube voltage increased. For the medium-thickness thigh phantom, the SNR and CNR of the CsI:Tl detector increased over 40~50 kVp and then decreased as the tube voltage increased further, while those of the Gd2O2S detector increased over 40~60 kVp and then decreased as the tube voltage increased further. For the thin hand phantom, the SNR and CNR of the CsI:Tl detector were low at low tube voltages, increased with the tube voltage, and decreased again at 100~110 kVp, whereas those of the Gd2O2S detector decreased as the tube voltage increased. The MTF of the CsI:Tl detector was 6.02~90.90% higher than that of the Gd2O2S detector at 0.5~3 lp/mm, and its DQE was 66.67~233.33% higher. In conclusion, although the CsI:Tl detector showed higher MTF and DQE values, the cheaper Gd2O2S detector had a higher SNR and CNR than the more expensive CsI:Tl detector over a certain tube-voltage range in the thick chest phantom. For chest X-ray examinations, using the Gd2O2S detector rather than the CsI:Tl detector can therefore yield chest images of excellent quality, which will be useful in practice. Moreover, both price and performance should be considered when choosing a detector type from the user's viewpoint.
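
The detector comparison above rests on ROI-based SNR and CNR measurements. As a worked illustration only, here is a minimal Python sketch of the usual definitions (SNR as ROI mean over ROI standard deviation, CNR as the ROI mean difference over the background standard deviation); the paper does not publish its code, and the simulated arrays and function names here are assumptions.

```python
import numpy as np

def snr(roi: np.ndarray) -> float:
    """Signal-to-noise ratio of a region of interest: mean / standard deviation."""
    return roi.mean() / roi.std()

def cnr(roi: np.ndarray, background: np.ndarray) -> float:
    """Contrast-to-noise ratio: ROI-vs-background contrast over background noise."""
    return abs(roi.mean() - background.mean()) / background.std()

# Toy usage with simulated pixel values standing in for phantom image ROIs.
rng = np.random.default_rng(0)
phantom_roi = rng.normal(loc=120.0, scale=4.0, size=(64, 64))
background_roi = rng.normal(loc=80.0, scale=5.0, size=(64, 64))
print(f"SNR = {snr(phantom_roi):.2f}, CNR = {cnr(phantom_roi, background_roi):.2f}")
```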

The Diagnostic Yield and Complications of Percutaneous Needle Aspiration Biopsy for the Intrathoracic Lesions (경피적 폐생검의 진단성적 및 합병증)

  • Jang, Seung Hun;Kim, Cheal Hyeon;Koh, Won Jung;Yoo, Chul-Gyu;Kim, Young Whan;Han, Sung Koo;Shim, Young-Soo
    • Tuberculosis and Respiratory Diseases / v.43 no.6 / pp.916-924 / 1996
  • Background: Percutaneous needle aspiration biopsy (PCNA) is one of the most frequently used diagnostic methods for intrathoracic lesions. Previous studies have reported a wide range of diagnostic yields, from 28% to 98%; however, the yield has increased with accumulated experience and with improvements in needles and image guiding systems. We analyzed the results of PCNAs performed over one year to evaluate the diagnostic yield, the rate and severity of complications, and the factors affecting the diagnostic yield. Method: 287 PCNAs performed in 236 patients from January 1994 to December 1994 were analyzed retrospectively. The intrathoracic lesions were targeted and aspirated with a 21-23 G Chiba needle under a fluoroscopic guiding system. Occasionally, a 19-20 G biopsy gun was used to obtain core tissue specimens. The specimens were submitted for microbiologic and cytologic examination, and for histopathologic examination when core tissue was obtained. The diagnostic yields and complication rates of benign and malignant lesions were calculated based on the patients' charts. The diagnostic yields according to the size and shape of the lesions were compared with the chi-square test (p<0.05). Results: 19.9% of the lesions were consolidative and 80.1% were nodular or mass lesions; the lesions were located in the right upper lobe in 26.3% of cases, the right middle lobe in 6.4%, the right lower lobe in 21.2%, the left upper lobe in 16.8%, the left lower lobe in 10.6%, and the mediastinum in 1.3%, while lesions extending over two lobes accounted for 17.4% of cases. In the final diagnosis, 74 patients had benign lesions and 142 patients had malignant lesions; a confirmative diagnosis could not be made in 22 patients despite all available diagnostic methods. Two patients had lung cancer and pulmonary tuberculosis concomitantly. The experience with these 236 patients showed that PCNA can diagnose benign lesions in 62.2% (42 patients) of patients with such lesions and malignant lesions in 82.4% (117 patients). For the patients in whom the first PCNA failed to make a diagnosis, the procedure was repeated, and the cumulative diagnostic yield increased to 44.6%, 60.8%, and 62.2% in benign lesions and to 73.4%, 81.7%, and 82.4% in malignant lesions through serial PCNAs. Thoracotomy was performed in 9 patients with benign lesions and in 43 patients with malignant lesions. PCNA and thoracotomy showed the same pathologic result in 44.4% (4 patients) of the benign lesions and 58.1% (25 patients) of the malignant lesions. Thoracotomy confirmed malignant lesions in 4 patients with benign PCNA results and benign lesions in 2 patients with malignant PCNA results. Across the 287 PCNAs there were 3 cases (1.0%) of hemoptysis, 55 cases (19.2%) of blood-tinged sputum, 36 cases (12.5%) of pneumothorax, and 3 cases (1.0%) of fever. Hemoptysis and blood-tinged sputum required no therapy; 8 cases of pneumothorax required insertion of a classical chest tube or a pig-tail catheter, and fever subsided within 48 hours in all cases. Neither the size nor the shape of the lesions made a difference in the diagnostic yield. Conclusion: PCNA shows a relatively high diagnostic yield and mild complications, but the accuracy of histologic diagnosis needs to be improved.


Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems / v.19 no.4 / pp.123-132 / 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyros, ambient light sensor, proximity sensor, and so on, there have been many research works on making use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One of the challenges faced when using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors used is restricted, it is difficult to realize a highly accurate activity recognizer, or classifier, because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty gets especially severe when the number of different activity classes to be distinguished is very large. In this paper, we show that a fairly accurate classifier can be built that distinguishes ten different activities by using only a single sensor, i.e., the smartphone accelerometer. The approach that we take to dealing with this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all the classes is split into two subsets of classes by a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by another binary classifier. Continuing in this way, we obtain a binary tree where each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions. Depending on how the set of classes is split into two subsets at each node, the final tree that we obtain can be different. Since some classes can be correlated, a particular tree may perform better than the others; however, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we have used another ensemble classifier called the random forest. A random forest is built by repeatedly generating a decision tree, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature-subset selection, a random forest enjoys the advantage of having more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can actually be seen as a committee of committees of decision trees that can deal with a multi-class problem with high accuracy. The ten classes of activities that we distinguish in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'.
The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, the minimum, and the standard deviation of the vector magnitude within a time window of the last 2 seconds, etc. For experiments comparing the performance of END with those of other methods, accelerometer data was collected every 0.1 second for 2 minutes for each activity from 5 volunteers. Among the 5,900 (= 5 × (60 × 2 − 2) / 0.1) data points collected for each activity (the data for the first 2 seconds are discarded because they lack time-window data), 4,700 were used for training and the rest for testing. Although 'Walking Uphill' is often confused with some other similar activities, END was found to classify all ten activities with a fairly high accuracy of 98.4%. By comparison, the accuracies achieved by a decision tree, a k-nearest neighbor classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.
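
To make the END construction above concrete, the following is a minimal sketch, assuming NumPy feature arrays X and string-label arrays y with at least one sample per class at every node; the function names, the choice of scikit-learn's RandomForestClassifier as the node-level base classifier, and the uniform random class splits are illustrative assumptions rather than the paper's exact implementation.

```python
import random
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def build_dichotomy(X, y, classes, rng):
    """Recursively split the class set in two, training a random forest per node."""
    if len(classes) == 1:
        return classes[0]                        # leaf: a single activity class
    shuffled = list(classes)
    rng.shuffle(shuffled)
    cut = rng.randrange(1, len(shuffled))        # random binary split of the classes
    left = set(shuffled[:cut])
    is_left = np.array([label in left for label in y])
    clf = RandomForestClassifier(n_estimators=50).fit(X, is_left)
    return {"clf": clf,
            "left":  build_dichotomy(X[is_left],  y[is_left],  sorted(left), rng),
            "right": build_dichotomy(X[~is_left], y[~is_left],
                                     sorted(set(shuffled[cut:])), rng)}

def tree_proba(node, x):
    """Class probabilities of one tree: product of branch probabilities to a leaf."""
    if isinstance(node, str):
        return {node: 1.0}
    p_left = node["clf"].predict_proba(x.reshape(1, -1))[0][1]   # P(left branch)
    probs = {c: p_left * p for c, p in tree_proba(node["left"], x).items()}
    probs.update({c: (1 - p_left) * p for c, p in tree_proba(node["right"], x).items()})
    return probs

def end_predict(X, y, x_new, n_trees=10, seed=0):
    """Ensemble of nested dichotomies: average class probabilities over random trees."""
    classes = sorted(set(y))
    trees = [build_dichotomy(X, y, classes, random.Random(seed + i))
             for i in range(n_trees)]
    totals = {c: sum(tree_proba(t, x_new).get(c, 0.0) for t in trees) for c in classes}
    return max(totals, key=totals.get)
```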

Performance of Korean State-owned Enterprises Following Executive Turnover and Executive Resignation During the Term of Office (공기업의 임원교체와 중도퇴임이 경영성과에 미치는 영향)

  • Yu, Seungwon;Kim, Suhee
    • KDI Journal of Economic Policy / v.34 no.3 / pp.95-131 / 2012
  • This study examines whether executive turnover and executive resignation during the term of office affect the performance of Korean state-owned enterprises. Executive turnover in this paper means a comprehensive change of executives, which includes change after the term of office, change after consecutive terms, and change during the term of office. 'Resignation' denotes an executive change during the term of office, to distinguish it from executive turnover in general. The scope of the paper is confined to the comprehensive executive change itself, irrespective of the term of office, and to resignation during the term of office; therefore, the natural change of an executive after the term of office or after consecutive terms is not included in the study. Spontaneous resignation and forced resignation are not distinguished, as the distinction between the two is not easy to draw. The paper uses both the margin of return on assets and the margin of return on assets adjusted by industry as proxies for the performance of state-owned enterprises. The study considers the business nature of state-owned enterprises but not their public nature. The paper uses five-year (2004 to 2008) samples of 24 firms designated as public enterprises by the Korean government. The analysis results are as follows. First, on average 45.1% of CEOs were changed per year during the sample period. The average tenure of CEOs was 2 years and 3 months, and 49.9% of the changed CEOs resigned during the term of office. On average 41.6% of internal auditors were changed per year; their average tenure was 2 years and 2 months, and 51.0% of the changed internal auditors resigned during the term of office. In the case of outside directors, on average 38.2% were changed per year; their average tenure was 2 years and 7 months, and 25.4% of the changed outside directors resigned during the term of office. These statistics show that numerous CEOs resigned before finishing the three-year term of office. Also, considering that the tenure of internal auditors and outside directors was shortened from 3 years to 2 years by the Act on the Management of Public Institutions (applied to executives appointed since April 2007), it seems that most internal auditors resigned during the term of office while most outside directors resigned after the end of the term. Second, there was no evidence that executives were changed during the term of office because of bad performance in the prior year. On the contrary, and contrary to normal expectations, the prior-year performance of state-owned enterprises where an outside director resigned during the term of office was significantly higher than that of other state-owned enterprises. This means the clauses in related laws on executive dismissal on grounds of bad performance did not work as intended; instead, executive changes appear to have been made for non-economic reasons such as political motivations. Third, the results from a fixed-effect model provide evidence that performance turned negative when CEOs or outside directors resigned during the term of office: a CEO's resignation during the term of office had a significantly negative effect on the margin of return on assets, and an outside director's resignation during the term of office significantly lowered the margin of return on assets adjusted by industry.
These results suggest that executive changes in Korean state-owned enterprises were not made by objective or economic standards such as management performance assessment, and that the failure to faithfully observe the legal executive term had a negative effect on the performance of the enterprises.
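
As a brief illustration of the fixed-effect design mentioned above, the sketch below regresses an ROA margin on mid-term resignation dummies with firm and year fixed effects entered as dummy variables; the panel is simulated and the variable names are assumptions, not the paper's data or exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated panel in the spirit of the paper's design: 24 firms over 2004-2008.
rng = np.random.default_rng(1)
rows = [{"firm": f, "year": y,
         "ceo_resign": rng.integers(0, 2),       # CEO resigned mid-term this year?
         "outdir_resign": rng.integers(0, 2)}    # outside director resigned mid-term?
        for f in range(24) for y in range(2004, 2009)]
df = pd.DataFrame(rows)
df["roa_margin"] = (-0.5 * df["ceo_resign"] - 0.3 * df["outdir_resign"]
                    + rng.normal(0, 1, len(df)))

# Fixed effects via firm and year dummies (the within-estimator equivalent).
model = smf.ols("roa_margin ~ ceo_resign + outdir_resign + C(firm) + C(year)",
                data=df).fit()
print(model.params[["ceo_resign", "outdir_resign"]])
```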


Investigation on a Way to Maximize the Productivity in Poultry Industry (양계산업에 있어서 생산성 향상방안에 대한 조사 연구)

  • 오세정
    • Korean Journal of Poultry Science / v.16 no.2 / pp.105-127 / 1989
  • Although the poultry industry in Japan has been much developed in recent years, it still needs further development compared with other developed countries. Since the poultry market in Korea is expected to be opened in the near future, it is necessary to maximize productivity to reduce production costs and to develop the scientific technologies and management organization systems needed to improve the quality of poultry production. The following is a summary of the poultry industry in Japan. 1. The poultry industry in Japan is almost entirely specialized and commercialized, and its management system is integrated, cooperative, and developed into an industrialized, intensive style; therefore, it has competitive power in the international poultry markets. 2. Average egg weight is 48-50 g per day (max. 54 g) and feed requirement is 2.1-2.3. 3. The management organization system is specialized: farmers on a small scale form complexes, and farmers on a large scale are integrated.


Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.25 no.2 / pp.141-166 / 2019
  • Recently, channels like social media and SNS create an enormous amount of data. Among all kinds of data, the portion of unstructured data represented as text has increased geometrically. Since it is difficult to read through all of this text, it is important to access the data rapidly and grasp the key points of a text. Because of this need for efficient understanding, many studies on text summarization for handling and using tremendous amounts of text data have been proposed. In particular, many summarization methods using machine learning and artificial intelligence algorithms have been proposed lately to generate summaries objectively and effectively, so-called "automatic summarization". However, almost all text summarization methods proposed to date construct summaries focused on the frequency of contents in the original documents. Such summaries have a limitation in covering small-weight subjects that are mentioned less in the original text. If a summary includes contents of only the major subject, bias occurs and causes a loss of information, making it hard to ascertain every subject the documents contain. To avoid this bias, it is possible to summarize with a balance between the topics a document has, so that every subject in the document can be ascertained, but an unbalanced distribution between those subjects still remains. To retain the balance of subjects in a summary, it is necessary to consider the proportion of every subject the documents originally have and to allocate the portions of subjects equally, so that even sentences of minor subjects are sufficiently included in the summary. In this study, we propose a "subject-balanced" text summarization method that procures balance between all subjects and minimizes the omission of low-frequency subjects. For a subject-balanced summary, we use two summary evaluation metrics, "completeness" and "succinctness". Completeness is the property that a summary should fully include the contents of the original documents, and succinctness means the summary has minimal duplication within itself. The proposed method has three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate the degree to which each term is related to each topic. From the derived weights, it is possible to identify highly related terms for every topic, and the subjects of documents can be found from topics composed of terms with similar meanings. Then, a few terms that represent each subject well are selected; in this method, they are called "seed terms". However, these terms are too few to explain each subject sufficiently, so enough terms similar to the seed terms are needed for a well-constructed subject dictionary. Word2Vec is used for word expansion to find terms similar to the seed terms. Word vectors are created by Word2Vec modeling, and from those vectors the similarity between all terms can be derived using cosine similarity: the higher the cosine similarity calculated between two terms, the stronger the relationship between them. Terms that have high similarity values with the seed terms of each subject are selected, and after filtering these expanded terms, the subject dictionary is finally constructed. The next phase allocates subjects to every sentence of the original documents. To grasp the contents of all sentences, frequency analysis is first conducted with the specific terms that compose the subject dictionaries.
TF-IDF weights for each subject are calculated after the frequency analysis, making it possible to figure out how much each sentence explains each subject. However, the TF-IDF weight has the limitation that it can increase without bound, so the TF-IDF weights of every subject in each sentence are normalized to values between 0 and 1. Then, by allocating to every sentence the subject with the maximum TF-IDF weight among all subjects, sentence groups are finally constructed for each subject. The last phase is summary generation. Sen2Vec is used to measure the similarity between subject sentences, and a similarity matrix is formed. By repetitively selecting sentences, it is possible to generate a summary that fully includes the contents of the original documents and minimizes duplication within the summary itself. For the evaluation of the proposed method, 50,000 TripAdvisor reviews were used for constructing the subject dictionaries, and 23,087 reviews were used for generating summaries. A comparison between summaries from the proposed method and frequency-based summaries was also performed; as a result, it was verified that summaries from the proposed method better retain the balance of all the subjects the documents originally have.
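
The subject-allocation step in the second phase can be illustrated with a minimal sketch: score each sentence against each subject dictionary with a TF-IDF-style weight, normalize the scores to [0, 1], and assign the argmax subject. The subject names and dictionary terms below are hypothetical stand-ins for the Word2Vec-expanded dictionaries the paper builds, and the scoring is a simplified approximation of its TF-IDF computation.

```python
import math
from collections import Counter

# Hypothetical subject dictionaries (the paper expands seed terms via Word2Vec).
subject_dict = {
    "room":    {"room", "bed", "clean", "bathroom"},
    "service": {"staff", "service", "friendly", "helpful"},
    "food":    {"breakfast", "restaurant", "food", "coffee"},
}

def allocate_subjects(sentences, subject_dict):
    """Assign each sentence the subject with the highest normalized TF-IDF-style score."""
    tokenized = [s.lower().split() for s in sentences]
    n = len(sentences)
    # Document frequency: in how many sentences does each subject's vocabulary appear?
    df = {subj: sum(any(t in terms for t in toks) for toks in tokenized) or 1
          for subj, terms in subject_dict.items()}
    allocations = []
    for toks in tokenized:
        counts = Counter(toks)
        scores = {subj: sum(counts[t] for t in terms) * math.log(n / df[subj])
                  for subj, terms in subject_dict.items()}
        peak = max(scores.values()) or 1.0
        scores = {s: v / peak for s, v in scores.items()}   # normalize to [0, 1]
        allocations.append(max(scores, key=scores.get))
    return allocations

sentences = ["The room was clean and the bed was comfortable",
             "Staff were friendly and the service was helpful",
             "Breakfast coffee was great"]
print(allocate_subjects(sentences, subject_dict))  # ['room', 'service', 'food']
```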

Analysis of the ESD and DAP According to the Change of the Cine Imaging Condition of Coronary Angiography and Usefulness of SNR and CNR of the Images: Focusing on the Change of Tube Current (관상동맥 조영술(Coronary Angiography)의 씨네(cine) 촬영조건 변화에 따른 입사표면선량(ESD)과 흡수선량(DAP) 및 영상의 SNR·CNR 유용성 분석: 관전류 변화를 중점으로)

  • Seo, Young Hyun;Song, Jong Nam
    • Journal of the Korean Society of Radiology / v.13 no.3 / pp.371-379 / 2019
  • The purpose of this study was to investigate the effect of changes in the X-ray exposure conditions on the entrance surface dose (ESD) and dose area product (DAP) in the cine imaging of coronary angiography (CAG), and to analyze the usefulness of these changes for dose and image quality by measuring the Signal to Noise Ratio (SNR) and Contrast to Noise Ratio (CNR) of the angiographic images with the ImageJ program. Data were collected from 33 patients (24 males and 9 females) who underwent CAG at this hospital from November 2017 to March 2018. For the imaging conditions and data acquisition, the ESD and DAP of group A, with a high tube current of 397.2 mA, and group B, with a low tube current of 370.7 mA, were retrospectively obtained for comparison and analysis. For the SNR and CNR measurements via ImageJ, the result values were derived by substituting the obtained data into the respective formulas. The correlations among the ESD and DAP according to the imaging conditions, the SNR, and the CNR were analyzed with the SPSS statistical analysis software. The relationships of groups A and B, which differed in tube current, with ESD (A: 483.5±60.1; B: 464.4±39.9) and DAP (A: 84.3±10.7; B: 81.5±7) were not statistically significant (p>0.05). For the SNR and CNR based on ImageJ, the SNR (5.451±0.529) and CNR (0.411±0.0432) of the images obtained from the left coronary artery (LCA) imaging of group B differed by 0.475±0.096 and -0.048±0.0, respectively, from the SNR (4.976±0.433) and CNR (0.459±0.0431) of the LCA of group A; however, these differences were not statistically significant (p>0.05). For the SNR and CNR obtained from right coronary artery (RCA) imaging, the SNR (4.731±0.773) and CNR (0.354±0.083) of group A were higher by 1.491±0.405 and 0.188±0.005, respectively, than the SNR (3.24±0.368) and CNR (0.166±0.033) of group B; of these, the CNR difference was statistically significant (p<0.05). In the correlation analysis, statistically significant results were found between SNR (LCA) and CNR (LCA); SNR (RCA) and CNR (RCA); ESD and DAP; ESD and sec; DAP and CNR (RCA); and DAP and sec (p<0.05). From the analyses of the image quality evaluation and the usefulness of the dose change, the SNR and CNR increased in the RCA images of CAG obtained with an increased mA. Based on the result that the CNR showed a statistically significant difference, it is believed that the contrast in image quality can be further improved by increasing the mA in RCA imaging.

Edge to Edge Model and Delay Performance Evaluation for Autonomous Driving (자율 주행을 위한 Edge to Edge 모델 및 지연 성능 평가)

  • Cho, Moon Ki;Bae, Kyoung Yul
    • Journal of Intelligence and Information Systems / v.27 no.1 / pp.191-207 / 2021
  • Up to this day, mobile communications have evolved rapidly over the decades, mainly focusing on speed-ups to meet the growing data demands from 2G to 5G. With the start of the 5G era, efforts are being made to provide customers with various services, such as IoT, V2X, robots, artificial intelligence, augmented and virtual reality, and smart cities, which are expected to change the environment of our lives and industries as a whole. To provide those services, on top of high-speed data, reduced latency and high reliability are critical for real-time services. Thus, 5G has paved the way for service delivery through a maximum speed of 20 Gbps, a delay of 1 ms, and a connection density of 10^6 devices/km². In particular, in intelligent traffic control systems and services using various vehicle-based Vehicle-to-X (V2X) applications such as traffic control, the reduction of delay and high reliability for real-time services are very important in addition to high data speed. 5G communication uses high frequencies of 3.5 GHz and 28 GHz. These high-frequency waves can achieve high speeds thanks to their straight-line propagation, but their short wavelength and small diffraction angle limit their reach and prevent them from penetrating walls, restricting their use indoors. Therefore, it is difficult to overcome these constraints under existing networks. The underlying centralized SDN also has limited capability in offering delay-sensitive services, because communication with many nodes creates overload in its processing. Basically, SDN, a structure that separates control-plane signals from data-plane packets, requires control of the delay-related tree structure available in the event of an emergency during autonomous driving. In these scenarios, the network architecture that handles in-vehicle information is a major variable of delay. Since SDNs with the usual centralized structure find it difficult to meet the desired delay level, studies on the optimal size of SDNs for information processing should be conducted. Thus, SDNs need to be separated at a certain scale to construct a new type of network that can efficiently respond to dynamically changing traffic and provide high-quality, flexible services. Moreover, the structure of these networks is closely related to ultra-low latency, high reliability, and hyper-connectivity, and should be based on a new form of split SDN rather than the existing centralized SDN structure, even under the worst-case condition. In such SDN-structured networks, where automobiles pass through small 5G cells very quickly, the information change cycle, the round trip delay (RTD), and the data processing time of the SDN are highly correlated with the delay. Of these, the RTD is not a significant factor, because the link is fast enough that it contributes less than 1 ms of delay, but the information change cycle and the data processing time of the SDN greatly affect the delay. Especially in an emergency in a self-driving environment linked to an ITS (Intelligent Traffic System), which requires low latency and high reliability, information should be transmitted and processed very quickly; this is a case in point where delay plays a very sensitive role. In this paper, we study the SDN architecture in emergencies during autonomous driving and, through simulation, analyze the correlation with the cell layer from which the vehicle should request relevant information according to the information flow.
For the simulation, as the data rate of 5G is high enough, we can assume that the information supporting neighboring vehicles reaches the car without errors. Furthermore, we assumed 5G small cells of 50~250 m in cell radius and considered vehicle speeds of 30~200 km/h, in order to examine the network architecture that minimizes the delay.
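
A quick back-of-the-envelope check of the simulation ranges stated above: the worst-case time a vehicle spends in a cell is the cell diameter divided by the speed, and the information change cycle plus the SDN processing time must fit well inside it (the sub-1 ms RTD is negligible by comparison). The sketch below only evaluates this dwell time for the paper's stated ranges; the function name is illustrative.

```python
def dwell_time_s(cell_radius_m: float, speed_kmh: float) -> float:
    """Worst-case cell dwell time: cell diameter divided by vehicle speed."""
    return 2 * cell_radius_m / (speed_kmh / 3.6)

# Corner cases of the simulation ranges: 50-250 m radius, 30-200 km/h.
for radius in (50, 250):
    for speed in (30, 200):
        print(f"radius {radius:>3} m, speed {speed:>3} km/h "
              f"-> dwell {dwell_time_s(radius, speed):6.1f} s")
```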

Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.107-122 / 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-variant characteristics embedded in stock market return volatility. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy. Recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which is composed of 200 blue-chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 daily observations; 1,187 days were used to train the suggested GARCH models, and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted the KOSPI 200 Index return volatility, and the MSE metric shows better results for the asymmetric GARCH models such as E-GARCH and GJR-GARCH. This is consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. Compared with the MLE estimation process, the SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel function shows exceptionally lower forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility results. The IVTS entry rules are as follows: if tomorrow's forecasted volatility is higher, buy volatility today; if tomorrow's forecasted volatility is lower, sell volatility today; if the forecasted volatility direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because we cannot trade historical volatility values themselves, but our simulation results are still meaningful, since the Korea Exchange introduced a volatility futures contract that traders have been able to trade since November 2014. The trading systems with SVR-based GARCH models show higher returns than the MLE-based GARCH models in the testing period. The profitable-trade percentages of the MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of the SVR-based GARCH IVTS models range from 51.8% to 59.7%. The MLE-based symmetric S-GARCH shows a +150.2% return while the SVR-based symmetric S-GARCH shows +526.4%; the MLE-based asymmetric E-GARCH shows -72% while the SVR-based asymmetric E-GARCH shows +245.6%; and the MLE-based asymmetric GJR-GARCH shows -98.7% while the SVR-based asymmetric GJR-GARCH shows +126.3%.
The linear kernel function shows higher trading returns than the radial kernel function. The best performance of the SVR-based IVTS is +526.4%, against +150.2% for the MLE-based IVTS, and the SVR-based GARCH IVTS also shows a higher trading frequency. This study has some limitations. Our models are based solely on SVR; other artificial intelligence models should be explored in search of better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage costs. The IVTS trading performance is not fully realistic, since we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models, and further studies on other machine-learning-based GARCH models can give better information to stock market investors.
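
The IVTS entry rules quoted above translate directly into code. The sketch below is a minimal position generator under those rules; the function name and the toy forecast series are illustrative assumptions, not the paper's data.

```python
import numpy as np

def ivts_positions(forecast: np.ndarray) -> np.ndarray:
    """IVTS rules: buy volatility today if tomorrow's forecast is higher,
    sell if lower, otherwise hold the existing position (+1 long, -1 short)."""
    positions = np.zeros(len(forecast) - 1)
    pos = 0
    for t in range(len(forecast) - 1):
        if forecast[t + 1] > forecast[t]:
            pos = 1         # forecasted volatility rises -> buy volatility
        elif forecast[t + 1] < forecast[t]:
            pos = -1        # forecasted volatility falls -> sell volatility
        positions[t] = pos  # unchanged forecast direction -> keep position
    return positions

# Toy usage with an assumed daily volatility forecast series.
print(ivts_positions(np.array([0.012, 0.015, 0.014, 0.014, 0.018])))
# -> [ 1. -1. -1.  1.]
```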

Preservation of World Records Heritage in Korea and Further Registry (한국의 세계기록유산 보존 현황 및 과제)

  • Kim, Sung-Soo
    • Journal of Korean Society of Archives and Records Management / v.5 no.2 / pp.27-48 / 2005
  • This study investigates the current preservation and management of the four Korean records and documentary heritage items in UNESCO's Memory of the World Register, analyzes the problems in digitizing those world records heritage items, and proposes corresponding solutions. The study also reviews four additional Korean documentary works on the wish list for addition to UNESCO's Memory of the World Register. The study is organized as follows. Chapter 2 examines the value and meaning of Korea's world records and documentary heritage, along with the registry requirements and procedures of UNESCO's Memory of the World Register. The currently registered records of Korea are Hunmin-Chongum, the Annals of the Choson Dynasty, the Diaries of the Royal Secretariat (Seungjeongwon Ilgi), and Buljo-Jikji-Simche-Yojeol (vol. II). The worth and significance of these records are carefully analyzed. For example, Hunmin-Chongum("訓民正音") consists of unique and systematic letters, which were delicately explained with examples in their original manual at the time of their creation, an unparalleled case in world documentary history. The Annals of the Choson Dynasty("朝鮮王朝實錄") are the most comprehensive historical documents covering the longest period of time in history, and their truthfulness and reliability in describing history give the annals their credibility. The Royal Secretariat Diary (Seungjeongwon-Ilgi("承政院日記")) is the most voluminous primary resource in history, surpassing the Annals of the Choson Dynasty and the Twenty-Five Histories of China. Jikji("直指") is the oldest existing book printed with movable metal type in the world; it evidences the beginning of metal printing in world printing history and is worthy of being world heritage. The review of the four registered records confirms that they are valuable world documentary heritage items that transfer the culture of mankind to the next generations and should be preserved carefully and safely, without deterioration or loss. Chapter 3 investigates the current preservation and management of the three repositories that store the four registered records in Korea: the Kyujanggak Archives at Seoul National University, the Pusan Records and Information Center of the National Records and Archives Service, and the Gansong Art Museum. The quality of preservation and management is excellent in all three institutions in the following respects: 1) detailed security measures are close to perfection; 2) archiving practices are very careful, using special stack rooms with steady temperature and humidity and depositing the records in stacks or archival boxes made of paulownia wood; and 3) fire prevention, lighting, and fumigation are thoroughly prepared. Chapter 4 summarizes the status quo of digitization projects for records heritage in Korea. The most important issue related to digitization and database construction for Korean records heritage is to establish standards for digitization processes and facilities; it is urgently necessary to develop a comprehensive standard system for digitization. Two institutions are closely involved in these tasks: 1) the National Records and Archives Service, experienced in developing government records management systems; and 2) the Cultural Heritage Administration, interested in the digitization of old Korean documents. In collaboration between these two institutions, a new standard system can be designed for digitizing records heritage in Korean Studies.
Chapter 5 deals with the additional Korean records heritage on the wish list for UNESCO's Memory of the World Register: 1) the Wooden Printing Blocks(經板) of Koryo-Taejangkyong(高麗大藏經) in Haein Temple(海印寺); 2) Dongui-Bogam("東醫寶鑑"); 3) Samguk-Yusa("三國遺事"); and 4) Mugujeonggwangdaedaranigyeong. Their world value and importance are examined as follows. The Wooden Printing Blocks of Koryo-Taejangkyong in Haein Temple are the world's oldest surviving wooden printing blocks of the Buddhist canon, created over 750 years ago. They need a special conservation treatment to disinfect the germs residing on the surface and inside of the wooden plates; otherwise, they may be seriously damaged. For their effective conservation and preservation, we hope that UNESCO and the Government will arrange special care and budget and add them to the Memory of the World Register. Dongui-Bogam is the most comprehensive and well-written medical book in Korean history, summarizing all medical books in Korea and China from ancient times through the early 17th century and concentrating on Korean herbal medicine and prescriptions. It proved to be the best clinical guidebook of the 17th century, easy for doctors and practitioners to use. The book was also published in China and Japan in the 18th century and greatly influenced the development of practical clinics and medical research in Asia at that time. This is why Dongui-Bogam is on the wish list for registration in the Memory of the World. Samguk-Yusa is evaluated as one of the most comprehensive history books and treasure sources in Korea, illustrating the foundations of the Korean people and covering the histories and cultures of the ancient Korean peninsula and nearby countries. The book contains the oldest fixed-form verse, called Hyang-Ka(鄕歌), and became the origin of Korean literature. In particular, the section Gi-ee(紀異篇) describes the historical process of dynastic transition from the first dynasty, Gochosun(古朝鮮), to Goguryeo(高句麗) and illustrates the identity of the Korean people from its historical origin. This book is worthy of being added to the Memory of the World Register. Mugujeonggwangdaedaranigyeong is the oldest book printed with wooden printing blocks, estimated to have been printed between 706 and 751. It is a great documentary heritage item, representing the oldest surviving wood-block-printed book in the world and illustrating the history of wood-block printing in Korea, and it is thus worthy of being added to the Memory of the World Register.