• Title/Summary/Keyword: Level Set method

A Study on the Prediction Model of Stock Price Index Trend based on GA-MSVM that Simultaneously Optimizes Feature and Instance Selection (입력변수 및 학습사례 선정을 동시에 최적화하는 GA-MSVM 기반 주가지수 추세 예측 모형에 관한 연구)

  • Lee, Jong-sik;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.23 no.4 / pp.147-168 / 2017
  • Accurate stock market forecasting has long been studied in academia, and a variety of forecasting models now exist, including recent attempts to predict stock indices with machine learning methods such as deep learning. Of the two traditional approaches to stock analysis, fundamental analysis and technical analysis, the technical approach lends itself better to short-term prediction and to statistical and mathematical techniques. Most studies based on technical indicators have framed prediction as a binary classification of future market movement (usually the next trading day): rising or falling. Binary classification, however, has clear drawbacks for predicting trends, identifying trading signals, or triggering portfolio rebalancing. In this study, we extend the existing binary scheme to a multi-class one that predicts the stock index trend as one of three states: upward trend, boxed (sideways), or downward trend. While this multi-class problem could be addressed with techniques such as Multinomial Logistic Regression (MLOGIT), Multiple Discriminant Analysis (MDA), or Artificial Neural Networks (ANN), we adopt Multi-class Support Vector Machines (MSVM), which have shown superior predictive performance, and propose an optimization model that uses a Genetic Algorithm as a wrapper to improve them. The proposed model, named GA-MSVM, is designed to maximize performance by jointly optimizing the kernel parameters of the MSVM, the selection of input variables (feature selection), and the selection of training cases (instance selection). To verify its usefulness, GA-MSVM was applied to forecasting the trend of Korea's KOSPI200 stock index, with the primary aim of predicting trend segments so as to capture trading signals or short-term trend transition points. The experimental data set covers 2004~2017 and includes technical indicators such as price and volatility measures of the KOSPI200 index as well as macroeconomic data (interest rates, exchange rates, the S&P 500, etc.). Using statistical methods including one-way ANOVA and stepwise MDA, 15 indicators were selected as candidate independent variables. The dependent variable, the trend class, took three states: 1 (upward trend), 0 (boxed), and -1 (downward trend). For each class, 70% of the data was used for training and the remaining 30% for verification. For comparison, MDA, MLOGIT, CBR, ANN, and a conventional MSVM were also tested; the MSVM adopted the One-Against-One (OAO) approach, known as the most accurate among the various MSVM formulations. Although there are some limitations, the final experimental results demonstrate that GA-MSVM performs at a significantly higher level than all comparative models, outperforming the conventional MSVM, previously known to show the best prediction performance, as well as the other artificial intelligence and data mining techniques. In particular, instance selection proved to play a very important role in predicting the stock index trend, contributing more to the model's improvement than any other factor.
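The wrapper design described above can be illustrated with a small sketch: one chromosome carries one bit per candidate feature plus one bit per training instance, and fitness is validation accuracy on the selected subset. This is an assumption-laden illustration, not the authors' implementation — a nearest-centroid classifier stands in for the MSVM so that no external ML library is needed, and the GA settings (population 20, 15 generations, one-point crossover, 5% bit-flip mutation) are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def centroid_accuracy(X_tr, y_tr, X_va, y_va):
    """Stand-in for the paper's MSVM: classify by nearest class centroid."""
    classes = np.unique(y_tr)
    cents = np.array([X_tr[y_tr == c].mean(axis=0) for c in classes])
    d = ((X_va[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return (classes[d.argmin(axis=1)] == y_va).mean()

def ga_select(X_tr, y_tr, X_va, y_va, pop=20, gens=15, p_mut=0.05):
    """GA wrapper: one bit per feature plus one bit per training instance;
    fitness is validation accuracy on the selected features/instances."""
    n_inst, n_feat = X_tr.shape
    L = n_feat + n_inst
    population = rng.integers(0, 2, size=(pop, L))

    def fitness(mask):
        f, i = mask[:n_feat].astype(bool), mask[n_feat:].astype(bool)
        if f.sum() == 0 or len(np.unique(y_tr[i])) < 2:
            return 0.0          # degenerate selection: no features or one class
        return centroid_accuracy(X_tr[i][:, f], y_tr[i], X_va[:, f], y_va)

    for _ in range(gens):
        fit = np.array([fitness(m) for m in population])
        pairs = rng.integers(0, pop, size=(pop, 2))        # tournament selection
        winners = np.where(fit[pairs[:, 0]] >= fit[pairs[:, 1]],
                           pairs[:, 0], pairs[:, 1])
        parents = population[winners]
        children = parents.copy()                          # one-point crossover
        for k, c in enumerate(rng.integers(1, L, size=pop // 2)):
            children[2*k, c:], children[2*k+1, c:] = (
                parents[2*k+1, c:].copy(), parents[2*k, c:].copy())
        flip = rng.random((pop, L)) < p_mut                # bit-flip mutation
        population = np.where(flip, 1 - children, children)

    fit = np.array([fitness(m) for m in population])
    return population[fit.argmax()], fit.max()
```

On a toy data set with one informative feature, the best chromosome tends to retain that feature while pruning uninformative ones — the joint feature/instance search the paper performs, in miniature.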

A Study on the Risk Factors for Maternal and Child Health Care Program with Emphasis on Developing the Risk Score System (모자건강관리를 위한 위험요인별 감별평점분류기준 개발에 관한 연구)

  • 이광옥
    • Journal of Korean Academy of Nursing / v.13 no.1 / pp.7-21 / 1983
  • For the flexible and rational distribution of limited health resources based on measurements of individual risk, the so-called Risk Approach is proposed by the World Health Organization as a managerial tool in maternal and child health care programs. This approach requires a technique for measuring the degree of risk, or for discriminating the future outcomes of pregnancy, on the basis of prior information obtainable at prenatal care delivery settings. Numerous recent studies have focused on identifying relevant risk factors as that prior information, on defining the adverse outcomes of pregnancy to be discriminated, and on developing scoring systems that quantify those factors as determinants of pregnancy outcome. Once a scoring system is established, a technique for classifying patients into those with normal and those with adverse outcomes is easily developed. The scoring system should meet four basic requirements: 1) easy to construct, 2) easy to use, 3) theoretically sound, and 4) valid. In search of a feasible methodology meeting these requirements, the author applied the "Likelihood Method", one of the well-known principles of statistical analysis, to develop such a scoring system, as follows. Step 1: Classify the patients into four groups. Group $A_1$: adverse outcomes on the fetal (neonatal) side only. Group $A_2$: adverse outcomes on the maternal side only. Group $A_3$: adverse outcomes on both maternal and fetal (neonatal) sides. Group B: normal outcomes. Step 2: Construct the marginal tabulation of the distribution of risk factors for each group. Step 3: To calculate the risk score, take the logarithmic transformation of the relative proportions of the distribution and round them off to integers. Step 4: Test the validity of the score chart. A total of 2,282 maternity records registered between January 1 and December 31, 1982 at Ewha Womans University Hospital were used for this study, and the "Questionnaire for Maternity Record for Prenatal and Intrapartum High Risk Screening" developed by the Korean Institute for Population and Health was used to rearrange the information on the records into an analyzable form. The findings are summarized as follows. 1) The risk score chart constructed on the basis of the Likelihood Method is presented in Table 4 of the main text. 2) Analysis of the risk score chart showed that 24 risk factors have significant predictive power for discriminating pregnancy outcomes into the four groups defined above: (1) age, (2) marital status, (3) age at first pregnancy, (4) medical insurance, (5) number of pregnancies, (6) history of Cesarean sections, (7) number of living children, (8) history of premature infants, (9) history of overweight newborns, (10) history of congenital anomalies, (11) history of multiple pregnancies, (12) history of abnormal presentation, (13) history of obstetric abnormalities, (14) past illness, (15) hemoglobin level, (16) blood pressure, (17) heart status, (18) general appearance, (19) edema status, (20) result of abdominal examination, (21) cervix status, (22) pelvis status, (23) chief complaints, (24) reasons for examination. 3) The validity of the score chart was as follows: a) sensitivity: Group $A_1$ 0.75, Group $A_2$ 0.78, Group $A_3$ 0.92, all combined 0.85; b) specificity: 0.68. 4) The diagnosability of the score chart for a set of hypothetical prevalences of adverse outcomes (using the "all combined" sensitivity) was: for prevalences of 5%, 10%, 20%, 30%, 40%, 50%, and 60%, diagnosabilities of 12%, 23%, 40%, 53%, 64%, 75%, and 80%, respectively.
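The scoring step (Steps 2-3) can be illustrated with a small sketch. The abstract does not spell out the exact transformation, so the log-likelihood-ratio form below — comparing a factor's relative frequency in an adverse-outcome group against the normal group, then scaling and rounding to an integer — is one plausible reading, not the paper's formula; the base and scale are illustrative choices.

```python
import math

def risk_score(p_adverse, p_normal, base=10, scale=10):
    """Rounded log-likelihood-ratio score for one risk factor.
    p_adverse / p_normal compares the factor's relative frequency in an
    adverse-outcome group with its frequency in the normal-outcome group.
    `base` and `scale` are illustrative choices, not values from the paper."""
    return round(scale * math.log(p_adverse / p_normal, base))
```

A factor four times as frequent among adverse outcomes then contributes a positive integer score, a factor equally frequent contributes zero, and a protective factor contributes a negative score.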

A Polarization-based Frequency Scanning Interferometer and the Measurement Processing Acceleration based on Parallel Programing (편광 기반 주파수 스캐닝 간섭 시스템 및 병렬 프로그래밍 기반 측정 고속화)

  • Lee, Seung Hyun;Kim, Min Young
    • Journal of the Institute of Electronics and Information Engineers / v.50 no.8 / pp.253-263 / 2013
  • Frequency Scanning Interferometry (FSI), one of the most promising optical surface measurement techniques, generally achieves superior optical performance compared with other 3-dimensional measurement methods because its hardware remains fixed during operation: only the light frequency is scanned over a specific spectral band, with no vertical scanning of the target surface or objective lens. An FSI system collects a set of interference-fringe images while changing the frequency of the light source, transforms the intensity data of the acquired images into frequency information, and calculates the height profile of the target from a frequency analysis based on the Fast Fourier Transform (FFT). However, it still suffers from optical noise on target surfaces and from relatively long processing times due to the number of images acquired during the frequency scan. To address these problems, two methods are proposed. 1) A Polarization-based Frequency Scanning Interferometry (PFSI) is proposed for robustness to optical noise. It consists of a tunable laser as the light source, a $\lambda/4$ plate in front of the reference mirror, a $\lambda/4$ plate in front of the target object, a polarizing beam splitter (PBS), a polarizer in front of the image sensor, a polarizer in front of the fiber-coupled light source, and a $\lambda/2$ plate between the PBS and the polarizer of the light source. With the proposed system, the polarization technique solves the problem of low-contrast fringe images, and the light distribution between the object beam and the reference beam can be controlled. 2) A signal-processing acceleration method for PFSI is proposed based on a parallel processing architecture consisting of parallel hardware and software, namely a Graphics Processing Unit (GPU) programmed with the Compute Unified Device Architecture (CUDA). As a result, the processing time reaches the takt-time level required for real-time processing. Finally, the proposed system is evaluated in terms of accuracy and processing speed through a series of experiments, and the results show the effectiveness of the proposed system and method.
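The FFT step described above — recovering height from the beat frequency of the fringe intensity along the frequency-scan axis — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: `stack` and `dnu` are assumed inputs, and a simple per-pixel peak search without sub-bin interpolation is used.

```python
import numpy as np

def height_from_fsi(stack, dnu, c=3.0e8):
    """Height map from an FSI image stack (illustrative sketch).
    `stack` is (n_scan, H, W): fringe intensity at each of n_scan equally
    spaced optical-frequency steps of size `dnu` (Hz). The fringe beats at
    OPD/c cycles per Hz, so the dominant FFT bin along the scan axis locates
    the optical path difference (OPD); height is OPD / 2."""
    n = stack.shape[0]
    spec = np.abs(np.fft.rfft(stack - stack.mean(axis=0), axis=0))
    k = spec[1:].argmax(axis=0) + 1          # dominant beat bin, skipping DC
    opd = c * k / (n * dnu)                  # optical path difference (m)
    return opd / 2.0
```

Without sub-bin interpolation the height resolution is quantized to $c/(2 n \cdot \Delta\nu)$, which is why a wide scan range matters; the per-pixel FFT over the whole stack is also exactly the workload the paper moves onto the GPU.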

Quantitative Study of Annular Single-Crystal Brain SPECT (원형단일결정을 이용한 SPECT의 정량화 연구)

  • 김희중;김한명;소수길;봉정균;이종두
    • Progress in Medical Physics / v.9 no.3 / pp.163-173 / 1998
  • Nuclear medicine emission computed tomography (ECT) can be very useful for diagnosing early stages of neuronal diseases and for measuring therapeutic results objectively, provided that energy metabolism, blood flow, biochemical processes, or dopamine receptors and transporters can be quantitated with ECT. However, physical factors including attenuation, scatter, the partial volume effect, noise, and the reconstruction algorithm make quantitation very difficult, regardless of the type of SPECT system. In this study, we quantitated the effects of attenuation and scatter using brain SPECT and a three-dimensional brain phantom, with and without applying the corresponding correction methods. The dual-energy-window method was applied for scatter correction: the photopeak and scatter energy windows were set to 140 keV ± 10% and 119 keV ± 6%, respectively, and 100% of the scatter-window data were subtracted from the photopeak window prior to reconstruction. The projection data were reconstructed using a Butterworth filter with a cutoff frequency of 0.95 cycles/cm and an order of 10. Attenuation correction was done by Chang's method with attenuation coefficients of 0.12/cm and 0.15/cm for the reconstructions without and with scatter correction, respectively. For quantitation, regions of interest (ROIs) were drawn on three slices selected at the level of the basal ganglia. Without scatter correction, the ratios of the ROI average values between the basal ganglia and the background were 2.2 with attenuation correction and 2.1 without it; that is, the ratios were very similar with and without attenuation correction. With scatter correction, the corresponding ratios were 2.69 with attenuation correction and 2.64 without it. These results indicate that attenuation correction is necessary for quantitation. When the true ratios between the basal ganglia and the background were 6.58, 4.68, and 1.86, the measured ratios with scatter and attenuation correction were 76%, 80%, and 82% of the true ratios, respectively. The approximately 20% underestimation may be partially due to the partial volume effect and the reconstruction algorithm, which were not investigated in this study, and partially due to the imperfect scatter and attenuation correction methods, which were chosen with clinical applicability in mind.
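The two corrections described above can be sketched as follows, under the simplifying assumption that the per-pixel path lengths through the attenuating medium are already known (in practice, Chang's method derives them from the object outline):

```python
import numpy as np

def scatter_correct(photopeak, scatter, k=1.0):
    """Dual-energy-window scatter correction as described: subtract k times
    the scatter-window counts (k = 1.0, i.e. 100%, in the study) from the
    photopeak-window counts, clipping negative results to zero."""
    return np.clip(photopeak - k * scatter, 0.0, None)

def chang_factor(path_lengths, mu):
    """First-order Chang correction factor for one pixel: the reciprocal of
    the attenuation factor exp(-mu * l) averaged over projection angles.
    `path_lengths` gives the pixel-to-boundary distance (cm) at each angle;
    `mu` is the linear attenuation coefficient (1/cm), e.g. 0.12 or 0.15."""
    return 1.0 / np.mean(np.exp(-mu * np.asarray(path_lengths)))
```

The study's choice of a lower μ (0.12/cm) without scatter correction reflects that unsubtracted scatter partially compensates for attenuation, so a "broad-beam" coefficient is used; after scatter subtraction the narrow-beam value (0.15/cm) applies.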

A Comparative Study on the Recognition of Urban Agriculture between Urban Farmers and Public Officials (도시농업인과 공무원의 도시농업 인식 비교·평가)

  • Park, Won-Zei;Koo, Bon-Hak;Park, Mi-Ok;Kwon, Hyo-Jin
    • Journal of the Korean Institute of Landscape Architecture / v.40 no.4 / pp.90-103 / 2012
  • The aim of this study is to understand the problems of the urban agriculture policies promoted by the national government and local autonomous entities, based on a comparison of the perceptions of urban agriculture held by urban farmers and public officials, and to explore schemes for its further revitalization. For this purpose, the study drew implications from recent trends and legislation in domestic and foreign urban agriculture and then conducted a questionnaire survey of urban farmers and public officials. Based on this research, the revitalization schemes for urban agriculture are as follows. First, it is necessary to secure usable arable land, such as green roofs, community gardens, and urban agriculture parks. Second, it is necessary to establish an urban agriculture act suited to the actual circumstances of the country, and to back up the legislation institutionally and technologically at the national level, in order to secure the durability of urban agriculture. Third, it is advisable to raise the problems encountered during cultivation through a network between urban farmers and public officials, and to prepare plans for the active exchange of farming technologies. Fourth, it is necessary to activate community gardens by providing education on cultivation and management methods along with a variety of urban-agriculture participation programs. Fifth, it is necessary to set up specialized, practical education through landscape architecture institutes. Sixth, it is necessary to induce the spontaneous participation of urban farmers through promotional activities that arouse their interest. Lastly, it is also necessary to establish a legal basis for urban agricultural parks and facilities, and to pursue multilateral policies and their implementation, so that urban agriculture can continue stably within city boundaries.

Optimal supervised LSA method using selective feature dimension reduction (선택적 자질 차원 축소를 이용한 최적의 지도적 LSA 방법)

  • Kim, Jung-Ho;Kim, Myung-Kyu;Cha, Myung-Hoon;In, Joo-Ho;Chae, Soo-Hoan
    • Science of Emotion and Sensibility / v.13 no.1 / pp.47-60 / 2010
  • Most classification studies have used kNN (k-Nearest Neighbor) and SVM (Support Vector Machine), known as learning-based models, or Bayesian classifiers and NNA (Neural Network Algorithm), known as statistics-based methods. However, these face limitations of space and time when classifying the vast number of web pages on today's internet. Moreover, most classification studies use a uni-gram feature representation, which captures the real meaning of words poorly. Korean web page classification faces additional problems because Korean words frequently carry multiple meanings (polysemy). For these reasons, LSA (Latent Semantic Analysis) is proposed as a method that classifies well in this environment (large data sets and word polysemy). LSA uses SVD (Singular Value Decomposition), which decomposes the original term-document matrix into three matrices and reduces their dimensions. This creates a new low-dimensional semantic space for representing vectors, which makes classification efficient and exposes the latent meaning of words and documents (or web pages). Although LSA is good for classification, it has a drawback: as SVD reduces the dimensions of the matrix and creates the new semantic space, it considers which dimensions represent the vectors well, not which dimensions discriminate between them. This is why LSA does not improve classification performance as much as expected. In this paper, we propose a new LSA that selects the optimal dimensions to both discriminate and represent vectors well, minimizing this drawback and improving performance. The proposed method shows better and more stable performance than other LSA variants in low-dimensional spaces. In addition, we achieve further improvement in classification by creating and selecting features — removing stopwords and weighting specific values statistically.
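The baseline LSA step described above — projecting documents onto the k largest singular directions of the term-document matrix — can be sketched as follows. The paper's actual contribution, selecting dimensions for discriminative power rather than purely representational power, is not reproduced here:

```python
import numpy as np

def lsa_project(X, k):
    """Standard LSA projection: SVD of the term-document matrix X
    (terms x documents), truncated to the k largest singular values.
    Returns the k-dimensional document vectors S_k V_k^T (= U_k^T X)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return s[:k, None] * Vt[:k]              # shape (k, n_documents)
```

A supervised variant in the paper's spirit would score each of the k dimensions by how well it separates class labels and keep only the discriminative ones, rather than truncating strictly by singular value.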

The Usefulness According to the Incubation Time of PTH as Prediction Index of Hypocalcemia (저칼슘혈증 예측지표로서 부갑상선 호르몬 검사반응시간에 따른 유용성)

  • Au, Doo-Hee;Kim, Ji-Young;Seok, Jae-Dong
    • The Korean Journal of Nuclear Medicine Technology / v.14 no.1 / pp.138-142 / 2010
  • Purpose: The PTH (parathyroid hormone) level is a useful index for predicting hypocalcemia after thyroidectomy, and fast results are required for early diagnosis. In this study, we evaluated the change in PTH results according to incubation time and investigated the usefulness of early-incubation PTH results for diagnosing hypocalcemia. Materials and Methods: The subjects were 131 patients who had PTH tests from July to August 2009. All measurements used the IRMA method. PTH values were evaluated for precision (10 repeated measurements) and recovery rate, and for their correlation across incubation times of 0.5, 3, 6, and $18 \pm 2$ hours (hereafter, overnight). The data were analyzed in terms of sensitivity, specificity, PPV (positive predictive value), and accuracy. Results: The correlation with the overnight level was time-dependent, reaching $R^2$ = 0.987 at 0.5 hours, $R^2$ = 0.993 at 3 hours, and $R^2$ = 0.996 at 6 hours. The precision (%CV ± SD) was $15.92 \pm 15.54$ at 0.5 hours, $6.91 \pm 7.38$ at 3 hours, $4.30 \pm 4.69$ at 6 hours, and $4.59 \pm 2.59$ overnight. The recovery rate (%mean ± SD) was $96.8 \pm 5.44$ at 0.5 hours, $102.6 \pm 4.35$ at 3 hours, $100.7 \pm 2.56$ at 6 hours, and $102.2 \pm 5.98$ overnight. With 15 pg/mL of the overnight measurement set as the criterion, we measured the sensitivity, specificity, PPV, and accuracy at 0.5, 3, and 6 hours. The sensitivity was 97.5% at all time points. The specificity was 96.0% at 0.5 hours, 100% at 3 hours, and 92.3% at 6 hours. The PPV was 86.6% at 0.5 hours, 100% at 3 hours, and 92.8% at 6 hours. The accuracy was 84.7% at 0.5 hours, 97.5% at 3 hours, and 90.6% at 6 hours. The early-time results correlated significantly with the corresponding overnight PTH values. Conclusion: The PTH value at 3 hours showed a favorable concordance rate of 94.1% with the overnight value and may be useful for predicting hypocalcemia. Therefore, this method may be an attractive alternative that enables proper, timely treatment, preventing the onset of symptoms by giving the patient a calcium agent.
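The diagnostic indices reported above can be reproduced from paired early/overnight measurements as follows. The dichotomization convention — values below the cutoff counted as positive for predicted hypocalcemia — is an assumption, as the abstract does not state it explicitly:

```python
def diagnostic_metrics(early, overnight, cutoff=15.0):
    """Sensitivity, specificity, PPV, and accuracy of early-incubation PTH
    against the overnight result, both dichotomized at `cutoff` pg/mL
    (15 pg/mL in the study). 'Positive' here means below the cutoff, i.e.
    predicted hypocalcemia (an assumed convention)."""
    tp = fp = tn = fn = 0
    for e, o in zip(early, overnight):
        pred, truth = e < cutoff, o < cutoff
        tp += pred and truth
        fp += pred and not truth
        tn += (not pred) and (not truth)
        fn += (not pred) and truth
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "accuracy": (tp + tn) / (tp + fp + tn + fn)}
```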

Establishment of Reference Intervals of Osteocalcin according to Age in Women for Health Promotion Center (건강검진이 의뢰된 여성의 연령에 따른 Osteocalcin의 참고범위 설정)

  • Kang, Ji-Soon;Yoo, Byoung-Joo;Oh, Jung-Eun;Kim, Geon-Jae
    • The Korean Journal of Nuclear Medicine Technology / v.13 no.1 / pp.104-111 / 2009
  • Purpose: Osteocalcin, also known as bone gamma-carboxyglutamic acid (Gla) protein (BGP), is a noncollagenous bone protein synthesized by osteoblasts. Serum concentrations of Osteocalcin are used as a biochemical marker of bone turnover. Reference intervals for Osteocalcin are supplied by the kit manufacturer according to age; however, each laboratory should establish its own reference intervals. In this study, variations in serum Osteocalcin levels were used to establish actual age-specific Osteocalcin reference intervals. Materials and Methods: We selected 864 healthy females aged 20~80 years who visited a health promotion center between Aug. 2007 and Sep. 2008. The Osteocalcin IRMA kit (OSTEO-RIACT, CIS Bio International, Gif-sur-Yvette, France) was used for quantification, and the results were analyzed with SPSS 12.0 statistical software. Results: The reference intervals of Osteocalcin analyzed with the Hoffmann method were from 8.8~39.4 ng/mL to 6.3~28.8 ng/mL for ages 20 to 30, from 7.7~31.9 ng/mL to 5.9~17.4 ng/mL for ages 31 to 40, from 8.0~36.0 ng/mL to 5.5~20.1 ng/mL for ages 41 to 50, from 8.0~50.5 ng/mL to 6.7~27.0 ng/mL for ages 51 to 60, and from 12.9~55.9 ng/mL to 7.5~27.5 ng/mL for ages 61 to 80. These reference intervals did not agree with those recommended by the manufacturer. Conclusions: Osteocalcin is used as an indicator of metabolic bone disease, so this study sought to provide reference intervals of Osteocalcin that can support clinical decisions. Previous reference intervals should not simply be reused; new intervals should be set through continuous analysis.
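Since the abstract names only the Hoffmann method, the sketch below shows one common simplified form of it, as an assumption rather than the paper's exact procedure: fit a straight line to the central, presumed-Gaussian portion of the ordered data plotted on a normal-probability scale, then read the central 95% interval from the fitted mean and SD. The central band (25th-75th percentile) and the 95% coverage are illustrative choices.

```python
from statistics import NormalDist

def hoffmann_interval(values, lo=0.25, hi=0.75, coverage=0.95):
    """Simplified Hoffmann-style reference interval (illustrative sketch):
    regress the central portion of the ordered data against the z-scores of
    their plotting positions, then return mean +/- z_c * SD of the fitted
    Gaussian."""
    xs = sorted(values)
    n = len(xs)
    nd = NormalDist()
    # (z-score of plotting position, observed value) in the central band
    pts = [(nd.inv_cdf((i + 0.5) / n), xs[i])
           for i in range(n) if lo <= (i + 0.5) / n <= hi]
    mz = sum(z for z, _ in pts) / len(pts)
    mv = sum(v for _, v in pts) / len(pts)
    # least-squares slope of value = mu + sigma * z
    sigma = (sum((z - mz) * (v - mv) for z, v in pts)
             / sum((z - mz) ** 2 for z, _ in pts))
    mu = mv - sigma * mz
    zc = nd.inv_cdf(0.5 + coverage / 2)
    return mu - zc * sigma, mu + zc * sigma
```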

Dietary Fiber Intake of Middle School Students in Chungbuk Area and Development of Food Frequency Questionnaire (충북지역 중학생의 식이섬유 섭취 실태 및 식품섭취빈도조사지 개발)

  • Kim, Young-Hye;Kang, Yu-Ju;Lee, In-Seon;Kim, Hyang-Sook
    • Journal of the Korean Society of Food Science and Nutrition / v.39 no.2 / pp.244-252 / 2010
  • This study aimed to provide groundwork for assessing the nutritional status and dietary fiber intake of middle school students in the Chungbuk area, using the 24-hr recall method, and to develop a food frequency questionnaire (FFQ) for dietary fiber intake. Average daily calorie intake per person was 2035.6 kcal for boys and 1876.7 kcal for girls, corresponding to 75.4% and 93.8% of the estimated energy requirement (EER), respectively. The percent estimated average requirements (%EAR) of calcium, iron, and folate were the lowest, at 34.3%, 54.2%, and 67.5% for boys and 36.6%, 59.2%, and 64.4% for girls, respectively. Average daily dietary fiber intake was $17.6\pm5.3$ g for boys and $16.5\pm4.8$ g for girls, i.e., 54.8% and 68.8% of the adequate intake (AI), respectively. The main food sources of dietary fiber were polished rice and kimchi, and the main source groups were vegetables, cereals and their products, fruits, and seaweeds, in that order, with 68.44% of total dietary fiber intake coming from vegetables and cereals. From a preliminary list of 39 food items, 19 were selected by deriving the correlation coefficient of each item between the 24-hr recall and FFQ methods. The correlation coefficient increased from 0.71 to 0.78 (significant at p<0.01) after the FFQ was adjusted from the 39-item to the 19-item set. The percentage of subjects classified into the same level by the FFQ and the 24-hr recall was evaluated together with the Kappa value for the joint quartile classification. Agreement was highest in the second-lowest group, where the percentage of correspondence rose from 90.2% to 92.4% and the Kappa value from 0.54 to 0.59. Consequently, the FFQ developed in this study should be useful for identifying groups with low dietary fiber intake.
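The Kappa statistic used above for the joint quartile classification (FFQ vs. 24-hr recall) is Cohen's kappa, which can be computed as:

```python
def cohens_kappa(a, b, labels):
    """Cohen's kappa for agreement between two classifications of the same
    subjects (e.g. intake quartiles from FFQ vs. 24-hr recall):
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n               # observed
    pe = sum((sum(x == l for x in a) / n) * (sum(y == l for y in b) / n)
             for l in labels)                                # expected by chance
    return (po - pe) / (1 - pe)
```

Values around 0.5-0.6, as reported above, are conventionally read as moderate agreement.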

Stock Assessment and Management Implications of the Korean aucha perch (Coreoperca herzi) in Freshwater: (2) Estimation of Potential Yield Assessment and Stock of Coreoperca herzi in the Mid-Upper System of the Seomjin River (담수산 어류 꺽지 (Coreoperca herzi)의 자원 평가 및 관리 방안 연구: 섬진강 중.상류 수계에서 꺽지의 자원량 및 잠재생산량 추정 (2))

  • Jang, Sung-Hyun;Ryu, Hui-Seong;Lee, Jung-Ho
    • Korean Journal of Ecology and Environment / v.44 no.2 / pp.172-177 / 2011
  • This study sought to determine efficient management measures for the Korean aucha perch by estimating the potential yield (PY), meaning the maximum sustainable yield (MSY) based on the optimal stock, in the mid-upper region of the Seomjin River watershed from August 2008 to April 2009. The stock assessment was conducted by the swept-area method, and PY was estimated with a modified fisheries management system based on the allowable biological catch. Yield-per-recruit analysis (Beverton and Holt, 1957) was also used to review the efficient management of the resource, Coreoperca herzi. The age at first capture ($t_c$) was 1.464 years, corresponding to a body length of 7.8 cm. At current fishing intensities, the instantaneous coefficient of fishing mortality (F) was estimated to be 0.061 $year^{-1}$; yield-per-recruit analysis with these F and $t_c$ values estimated the current yield per recruit at 4.124 g. The fishing mortality at the allowable biological catch ($F_{ABC}$), based on the current $t_c$ and F, was estimated to be 0.401 $year^{-1}$; the optimum could therefore be achieved at a higher fishing intensity for Coreoperca herzi. The calculated annual stock of Coreoperca herzi was 3,048 kg, and the potential yield was estimated to be 861 kg with $t_c$ and $F_{ABC}$ fixed at current levels. According to the yield-per-recruit analysis, if F and $t_c$ were set at 0.643 $year^{-1}$ and 3 years of age, respectively, the yield per recruit would be predicted to increase 3.4-fold, from 4.12 g to 13.84 g.
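The yield-per-recruit model cited above (Beverton and Holt, 1957) can be sketched in its knife-edge-selection, infinite-maximum-age form, which combines fishing mortality F, natural mortality M, von Bertalanffy growth (K, $t_0$, $W_\infty$), and the age at first capture $t_c$. All parameter values in the test are illustrative placeholders; the paper's growth and natural-mortality estimates for Coreoperca herzi are not given in the abstract.

```python
import math

def ypr_beverton_holt(F, tc, M, K, t0, tr, Winf):
    """Beverton-Holt yield per recruit (knife-edge selection at age tc,
    recruitment at age tr, no maximum age). Uses the standard cubic
    expansion of the von Bertalanffy weight curve with coefficients
    U_n = 1, -3, 3, -1."""
    U = [1.0, -3.0, 3.0, -1.0]
    s = sum(U[n] * math.exp(-n * K * (tc - t0)) / (F + M + n * K)
            for n in range(4))
    return F * math.exp(-M * (tc - tr)) * Winf * s
```

Scanning this function over F and $t_c$ is exactly how the "3.4-fold increase at F = 0.643 and $t_c$ = 3" type of statement above is obtained.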