• Title/Summary/Keyword: gaussian model


Development of Prediction Growth and Yield Models by Growing Degree Days in Hot Pepper (생육도일온도에 따른 고추의 생육 및 수량 예측 모델 개발)

  • Kim, Sung Kyeom;Lee, Jin Hyoung;Lee, Hee Ju;Lee, Sang Gyu;Mun, Boheum;An, Sewoong;Lee, Hee Su
    • Journal of Bio-Environment Control / v.27 no.4 / pp.424-430 / 2018
  • This study was carried out to estimate the growth characteristics of hot pepper and to develop prediction models for production yield based on growth parameters and climatic elements. Sigmoid regressions for predicting growth parameters in terms of fresh and dry weight, plant height, and leaf area were designed with growing degree days (GDD). The biomass and leaf expansion of hot pepper plants increased rapidly at 1,000 and 941 GDD, respectively. The relative growth rate (RGR) of hot pepper based on dry weight was formulated by the equation $RGR(dry\;weight) = 0.0562 + 0.0004 \times DAT - 0.00000557 \times DAT^2$, and the yields of fresh and dry hot pepper at 112 days after transplanting were estimated at 1,387 and 291 kg/10a, respectively. The results indicated that the growth and yield of hot pepper could be predicted by a potential growth model under plastic tunnel cultivation. These models require calibration and validation to assess the efficacy of yield prediction in hot pepper when supplementing a prediction model based on growth parameters and climatic elements.
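The reported RGR fit can be evaluated directly. A minimal sketch (Python; only the coefficients are taken from the abstract, everything else is illustrative):

```python
def rgr_dry_weight(dat: float) -> float:
    """Relative growth rate (dry-weight basis) as a function of days
    after transplanting (DAT), using the quadratic coefficients
    reported in the abstract."""
    return 0.0562 + 0.0004 * dat - 0.00000557 * dat ** 2

# The fitted curve peaks where the derivative is zero:
# d(RGR)/d(DAT) = 0.0004 - 2 * 0.00000557 * DAT = 0
peak_dat = 0.0004 / (2 * 0.00000557)  # roughly 36 days after transplanting
```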

The Study on Speaker Change Verification Using SNR based weighted KL distance (SNR 기반 가중 KL 거리를 활용한 화자 변화 검증에 관한 연구)

  • Cho, Joon-Beom;Lee, Ji-eun;Lee, Kyong-Rok
    • Journal of Convergence for Information Technology / v.7 no.6 / pp.159-166 / 2017
  • In this paper, we experimented with improving the verification performance of speaker change detection on broadcast news. The approach is to enhance the noisy input speech and to apply the KL distance $D_s$ using the SNR-based weighting function $w_m$. The baseline is a verification system for speaker change using the GMM-UBM based KL distance D (Experiment 0). Experiment 1 applies enhancement of the noisy input speech using MMSE Log-STSA. Experiment 2 applies the new KL distance $D_s$ to the system of Experiment 1. Experiments were conducted under the condition of 0% MDR in order to avoid missing speaker change information. The FAR of Experiment 0 was 71.5%. The FAR of Experiment 1 was 67.3%, an improvement of 4.2 percentage points over Experiment 0. The FAR of Experiment 2 was 60.7%, an improvement of 10.8 percentage points over Experiment 0.
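The abstract does not give the exact forms of $D_s$ and $w_m$, so the sketch below is only illustrative: a per-dimension symmetric KL distance between diagonal-covariance Gaussians, combined with an arbitrary weight vector standing in for the SNR-based weighting:

```python
import numpy as np

def kl_diag(mu1, var1, mu2, var2):
    """KL(N1 || N2) per dimension for diagonal-covariance Gaussians."""
    return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def weighted_symmetric_kl(mu1, var1, mu2, var2, w):
    """Symmetric KL distance combined with a per-dimension weight w
    (e.g. an SNR-based weight). The paper's actual weighting function
    w_m is not specified in the abstract; w is a free parameter here."""
    d = kl_diag(mu1, var1, mu2, var2) + kl_diag(mu2, var2, mu1, var1)
    return float(np.sum(w * d))
```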

Future Korean Water Resources Projection Considering Uncertainty of GCMs and Hydrological Models (GCM과 수문모형의 불확실성을 고려한 기후변화에 따른 한반도 미래 수자원 전망)

  • Bae, Deg-Hyo;Jung, Il-Won;Lee, Byung-Ju;Lee, Moon-Hwan
    • Journal of Korea Water Resources Association / v.44 no.5 / pp.389-406 / 2011
  • The objective of this study is to examine the impact of climate change on Korean water resources, considering the uncertainties of Global Climate Models (GCMs) and hydrological models. Three emission scenarios (A2, A1B, B1) and the results of 13 GCMs are used to account for the uncertainties of the emission scenarios and GCMs, while the PRMS, SWAT, and SLURP models are employed to consider the effects of hydrological model structures and potential evapotranspiration (PET) computation methods. The 312 ensemble results are produced for 109 mid-size sub-basins over South Korea, and Gaussian kernel density functions obtained from these ensemble results are presented together with the ensemble means and their variabilities. The results show that summer and winter runoff is expected to increase, and spring runoff to decrease, in the three future periods relative to the 30-year reference period. Annual average runoff increases over all sub-basins, but the increases in the northern basins, including the Han River basin, are greater than those in the southern basins. Because the increase in annual average runoff is mainly caused by the increase in summer runoff, the seasonal runoff variations under climate change would be severe, and the impact of climate change on Korean water resources could intensify the difficulties of water resources conservation and management. As regards the uncertainties, the highest and lowest are in the winter and summer seasons, respectively.
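A Gaussian kernel density function over an ensemble of projections, of the kind used to summarize the 312-member ensemble above, can be sketched as follows (the bandwidth choice is an assumption; the study's is not given):

```python
import numpy as np

def gaussian_kde(samples, grid, bandwidth):
    """Gaussian kernel density estimate of an ensemble of projections,
    evaluated on `grid`. The bandwidth is a free parameter here."""
    samples = np.asarray(samples, dtype=float)
    z = (grid[:, None] - samples[None, :]) / bandwidth
    k = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return k.mean(axis=1) / bandwidth
```

The ensemble mean and spread reported in the study would then be read off the same samples, while the KDE shows the full shape of the projection uncertainty.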

A Study on Condition Analysis of Revised Project Level of Gravity Port facility using Big Data (빅데이터 분석을 통한 중력식 항만시설 수정프로젝트 레벨의 상태변화 특성 분석)

  • Na, Yong Hyoun;Park, Mi Yeon;Jang, Shinwoo
    • Journal of the Society of Disaster Information / v.17 no.2 / pp.254-265 / 2021
  • Purpose: Inspection and diagnosis of the performance and safety of domestic port facilities have been conducted for over 20 years. However, long-term development strategies and directions for facility renewal and performance improvement that use the diagnosis history and results are not being applied realistically. In particular, port structures with a long service life face many problems in terms of safety and functionality due to increases in vessel size and port use frequency, as well as the effects of natural disasters caused by climate change. Method: In this study, element-level maintenance history data for gravity-type quays were collected and defined as big data, and a predictive approximation model was derived to estimate the pattern of deterioration and aging of the facility at the project level based on these data. In particular, we compared and proposed models suitable for the use of big data by examining the validity of the condition-based deterioration patterns and deterioration approximation models generated through the GP and SGP machine learning algorithms. Result: In reviewing the suitability of the proposed techniques, the RMSE and R2 of the GP technique were 0.9854 and 0.0721, and those of the SGP technique were 0.7246 and 0.2518. Conclusion: If port facility data collection continues to be performed, this research using machine learning techniques is expected to play an important role in future decision-making on investment in port facilities.
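Reading GP as Gaussian process regression (SGP, presumably its sparse variant, is not sketched here), the core of the technique is an RBF-kernel posterior mean fitted to condition-history points. A numpy-only sketch; the kernel choice, hyperparameters, and noise level are assumptions, not the paper's values:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential (RBF) covariance between 1-D input vectors."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / length_scale ** 2)

def gp_posterior_mean(x_train, y_train, x_test, noise=1e-2):
    """Gaussian process regression posterior mean with an RBF kernel
    and i.i.d. Gaussian observation noise."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_test, x_train)
    return K_s @ np.linalg.solve(K, y_train)
```

RMSE and R2, as reported in the study, would then be computed between `gp_posterior_mean(...)` on held-out inspection records and the observed condition values.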

A Study on Optimization of Perovskite Solar Cell Light Absorption Layer Thin Film Based on Machine Learning (머신러닝 기반 페로브스카이트 태양전지 광흡수층 박막 최적화를 위한 연구)

  • Ha, Jae-jun;Lee, Jun-hyuk;Oh, Ju-young;Lee, Dong-geun
    • The Journal of the Korea Contents Association / v.22 no.7 / pp.55-62 / 2022
  • The perovskite solar cell is an active area of research in renewable energy fields such as solar, wind, hydroelectric, marine, bio, and hydrogen energy, which aim to replace fossil fuels such as oil, coal, and natural gas as power demand grows with the increasing use of the Internet of Things and virtual environments driven by the 4th industrial revolution. The perovskite solar cell is a solar cell device using an organic-inorganic hybrid material with a perovskite structure, and it has the advantages of potentially replacing existing silicon solar cells through high efficiency, low-cost solution processing, and low-temperature processes. To optimize a light absorption layer thin film predicted by the existing empirical method, its reliability must be verified through device characteristic evaluation. However, since evaluating the characteristics of light-absorbing layer thin-film devices is costly, the number of tests is limited. To address this problem, the applicability of a clear and valid machine learning or artificial intelligence model as an auxiliary means of optimizing the light absorption layer thin film is considered highly promising. In this study, to estimate the optimization of the light absorption layer thin film of perovskite solar cells, support vector machine regression models with linear, RBF, polynomial, and sigmoid kernels were compared to verify the difference in accuracy for each kernel function.
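The four kernel families compared above can be written down directly. The sketch below pairs them with kernel ridge regression as a numpy-only stand-in for the study's support vector regression; all parameter values are illustrative:

```python
import numpy as np

def make_kernel(name, gamma=0.5, coef0=1.0, degree=3):
    """The four kernel families compared in the study; the parameter
    values here are illustrative, not the paper's."""
    if name == "linear":
        return lambda a, b: a @ b.T
    if name == "rbf":
        return lambda a, b: np.exp(
            -gamma * ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    if name == "poly":
        return lambda a, b: (gamma * (a @ b.T) + coef0) ** degree
    if name == "sigmoid":
        return lambda a, b: np.tanh(gamma * (a @ b.T) + coef0)
    raise ValueError(name)

def kernel_ridge(kernel, x_train, y_train, x_test, alpha=1e-3):
    """Kernel ridge regression: a simple stand-in for SVR that lets the
    same kernels be compared on prediction error."""
    K = kernel(x_train, x_train) + alpha * np.eye(len(x_train))
    return kernel(x_test, x_train) @ np.linalg.solve(K, y_train)
```

Comparing kernels then amounts to fitting each on the same training split and scoring the held-out error, as the study does for SVR accuracy.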

Birth Weight Distribution by Gestational Age in Korean Population: Using Finite Mixture Model (우리나라 신생아의 재태 연령에 따른 출생체중의 정상치 : Finite Mixture Model을 이용하여)

  • Lee, Jung-Ju;Park, Chang Gi;Lee, Kwang-Sun
    • Clinical and Experimental Pediatrics / v.48 no.11 / pp.1179-1186 / 2005
  • Purpose: A universal standard of birth weight for gestational age cannot be made, since the birth weight distribution varies with race and other sociodemographic factors. This report aims to establish the birth weight distribution curve by gestational age specific to Korean live births. Methods: We used the national birth certificate data of all live births in Korea from January 2001 to December 2003. For live births with gestational ages of 24 to 44 weeks (n=1,509,763), we obtained the mean birth weight, standard deviation, and 10th, 25th, 50th, 75th, and 90th percentile values for each gestational age group in one-week increments. We then investigated the birth weight distribution of each gestational age group with a normal Gaussian model. To establish the final standard values of the Korean birth weight distribution by gestational age, we used a finite mixture model to eliminate erroneous birth weights for the respective gestational ages. Results: For gestational ages of 28 to 32 weeks, the birth weight distribution showed a biologically implausible skewed tail or bimodal distribution. Following correction of the erroneous distribution using the finite mixture model, the constructed birth weight distribution curve was compared to those of other studies. The Korean birth weight percentile values were generally lower than those for Norwegians and North Americans, particularly after 37 weeks of gestation. The Korean curve was similar to that of Lubchenco at both the 50th and 90th percentiles, but generally had higher 10th percentile values. Conclusion: This birth weight distribution curve by gestational age is based on more recent national population data than previous studies in Korea. We hope that this curve will help clinicians define and manage large-for-gestational-age infants as well as infants with intrauterine growth retardation.
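The finite-mixture correction can be illustrated with a minimal two-component Gaussian mixture fitted by EM; this is a generic sketch of the idea (separating a plausible component from an erroneous one), not the paper's actual model specification:

```python
import numpy as np

def em_two_gaussians(x, iters=200):
    """Fit a two-component 1-D Gaussian mixture by EM.
    Returns (weights, means, variances)."""
    x = np.asarray(x, dtype=float)
    mu = np.percentile(x, [25.0, 75.0])          # crude initialization
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        dens = (pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2.0 * np.pi * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n
    return pi, mu, var
```

In the paper's setting, the minority component at an implausible weight range would be discarded before computing the percentile curves.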

Analysis of Trading Performance on Intelligent Trading System for Directional Trading (방향성매매를 위한 지능형 매매시스템의 투자성과분석)

  • Choi, Heung-Sik;Kim, Sun-Woong;Park, Sung-Cheol
    • Journal of Intelligence and Information Systems / v.17 no.3 / pp.187-201 / 2011
  • The KOSPI200 index is a Korean stock price index consisting of 200 actively traded stocks in the Korean stock market. Its base value of 100 was set on January 3, 1990. The Korea Exchange (KRX) developed derivatives markets on the KOSPI200 index. The KOSPI200 index futures market, introduced in 1996, has become one of the most actively traded index markets in the world. Traders can profit by entering a long position on a KOSPI200 index futures contract if the KOSPI200 index will rise, and likewise by entering a short position if it will decline. Basically, KOSPI200 index futures trading is a short-term zero-sum game, and therefore most futures traders use technical indicators. Advanced traders make stable profits by using the system trading technique, also known as algorithmic trading. Algorithmic trading uses computer programs to receive real-time stock market data, analyze stock price movements with various technical indicators, and automatically enter trading orders, such as the timing, price, or quantity of the order, without any human intervention. Recent studies have shown the usefulness of artificial intelligence systems in forecasting stock prices or investment risk. KOSPI200 index data is numerical time-series data: a sequence of data points measured at successive uniform time intervals such as minute, day, week, or month. KOSPI200 index futures traders use technical analysis to find patterns in the time-series chart. Although there are many technical indicators, their results indicate the market state among bull, bear, and flat. Most strategies based on technical analysis are divided into trend-following and non-trend-following strategies. Both strategies decide the market state based on the patterns of the KOSPI200 index time-series data. This fits well with a Markov model (MM).
Everybody knows that the next price will be higher than, lower than, or similar to the last price, and that the next price is influenced by the last price. However, nobody knows the exact status of the next price, whether it will go up, go down, or stay flat. So a hidden Markov model (HMM) is a better fit than an MM. HMMs are divided into discrete HMMs (DHMM) and continuous HMMs (CHMM). The only difference between a DHMM and a CHMM is the representation of the emission probabilities: a DHMM uses a discrete probability distribution, while a CHMM uses a continuous probability density function such as a Gaussian mixture model. KOSPI200 index values are real numbers that follow a continuous probability density function, so a CHMM is more appropriate than a DHMM for the KOSPI200 index. In this paper, we present an artificial intelligence trading system based on a CHMM for KOSPI200 index futures system traders. Traders have gained experience in technical trading ever since the introduction of the KOSPI200 index futures market, and they have applied many strategies to make a profit in trading KOSPI200 index futures. Some strategies are based on technical indicators such as moving averages or stochastics, and others are based on candlestick patterns such as three outside up, three outside down, harami, or doji star. We show a trading system based on a moving average cross strategy with a CHMM, and we compare it to a traditional algorithmic trading system. We set the moving average parameters to values commonly used by market practitioners. Empirical results are presented comparing the simulation performance with the traditional algorithmic trading system using long-term daily KOSPI200 index data spanning more than 20 years. Our suggested trading system shows higher trading performance than naive system trading.
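A small piece of the CHMM machinery described above — the scaled forward algorithm with a single Gaussian emission per market state (bull/bear/flat) — can be sketched as follows. A full CHMM would use a Gaussian mixture per state and parameters learned from data, so everything here is illustrative:

```python
import numpy as np

def gauss_pdf(x, mu, var):
    """Gaussian emission density, vectorized over states."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def forward_log_likelihood(obs, pi, A, mu, var):
    """Scaled forward algorithm: log-likelihood of an observation
    sequence under an HMM with one Gaussian emission per state.
    pi: initial state probabilities, A: transition matrix,
    mu/var: per-state emission mean and variance."""
    alpha = pi * gauss_pdf(obs[0], mu, var)
    ll = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for x in obs[1:]:
        alpha = (alpha @ A) * gauss_pdf(x, mu, var)
        ll += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return ll
```

With a GMM per state, `gauss_pdf` would become a weighted sum of such terms, which is the CHMM emission model the paper refers to.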

The Prediction of DEA based Efficiency Rating for Venture Business Using Multi-class SVM (다분류 SVM을 이용한 DEA기반 벤처기업 효율성등급 예측모형)

  • Park, Ji-Young;Hong, Tae-Ho
    • Asia Pacific Journal of Information Systems / v.19 no.2 / pp.139-155 / 2009
  • For the last few decades, many studies have tried to explore and unveil venture companies' success factors and unique features in order to identify the sources of such companies' competitive advantages over their rivals. Such venture companies have shown a tendency to give high returns to investors, generally by making the best use of information technology. For this reason, many venture companies are keen on attracting avid investors' attention. Investors generally make their investment decisions by carefully examining the evaluation criteria of the alternatives. To them, credit rating information provided by international rating agencies such as Standard and Poor's, Moody's, and Fitch is a crucial source on such pivotal concerns as a company's stability, growth, and risk status. But this type of information is generated only for companies issuing corporate bonds, not for venture companies. Therefore, this study proposes a method for evaluating venture businesses by presenting our recent empirical results using financial data of Korean venture companies listed on KOSDAQ on the Korea Exchange. In addition, this paper used multi-class SVM for the prediction of the DEA-based efficiency rating for venture businesses derived from our proposed method. Our approach sheds light on ways to locate efficient companies generating a high level of profits. Above all, in determining effective ways to evaluate a venture firm's efficiency, it is important to understand the major contributing factors of such efficiency. Therefore, this paper is constructed on the basis of the following two ideas to classify which companies are more efficient venture companies: i) making a DEA-based multi-class rating for sample companies, and ii) developing a multi-class SVM-based efficiency prediction model for classifying all companies.
First, Data Envelopment Analysis (DEA) is a non-parametric multiple input-output efficiency technique that measures the relative efficiency of decision making units (DMUs) using a linear programming based model. It is non-parametric because it requires no assumption about the shape or parameters of the underlying production function. DEA has already been widely applied to evaluating the relative efficiency of DMUs. Recently, a number of DEA-based studies have evaluated the efficiency of various types of companies, such as internet companies and venture companies. It has also been applied to corporate credit ratings. In this study we utilized DEA for sorting venture companies by efficiency-based ratings. The Support Vector Machine (SVM), on the other hand, is a popular technique for solving data classification problems. In this paper, we employed SVM to classify the efficiency ratings of IT venture companies according to the results of DEA. The SVM method was first developed by Vapnik (1995). As one of many machine learning techniques, SVM is based on statistical learning theory. Thus far, the method has shown good performance, especially in generalization capacity in classification tasks, resulting in numerous applications in many areas of business. SVM is basically an algorithm that finds the maximum-margin hyperplane, the hyperplane giving the maximum separation between classes; the support vectors are the points closest to this hyperplane. If the classes cannot be separated linearly, a kernel function can be used: in the case of nonlinear class boundaries, the inputs are transformed into a high-dimensional feature space, that is, the original input space is mapped into a high-dimensional dot-product space. Many studies have applied SVM to the prediction of bankruptcy, the forecasting of financial time series, and the problem of estimating credit ratings. In this study we employed SVM to develop a data mining-based efficiency prediction model.
We used the Gaussian radial basis function as the kernel function of the SVM. For multi-class SVM, we adopted the one-against-one approach among binary classification methods, as well as two all-together methods, proposed by Weston and Watkins (1999) and Crammer and Singer (2000), respectively. In this research, we used corporate information on 154 companies listed on the KOSDAQ market of the Korea Exchange. We obtained the companies' financial information for 2005 from KIS (Korea Information Service, Inc.). Using these data, we made a multi-class rating with DEA efficiency and built a data mining-based multi-class prediction model. Among the three multi-classification methods, the hit ratio of the Weston and Watkins method was the best on the test data set. In multi-classification problems such as efficiency ratings of venture businesses, it is very useful for investors to know the class within a one-class error when it is difficult to determine the accurate class in the actual market. So we presented accuracy results within one-class errors, and the Weston and Watkins method showed 85.7% accuracy on our test samples. We conclude that the DEA-based multi-class approach for venture businesses generates more information than a binary classification problem, notwithstanding its efficiency level. We believe this model can help investors in decision making, as it provides a reliable tool to evaluate venture companies in the financial domain. For future research, we perceive the need to enhance such areas as the variable selection process, the parameter selection of the kernel function, generalization, and the sample size for multi-class classification.
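The one-against-one scheme mentioned above reduces a k-class problem to k(k-1)/2 binary classifiers whose votes are tallied. A minimal sketch of the voting step; the toy nearest-centroid rule below merely stands in for the trained pairwise SVMs:

```python
from itertools import combinations

def one_against_one_predict(x, classes, binary_predict):
    """One-against-one multi-class decision: query one binary classifier
    per pair of classes and return the majority-vote winner.
    `binary_predict(a, b, x)` must return either a or b."""
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        votes[binary_predict(a, b, x)] += 1
    return max(votes, key=votes.get)

# Toy stand-in for trained pairwise SVMs: pick the class whose (1-D)
# centroid is nearer to x. Illustration only.
centroids = {0: -1.0, 1: 0.0, 2: 1.0}
def nearest(a, b, x):
    return a if abs(x - centroids[a]) <= abs(x - centroids[b]) else b
```

The all-together methods of Weston-Watkins and Crammer-Singer instead solve a single joint optimization over all classes, which is why the paper compares the three as separate multi-classification manners.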

Estimation of Uranium Particle Concentration in the Korean Peninsula Caused by North Korea's Uranium Enrichment Facility (북한 우라늄 농축시설로 인한 한반도에서의 공기중 우라늄 입자 농도 예측)

  • Kwak, Sung-Woo;Kang, Han-Byeol;Shin, Jung-Ki;Lee, Junghyun
    • Journal of Radiation Protection and Research / v.39 no.3 / pp.127-133 / 2014
  • North Korea's uranium enrichment facility is a matter of international concern. It is particularly alarming to South Korea with regard to the security and safety of the country. This situation requires continuous monitoring of the DPRK and emergency preparedness on the part of the ROK. To assess the detectability of an undeclared uranium enrichment plant in North Korea, uranium concentrations in the air at both short and long distances from the enrichment facility were estimated. $UF_6$ source terms were determined using existing information on the North Korean facility and data from the operating experience of enrichment plants in other countries. Using the calculated source terms, two atmospheric dispersion models (the Gaussian plume model and the HYSPLIT model) and meteorological data were used to estimate the uranium particle concentrations from the Yongbyon enrichment facility. The maximum uranium concentration and its location depend on the meteorological conditions and the height of the $UF_6$ release point. This study showed that the maximum uranium concentration around the enrichment facility was about $1.0{\times}10^{-7}g{\cdot}m^{-3}$, located within about 0.4 km of the facility. It was assumed that a uranium sample of a few micrograms (${\mu}g$) could be obtained, and a few micrograms of uranium can easily be measured with current instruments. By contrast, the uranium concentration at a distance of more than 100 kilometers from the enrichment facility was estimated at about $1.0{\times}10^{-13}{\sim}1.0{\times}10^{-15}g{\cdot}m^{-3}$, which is below background level. Therefore, based on the results of this paper, an air sample taken in the vicinity of the Yongbyon enrichment facility could be used to determine whether North Korea is carrying out an undeclared nuclear program. However, air samples taken at a longer distance of a few hundred kilometers would make detecting clandestine nuclear activities difficult.
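Of the two dispersion models used above, the Gaussian plume model has a standard closed form for ground-level concentration from a continuous point source. A sketch; the dispersion coefficients and release parameters below are placeholders, not the study's values:

```python
import numpy as np

def plume_ground_conc(Q, u, sigma_y, sigma_z, y, H):
    """Ground-level (z = 0, with ground reflection) Gaussian plume
    concentration in g/m^3 for a continuous point source.
    Q: emission rate (g/s), u: wind speed (m/s),
    sigma_y, sigma_z: crosswind/vertical dispersion coefficients (m)
    at the downwind distance of interest,
    y: crosswind offset (m), H: effective release height (m)."""
    return (Q / (np.pi * u * sigma_y * sigma_z)
            * np.exp(-y ** 2 / (2.0 * sigma_y ** 2))
            * np.exp(-H ** 2 / (2.0 * sigma_z ** 2)))
```

The sensitivity to release height noted in the abstract is visible in the last factor: raising H attenuates the ground-level maximum exponentially.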

Quantitative Conductivity Estimation Error due to Statistical Noise in Complex $B_1{^+}$ Map (정량적 도전율측정의 오차와 $B_1{^+}$ map의 노이즈에 관한 분석)

  • Shin, Jaewook;Lee, Joonsung;Kim, Min-Oh;Choi, Narae;Seo, Jin Keun;Kim, Dong-Hyun
    • Investigative Magnetic Resonance Imaging / v.18 no.4 / pp.303-313 / 2014
  • Purpose: In-vivo conductivity reconstruction using the transmit field ($B_1{^+}$) information of MRI has been proposed. We assessed the accuracy of conductivity reconstruction in the presence of statistical noise in the complex $B_1{^+}$ map and provided a parametric model of the conductivity-to-noise ratio. Materials and Methods: The $B_1{^+}$ distribution was simulated for a cylindrical phantom model. By adding complex Gaussian noise to the simulated $B_1{^+}$ map, the quantitative conductivity estimation error was evaluated. The quantitative evaluation was repeated over several different parameters, such as Larmor frequency, object radius, and SNR of the $B_1{^+}$ map. A parametric model for the conductivity-to-noise ratio was developed from these various parameters. Results: According to the simulation results, conductivity estimation is more sensitive to statistical noise in the $B_1{^+}$ phase than to noise in the $B_1{^+}$ magnitude. The conductivity estimate of the object of interest does not depend on the external object surrounding it. The conductivity-to-noise ratio is proportional to the signal-to-noise ratio of the $B_1{^+}$ map, the Larmor frequency, the conductivity value itself, and the number of averaged pixels. To estimate an accurate conductivity value for the targeted tissue, the SNR of the $B_1{^+}$ map and an adequate filtering size have to be taken into account in the conductivity reconstruction process. In addition, the simulation results were verified on a conventional 3T MRI scanner. Conclusion: Through all these relationships, the quantitative conductivity estimation error due to statistical noise in the $B_1{^+}$ map is modeled. Using this model, further issues regarding filtering and reconstruction algorithms can be investigated for MREPT.
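The proportionalities stated in the abstract can be collected into a single parametric sketch; the constant of proportionality, and the assumption that each factor enters linearly, are assumptions here rather than the paper's fitted model:

```python
def conductivity_to_noise_ratio(b1_snr, larmor_freq, sigma, n_pixels, k=1.0):
    """Parametric form implied by the abstract: the conductivity-to-noise
    ratio scales with the B1+ map SNR, the Larmor frequency, the
    conductivity value itself, and the number of averaged pixels.
    The constant k (and the exact exponents) are assumptions."""
    return k * b1_snr * larmor_freq * sigma * n_pixels
```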