• Title/Summary/Keyword: K-means algorithm (K 평균 알고리즘)


Development of a Biophysical Rice Yield Model Using All-weather Climate Data (MODIS 전천후 기상자료 기반의 생물리학적 벼 수량 모형 개발)

  • Lee, Jihye;Seo, Bumsuk;Kang, Sinkyu
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.5_2
    • /
    • pp.721-732
    • /
    • 2017
  • With the increasing socio-economic importance of rice as a global staple food, several models have been developed to estimate rice yield by combining remote sensing data with carbon cycle modelling. In this study, we aimed to estimate rice yield in Korea with such an integrative approach, combining satellite remote sensing data with a biophysical crop growth model. Specifically, daily meteorological inputs derived from MODIS (Moderate Resolution Imaging Spectroradiometer) and radar satellite products were used to run a light use efficiency based crop growth model built on the MODIS gross primary production (GPP) algorithm. The modelled biomass was converted to rice yield using a harvest index model. We estimated rice yield from 2003 to 2014 at the county level and evaluated the modelled yield against the official rice yield and rice straw biomass statistics of Statistics Korea (KOSTAT). The estimated rice biomass, yield, and harvest index and their spatial distributions were investigated. Annual mean rice yield at the national level showed good agreement with the yield statistics, with a mean error (ME) of +0.56% and a mean absolute error (MAE) of 5.73%. The estimated county-level yields showed small MEs (+0.10~+2.00%) and MAEs (2.10~11.62%). Compared to the county-level yield statistics, rice yield was overestimated in the counties of Gangwon province and underestimated in the urban and coastal counties in the south of Chungcheong province. Compared to the rice straw statistics, the estimated rice biomass showed error patterns similar to those of the yield estimates. The subpixel heterogeneity of the 1 km MODIS FPAR (Fraction of absorbed Photosynthetically Active Radiation) may have contributed to these errors. In addition, the growth and harvest index models can be further developed to account for annually varying growth conditions and growth timings.
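
A minimal sketch (not the authors' implementation) of the two model ingredients named in the abstract above: a light use efficiency (LUE) GPP calculation and a harvest-index conversion, plus the ME/MAE metrics used for evaluation. The epsilon_max, scalar, and harvest-index values are illustrative assumptions, not values from the paper.

```python
def daily_gpp(par, fpar, eps_max=2.5, t_scalar=1.0, vpd_scalar=1.0):
    """GPP (g C/m^2/day) = eps_max * f(T) * f(VPD) * FPAR * PAR,
    the general form of the MODIS LUE-based GPP algorithm."""
    return eps_max * t_scalar * vpd_scalar * fpar * par

def yield_from_biomass(biomass, harvest_index=0.5):
    """Convert accumulated biomass to grain yield via a harvest index."""
    return biomass * harvest_index

def mean_error_pct(est, obs):
    """Signed mean error as a percentage of the observed total."""
    return 100.0 * sum(e - o for e, o in zip(est, obs)) / sum(obs)

def mean_abs_error_pct(est, obs):
    """Mean absolute error as a percentage of the observed total."""
    return 100.0 * sum(abs(e - o) for e, o in zip(est, obs)) / sum(obs)
```

With these definitions, over- and under-estimation by the same amount cancel in ME but accumulate in MAE, which is why the abstract reports both.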

Development of an Artificial Neural Expert System for Rational Determination of Lateral Earth Pressure Coefficient (합리적인 측압계수 결정을 위한 인공신경 전문가 시스템의 개발)

  • 문상호;문현구
    • Journal of the Korean Geotechnical Society
    • /
    • v.15 no.1
    • /
    • pp.99-112
    • /
    • 1999
  • Using 92 values of the lateral earth pressure coefficient (K) measured in Korea, the tendency of K with varying depth is analyzed and compared with the range of K defined by Hoek and Brown. The horizontal stress is generally larger than the vertical stress in Korea: about 84% of the K values are above 1. In this study, the theory of elasto-plasticity is applied to analyze the variation of K values, and the results are compared with those of numerical analysis. This reveals that the erosion, sedimentation, and weathering of the earth's crust are important factors in the determination of K values. Surface erosion, large lateral pressure, and good rock mass increase K values, while sedimentation decreases them. This study enables us to analyze the effects of geological processes on K values, especially at the shallow depths where underground excavation takes place. A neural network expert system using a multi-layer back-propagation algorithm is developed to predict K values. The neural network model shows a correlation coefficient above 0.996 when compared with the measured data. A comparison with 9 measured data points not included in the back-propagation learning showed an average inference error of 20% and a correlation coefficient above 0.95. The expert system developed in this study can be used for the reliable determination of K values.


Evaluation of Clinical Availability for Shoulder Forced Traction Method to Minimize the Beam Hardening Artifact in Cervical-spine Computed Tomography (CT) (경추부 전산화단층촬영에서 선속 경화 인공물을 최소화하기 위한 견부 강제 견인법에 대한 임상적 유용성 평가)

  • Kim, Moonjeung;Cho, Wonjin;Kang, Suyeon;Lee, Wonseok;Park, Jinwoo;Yu, Yunsik;Im, Inchul;Lee, Jaeseung;Kim, Hyeonjin;Kwak, Byungjoon
    • Journal of the Korean Society of Radiology
    • /
    • v.7 no.1
    • /
    • pp.37-44
    • /
    • 2013
  • This study evaluated the clinical usefulness of a shoulder forced traction method for cervical-spine computed tomography (CT), in terms of image quality and patient comfort and safety, on CT systems to which automatic exposure control (AEC) and various mathematical correction algorithms can be applied. For 79 patients complaining of cervical pain, lateral projection scout images and transverse images were acquired before and after the use of a shoulder forced traction band. For quantitative analysis, the pixel values and mean HU values in a fixed-size region of interest on the transverse images were compared. For qualitative analysis, observers evaluated artifacts, pixel quality, and resolution, and a self-reported survey assessed the degree of patient discomfort. As a result, the lateral projection scout images showed an increased number of depicted cervical vertebrae when the traction band was used, whereas the changes in pixel values and mean HU values in the region of interest before and after traction were judged to be negligible. In the qualitative analysis, the observers found no notable differences in artifacts, resolution, or contrast. Meanwhile, 82.27% of the patients reported discomfort from the use of the traction band in the survey. Quantitative and qualitative image-quality analysis therefore showed no benefit of the band for medical imaging. Modern CT systems can already compensate for image-quality degradation through thinner slices, automatic exposure control, and preprocessing filters based on various mathematical correction algorithms, so image noise from beam hardening artifacts should no longer be a problem. The shoulder forced traction band is thus judged to be no longer clinically useful, as it does not improve image quality, causes discomfort, and carries additional risk.

A Proposal of Remaining Useful Life Prediction Model for Turbofan Engine based on k-Nearest Neighbor (k-NN을 활용한 터보팬 엔진의 잔여 유효 수명 예측 모델 제안)

  • Kim, Jung-Tae;Seo, Yang-Woo;Lee, Seung-Sang;Kim, So-Jung;Kim, Yong-Geun
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.22 no.4
    • /
    • pp.611-620
    • /
    • 2021
  • The maintenance industry has progressed from corrective and preventive maintenance toward condition-based maintenance, in which maintenance is performed at the optimal time based on the condition of the equipment. To find the optimal maintenance point, it is important to accurately understand the condition of the equipment, especially its remaining useful life (RUL). Thus, using simulation data (C-MAPSS), a prediction model is proposed to predict the remaining useful life of a turbofan engine. For the modeling process, the C-MAPSS dataset was preprocessed, transformed, and used for prediction. Data preprocessing was performed through piecewise RUL, moving average filters, and standardization. The remaining useful life was predicted using principal component analysis and the k-NN method. To obtain the best performance, the number of principal components and the number of neighbors for the k-NN method were determined through 5-fold cross validation. The validity of the prediction results was analyzed through a scoring function that considers the usefulness of early prediction and the risk of late prediction. In addition, the usefulness of the RUL prediction model was demonstrated through comparison with the prediction performance of other neural network-based algorithms.
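
A minimal sketch of the k-NN regression step described in the abstract above: RUL is predicted as the mean RUL of the k nearest training samples. The PCA and piecewise-RUL preprocessing steps are omitted for brevity, and the toy data below is illustrative, not C-MAPSS.

```python
import math

def knn_predict_rul(train_x, train_rul, query, k=3):
    """Predict RUL as the mean RUL of the k nearest training points
    (Euclidean distance in the feature space)."""
    dists = sorted(
        (math.dist(x, query), rul) for x, rul in zip(train_x, train_rul)
    )
    nearest = dists[:k]
    return sum(rul for _, rul in nearest) / len(nearest)

# Illustrative feature vectors and their known RUL values.
train_x = [(0, 0), (1, 0), (0, 1), (5, 5)]
train_rul = [100, 90, 95, 10]
```

In practice k (and the number of principal components fed into the distance computation) would be chosen by cross validation, as the abstract describes.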

Comparative Study of Automatic Trading and Buy-and-Hold in the S&P 500 Index Using a Volatility Breakout Strategy (변동성 돌파 전략을 사용한 S&P 500 지수의 자동 거래와 매수 및 보유 비교 연구)

  • Sunghyuck Hong
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.6
    • /
    • pp.57-62
    • /
    • 2023
  • This research is a comparative analysis of trading the U.S. S&P 500 index with the volatility breakout strategy against the Buy and Hold approach. The volatility breakout strategy is a trading method that exploits price movements following periods of relative market stability or consolidation. Specifically, large price movements are observed to occur more frequently after periods of low volatility. When a stock moves within a narrow price range for a while and then suddenly rises or falls, it is expected to continue moving in that direction. To capitalize on these movements, traders adopt the volatility breakout strategy. The 'k' value is a multiplier applied to a measure of recent market volatility. One measure of volatility is the Average True Range (ATR), which reflects the range between the highest and lowest prices of recent trading days. The 'k' value plays a crucial role in setting a trader's entry threshold. This study used a commonly adopted 'k' value and compared the resulting returns with the Buy and Hold strategy, finding that algorithmic trading using the volatility breakout strategy achieved slightly higher returns. In the future, we plan to present simulation results for maximizing returns by determining the optimal 'k' value for automated trading of the S&P 500 index using artificial intelligence deep learning techniques.
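
A minimal sketch of the volatility breakout entry rule described in the abstract above: buy when the price exceeds the open plus k times the previous day's range. The k value and prices are illustrative; a real strategy would also need exit rules, position sizing, and transaction costs.

```python
def breakout_target(today_open, prev_high, prev_low, k=0.5):
    """Entry threshold: today's open plus k times the previous
    day's high-low range (a simple volatility measure)."""
    return today_open + k * (prev_high - prev_low)

def should_enter(price, today_open, prev_high, prev_low, k=0.5):
    """Signal a long entry once the price breaks above the target."""
    return price >= breakout_target(today_open, prev_high, prev_low, k)
```

A larger k demands a bigger move before entering, trading fewer false breakouts against later entries; tuning this trade-off is the optimization the authors plan to automate.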

An Early Termination Algorithm for Efficient CU Splitting in HEVC (HEVC 고속 부호화를 위한 효율적인 CU 분할 조기 결정 알고리즘)

  • Goswami, Kalyan;Kim, Byung-Gyu;Jun, DongSan;Jung, SoonHeung;Seok, JinWook;Kim, YounHee;Choi, Jin Soo
    • Journal of Broadcast Engineering
    • /
    • v.18 no.2
    • /
    • pp.271-282
    • /
    • 2013
  • Recently, ITU-T/VCEG and ISO/IEC MPEG have started a new joint standardization activity on video coding, called High Efficiency Video Coding (HEVC). This new standard gives a significant improvement in picture quality for high-resolution video. The main challenge in this upcoming standard is its time complexity. In this paper we focus on the CU splitting process. We propose a novel algorithm that terminates CU splitting early, based on the RD costs of the parent and current levels and the motion vector of the current CU. Experimental results show that the proposed algorithm reduces encoding time by more than about 10% on average over ECU [8], with an average BD loss of 1.78% relative to the original.
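
A schematic sketch of an early CU-split termination test of the kind described in the abstract above. The thresholds and the exact combination rule are illustrative assumptions, not the paper's actual criteria.

```python
def terminate_split_early(rd_cost_current, rd_cost_parent,
                          motion_vector, rd_ratio_thresh=0.5,
                          mv_thresh=1.0):
    """Stop splitting a CU when its RD cost is already small relative
    to the parent level and its motion vector is near zero, so the
    encoder skips evaluating the deeper partition levels."""
    mvx, mvy = motion_vector
    small_cost = rd_cost_current < rd_ratio_thresh * rd_cost_parent
    small_motion = abs(mvx) <= mv_thresh and abs(mvy) <= mv_thresh
    return small_cost and small_motion
```

Skipping the deeper recursion for such "easy" CUs is where the encoding-time saving comes from; the small BD loss reflects the rare cases where a deeper split would have paid off.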

Speed-up Techniques for High-Resolution Grid Data Processing in the Early Warning System for Agrometeorological Disaster (농업기상재해 조기경보시스템에서의 고해상도 격자형 자료의 처리 속도 향상 기법)

  • Park, J.H.;Shin, Y.S.;Kim, S.K.;Kang, W.S.;Han, Y.K.;Kim, J.H.;Kim, D.J.;Kim, S.O.;Shim, K.M.;Park, E.W.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.19 no.3
    • /
    • pp.153-163
    • /
    • 2017
  • The objective of this study is to improve the speed of the models estimating weather variables (e.g., minimum/maximum temperature, sunshine hours, and PRISM (Parameter-elevation Regression on Independent Slopes Model) based precipitation) that are applied in the Agrometeorological Early Warning System (http://www.agmet.kr). The current weather estimation process runs on high-performance multi-core CPUs with 8 physical cores and 16 logical threads. Nonetheless, one such server cannot even be dedicated to a single county, indicating that very high overhead is involved in calculating the 10 counties of the Seomjin River Basin. To reduce this overhead, several caching and parallelization techniques were tested for performance and applicability. The results are as follows: (1) for simple calculations such as Growing Degree Days accumulation, the time required for input and output (I/O) is significantly greater than that for computation, indicating the need for techniques that reduce disk I/O bottlenecks; (2) when I/O operations are numerous, it is advantageous to distribute them across several servers, but each server must keep its own cache of input data so that the servers do not compete for the same resource; and (3) GPU-based parallel processing is most suitable for models with large computational loads, such as PRISM.
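
A minimal sketch of the per-server input-data caching idea in point (2) above: each server keeps an in-memory cache of grid tiles so repeated reads of the same tile do not hit the disk again. The tile loader and data shape are illustrative stand-ins for the system's actual grid files.

```python
from functools import lru_cache

READ_COUNT = {"n": 0}  # counts simulated disk reads

@lru_cache(maxsize=128)
def load_grid_tile(tile_id):
    """Simulate an expensive disk read of one grid tile; lru_cache
    serves repeat requests for the same tile from memory."""
    READ_COUNT["n"] += 1
    return [[tile_id] * 3 for _ in range(3)]  # placeholder grid data
```

Because the cache is local to the process, two servers holding separate caches never contend for the same file, which is the resource-competition point the abstract makes.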

Robust Image Fusion Using Stationary Wavelet Transform (정상 웨이블렛 변환을 이용한 로버스트 영상 융합)

  • Kim, Hee-Hoon;Kang, Seung-Hyo;Park, Jea-Hyun;Ha, Hyun-Ho;Lim, Jin-Soo;Lim, Dong-Hoon
    • The Korean Journal of Applied Statistics
    • /
    • v.24 no.6
    • /
    • pp.1181-1196
    • /
    • 2011
  • Image fusion is the process of combining information from two or more source images of a scene into a single composite image, with applications in many fields such as remote sensing, computer vision, robotics, medical imaging, and defense. The most common wavelet-based fusion is discrete wavelet transform fusion, in which the high-frequency and low-frequency sub-bands are combined based on activity measures of local windows, such as the standard deviation and mean, respectively. However, the discrete wavelet transform is not translation-invariant and often yields block artifacts in the fused image. In this paper, we propose a robust image fusion method based on the stationary wavelet transform to overcome this drawback of the discrete wavelet transform. We use the interquartile range as a robust estimator of variance in the high-frequency sub-bands, and combine the low-frequency sub-band based on the interquartile range information present in the high-frequency sub-bands. We evaluate the proposed method quantitatively and qualitatively and compare it to some existing fusion methods. Experimental results indicate that the proposed method is more effective and can provide satisfactory fusion results.
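
A minimal sketch of the IQR-based selection rule described in the abstract above: for a high-frequency coefficient position, keep the coefficient from whichever source image shows more local activity, measured by the interquartile range of its local window. The wavelet transform itself is omitted, and the window data below is illustrative.

```python
import statistics

def iqr(values):
    """Interquartile range: a robust spread measure, insensitive to
    the outliers that can distort the usual standard deviation."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    return q3 - q1

def fuse_coefficient(coef_a, window_a, coef_b, window_b):
    """Pick the coefficient whose local window has the larger IQR,
    i.e. the source showing more detail at this position."""
    return coef_a if iqr(window_a) >= iqr(window_b) else coef_b
```

The robustness claim in the abstract rests on exactly this substitution: replacing the variance-based activity measure with the IQR keeps noisy outlier pixels from dominating the coefficient choice.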

Feasibility Study for Development of Transit Dosimetry Based Patient Dose Verification System Using the Glass Dosimeter (유리선량계를 이용한 투과선량 기반 환자선량 평가 시스템 개발을 위한 가능성 연구)

  • Jeong, Seonghoon;Yoon, Myonggeun;Kim, Dong Wook;Chung, Weon Kuu;Chung, Mijoo;Choi, Sang Hyoun
    • Progress in Medical Physics
    • /
    • v.26 no.4
    • /
    • pp.241-249
    • /
    • 2015
  • Radiation therapy is one of the three major cancer treatment methods, and many cancer patients receive it. To deliver as much radiation as possible to the tumor while exposing the surrounding normal tissue to as little as possible, medical physicists create a treatment plan and perform quality assurance before patient treatment. Despite these efforts, unintended medical accidents can still occur due to errors. To address this problem, methods that reconstruct the patient's internal dose by measuring the transit dose have been suggested. As a feasibility study for the development of a patient dose verification system, the inverse square law, percentage depth dose, and scatter factors were used to calculate the dose in a water-equivalent homogeneous phantom. Calibration of the ionization chamber and glass dosimeter against the transit radiation showed that the glass dosimeter signals were 0.824 times the dose measured by the ionization chamber at 6 MV and 0.736 times at 10 MV. The average scatter factor was 1.4, and the Mayneord F factor was used to apply the percentage depth dose data. When we verified the algorithm using the water-equivalent homogeneous phantom, the maximum error was 1.65%.
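
A minimal sketch of two of the dose-calculation ingredients named in the abstract above: the inverse square law and the standard Mayneord F factor used to rescale percentage depth dose (PDD) between source-to-surface distances (SSDs). The numeric values are illustrative, not the paper's beam data.

```python
def inverse_square(dose_ref, dist_ref, dist):
    """Scale a dose by the inverse square of distance from the source."""
    return dose_ref * (dist_ref / dist) ** 2

def mayneord_f(ssd1, ssd2, depth, d_max):
    """Mayneord F factor converting a PDD measured at SSD1 to SSD2:
    F = ((SSD2 + d_max)/(SSD1 + d_max))^2 * ((SSD1 + d)/(SSD2 + d))^2."""
    return (((ssd2 + d_max) / (ssd1 + d_max)) ** 2
            * ((ssd1 + depth) / (ssd2 + depth)) ** 2)
```

Multiplying a tabulated PDD by this F factor is the standard first-order correction when the treatment SSD differs from the SSD at which the PDD table was measured.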

Investigation of SO2 effect on OMI-TOMS and OMI-DOAS O3 in volcanic areas with OMI satellite data (OMI 위성자료를 이용한 화산지역 고농도 이산화황 환경에서의 TOMS 오존과 DOAS 오존의 비교연구)

  • Choi, Wonei;Hong, Hyunkee;Park, Junsung;Kim, Daewon;Yeo, Jaeho;Lee, Hanlim
    • Korean Journal of Remote Sensing
    • /
    • v.31 no.6
    • /
    • pp.599-608
    • /
    • 2015
  • In the present study, we quantified the effect of SO2 on O3 retrieval from Ozone Monitoring Instrument (OMI) measurements. The difference between the OMI Total Ozone Mapping Spectrometer (TOMS) and OMI Differential Optical Absorption Spectroscopy (DOAS) total O3 was calculated in high-SO2 volcanic plumes for several volcanic eruptions (Anatahan, La Cumbre, Sierra Negra, and Piton) from 2005 through 2008. There is a clear correlation (R ≥ 0.5) between the difference and OMI SO2 in volcanic plumes, with differences approaching 100 DU. The high-SO2 condition was found to affect the TOMS O3 retrieval significantly, due to strong SO2 absorption at the TOMS O3 retrieval wavelengths. In addition, we calculated the difference for various SO2 levels. There is a considerable difference (average = 32.9 DU; standard deviation = 13.5 DU) under the high OMI SO2 condition (OMI SO2 ≥ 7.0 DU). We also found that the rates of change in the difference per 1.0 DU change in the middle troposphere (TRM) and the upper troposphere and stratosphere (STL) SO2 columns are 3.9 DU and 4.9 DU, respectively.