• Title/Summary/Keyword: data sets


A Comparative Study of Fuzzy Relationship and ANN for Landslide Susceptibility in Pohang Area (퍼지관계 기법과 인공신경망 기법을 이용한 포항지역의 산사태 취약성 예측 기법 비교 연구)

  • Kim, Jin Yeob; Park, Hyuck Jin
    • Economic and Environmental Geology / v.46 no.4 / pp.301-312 / 2013
  • Landslides are caused by complex interactions among a large number of interrelated factors such as topography, geology, forest cover, and soils. In this study, a comparative analysis was carried out using the fuzzy relationship method and an artificial neural network to evaluate landslide susceptibility. For landslide susceptibility mapping, maps of landslide occurrence locations, slope angle, aspect, curvature, lithology, soil drainage, soil depth, soil texture, forest type, forest age, forest diameter, and forest density were constructed from the spatial data sets. In the fuzzy relation analysis, the membership values for each category of the thematic layers were determined using the cosine amplitude method, and the thematic layers were then integrated with the Cartesian product operation to produce the landslide susceptibility map. In the artificial neural network analysis, the relative weights of the causative factors were determined by the back-propagation algorithm. The landslide susceptibility maps prepared by the two approaches were validated using the ROC (Receiver Operating Characteristic) curve and the AUC (Area Under the Curve). Based on the validation results, both approaches predict landslide susceptibility very well, but the artificial neural network performed better in this study area.
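
The cosine amplitude method mentioned above assigns each factor category a fuzzy membership value from its similarity to the landslide inventory. A minimal sketch of that similarity measure; the binary cell vectors below are invented illustration data, not the Pohang data set:

```python
import numpy as np

def cosine_amplitude(x, y):
    """Cosine amplitude similarity:
    r = |sum_k x_k*y_k| / sqrt(sum_k x_k^2 * sum_k y_k^2)."""
    num = abs(np.dot(x, y))
    den = np.sqrt(np.dot(x, x) * np.dot(y, y))
    return num / den if den > 0 else 0.0

# Hypothetical example: a binary landslide-occurrence vector and a binary
# indicator vector for one category of a thematic layer (e.g. a slope class).
landslide   = np.array([1, 0, 1, 1, 0, 0, 1, 0])  # cells with landslides
slope_class = np.array([1, 0, 1, 0, 0, 0, 1, 1])  # cells in this slope class

print(cosine_amplitude(slope_class, landslide))    # fuzzy membership, 0.75 here
```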

Establishment of the Korean Standard Vocal Sound into Character Conversion Rule (한국어 음가를 한글 표기로 변환하는 표준규칙 제정)

  • 이계영; 임재걸
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.2 / pp.51-64 / 2004
  • The purpose of this paper is to establish the Standard Korean Vocal Sound into Character Conversion Rule (Standard VSCC Rule) by reversely applying the Korean Standard Pronunciation Rule, which regulates how written Hangeul sentences are read aloud. The Standard VSCC Rule plays a crucial role in Korean speech recognition. The general method of speech recognition is to find, among the standard voice patterns, the one most similar to the input voice pattern; each standard voice pattern is an average of several sample voice patterns. If the unit of the standard voice pattern is a word, the number of entries will run into the millions (taking inflections and postpositional particles into account). So many entries would require a huge database and an impractically large number of comparisons in the search for the most similar pattern. Therefore, the unit of the standard voice pattern should be the syllable. In that case, we must resolve the mismatch between Korean vocal sounds and the written characters: converting a sequence of Korean vocal sounds into a sequence of characters requires our Standard VSCC Rule. Using the rule, we have implemented a system that converts Korean vocal sounds into Hangeul characters. The Korean Standard Pronunciation Rule consists of 30 items; to show the soundness and completeness of our Standard VSCC Rule, we tested the conversion system with data sets reflecting all 30 items. The test results are presented in this paper.
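
As a purely illustrative sketch of the reverse rule application described above, the fragment below stores rules as (pronounced, written) rewrite pairs and applies them to a pronounced Hangeul string. The two rules shown are standard Korean palatalization examples; the paper's actual Standard VSCC Rule covers all 30 items and must also resolve ambiguous cases where several spellings share one pronunciation:

```python
# Illustrative only: a tiny rule table, not the paper's Standard VSCC Rule.
REVERSE_RULES = [
    ("구지", "굳이"),  # palatalization reversed: [구지] -> 굳이
    ("가치", "같이"),  # palatalization reversed: [가치] -> 같이
]

def vocal_to_text(vocal: str) -> str:
    """Rewrite a pronounced Hangeul string into an orthographic form."""
    for pronounced, written in REVERSE_RULES:
        vocal = vocal.replace(pronounced, written)
    return vocal

print(vocal_to_text("구지"))  # -> 굳이
```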

Adaptive Hard Decision Aided Fast Decoding Method in Distributed Video Coding (적응적 경판정 출력을 이용한 고속 분산 비디오 복호화 기술)

  • Oh, Ryang-Geun; Shim, Hiuk-Jae; Jeon, Byeung-Woo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.6 / pp.66-74 / 2010
  • Recently, distributed video coding (DVC) has drawn attention for environments with limited computing resources at the encoder. Wyner-Ziv (WZ) coding is a representative DVC scheme. The WZ encoder encodes key frames and WZ frames independently, using conventional intra coding and a channel code, respectively. The WZ decoder generates side information from the two reconstructed key frames (t-1, t+1) based on temporal correlation. The side information is regarded as a noisy version of the original WZ frame, and the virtual channel noise is removed by the channel decoding process, so the performance of WZ coding depends heavily on the performance of the channel code. Among existing channel codes, Turbo codes and LDPC codes have the most powerful error correction capability, but they use an iterative probabilistic decoding process. This iterative decoding is quite time-consuming and considerably increases the complexity of the WZ decoder: analysis of LDPCA decoding with real video data shows that it accounts for more than 60% of the total WZ decoding complexity. The HDA (Hard Decision Aided) method proposed in the channel coding literature can greatly reduce channel decoding complexity, but it can cause considerable RD performance loss depending on the threshold, whose proper value differs from sequence to sequence. In this paper, we propose an adaptive HDA method that sets a proper threshold for each sequence. The proposed method saves about 62% and 32% of the time in the LDPCA and overall WZ decoding processes, respectively, with little loss in RD performance.
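
The HDA idea can be sketched as an early-stopping test on the decoder's log-likelihood ratios: once every bit is sufficiently reliable, commit to the hard decision instead of iterating further. A minimal sketch with synthetic LLRs; in a real WZ decoder the LLRs would come from LDPCA belief propagation, and the threshold would be adapted per sequence as the paper proposes:

```python
import numpy as np

def hda_early_stop(llr, threshold):
    """Hard-decision-aided stopping test: if every log-likelihood ratio is
    reliable enough (|LLR| > threshold), return the hard decision;
    otherwise signal that channel decoding should continue."""
    if np.min(np.abs(llr)) > threshold:
        # With LLR = log(P(0)/P(1)), a negative LLR decodes to bit 1.
        return (llr < 0).astype(int)
    return None

# Demo on synthetic LLRs for a 396-bit block.
rng = np.random.default_rng(0)
llr = rng.normal(0.0, 4.0, size=396)
for threshold in (1.0, 2.0, 4.0):
    bits = hda_early_stop(llr, threshold)
    print(threshold, "stop" if bits is not None else "continue")
```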

Balance-Swap Optimization of Economic Load Dispatch Problem using Quadratic Fuel Cost Function (이차 발전비용함수를 사용한 경제급전문제의 균형-교환 최적화)

  • Lee, Sang-Un
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.14 no.4 / pp.243-250 / 2014
  • In this paper, I devise a balance-swap optimization (BSO) algorithm to solve the economic load dispatch problem with a quadratic fuel cost function. The algorithm first sets the initial values to $P_i \leftarrow P_i^{max}$ (so that ${\Sigma}P_i^{max} > P_d$) and then entails two major processes: a balance process, in which the generator $i$ with $\max_i\{F(P_i)-F(P_i-\alpha)\}$, where $\alpha = \min_i(P_i-P_i^{min})$, is reduced by $P_i \leftarrow P_i-\alpha$ until ${\Sigma}P_i = P_d$; and a swap process, in which, while $\max_i\{F(P_i)-F(P_i-\beta)\} > \min_j\{F(P_j+\beta)-F(P_j)\}$, $i \neq j$, for $\beta$ = 1.0, 0.1, 0.01, 0.001 in turn, the algorithm sets $P_i \leftarrow P_i-\beta$, $P_j \leftarrow P_j+\beta$. When applied to benchmark data with 15, 20, and 38 generators, this simple algorithm consistently yields the best known results. Moreover, it dramatically reduces the cost of operating the three benchmark cases as a single centralized 73-generator system, which would be impossible to achieve with independent operations.
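
A condensed sketch of the balance-swap idea under a quadratic cost $F_i(P) = a_i + b_iP + c_iP^2$. All coefficients, limits, and the demand below are invented illustration values, and the step-handling details are guesses where the abstract is silent:

```python
import numpy as np

a = np.array([100.0, 120.0,  90.0])
b = np.array([  2.0,   1.8,   2.2])
c = np.array([0.010, 0.012, 0.008])
p_min = np.array([ 50.0,  60.0,  40.0])
p_max = np.array([300.0, 250.0, 280.0])
p_demand = 500.0

def F(p):
    return a + b * p + c * p * p              # per-generator fuel cost

p = p_max.copy()                              # start every generator at max
# Balance: repeatedly lower the generator with the largest cost saving
# until total output matches demand.
while p.sum() > p_demand + 1e-9:
    step = np.minimum(p - p_min, p.sum() - p_demand)
    saving = np.where(step > 0, F(p) - F(p - step), -np.inf)
    i = int(np.argmax(saving))
    p[i] -= step[i]

# Swap: shift beta from the most expensive to the cheapest generator
# while doing so still lowers the total cost.
for beta in (1.0, 0.1, 0.01, 0.001):
    while True:
        save = np.where(p - beta >= p_min, F(p) - F(p - beta), -np.inf)
        cost = np.where(p + beta <= p_max, F(p + beta) - F(p), np.inf)
        i, j = int(np.argmax(save)), int(np.argmin(cost))
        if i == j or save[i] <= cost[j]:
            break                             # no improving pair at this beta
        p[i] -= beta
        p[j] += beta

print(p, F(p).sum())
```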

Analysis of Genetic Relationship of Apple Varieties using Microsatellite Markers (Microsatellite 마커를 이용한 사과 품종 간 유전적 유연관계 분석)

  • Hong, Jee-Hwa; Kwon, Yong-Sham; Choi, Keun-Jin
    • Journal of Life Science / v.23 no.6 / pp.721-727 / 2013
  • The objective of this study was to evaluate the suitability of microsatellite markers for variety identification in 42 apple varieties. For the microsatellite analysis, 305 primer pairs were screened in 8 varieties; twenty-six primer pairs showed polymorphism with clear band patterns and reproducible results. A total of 165 polymorphic amplified fragments were obtained from the 42 varieties using the 26 markers. Two to twelve alleles were detected per locus, with an average of 6.4 alleles per locus. Polymorphism information content (PIC) values ranged from 0.461 to 0.849, with an average of 0.665. The 165 marker loci were used to calculate Jaccard distance coefficients, which were clustered with the unweighted pair-group method with arithmetic mean (UPGMA). Genetic distances in the clustering ranged from 0.27 to 1.00. The analysis of genetic relationships revealed that these 26 microsatellite marker sets discriminated 41 of the 42 varieties. These markers can serve as molecular data for apple variety identification.
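
The Jaccard/UPGMA step maps directly onto standard clustering tools. A minimal sketch, assuming each variety is scored as a presence/absence vector over the 165 polymorphic fragments; the band matrix below is random stand-in data, not the measured profiles:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
n_varieties, n_fragments = 42, 165
bands = rng.integers(0, 2, size=(n_varieties, n_fragments))  # presence/absence

dist = pdist(bands, metric="jaccard")      # Jaccard distance between varieties
tree = linkage(dist, method="average")     # UPGMA = average-linkage clustering
clusters = fcluster(tree, t=0.27, criterion="distance")
print(clusters)
```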

R Based Parallelization of a Climate Suitability Model to Predict Suitable Area of Maize in Korea (국내 옥수수 재배적지 예측을 위한 R 기반의 기후적합도 모델 병렬화)

  • Hyun, Shinwoo; Kim, Kwang Soo
    • Korean Journal of Agricultural and Forest Meteorology / v.19 no.3 / pp.164-173 / 2017
  • Alternative cropping systems are one option for climate change adaptation, and suitable areas for a crop can be identified using a climate suitability model. The EcoCrop model has been used to assess the climate suitability of crops from monthly climate surfaces, e.g., high-spatial-resolution digital climate maps. Still, a high-performance computing approach is needed to assess climate suitability over the complex terrain of Korea, which requires considerably large climate data sets. The objectives of this study were to implement a script in R, an open-source statistical analysis platform, that runs the EcoCrop model in a parallel computing environment, and to assess the climate suitability of maize using digital climate maps at high spatial resolution, e.g., 1 km. The total running time decreased as the number of CPU (Central Processing Unit) cores increased, although the speedup was not linear: for example, the wall-clock time for assessing the climate suitability index at 1 km resolution fell by 90% with 16 CPU cores, yet computation still took about 1.5 times the theoretical time for the given number of CPUs. An implementation of the climate suitability assessment system based on MPI (Message Passing Interface) would allow support for digital climate maps at ultra-high spatial resolution, e.g., 30 m, which would help site-specific design of cropping systems for climate change adaptation.
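
The paper's implementation is an R script; the sketch below only illustrates the same grid-partitioning pattern in Python, with a toy scoring function standing in for the per-cell EcoCrop suitability index:

```python
from multiprocessing import Pool
import numpy as np

def suitability(cell):
    """Toy per-cell suitability score; NOT the EcoCrop model itself."""
    tmin, tavg, prec = cell
    if tmin < 0.0:                             # frost-limited cell
        return 0.0
    t_score = max(0.0, 1.0 - abs(tavg - 25.0) / 15.0)
    p_score = max(0.0, 1.0 - abs(prec - 800.0) / 600.0)
    return min(t_score, p_score)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    cells = np.column_stack([rng.uniform(-5, 15, 10000),   # monthly min temp
                             rng.uniform(5, 30, 10000),    # monthly mean temp
                             rng.uniform(200, 1500, 10000)])  # precipitation
    with Pool(processes=4) as pool:            # one worker per CPU core
        scores = pool.map(suitability, cells, chunksize=1000)
    print(len(scores), max(scores))
```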

Protein and Amino-acid Contents in Backtae, Seoritae, Huktae, and Seomoktae Soybeans with Different Cooking Methods (콩의 종류 및 조리방법에 따른 단백질·아미노산 함량 변화)

  • Im, Jeong Yeon; Kim, Sang-Cheon; Kim, Sena; Choi, Youngmin; Yang, Mi Ran; Cho, In Hee; Kim, Haeng Ran
    • Korean journal of food and cookery science / v.32 no.5 / pp.567-574 / 2016
  • Purpose: The objective of this study was to provide nutritional information (protein and amino-acid contents) on soybeans (Baktae, Seoritae, Huktae, and Seomoktae) prepared with different cooking methods. Methods: Raw, boiled (in 100±15 °C water for 4 hr), and pan-fried (at 110±15 °C for 20±5 min) soybean samples were prepared, and their protein and amino-acid contents were determined. Results: Protein content in raw Baktae, Seoritae, Huktae, and Seomoktae soybeans ranged from 361.0 to 386.8 mg/g. Protein contents differed according to the cooking method: they were higher in pan-fried beans (107.9-113.5%) than in raw or boiled soybeans (48.2-49.5%). A total of 18 amino acids were analyzed, and the amino-acid data sets were subjected to principal component analysis (PCA) to understand their differences according to soybean type and cooking method. Bean samples were distinguished better by cooking method than by bean type along principal components PC1 and PC2. In particular, fried soybeans contained much higher levels of cysteine, whereas the other amino acids were dominant in raw and boiled ones; the amounts of threonine, histidine, proline, arginine, tyrosine, lysine, tryptophan, and methionine were higher in raw samples than in cooked ones. Conclusion: The contents of amino acids and proteins are affected more by the cooking method than by the soybean type.
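
The PCA step can be reproduced with standard tools. A minimal sketch, assuming rows are bean samples (type × cooking method) and columns are the 18 amino-acid contents; the matrix here is random stand-in data, not the measured values:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.normal(size=(12, 18))            # 4 bean types x 3 cooking methods

pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(X))
print(pca.explained_variance_ratio_)     # variance captured by PC1 and PC2
print(scores[:3])                        # sample coordinates in the PC1-PC2 plane
```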

Minimum Number of Observation Points for LEO Satellite Orbit Estimation by OWL Network

  • Park, Maru; Jo, Jung Hyun; Cho, Sungki; Choi, Jin; Kim, Chun-Hwey; Park, Jang-Hyun; Yim, Hong-Suh; Choi, Young-Jun; Moon, Hong-Kyu; Bae, Young-Ho; Park, Sun-Youp; Kim, Ji-Hye; Roh, Dong-Goo; Jang, Hyun-Jung; Park, Young-Sik; Jeong, Min-Ji
    • Journal of Astronomy and Space Sciences / v.32 no.4 / pp.357-366 / 2015
  • Using the Optical Wide-field Patrol (OWL) network developed by the Korea Astronomy and Space Science Institute (KASI), we generated right ascension and declination angle data from optical observations of Low Earth Orbit (LEO) satellites, and analyzed the optimum number of observation points needed per arc for successful orbit estimation. The currently functioning OWL observatories are located in Daejeon (South Korea), Songino (Mongolia), and Oukaïmeden (Morocco); the Daejeon observatory serves as a test bed. The observed targets were Gravity Probe B, COSMOS 1455, COSMOS 1726, COSMOS 2428, SEASAT 1, ATV-5, and CryoSat-2 (all in LEO), observed from the test bed and the Songino observatory over 21 nights in 2014 and 2015. After estimating the orbit from systematically selected sets of observation points (20, 50, 100, and 150) for each pass, we compared each orbit estimate with the Two-Line Element set (TLE) from the Joint Space Operations Center (JSpOC), averaged the differences, and selected the optimal number of observation points from those averages.
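
The systematic selection of observation points per pass can be sketched as even subsampling along the observed arc. The pass timing below is synthetic, and the orbit estimation itself would require an orbit-determination library:

```python
import numpy as np

def systematic_subset(times, n_points):
    """Select n_points indices spread evenly across one observed arc."""
    idx = np.linspace(0, len(times) - 1, n_points).round().astype(int)
    return np.unique(idx)

times = np.linspace(0.0, 300.0, 600)     # a 5-minute pass at 0.5 s cadence
for n in (20, 50, 100, 150):
    sel = systematic_subset(times, n)
    print(n, len(sel))                   # indices fed to the orbit estimator
```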

SWAT model calibration/validation using SWAT-CUP I: analysis for uncertainties of objective functions (SWAT-CUP을 이용한 SWAT 모형 검·보정 I: 목적함수에 따른 불확실성 분석)

  • Yu, Jisoo; Noh, Joonwoo; Cho, Younghyun
    • Journal of Korea Water Resources Association / v.53 no.1 / pp.45-56 / 2020
  • This study aims to quantify the uncertainty that the choice of objective function can introduce when calibrating SWAT parameters with SWAT-CUP. A SWAT model was constructed to estimate runoff in Naeseong-cheon, one of the mid-watersheds in the Nakdong River basin, and automatic calibration was performed with eight objective functions (R2, bR2, NS, MNS, KGE, PBIAS, RSR, and SSQR). The optimum parameter sets obtained from each objective function covered different ranges, so the hydrologic characteristics of the simulated data also differed, because each objective function is sensitive to specific hydrologic signatures and evaluates model performance in its own way. For example, an objective function that is sensitive to the residuals of extreme values reproduces peak flows well but ignores average- or low-flow residuals. Therefore, the hydrological similarity between the simulated and measured values was evaluated in order to select the optimum objective function. Hydrologic signatures covering not only the magnitude of flow but also the ratio of rising to falling time in the hydrograph were defined to account for the timing of flow occurrence, the watershed response, and increasing and decreasing trends. The evaluation results were quantified by a scoring method, and the optimal objective functions for SWAT parameter calibration were determined to be MNS (score 342.48) and SSQR (score 346.45), which had the highest total scores.
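
Four of the eight objective functions have compact closed forms. A sketch using the common hydrology formulations; these follow the standard definitions, not the SWAT-CUP source code:

```python
import numpy as np

def nse(obs, sim):    # Nash-Sutcliffe efficiency
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(obs, sim):  # percent bias
    return 100 * np.sum(obs - sim) / np.sum(obs)

def rsr(obs, sim):    # RMSE normalized by the std of observations
    return np.sqrt(np.sum((obs - sim) ** 2)) / np.sqrt(np.sum((obs - obs.mean()) ** 2))

def kge(obs, sim):    # Kling-Gupta efficiency
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

# Demo on synthetic daily runoff.
rng = np.random.default_rng(4)
obs = rng.gamma(2.0, 5.0, 365)
sim = obs * 0.9 + rng.normal(0.0, 1.0, 365)
print(nse(obs, sim), kge(obs, sim), pbias(obs, sim), rsr(obs, sim))
```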

Template-Based Object-Order Volume Rendering with Perspective Projection (원형기반 객체순서의 원근 투영 볼륨 렌더링)

  • Koo, Yun-Mo; Lee, Cheol-Hi; Shin, Yeong-Gil
    • Journal of KIISE: Computer Systems and Theory / v.27 no.7 / pp.619-628 / 2000
  • Perspective views provide a powerful depth cue and thus aid the interpretation of complicated images. The main drawback of current perspective volume rendering is its long execution time. In this paper, we present an efficient perspective volume rendering algorithm based on the coherence between rays. Two sets of templates are built for the rays cast from horizontal and vertical scanlines of the intermediate image, which is parallel to one of the volume faces. Each sample along a ray is calculated by interpolating neighboring voxels with the pre-computed weights stored in the templates. We also solve the problem of uneven sampling rates due to perspective ray divergence by building additional templates for regions far away from the viewpoint. Since our algorithm operates in object order, it avoids redundant accesses to each voxel and exploits spatial data coherence by using a run-length-encoded volume. Experimental results show that the use of templates and object-order processing with a run-length-encoded volume provide speedups compared to other approaches. Additionally, the image quality improves because the uneven sampling rate caused by perspective ray divergence is resolved.
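
The template idea rests on the fact that interpolation weights along a ray depend only on the ray's fractional sample offsets, so they can be precomputed once and reused for every ray in a scanline. A deliberately simplified 1-D sketch of that weight reuse; the real algorithm builds 2-D templates per scanline and handles ray divergence with extra templates:

```python
import numpy as np

def build_template(offsets):
    """Precompute linear interpolation weights for fractional offsets."""
    frac = offsets - np.floor(offsets)
    return np.floor(offsets).astype(int), np.stack([1 - frac, frac], axis=-1)

def sample_ray(scanline, base, weights):
    """Resample one voxel scanline using the precomputed template."""
    return weights[:, 0] * scanline[base] + weights[:, 1] * scanline[base + 1]

voxels = np.arange(16, dtype=float)      # toy 1-D "volume" row
offsets = np.linspace(0.0, 14.0, 29)     # sample positions along the ray
base, w = build_template(offsets)        # built once, reused for every ray
print(sample_ray(voxels, base, w))       # equals the offsets (data is linear)
```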
