• Title/Summary/Keyword: Input Data

Search Results: 8,337

Surface Deformation by using 3D Target Curve for Virtual Spatial Design (가상 공간 디자인을 위한 3차원 목표곡선을 이용한 곡면 변형)

  • Kwon, Jung-Hoon;Lee, Jeong-In;Chai, Young-Ho
    • Journal of KIISE:Software and Applications
    • /
    • v.33 no.10
    • /
    • pp.868-876
    • /
    • 2006
  • In a 2D input modeling system, 2D input data must be converted into 3D data through additional functions and menu operations, whereas data from a 3D input system for virtual spatial design can be connected directly to the 3D modeling data. Nevertheless, an efficient surface modeling and deformation algorithm for such a 3D input modeling system has not yet been proposed. This paper introduces the problems of conventional NURBS surface deformation methods that arise when they are applied in a 3D input modeling system, and suggests NURBS surface deformation driven by 3D target curves, an approach that designers can use intuitively. With the proposed deformation algorithm, a designer can efficiently carry out virtual spatial sketching and design.
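
The abstract only outlines the deformation idea, so here is a minimal, hypothetical illustration of the general principle: a B-spline/NURBS surface is deformed toward a 3D target curve by displacing a row of its control points. This is not the authors' algorithm; the function name, the nearest-row update, and the stiffness parameter are assumptions made for the sketch.

```python
import numpy as np

def deform_control_net(ctrl_pts, target_curve, row=2, stiffness=0.7):
    """Pull one row of a B-spline/NURBS control net toward a 3D target curve.

    ctrl_pts     : (m, n, 3) array, the surface control net
    target_curve : (k, 3) array, points sampled along the designer's 3D target curve
    row          : index of the control-point row to deform (hypothetical choice)
    stiffness    : 0..1, how strongly the control points follow the curve
    """
    ctrl = ctrl_pts.copy()
    m, n, _ = ctrl.shape
    # Resample the target curve so it has one sample per control point in the row.
    t_old = np.linspace(0.0, 1.0, len(target_curve))
    t_new = np.linspace(0.0, 1.0, n)
    resampled = np.stack([np.interp(t_new, t_old, target_curve[:, d]) for d in range(3)], axis=1)
    # Move each control point of the chosen row toward its matching curve sample.
    ctrl[row] += stiffness * (resampled - ctrl[row])
    return ctrl

# Toy usage: a flat 5x6 control net and a sinusoidal 3D target curve above it.
u, v = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 6), indexing="ij")
net = np.stack([u, v, np.zeros_like(u)], axis=-1)
curve = np.stack([np.full(20, 0.5), np.linspace(0, 1, 20),
                  0.3 * np.sin(np.linspace(0, np.pi, 20))], axis=1)
deformed = deform_control_net(net, curve)
```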

Machine Learning of GCM Atmospheric Variables for Spatial Downscaling of Precipitation Data

  • Sunmin Kim;Masaharu Shibata;Yasuto Tachikawa
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2023.05a
    • /
    • pp.26-26
    • /
    • 2023
  • General circulation models (GCMs) are widely used in hydrological prediction; however, their coarse grids make them unsuitable for regional analysis, so a downscaling method is required before they can be used in hydrological assessment. As one such method, convolutional neural network (CNN)-based downscaling has been proposed in recent years. The aim of this study is to emulate the dynamic downscaling process with CNNs, using GCM output as input and RCM output as the label data. Prediction accuracy is compared between different input datasets and model structures. Several input datasets containing key atmospheric variables such as precipitation, temperature, and humidity were tested in two formats, two-dimensional and three-dimensional data, and the hyperparameters of the model structure were varied to check their effect on model accuracy. The experiments on the input dataset showed that accuracy was higher for the input dataset without precipitation than for the one with precipitation. The experiments on the model structure showed that substantially increasing the number of convolutions yielded higher accuracy, whereas increasing the size of the receptive field did not necessarily do so. Though further investigation is required for practical application, this paper can contribute to the development of an efficient CNN-based downscaling method.
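
As a rough illustration of the setup described above (coarse GCM fields as input, finer RCM precipitation as the label), here is a minimal PyTorch sketch. The number of variables, grid sizes, and upsampling factor are placeholders, not the study's configuration.

```python
import torch
import torch.nn as nn

class DownscalingCNN(nn.Module):
    """Toy CNN that maps coarse GCM fields to a finer precipitation grid."""
    def __init__(self, n_vars=3, upscale=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_vars, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Learned upsampling from the coarse GCM grid to the finer RCM grid.
        self.upsample = nn.ConvTranspose2d(64, 1, kernel_size=upscale, stride=upscale)

    def forward(self, x):                        # x: (batch, n_vars, H, W) coarse fields
        return self.upsample(self.features(x))   # (batch, 1, H*upscale, W*upscale)

# Toy training step: 3 atmospheric variables on a 16x16 grid, 64x64 label field.
model = DownscalingCNN(n_vars=3, upscale=4)
gcm = torch.randn(8, 3, 16, 16)      # stand-in for temperature/humidity/etc.
rcm = torch.randn(8, 1, 64, 64)      # stand-in for the RCM precipitation label
loss = nn.MSELoss()(model(gcm), rcm)
loss.backward()
```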

Efficiency Analysis of Chinese Blockchain Concept Stock Listed Companies

  • Yan, Hai-Shui;Kim, Hyung-Ho;Yang, Jun-Won
    • International journal of advanced smart convergence
    • /
    • v.9 no.3
    • /
    • pp.17-27
    • /
    • 2020
  • With the continuous development and application of Internet technology in recent years, technologies such as cloud computing, big data, the Internet of Things, and AI have become increasingly familiar to the general public, and the digital society has entered a new period of development. This paper uses 2018 annual data for 50 Chinese listed companies with blockchain concept stocks. Analyzing their input-output efficiency with data envelopment analysis (DEA) shows that efficiency varies widely across the 50 companies: as many as 62% are inefficient, and most have considerable room for improvement owing to uneconomical scale or technical inefficiency. To improve input-output efficiency, companies should first raise the efficiency of resource utilization by optimizing research and development spending and the recruitment and management of technical personnel, and second increase technological and business innovation.
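
For readers unfamiliar with DEA, the sketch below solves the standard input-oriented CCR envelopment model as a linear program with SciPy. The firm data are invented and this is not the paper's code or dataset.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of decision-making unit (DMU) j0.

    X : (m_inputs, n_dmus) input matrix
    Y : (s_outputs, n_dmus) output matrix
    Returns theta in (0, 1]; theta == 1 means the DMU is efficient.
    """
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input constraints:  sum_j lambda_j * x_ij - theta * x_i,j0 <= 0
    A_in = np.c_[-X[:, [j0]], X]
    b_in = np.zeros(m)
    # Output constraints: -sum_j lambda_j * y_rj <= -y_r,j0
    A_out = np.c_[np.zeros((s, 1)), -Y]
    b_out = -Y[:, j0]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

# Made-up data: 2 inputs (R&D cost, technical staff) and 1 output (revenue) for 5 firms.
X = np.array([[4.0, 7.0, 8.0, 4.0, 2.0],
              [3.0, 3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0, 1.0]])
scores = [ccr_efficiency(X, Y, j) for j in range(X.shape[1])]
```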

Development of an Input File Preparation Tool for Offline Coupling of DNDC and DSSAT Models (DNDC 지역별 구동을 위한 입력자료 생성 도구 개발)

  • Hyun, Shinwoo;Hwang, Woosung;You, Heejin;Kim, Kwang Soo
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.23 no.1
    • /
    • pp.68-81
    • /
    • 2021
  • The agricultural ecosystem is one of the major sources of greenhouse gas (GHG) emissions. To search for climate change adaptation options that mitigate GHG emissions while maintaining crop yield, it is advantageous to integrate multiple models at a high spatial resolution. The objective of this study was to develop a tool that supports integrated assessment of climate change impact by coupling the DSSAT and DNDC models. The DNDC Regional Input File Tool (DRIFT) was developed to prepare input data for the regional mode of the DNDC model from the input and output data of the DSSAT model. In a case study, GHG emissions under climate change conditions were simulated using input data prepared by the DRIFT. The time needed to prepare the input data increased with the number of grid points; most steps were relatively fast, and the bulk of the time was spent converting the daily flood depth data of the DSSAT model into the flood periods required by the DNDC model. Processing a large amount of data would still take a long time, which could be reduced by parallelizing some of the calculation steps. Extending the DRIFT to other models would further reduce the time required to prepare their input data.
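
Since the abstract singles out the conversion of daily flood depth into flood periods as the slowest step, here is a hypothetical sketch of that kind of conversion: contiguous runs of flooded days are collapsed into start/end periods. Column names and the zero-depth threshold are assumptions, not DRIFT's actual logic.

```python
import pandas as pd

def flood_periods(daily, depth_col="flood_depth", date_col="date", threshold=0.0):
    """Collapse a daily flood-depth series into (start, end) flood periods.

    daily : DataFrame with one row per day, containing date_col and depth_col.
    A flood period is a maximal run of consecutive days with depth > threshold.
    """
    daily = daily.sort_values(date_col).reset_index(drop=True)
    flooded = daily[depth_col] > threshold
    # A new run starts wherever the flooded flag switches value.
    run_id = (flooded != flooded.shift(fill_value=False)).cumsum()
    periods = (daily[flooded]
               .groupby(run_id[flooded])[date_col]
               .agg(start="min", end="max")
               .reset_index(drop=True))
    return periods

# Toy usage: two separate flooding spells in a 10-day record.
df = pd.DataFrame({
    "date": pd.date_range("2021-07-01", periods=10, freq="D"),
    "flood_depth": [0, 0, 3, 5, 4, 0, 0, 2, 1, 0],
})
print(flood_periods(df))
```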

Reduction of Input Pins in VLSI Array for High Speed Fractal Image Compression (고속 프랙탈 영상압축을 위한 VLSI 어레이의 입력핀의 감소)

  • 성길영;전상현;이수진;우종호
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.26 no.12A
    • /
    • pp.2059-2066
    • /
    • 2001
  • In this paper, we propose a method to reduce the number of input pins in a one-dimensional VLSI array for fractal image compression. Using a quad-tree partitioning scheme, the number of input pins can be reduced by up to 50% by sharing the domain and range data input pins in the proposed VLSI array architecture. The input pins can be reduced further, and the internal operation circuits of the processing elements simplified, by discarding a few of the least significant bits of the input data. We simulated the proposed method on the 256×256 and 512×512 Lena images to verify its performance. The simulation shows that the original image can be decompressed at about 32 dB (PSNR) even when the two least significant bits of the input data are discarded, which additionally reduces the number of input pins by up to 25% compared to a VLSI array that only shares the range and domain input pins.
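
To make the bit-truncation step concrete, the snippet below zeroes the two least significant bits of 8-bit pixel data and computes the PSNR of that truncation alone. It is an illustrative sketch, not the paper's pipeline: the reported ~32 dB refers to full fractal decompression, and a random array stands in for the Lena image.

```python
import numpy as np

def truncate_lsbs(img, bits=2):
    """Zero out the `bits` least significant bits of an 8-bit image,
    mimicking the reduced-width pixel data fed to the VLSI array."""
    mask = 0xFF & ~((1 << bits) - 1)        # e.g. bits=2 -> 0b11111100
    return img & mask

def psnr(original, reconstructed):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((original.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

# Toy usage with a random 256x256 "image" standing in for Lena.
img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
print(psnr(img, truncate_lsbs(img, bits=2)))   # distortion from truncation only
```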

Opponent Move Prediction of a Real-time Strategy Game Using a Multi-label Classification Based on Machine Learning (기계학습 기반 다중 레이블 분류를 이용한 실시간 전략 게임에서의 상대 행동 예측)

  • Shin, Seung-Soo;Cho, Dong-Hee;Kim, Yong-Hyuk
    • Journal of the Korea Convergence Society
    • /
    • v.11 no.10
    • /
    • pp.45-51
    • /
    • 2020
  • Recently, many games provide data related to users' game play, and a few studies have predicted opponent moves by combining machine learning methods. This study predicts opponent moves using match data from the real-time strategy game Clash Royale and a multi-label classification based on machine learning. In the initial experiment, binary card properties, binary card coordinates, and normalized time information are used as input, and card type and card coordinates are predicted using a random forest and a multi-layer perceptron. Experiments were then conducted sequentially with three data preprocessing methods. First, some of the property information in the input data was transformed. Next, the input data were converted to a nested form that reflects the consecutive card input system. Finally, the input data were split into early and late portions according to the normalized time information before prediction. As a result, the best preprocessing scheme, nesting the data and using the early portion, improved card-type prediction by about 2.6% and card-coordinate prediction by about 1.8%.
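
As a rough illustration of the multi-label setup described above, the sketch below trains a random forest on binary feature vectors with a 2D indicator target, which scikit-learn handles natively. The feature layout and label count are invented, not the paper's Clash Royale encoding.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Invented stand-in for match data: binary card properties and coordinates plus
# normalized time, and a multi-label target (e.g. card type + coarse board cell).
X = rng.integers(0, 2, size=(2000, 40)).astype(float)
X[:, -1] = rng.random(2000)                      # normalized time in the last column
Y = rng.integers(0, 2, size=(2000, 12))          # 12 binary labels (multi-label target)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# RandomForestClassifier accepts a 2D indicator matrix and fits one multi-label forest.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, Y_tr)

# Subset accuracy: fraction of samples whose entire label vector is predicted correctly.
pred = clf.predict(X_te)
print("exact-match accuracy:", np.mean(np.all(pred == Y_te, axis=1)))
```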

GIS Application Model for Temporal and Spatial Simulation of Surface Runoff from a small watershed (소유역 지표유출의 시간적 . 공간적 재현을 위한 GIS응용모형)

  • 정하우;김성준;최진용;김대식
    • Spatial Information Research
    • /
    • v.3 no.2
    • /
    • pp.135-146
    • /
    • 1995
  • The purpose of this study is to develop a GIS application and interface model (GISCELWAB) for the temporal and spatial simulation of surface runoff from a small watershed. The model consists of three sub-models: the input data extraction model (GISINDATA), which automatically prepares cell-based input data for a given watershed; the cell water balance model (CELWAB), which calculates the water balance of each cell and simulates watershed surface runoff through the interaction of cells; and the output data management model (GISOUTDISP), which visualizes the temporal and spatial variation of surface runoff. The input data extraction model was developed to solve the time-consuming problem of preparing input data for a distributed hydrologic model; the input data for CELWAB are obtained by extracting ASCII data from a vector map. The output data management model was developed to convert the storage depth and discharge of each cell into grid maps, enabling visualization of how watershed storage depth and surface runoff develop in time and space.
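
A minimal grid-cell water balance in the spirit of CELWAB is sketched below: each cell stores rainfall minus infiltration, and storage above a capacity spills to the lowest 4-neighbour each time step. The cell size, infiltration rate, and routing rule are assumptions, not the GISCELWAB formulation.

```python
import numpy as np

def step_water_balance(elev, storage, rain, infil=2.0, capacity=10.0):
    """One time step of a toy cell water balance on a DEM grid (depths in mm).

    Each cell gains rainfall, loses infiltration, and any storage above
    `capacity` is routed to the lowest of its 4 neighbours (or leaves the grid).
    """
    storage = np.maximum(storage + rain - infil, 0.0)
    excess = np.maximum(storage - capacity, 0.0)
    storage -= excess
    runoff_out = 0.0
    rows, cols = elev.shape
    for r in range(rows):
        for c in range(cols):
            if excess[r, c] == 0.0:
                continue
            # Pick the steepest-descent 4-neighbour; no lower neighbour means outlet.
            nbrs = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            inside = [(i, j) for i, j in nbrs if 0 <= i < rows and 0 <= j < cols]
            lowest = min(inside, key=lambda ij: elev[ij]) if inside else None
            if lowest is None or elev[lowest] >= elev[r, c]:
                runoff_out += excess[r, c]          # water leaves the watershed
            else:
                storage[lowest] += excess[r, c]     # water moves downslope
    return storage, runoff_out

# Toy usage: a 4x4 DEM sloping toward the top-left corner, uniform 15 mm rainfall.
elev = np.add.outer(np.arange(4), np.arange(4)).astype(float)
storage = np.zeros_like(elev)
storage, q = step_water_balance(elev, storage, rain=15.0)
```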

Development of Power Demand Forecasting Algorithm Using GMDH (GMDH를 이용한 전력 수요 예측 알고리즘 개발)

  • Lee, Dong-Chul;Hong, Yeon-Chan
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.13 no.3
    • /
    • pp.360-365
    • /
    • 2003
  • In this paper, the GMDH (Group Method of Data Handling) algorithm, which has proven efficient and accurate in the practical use of data, is applied to electric power demand forecasting. As a result, it becomes much easier to select the input data and to make an accurate prediction from a large amount of data. Both economic factors (GDP, exports, imports, number of employees, economically active population, and oil consumption) and a climate factor (average temperature) were considered in the forecast. The target forecast period was set from the first quarter of 1999 to the first quarter of 2001, and a more accurate electric power demand forecasting method was developed using a three-step simulation process: selecting the optimal input period, analyzing the time relation between the input data and the forecast value, and optimizing the input data. The proposed method achieves a mean error rate of 0.96 percent over the target forecast period.
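
Since GMDH is less familiar than standard regression, the sketch below implements one self-organizing GMDH layer: every pair of inputs feeds a quadratic polynomial neuron fitted by least squares on training data, and the neurons with the lowest validation error survive. The driver variables are random placeholders, not the paper's economic and climate data.

```python
import numpy as np
from itertools import combinations

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=4):
    """One GMDH layer: quadratic polynomial neurons over all input pairs,
    fitted by least squares and ranked by validation MSE."""
    def design(a, b):
        # y ~ w0 + w1*a + w2*b + w3*a^2 + w4*b^2 + w5*a*b
        return np.column_stack([np.ones_like(a), a, b, a * a, b * b, a * b])

    neurons = []
    for i, j in combinations(range(X_tr.shape[1]), 2):
        A = design(X_tr[:, i], X_tr[:, j])
        w, *_ = np.linalg.lstsq(A, y_tr, rcond=None)
        mse = np.mean((design(X_va[:, i], X_va[:, j]) @ w - y_va) ** 2)
        neurons.append((mse, (i, j), w))
    neurons.sort(key=lambda t: t[0])
    return neurons[:keep]          # surviving neurons become the next layer's inputs

# Placeholder data: 6 candidate drivers (e.g. GDP, exports, ...) and a demand series.
rng = np.random.default_rng(1)
X = rng.normal(size=(120, 6))
y = 2.0 * X[:, 0] - X[:, 3] + 0.1 * rng.normal(size=120)
best = gmdh_layer(X[:80], y[:80], X[80:], y[80:])
print([(round(m, 4), pair) for m, pair, _ in best])
```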

Design of Echo Classifier Based on Neuro-Fuzzy Algorithm Using Meteorological Radar Data (기상레이더를 이용한 뉴로-퍼지 알고리즘 기반 에코 분류기 설계)

  • Oh, Sung-Kwun;Ko, Jun-Hyun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.63 no.5
    • /
    • pp.676-682
    • /
    • 2014
  • In this paper, precipitation echo (PRE) and non-precipitation echo (N-PRE, including ground echo and clear echo) in weather radar data are identified with the aid of a neuro-fuzzy algorithm. Because meteorological radar data mix PRE and N-PRE, the accuracy of the radar information is degraded; this problem is addressed by using an RBFNN together with a judgement module. The structure of the weather radar data is analyzed in order to classify PRE and N-PRE. Input variables such as the standard deviation of reflectivity (SDZ), vertical gradient of reflectivity (VGZ), spin change (SPN), frequency (FR), accumulated reflectivity over 1 hour (1hDZ), and accumulated reflectivity over 2 hours (2hDZ) are derived from the weather radar data, and the characteristics of each input variable are analyzed. The input data are then built from those variables that have a critical effect on the classification between PRE and N-PRE. An echo judgement module is developed to classify echoes into PRE and N-PRE using a test dataset. Polynomial-based radial basis function neural networks (RBFNNs) are used as the neuro-fuzzy algorithm, and the proposed neuro-fuzzy echo pattern classifier is designed by combining the RBFNN with the echo judgement module. Finally, the results of the proposed classifier are compared with the CZ, DZ, and QC data and analyzed from the viewpoint of output performance.
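
The sketch below is a generic RBF-network classifier in the same spirit (not the paper's polynomial-based RBFNN): k-means picks the radial basis centres, Gaussian activations form the hidden features, and a linear classifier separates PRE from N-PRE. The feature values are random placeholders standing in for SDZ, VGZ, SPN, FR, 1hDZ, and 2hDZ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def rbf_features(X, centers, gamma=1.0):
    """Gaussian RBF activations: exp(-gamma * ||x - c||^2) for every centre c."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

# Placeholder inputs standing in for (SDZ, VGZ, SPN, FR, 1hDZ, 2hDZ) per radar cell,
# with labels 1 = precipitation echo (PRE), 0 = non-precipitation echo (N-PRE).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (300, 6)), rng.normal(1.5, 1.0, (300, 6))])
y = np.r_[np.zeros(300), np.ones(300)]

centers = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X).cluster_centers_
Phi = rbf_features(X, centers, gamma=0.5)

clf = LogisticRegression(max_iter=1000).fit(Phi, y)   # linear readout on RBF features
print("training accuracy:", clf.score(Phi, y))
```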

A practical application of cluster analysis using SPSS

  • Kim, Dae-Hak
    • Journal of the Korean Data and Information Science Society
    • /
    • v.20 no.6
    • /
    • pp.1207-1212
    • /
    • 2009
  • The basic objective of cluster analysis is to discover natural groupings of items or variables. In general, clustering is conducted on a similarity (or dissimilarity) matrix or on the original input data, and various measures of similarity (or dissimilarity) between objects (or variables) have been developed. We introduce a real application of the clustering procedure in SPSS for the case in which only the distance matrix of the objects (or variables) is given as input data. This is particularly helpful for cluster analysis of large data sets whose proximity matrix exceeds 1000 in size. SPSS syntax commands for matrix input data are given with numerical examples.
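
The paper handles this with SPSS matrix-input syntax; for readers without SPSS, an equivalent idea is sketched below in Python: hierarchical clustering run directly on a precomputed distance matrix rather than on raw observations. The toy distances are invented.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# A precomputed (symmetric, zero-diagonal) distance matrix is the only input,
# mirroring the situation described in the paper (no raw observations available).
D = np.array([
    [0.0,  2.0, 6.0, 10.0],
    [2.0,  0.0, 5.0,  9.0],
    [6.0,  5.0, 0.0,  4.0],
    [10.0, 9.0, 4.0,  0.0],
])

# squareform() converts the square matrix to the condensed form linkage() expects.
Z = linkage(squareform(D), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")   # cut the dendrogram into 2 clusters
print(labels)                                      # e.g. [1 1 2 2]
```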
