• Title/Summary/Keyword: data modelling


Segmentation of data measured by laser scanning in reverse engineering (역공학에서 레이저스캔 데이터의 분할)

  • 김호찬;허성민;이석희
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1997.10a
    • /
    • pp.129-132
    • /
    • 1997
  • Laser scanning is widely used due to its fast measurement and high precision, and segmentation of the scanned data is necessary for fast and efficient surface modelling. However, most segmentation techniques assume very regular data, and applying them to scanned data does not usually produce good results. A new approach to segmenting scanned data is introduced to deal with problems arising during the reverse engineering process. The approach is based on triangulated data, and its result depends on several user-defined criteria. Results are illustrated to demonstrate the approach's adaptability to data measured on free-form surfaces, and the results obtained with different criteria are compared.

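
The abstract above describes segmenting a triangulated mesh according to user-defined criteria. A minimal sketch of one common such criterion, region growing over triangle adjacency with a user-defined threshold on the angle between neighbouring triangle normals, is shown below; the function names and data layout are illustrative, not the paper's.

```python
import math

def normal_angle(n1, n2):
    """Angle in degrees between two unit normal vectors."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n1, n2))))
    return math.degrees(math.acos(dot))

def segment_mesh(normals, adjacency, angle_threshold_deg):
    """Group adjacent triangles whose normals differ by less than a
    user-defined threshold (one possible segmentation criterion).
    normals[i] is the unit normal of triangle i; adjacency[i] lists the
    triangles sharing an edge with triangle i. Returns a segment label
    per triangle."""
    labels = [-1] * len(normals)
    current = 0
    for seed in range(len(normals)):
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:
            t = stack.pop()
            for nb in adjacency[t]:
                if labels[nb] == -1 and \
                        normal_angle(normals[t], normals[nb]) < angle_threshold_deg:
                    labels[nb] = current
                    stack.append(nb)
        current += 1
    return labels
```

Varying the threshold changes how many segments are produced, which mirrors the abstract's comparison of results under different criteria.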

Data Technology: New Interdisciplinary Science & Technology (데이터 기술: 지식창조를 위한 새로운 융합과학기술)

  • Park, Sung-Hyun
    • Journal of Korean Society for Quality Management
    • /
    • v.38 no.3
    • /
    • pp.294-312
    • /
    • 2010
  • Data Technology (DT) is a new technology that deals with data collection, data analysis, information generation from data, knowledge generation from modelling, and future prediction. DT is a newly emerging interdisciplinary science and technology in the 21st-century knowledge society. Even though the main body of DT is applied statistics, it also contains management information systems (MIS), quality management, process system analysis, and so on. Therefore, it is an interdisciplinary science and technology spanning statistics, management science, industrial engineering, computer science, and social science. In this paper, first of all, the definition of DT is given; then the effects and basic properties of DT, the differences between IT and DT, a 6-step process for DT application, and a DT example are provided. Finally, the relationship among DT, e-Statistics, and Data Mining is explained, and a direction for DT development is proposed.

Joint HGLM approach for repeated measures and survival data

  • Ha, Il Do
    • Journal of the Korean Data and Information Science Society
    • /
    • v.27 no.4
    • /
    • pp.1083-1090
    • /
    • 2016
  • In clinical studies, different types of outcomes (e.g. repeated measures data and time-to-event data) tend to be observed for the same subject, and these data can be correlated. For example, a response variable of interest can be measured repeatedly over time on a subject while, at the same time, an event time representing a terminating event is also obtained. Joint modelling using a shared random effect is useful for analyzing such data. Inferences based on the marginal likelihood may involve the evaluation of analytically intractable integrals over the random-effect distributions. In this paper we propose a joint HGLM approach for analyzing such outcomes using the HGLM (hierarchical generalized linear model) method based on the h-likelihood (i.e. hierarchical likelihood), which avoids these intractable integrations. The proposed method is demonstrated through various numerical studies.
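
The shared random-effect construction described in this abstract can be written, in one standard (illustrative) notation that may differ from the paper's, as a longitudinal submodel and a survival submodel linked by a common random effect $v_i$:

```latex
% Repeated measures y_{ij} for subject i at visit j, and the hazard of
% the terminating event, sharing the random effect v_i:
y_{ij} = x_{ij}^{\top}\beta + v_i + \epsilon_{ij},
\qquad
\lambda_i(t \mid v_i) = \lambda_0(t)\,\exp\!\left(z_i^{\top}\gamma + \alpha v_i\right)
```

The h-likelihood then adds the log-density of the random effect to the two conditional log-likelihoods, $h = \ell(y \mid v) + \ell(t \mid v) + \ell(v)$, so estimation maximizes $h$ directly rather than integrating $v$ out of the marginal likelihood.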

A Study on the 3-D Digital Modelling of the Sea Bottom Topography (3차원 해저지형 수치모델에 관한 연구)

  • 양승윤;김정훈;김병준;김경섭
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.5 no.2
    • /
    • pp.50-61
    • /
    • 2002
  • In this study, 3-dimensional virtual visualization was performed for rapid and accurate analysis of sea bottom topography. The visualization was done using data extracted by the developed program and data generated by a gridding method. The data extraction program was developed in the AutoLISP programming language and could extract the needed sample bathymetry data from the electronic sea chart both systematically and effectively. The gridded bathymetry data were generated by interpolation or extrapolation from the spatially irregular sample data. The realized 3-dimensional virtual visualization was shown to be feasible for analyzing the sea bottom topography to determine the route of submarine cable burial.
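
The abstract above describes gridding spatially irregular bathymetry samples by interpolation. As a minimal sketch of one common gridding method, inverse-distance weighting (the study does not specify its exact interpolant, so this is an assumption for illustration):

```python
def idw_grid(samples, grid_points, power=2.0):
    """Inverse-distance-weighted interpolation of irregular (x, y, depth)
    samples onto the given grid points. A sample coinciding with a grid
    point is returned exactly; otherwise nearer samples get larger
    weights 1/d^power."""
    gridded = []
    for gx, gy in grid_points:
        num = den = 0.0
        exact = None
        for sx, sy, depth in samples:
            d2 = (gx - sx) ** 2 + (gy - sy) ** 2
            if d2 == 0.0:
                exact = depth       # grid point sits on a sample
                break
            w = 1.0 / d2 ** (power / 2.0)
            num += w * depth
            den += w
        gridded.append(exact if exact is not None else num / den)
    return gridded
```

For example, a grid point midway between two samples of depth 10 m and 20 m receives the average, 15 m.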

Analysis of Factors Affecting Mode Choice Behavior by Stated Preference(SP) Data in Secondary Cities (SP Data에 의한 지방도시의 교통수단선택 요인분석에 관한 연구)

  • 山川仁;申運稙
    • Journal of Korean Society of Transportation
    • /
    • v.10 no.3
    • /
    • pp.21-42
    • /
    • 1992
  • In past travel demand analysis, forecasting has been conducted using revealed preference (RP) information about actual, observed choices made by individuals. Forecasting with RP data implicitly assumes that there will be no remarkable changes in existing transport conditions. However, when great changes occur in existing conditions, or when a new choice set of hypothetical options is added, it is very difficult to predict future travel demand. In recent years, especially in mode choice analysis, the importance of individual preference data from stated preference (SP) experiments, as well as RP data, has come to be recognized. Research on models estimated using SP data, however, has not yet been reported sufficiently. Against this background, we analyze the factors affecting mode choice behavior as a fundamental study toward the modelling task with SP choice data. For this analysis, we assumed subway operations in secondary cities that do not yet have subway lines, and set up a choice set of hypothetical options based on the Experimental Design Method.

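
Mode choice from SP data is conventionally modelled with a multinomial logit; the abstract does not name the paper's exact model, so the sketch below is a generic illustration. Each mode gets a deterministic utility $V_i$ (e.g. a function of travel time and cost, with coefficients estimated from SP responses), and choice probabilities follow the logit formula:

```python
import math

def mnl_probabilities(utilities):
    """Multinomial logit choice probabilities
    P_i = exp(V_i) / sum_j exp(V_j), for one alternative per entry in
    `utilities`. The max is subtracted before exponentiating for
    numerical stability; probabilities are unchanged by this shift."""
    m = max(utilities)
    exps = [math.exp(v - m) for v in utilities]
    s = sum(exps)
    return [e / s for e in exps]
```

For instance, two modes with equal utilities each receive probability 0.5, and raising one mode's utility (say, by adding a hypothetical subway option with shorter travel time) shifts probability toward it.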

Spatio-temporal models for generating a map of high resolution NO2 level

  • Yoon, Sanghoo;Kim, Mingyu
    • Journal of the Korean Data and Information Science Society
    • /
    • v.27 no.3
    • /
    • pp.803-814
    • /
    • 2016
  • Recent times have seen an exponential increase in the amount of spatial data, which in many cases is associated with temporal data. Recent advances in computer technology and in the computation of hierarchical Bayesian models have made it possible to analyze complex spatio-temporal data. Our work aims at modelling daily average nitrogen dioxide (NO2) levels obtained from 25 air monitoring sites in Seoul between 2003 and 2010. We considered an independent Gaussian process model and an auto-regressive model, and carried out estimation within a hierarchical Bayesian framework with Markov chain Monte Carlo techniques. A Gaussian predictive process approximation showed better prediction performance than the hierarchical auto-regressive model for predicting NO2 concentration levels at unmonitored locations.
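
The auto-regressive specification mentioned in this abstract is commonly written with a latent mean process that evolves in time at each site $s$ (illustrative notation; the paper's exact parameterization may differ):

```latex
% Observation at site s on day t, with an AR(1) latent process:
y_t(s) = \mu_t(s) + e_t(s), \qquad e_t(s) \sim N(0, \tau^2),
\qquad
\mu_t(s) = \rho\,\mu_{t-1}(s) + w_t(s),
```

where $w_t(s)$ is a mean-zero Gaussian process capturing spatial dependence, and the Gaussian predictive process replaces $w_t(s)$ by its projection onto a smaller set of knot locations to make the MCMC computation tractable.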

Derivation of TMA Slagging Indices for Blended Coals

  • Park, Ho Young;Baek, Se Hyun;Kim, Hyun Hee;Park, Sang Bin
    • KEPCO Journal on Electric Power and Energy
    • /
    • v.3 no.2
    • /
    • pp.127-131
    • /
    • 2017
  • The present paper describes slagging field data obtained with a one-dimensional process model for a 500 MW tangentially coal-fired boiler in Korea. To obtain slagging field data in terms of thermal resistances [m²·°C/kW], a number of plant data were collected and analyzed with the one-dimensional modelling software at 500 MW full load. The slagging field data for the primary superheater were obtained for six coal blends and compared with two TMA (thermo-mechanical analyzer) slagging indices and the numerical slagging index, along with conventional slagging indices modified with the ash loading. The two advanced TMA indices for the six blended coals agree well with the slagging field data, while the modified conventional slagging indices show relatively poor agreement.

A Survey of Applications of Artificial Intelligence Algorithms in Eco-environmental Modelling

  • Kim, Kang-Suk;Park, Joon-Hong
    • Environmental Engineering Research
    • /
    • v.14 no.2
    • /
    • pp.102-110
    • /
    • 2009
  • Application of artificial intelligence (AI) approaches in eco-environmental modeling has gradually increased over the last decade. A comprehensive understanding and evaluation of the applicability of these approaches to eco-environmental modeling is needed. In this study, we reviewed previous studies that used AI techniques in eco-environmental modeling. Decision trees (DT) and artificial neural networks (ANN) were found to be the major AI algorithms preferred by researchers in ecological and environmental modeling. When the effect of training-data size on model prediction accuracy was explored using data from the previous studies, prediction accuracy and training-data size showed a nonlinear correlation, best described by a hyperbolic saturation function among the tested nonlinear functions, which included power and logarithmic functions. The hyperbolic saturation equations are proposed as a guideline for optimizing the size of the training data set, which is critically important in designing the field experiments required for training AI-based eco-environmental models.
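
The hyperbolic saturation relation this survey fits takes the form $y = a\,n/(b+n)$: accuracy $y$ rises quickly with training-set size $n$, then levels off at the asymptote $a$. A minimal sketch (the parameter values below are illustrative, not values from the survey):

```python
def hyperbolic_saturation(n, a, b):
    """Hyperbolic saturation curve y = a*n / (b + n): accuracy climbs
    steeply for small training sets, reaches a/2 at n = b, and
    approaches the asymptote a as n grows."""
    return a * n / (b + n)

# Diminishing returns: tripling n from 50 to 150, then to 450,
# adds progressively less accuracy toward the asymptote a = 0.9.
curve = [hyperbolic_saturation(n, 0.9, 50.0) for n in (50, 150, 450)]
```

The half-saturation point $n = b$ gives a practical rule of thumb for sizing field experiments: beyond a few multiples of $b$, extra samples buy little accuracy.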

A Study on Benefits of Digital Preservation of Research Data (디지털 연구데이터 장기보존의 편익에 대한 연구)

  • Hyun, Moonsoo
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.11 no.1
    • /
    • pp.161-181
    • /
    • 2011
  • This study identifies the benefits arising from the digital preservation of research data. By analysing studies on digital preservation and digital research data from an economic perspective, broad benefits of the digital preservation of research data have been identified and analysed both quantitatively and qualitatively. This provides a starting point for the systematic management and preservation of digital research data. Furthermore, the study attempts a model for investigating and estimating benefits, in both quantity and quality, that is fundamental to promoting investment.

Quantifying Values from BIM-projects life cycle with cloud-based computing

  • Choi, Michelle Mang Syn;Kim, Inhan
    • International conference on construction engineering and project management
    • /
    • 2015.10a
    • /
    • pp.271-275
    • /
    • 2015
  • A variety of evaluation applications and initiatives on the adoption of Building Information Modelling (BIM) have been introduced in recent years. Most of them, however, focused mainly on evaluating design-to-construction phase processes or BIM utilization performance. A study of existing publications shows that continuous utilization of BIM data throughout the building's life cycle is comparatively less explored or documented. Therefore, this study looks at improving this incomplete life cycle, on the premise that accumulated BIM data should be carried forward and statistically quantified for cross-comparison, in order to help practitioners better improve projects in the future. Based on this concept of moving towards a closed-loop BIM building life cycle, this study explores, through the existing literature, the use of cloud-based computing as the means to quantify and adaptively utilize BIM data. A categorization of BIM data relations in adaptive utilization is then suggested as an initial step toward enhancing cross-comparison of BIM data in a cloud environment.
