• Title/Abstract/Keyword: data for analysis

72,703 search results

Neo-Chinese Style Furniture Design Based on Semantic Analysis and Connection

  • Ye, Jialei;Zhang, Jiahao;Gao, Liqian;Zhou, Yang;Liu, Ziyang;Han, Jianguo
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 16, No. 8 / pp.2704-2719 / 2022
  • Lately, neo-Chinese style furniture has drawn frequent attention from product design professionals for the large part it plays in promoting traditional Chinese culture. This article attempts to use big data semantic analysis to provide an effective research method for neo-Chinese furniture design. Using the big data mining program TEXTOM for data collection and analysis, data obtained from typical websites over a set time period were sorted and analyzed. On the basis of "neo-Chinese furniture" samples, key data were compared, classification analysis of the overall data was carried out, and horizontal analysis of typical data was performed by word frequency analysis, connection centrality analysis, and TF-IDF analysis. We then summarized the results in light of related design views and theories. The research results show that the outcomes of the data analysis are close to the relevant definitions of design. The core high-frequency vocabulary obtained from the analysis, such as popular, furniture, and modern, can provide a reasonable and effective focus of attention for designs, and the results obtained through systematic sorting and summarizing of the data can reliably guide the direction of design. This research attempted to introduce big data mining and semantic analysis methods into the product design industry, to supply scientific and objective data and channels for design studies, and to provide a case of the practical application of big data analysis in the industry.
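
The word-frequency and TF-IDF steps mentioned in this abstract can be illustrated with a brief, hypothetical Python sketch (the documents, tokenisation, and scoring below are placeholders, not the authors' TEXTOM workflow):

```python
# Minimal sketch (not the authors' TEXTOM pipeline): word frequency and
# TF-IDF scoring over a small set of hypothetical web-text snippets.
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical documents standing in for crawled "neo-Chinese furniture" texts.
docs = [
    "modern neo-Chinese furniture blends traditional wood joinery with minimal lines",
    "popular neo-Chinese chairs use walnut and traditional lattice patterns",
    "modern living rooms pair popular furniture brands with Chinese design motifs",
]

# Word frequency analysis: simple whitespace tokenisation and counting.
tokens = " ".join(docs).lower().split()
word_freq = Counter(tokens)
print("top words:", word_freq.most_common(5))

# TF-IDF analysis: terms that are frequent in one document but rare overall.
vectorizer = TfidfVectorizer(lowercase=True)
tfidf = vectorizer.fit_transform(docs)
terms = vectorizer.get_feature_names_out()
for doc_idx in range(tfidf.shape[0]):
    row = tfidf[doc_idx].toarray().ravel()
    top = row.argsort()[::-1][:3]
    print(f"doc {doc_idx} top TF-IDF terms:",
          [(terms[i], round(row[i], 3)) for i in top])
```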

공학교육 정책제안을 위한 빅데이터 분석 시스템 사례 분석 연구 (A Case Study on Big Data Analysis Systems for Policy Proposals of Engineering Education)

  • 김재희;유미나
    • 공학교육연구 / Vol. 22, No. 5 / pp.37-48 / 2019
  • The government has tried to develop a platform for systematically collecting and managing engineering education data for policy proposals. However, there have been few cases of big data analysis platforms for policy proposals in engineering education, and it is difficult to determine the major functions of such a platform, the purpose of using big data, and the method of data collection. This study collects cases of big data analysis systems to inform the development of a big data system for educational policy proposals, and analyzes those cases using an analysis frame of the key elements to consider in developing a big data analysis platform. To analyze cases of big data systems for engineering education policy proposals, 24 systems that collect and manage big data were selected. The analysis framework was developed based on literature reviews, and the results of the case analysis are presented. The results of this study are expected to provide macro-level guidance for developing a big data system, such as what functions the platform should perform, how to collect data, what analysis techniques should be adopted, and how to visualize the data analysis results.

연속해석 데이터의 상호운용성을 지원하는 CAE 미들웨어와 가시화 시스템의 개발 (Development of a CAE Middleware and a Visualization System for Supporting Interoperability of Continuous CAE Analysis Data)

  • 송인호;양정삼;조현제;최상수
    • 한국CDE학회논문집 / Vol. 15, No. 2 / pp.85-93 / 2010
  • This paper proposes a CAE data translation and visualization technique that can verify time-varying continuous analysis simulations in a virtual reality (VR) environment. In previous research, the use of CAE analysis data has been problematic because of the lack of interactive simulation controls for visualizing continuous simulation data. Moreover, research on post-processing methods for real-time verification of CAE analysis data has not been sufficient. We therefore propose a scene graph based visualization method and a post-processing method that support interoperability of continuous CAE analysis data. These methods can continuously visualize static analysis data independently of any timeline, and they can also continuously visualize dynamic analysis data that varies along the timeline. The visualization system for continuous simulation data, which includes a CAE middleware that interfaces with various formats of CAE analysis data as well as functions for visualizing continuous simulation data and operational functions, enables users to verify simulation results with more realistic scenes. We also use the system to perform a performance evaluation with regard to the visualization of continuous simulation data.
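
As a rough illustration of keying continuous analysis results to a timeline, the following library-agnostic Python sketch interpolates per-node result values between the two stored frames that bracket a requested time; the toy scene-graph classes and data are assumptions, not the authors' middleware:

```python
# Minimal, library-agnostic sketch of timeline-driven visualization of
# continuous analysis data: each node of a toy scene graph holds per-timestep
# nodal values and interpolates between the two frames bracketing time t.
from bisect import bisect_right
from dataclasses import dataclass, field


@dataclass
class AnalysisNode:
    name: str
    times: list                       # ascending simulation times
    frames: list                      # one list of nodal scalar values per time step
    children: list = field(default_factory=list)

    def values_at(self, t: float):
        """Linearly interpolate nodal values at time t."""
        if t <= self.times[0]:
            return self.frames[0]
        if t >= self.times[-1]:
            return self.frames[-1]
        i = bisect_right(self.times, t)
        t0, t1 = self.times[i - 1], self.times[i]
        w = (t - t0) / (t1 - t0)
        f0, f1 = self.frames[i - 1], self.frames[i]
        return [a + w * (b - a) for a, b in zip(f0, f1)]

    def update(self, t: float):
        """Traverse the subtree and report interpolated values at time t."""
        print(f"{self.name} @ t={t:.3f}:", self.values_at(t))
        for child in self.children:
            child.update(t)


# Hypothetical two-step dynamic result on a part with three nodes.
part = AnalysisNode("bracket", times=[0.0, 1.0],
                    frames=[[0.0, 0.2, 0.4], [1.0, 1.2, 1.4]])
root = AnalysisNode("assembly", times=[0.0, 1.0],
                    frames=[[0.0], [0.0]], children=[part])
root.update(0.25)   # a playback loop would call this once per rendered frame
```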

데이터간 의미 분석을 위한 R기반의 데이터 가중치 및 신경망기반의 데이터 예측 모형에 관한 연구 (A Novel Data Prediction Model using Data Weights and Neural Network based on R for Meaning Analysis between Data)

  • 정세훈;김종찬;심춘보
    • 한국멀티미디어학회논문지 / Vol. 18, No. 4 / pp.524-532 / 2015
  • All data created in the big data era potentially contains meaning and correlations with other data. A wide variety of data is created and stored every day across all sectors of society, and research on analyzing and grasping the meaning between data is proceeding briskly. In particular, the accuracy of meaning prediction and the data imbalance problem are important issues in the data analysis field. In this paper, we propose a data prediction model based on data weights and a neural network, implemented in R, for meaning analysis between data. The proposed data prediction model is composed of a classification model and an analysis model. The classification model applies weights based on the normal distribution and selects optimal independent variables through multiple regression analysis. The analysis model increases the prediction accuracy of the output variable through a neural network. In the performance evaluation, prediction from the raw data achieved an accuracy of 87.475%, confirming the superiority of the proposed data prediction model.
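
A rough Python sketch of the three stages described in this abstract is given below; the paper's implementation is in R, and the data, thresholds, and variable names here are assumptions:

```python
# Rough sketch (assumed details): normal-distribution sample weights, weighted
# multiple regression for variable selection, then a neural-network predictor.
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical data: 200 samples, 6 candidate independent variables.
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.3, size=200)

# 1) Weight each sample by its density under a normal distribution fitted to y,
#    so typical observations count more than extreme ones.
weights = norm.pdf(y, loc=y.mean(), scale=y.std())

# 2) Weighted multiple regression; keep variables whose scaled coefficients are
#    large relative to the largest one (the 0.5 cut-off is a sketch choice).
lin = LinearRegression().fit(X, y, sample_weight=weights)
importance = np.abs(lin.coef_) * X.std(axis=0)
selected = importance > 0.5 * importance.max()
print("selected variable indices:", np.where(selected)[0])

# 3) Neural-network prediction using only the selected variables.
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X[:, selected], y)
print("training R^2:", round(mlp.score(X[:, selected], y), 3))
```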

Development of Realtime GRID Analysis Method based on the High Precision Streaming Data

  • Lee, HyeonSoo;Suh, YongCheol
    • 한국측량학회지 / Vol. 34, No. 6 / pp.569-578 / 2016
  • With the recent advancement of surveying technology, spatial data acquisition rates and precision have improved continually. As spatial data are updated rapidly and their size grows with advancing technology, the LOD (Level of Detail) algorithm has been adopted to express data in real time in a streaming format, with spatial data divided precisely into separate levels. The existing GRID analysis utilizes a single DEM as it is, examining and analyzing all data outside the analysis area as well, which extends the analysis time in proportion to the quantity of data. Hence, this study suggests a method to reduce analysis time and data throughput by acquiring and analyzing, in real time, only the DEM data necessary for GRID analysis based on the area of analysis and the level of precision, specifically for streaming DEM data, which is utilized mostly for 3D geographic information services.
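
The core idea of fetching only the DEM tiles that the analysis area and precision level require can be sketched as follows (the tile index, bounds, and cell sizes are hypothetical, not the service described in the paper):

```python
# Minimal sketch: request only the streamed DEM tiles that intersect the
# analysis area at the precision the analysis needs, instead of the whole DEM.
from dataclasses import dataclass


@dataclass
class DemTile:
    lod: int            # 0 = coarsest level of detail
    bounds: tuple       # (min_x, min_y, max_x, max_y)
    cell_size: float    # metres per grid cell at this LOD


def intersects(a, b):
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1


def select_tiles(tiles, analysis_area, required_cell_size):
    """Keep tiles that overlap the analysis area and are precise enough."""
    return [t for t in tiles
            if t.cell_size <= required_cell_size
            and intersects(t.bounds, analysis_area)]


# Hypothetical tile index: the same region at two precision levels.
tiles = [
    DemTile(lod=0, bounds=(0, 0, 1000, 1000), cell_size=10.0),
    DemTile(lod=1, bounds=(0, 0, 500, 500), cell_size=5.0),
    DemTile(lod=1, bounds=(500, 0, 1000, 500), cell_size=5.0),
]

needed = select_tiles(tiles, analysis_area=(100, 100, 450, 300), required_cell_size=5.0)
print(f"{len(needed)} tile(s) fetched for GRID analysis instead of the full DEM")
```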

OLAP를 이용한 설계변경 분석 방법에 관한 연구 (A Method for Engineering Change Analysis by Using OLAP)

  • 도남철
    • 한국CDE학회논문집 / Vol. 19, No. 2 / pp.103-110 / 2014
  • Engineering changes are indispensable engineering and management activities for manufacturers to develop competitive products and to maintain consistency of their product data. Analysis of engineering changes provides core functionality to support decision making for engineering change management. This study aims to develop a method for the analysis of engineering changes based on On-Line Analytical Processing (OLAP), a proven database analysis technology that has been applied to various business areas. This approach automates data processing for engineering change analysis from product databases that follow an international standard for product data management (PDM), and enables analysts to examine various aspects of engineering changes with OLAP operations. The study consists of modeling a standard PDM database and a multidimensional data model for engineering change analysis, implementing the standard and multidimensional models with PDM and data cube systems, and applying the implemented data cube to core functions of engineering change management: the evaluation and propagation of engineering changes.
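
A minimal illustration of the OLAP-style aggregation described here uses a pandas pivot as a stand-in for a data cube; the dimensions, measures, and records are hypothetical, not the paper's standard PDM schema:

```python
# Illustrative sketch: engineering-change records aggregated along analysis
# dimensions with a cube-style pivot; roll-up and slice operations become
# ordinary groupings and filters.
import pandas as pd

ec_records = pd.DataFrame({
    "product":  ["pump-A", "pump-A", "pump-B", "pump-B", "pump-B"],
    "reason":   ["design error", "cost", "design error", "supplier", "cost"],
    "quarter":  ["2023Q1", "2023Q2", "2023Q1", "2023Q1", "2023Q2"],
    "lead_time_days": [12, 5, 20, 9, 7],
})

# Cube view: count of changes and mean lead time by product x reason.
cube = pd.pivot_table(ec_records,
                      index="product", columns="reason",
                      values="lead_time_days",
                      aggfunc=["count", "mean"], fill_value=0)
print(cube)

# Slice: only 2023Q1, rolled up by reason (a typical OLAP drill operation).
print(ec_records[ec_records["quarter"] == "2023Q1"]
      .groupby("reason")["lead_time_days"].agg(["count", "mean"]))
```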

데이터 정제와 그래프 분석을 이용한 대용량 공정데이터 분석 방법 (An Analysis Method of Superlarge Manufacturing Process Data Using Data Cleaning and Graphical Analysis)

  • 박재홍;변재현
    • 품질경영학회지 / Vol. 30, No. 2 / pp.72-85 / 2002
  • Advances in computer and sensor technology have made it possible to obtain superlarge manufacturing process data in real time, letting us extract meaningful information from these superlarge data sets. We propose a systematic data analysis procedure that field engineers can apply easily to manufacture quality products. The procedure consists of a data cleaning stage and a data analysis stage. The data cleaning stage constructs a database suitable for statistical analysis from the original superlarge manufacturing process data. In the data analysis stage, we suggest a graphical, easy-to-implement approach to extract practical information from the cleaned database. This study will help manufacturing companies achieve six sigma quality.
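
A short sketch of a cleaning-then-graphing flow of the kind described in this abstract (column names, limits, and data are hypothetical, not the paper's process data):

```python
# Sketch: clean raw process measurements, then summarize them graphically.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
raw = pd.DataFrame({
    "lot": np.repeat(["L1", "L2", "L3"], 100),
    "thickness_um": rng.normal(50, 2, 300),
})
# Inject typical field-data problems: duplicates, a missing value, an impossible reading.
raw = pd.concat([raw, raw.iloc[:5]], ignore_index=True)
raw.loc[10, "thickness_um"] = np.nan
raw.loc[20, "thickness_um"] = -999.0

# Data cleaning stage: drop duplicates, missing values, and out-of-range readings.
clean = (raw.drop_duplicates()
            .dropna(subset=["thickness_um"])
            .query("0 < thickness_um < 200"))
print(f"{len(raw) - len(clean)} records removed during cleaning")

# Graphical analysis stage: distribution by lot as a quick, field-friendly view.
clean.boxplot(column="thickness_um", by="lot")
plt.suptitle("")
plt.title("Thickness by lot (cleaned data)")
plt.show()
```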

Big Data Smoothing and Outlier Removal for Patent Big Data Analysis

  • Choi, JunHyeog;Jun, Sunghae
    • 한국컴퓨터정보학회논문지 / Vol. 21, No. 8 / pp.77-84 / 2016
  • In general statistical analysis, we need to make a normality assumption; if this assumption is not satisfied, we cannot expect good results from statistical data analysis. Most statistical methods for processing outliers and noise also rely on this assumption, but it is not satisfied in big data because of its large volume and heterogeneity. We therefore propose a methodology based on box plots and data smoothing for controlling outliers and noise in big data analysis; the proposed methodology does not depend on the normality assumption. In addition, we select patent documents as the target domain, because patent big data analysis is an important issue in the management of technology. We analyze patent documents using big data learning methods for technology analysis. The patent data collected from patent databases around the world are preprocessed and analyzed by text mining and statistics. However, most research on patent big data analysis has not considered the outlier and noise problem, which decreases prediction accuracy and increases the variance of parameter estimates. In this paper, we check for the existence of outliers and noise in patent big data using box plots and smoothing visualization. We use patent documents related to three-dimensional printing technology to illustrate how the proposed methodology can find noise in the searched patent big data.
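
The box-plot outlier rule and the smoothing step can be sketched briefly as follows; the patent-count series is made up for illustration and is not the searched 3D-printing data:

```python
# Sketch of the box-plot (IQR) outlier check and a simple smoothing step,
# neither of which requires a normality assumption.
import pandas as pd

counts = pd.Series([3, 4, 6, 9, 14, 21, 180, 35, 44, 52],   # one spike as a noisy value
                   index=range(2010, 2020), name="patents")

# Box-plot rule: values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] are flagged.
q1, q3 = counts.quantile([0.25, 0.75])
iqr = q3 - q1
outliers = counts[(counts < q1 - 1.5 * iqr) | (counts > q3 + 1.5 * iqr)]
print("flagged outliers:\n", outliers)

# Smoothing: a centred 3-year moving average damps remaining noise.
smoothed = counts.rolling(window=3, center=True, min_periods=1).mean()
print("smoothed series:\n", smoothed.round(1))
```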

초기 데이터 분석 로드맵을 적용한 사례 연구 (The Study on Application of Data Gathering for the site and Statistical analysis process)

  • 최은향;이상복
    • 한국품질경영학회:학술대회논문집 / 한국품질경영학회 2010년도 춘계학술대회 / pp.226-234 / 2010
  • This thesis presents a process for removing erroneous data before statistical analysis. If the validity of field data is not examined, even in a simple way, the statistical information derived from it cannot be trusted. Because statistical analysis results are produced from the data entered into the analysis process, the input data should be free of errors. In this paper, we study the application of a statistical analysis road map that can enhance on-site applicability by organizing the basic theory of, and approach to, the initial data exploration phase, an essential step before conducting statistical analysis. In this way, access to statistical analysis can be improved and the reliability of analysis results can be secured by conducting correct statistical analysis.
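
A brief sketch of the kind of initial validity check the road map calls for before statistical analysis (the columns and range rules are hypothetical):

```python
# Sketch: per-column validity report for field data before statistical analysis.
import pandas as pd


def validity_report(df: pd.DataFrame, rules: dict) -> pd.DataFrame:
    """Return a per-column count of missing values and out-of-range entries."""
    rows = []
    for col, (lo, hi) in rules.items():
        series = df[col]
        rows.append({
            "column": col,
            "missing": int(series.isna().sum()),
            "out_of_range": int(((series < lo) | (series > hi)).sum()),
        })
    return pd.DataFrame(rows)


field_data = pd.DataFrame({"temperature_c": [21.3, 22.1, None, 250.0],
                           "pressure_bar": [1.01, 0.98, 1.05, -3.0]})
print(validity_report(field_data, {"temperature_c": (0, 100),
                                   "pressure_bar": (0, 10)}))
```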


풍력발전기의 하중 측정을 위한 해석 소프트웨어의 개발 (Development of an Analysis Software for the Load Measurement of Wind Turbines)

  • 길계환;방제성;정진화
    • 풍력에너지저널 / Vol. 4, No. 1 / pp.20-29 / 2013
  • Load measurement, which is performed based on IEC 61400-13, consists of three stages: collecting huge amounts of load measurement data through a measurement campaign lasting several months; processing the measured data, including data validation and classification; and analyzing the processed data through time series analysis, load statistics analysis, frequency analysis, load spectrum analysis, and equivalent load analysis. In this research, we developed analysis software in MATLAB to save labor and to secure exact and consistent performance evaluation data when processing and analyzing load measurement data. The completed analysis software also includes functions for processing and analyzing power performance measurement data in accordance with IEC 61400-12. The software was effectively applied to process and analyze the load measurement data from a demonstration research project for a 750 kW direct-drive wind turbine generator system (KBP-750D), performed at the Daegwanryeong Wind Turbine Demonstration Complex. This paper describes the details of the analysis software and its processing and analysis stages for load measurement data, and presents the analysis results.
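
As a small illustration of one of the analysis stages listed above, the following Python sketch computes a damage equivalent load from an already cycle-counted load spectrum; the spectrum, Woehler exponent, and reference cycle count are assumptions rather than values from the KBP-750D campaign:

```python
# Illustrative equivalent-load calculation from a cycle-counted load spectrum:
#   L_eq = ( sum(n_i * L_i**m) / N_ref ) ** (1/m)


def equivalent_load(ranges, cycles, m=10.0, n_ref=1e7):
    """Damage equivalent load for load ranges with associated cycle counts."""
    damage_sum = sum(n * (L ** m) for L, n in zip(ranges, cycles))
    return (damage_sum / n_ref) ** (1.0 / m)


# Hypothetical blade-root bending-moment spectrum (kNm ranges, cycle counts).
load_ranges = [120.0, 300.0, 650.0, 900.0]
cycle_counts = [5.0e6, 8.0e5, 4.0e4, 1.2e3]

print(f"equivalent load: {equivalent_load(load_ranges, cycle_counts):.1f} kNm")
```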