• Title/Summary/Keyword: DATA PRE-PROCESSING


Efficient Data Management for Finite Element Analysis with Pre-Post Processing of Large Structures (전-후 처리 과정을 포함한 거대 구조물의 유한요소 해석을 위한 효율적 데이터 구조)

  • 박시형;박진우;윤태호;김승조
    • Proceedings of the Computational Structural Engineering Institute Conference
    • /
    • 2004.04a
    • /
    • pp.389-395
    • /
    • 2004
  • We consider the interface between a parallel distributed-memory multifrontal solver and the finite element method. We describe in detail the requirements and the data structure of the parallel FEM interface, which includes the element data and the node array. The full procedure for solving a large-scale structural problem is assumed to include pre- and post-processors, whose algorithms are not considered in this paper. The main advantage of implementing the parallel FEM interface appears when a distributed-memory system with a large number of processors is used to solve a very large-scale problem. The memory efficiency and the performance effects are examined by analyzing several examples on the Pegasus cluster system.

  • PDF
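The element-data/node-array split the abstract describes can be sketched very loosely as a per-rank mesh partition with a global-to-local node map. Everything here (`build_local_partition`, the round-robin element assignment) is an illustrative assumption, not the paper's actual interface.

```python
# Hypothetical sketch of a distributed FEM interface: each "rank" holds a
# partition of the element connectivity plus a local node array with a
# global-to-local index map.  Names and layout are illustrative only.

def build_local_partition(elements, rank, num_ranks):
    """Assign elements round-robin to ranks and build the local node array."""
    local_elems = [e for i, e in enumerate(elements) if i % num_ranks == rank]
    # Collect the global node ids referenced by this rank's elements.
    global_ids = sorted({n for elem in local_elems for n in elem})
    g2l = {g: l for l, g in enumerate(global_ids)}          # global -> local
    # Re-index element connectivity into the compact local numbering.
    local_conn = [[g2l[n] for n in elem] for elem in local_elems]
    return {"nodes": global_ids, "g2l": g2l, "elements": local_conn}

# Four triangular elements on a tiny mesh, split across 2 ranks.
mesh = [(0, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, 5)]
part0 = build_local_partition(mesh, rank=0, num_ranks=2)
part1 = build_local_partition(mesh, rank=1, num_ranks=2)
print(part0["nodes"])      # -> [0, 1, 2, 3, 4]
print(part0["elements"])   # -> [[0, 1, 2], [2, 3, 4]]
```

Only the local numbering is sent to each solver process; the `g2l` map is what lets the assembled local matrices be gathered back into the global system.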

The Study of Failure Mode Data Development and Feature Parameter's Reliability Verification Using LSTM Algorithm for 2-Stroke Low Speed Engine for Ship's Propulsion (선박 추진용 2행정 저속엔진의 고장모드 데이터 개발 및 LSTM 알고리즘을 활용한 특성인자 신뢰성 검증연구)

  • Jae-Cheul Park;Hyuk-Chan Kwon;Chul-Hwan Kim;Hwa-Sup Jang
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.60 no.2
    • /
    • pp.95-109
    • /
    • 2023
  • In the 4th industrial revolution, changes in the technological paradigm have had a direct impact on the maintenance systems of ships. The 2-stroke low-speed engine system integrates the core equipment required for propulsive power. Condition Based Management (CBM) is a technology that replaces existing calendar-based or running-time-based maintenance with predictive maintenance by monitoring the condition of machinery and diagnosing/prognosing failures. In this study, we established a framework for CBM technology development on our own, carried out engineering-based failure analysis, data development and management, and data feature analysis and pre-processing, and verified the reliability of the failure-mode DB using LSTM algorithms. We developed various simulated failure-mode scenarios for a 2-stroke low-speed engine and produced data on onshore test beds. For the analysis and pre-processing of the normal and abnormal status data acquired through the failure-mode simulation experiments, various Exploratory Data Analysis (EDA) techniques were used to extract not only data on the performance and efficiency of the 2-stroke low-speed engine but also key feature data using multivariate statistical analysis. In addition, by developing an LSTM classification algorithm, we verified the reliability of various failure-mode data with time-series characteristics.
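The pre-processing stage such a pipeline needs before any LSTM can be trained is easy to sketch: normalize each sensor channel and cut the multivariate stream into fixed-length windows. This is a generic sketch, not the paper's pipeline; the channel names and numbers are invented.

```python
# Illustrative time-series pre-processing for an LSTM classifier:
# z-score-normalize sensor channels, then cut them into sliding windows.

def zscore(channel):
    mean = sum(channel) / len(channel)
    var = sum((x - mean) ** 2 for x in channel) / len(channel)
    std = var ** 0.5 or 1.0          # guard against constant channels
    return [(x - mean) / std for x in channel]

def make_windows(series, window, stride):
    """series: list of per-timestep feature vectors -> list of windows."""
    return [series[i:i + window]
            for i in range(0, len(series) - window + 1, stride)]

# Two invented sensor channels (e.g. exhaust temperature, scavenge pressure).
temp = zscore([420, 425, 431, 440, 455, 470, 490, 515])
pres = zscore([2.1, 2.1, 2.0, 1.9, 1.9, 1.8, 1.6, 1.5])
frames = list(zip(temp, pres))          # one feature vector per timestep
windows = make_windows(frames, window=4, stride=2)
print(len(windows), len(windows[0]))    # 3 windows of 4 timesteps each
```

Each window, with a failure-mode label attached, becomes one training sequence for the classifier.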

Pre-arrangement Based Task Scheduling Scheme for Reducing MapReduce Job Processing Time (MapReduce 작업처리시간 단축을 위한 선 정렬 기반 태스크 스케줄링 기법)

  • Park, Jung Hyo;Kim, Jun Sang;Kim, Chang Hyeon;Lee, Won Joo;Jeon, Chang Ho
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.11
    • /
    • pp.23-30
    • /
    • 2013
  • In this paper, we propose a pre-arrangement based task scheduling scheme to reduce MapReduce job processing time. If a task and the data it processes are not located on the same node, the data must be transmitted to the node where the task is allocated. In that case, the job processing time increases owing to the data transmission time. To avoid this, we schedule tasks in two steps. In the first step, tasks are sorted in order of data locality. In the second step, tasks are exchanged to improve their data locality based on the location information of the data. In the performance evaluation, we compare Hadoop using the proposed method with default Hadoop on a small Hadoop cluster in terms of job processing time and the number of tasks allocated to nodes that do not hold the data they process. The results show that the proposed method lowers job processing time by around 18%. We also confirm that the number of tasks allocated to nodes without their data decreases by around 25%.
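The two-step idea can be sketched as follows: sort tasks by how constrained their data placement is, place them naively, then swap pairs whenever an exchange raises the number of data-local tasks. This is a hedged reconstruction from the abstract, not the paper's algorithm; all names are illustrative.

```python
# Two-step locality scheduling sketch: tasks map to the set of nodes
# holding their input split; slots are the nodes with a free task slot.

def schedule(tasks, slots):
    """tasks: {task: set of nodes with its data}; slots: list of node names."""
    # Step 1: sort tasks, most constrained (fewest data replicas) first.
    order = sorted(tasks, key=lambda t: len(tasks[t]))
    assign = dict(zip(order, slots))            # naive initial placement

    def local(t):  # 1 if the task runs where its data lives, else 0
        return 1 if assign[t] in tasks[t] else 0

    # Step 2: pairwise exchange whenever it strictly improves locality.
    improved = True
    while improved:
        improved = False
        for a in order:
            for b in order:
                before = local(a) + local(b)
                assign[a], assign[b] = assign[b], assign[a]
                if local(a) + local(b) > before:
                    improved = True
                else:                            # no gain: undo the swap
                    assign[a], assign[b] = assign[b], assign[a]
    return assign

tasks = {"t1": {"n1"}, "t2": {"n2"}, "t3": {"n1", "n3"}}
placement = schedule(tasks, ["n2", "n3", "n1"])
print(placement)            # every task ends up on a node holding its data
```

The loop terminates because each accepted swap strictly increases the (bounded) count of data-local tasks.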

The Effects of Playing Video Games on Children's Visual Parallel Processing (아동의 전자게임 활동이 시각적 병행처리에 미치는 영향)

  • Kim, Sook Hyun;Choi, Kyoung Sook
    • Korean Journal of Child Studies
    • /
    • v.20 no.3
    • /
    • pp.231-244
    • /
    • 1999
  • This study examined the effects of short- and long-term video game playing on children's visual parallel processing. All 64 fourth-grade subjects were above average in IQ. They were classified into high and low video game users. Instruments were a visual parallel processing task consisting of imagery integration items, computers, and the arcade video game Pac-Man. Subjects were pre-tested with the visual parallel processing task. After one week, the experimental group played video games for 15 minutes, while the control group did not. Immediately afterwards, all children were post-tested with the same task used in the pretest. The data were analyzed by ANCOVA and repeated-measures ANOVA. The results showed that playing short-term video games improved visual parallel processing and that long-term experience with video games also affected visual parallel processing. There were no differences between high and low users in visual parallel processing after playing short-term video games.

  • PDF

Pre-aggregation Index Method Based on the Spatial Hierarchy in the Spatial Data Warehouse (공간 데이터 웨어하우스에서 공간 데이터의 개념계층기반 사전집계 색인 기법)

  • Jeon, Byung-Yun;Lee, Dong-Wook;You, Byeong-Seob;Kim, Gyoung-Bae;Bae, Hae-Young
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.11
    • /
    • pp.1421-1434
    • /
    • 2006
  • Spatial data warehouses provide analytical information for decision support using SOLAP (Spatial On-Line Analytical Processing) operations. Many studies have been conducted to reduce the analysis cost of SOLAP operations using pre-aggregation methods. These methods use an index composed of fixed-size nodes to support the concept hierarchy. Therefore, they have many unused entries in sparse data areas, and it is impossible to support the concept hierarchy in dense data areas. In this paper, we propose a dynamic pre-aggregation index method based on the spatial hierarchy. The proposed method uses the level of the index to support the concept hierarchy. In sparse data areas, if sibling nodes have only a few used entries, those entries are integrated into a single node and the parent entries share that node. In dense data areas, if a node has many objects, the node is extended as a linked list of several nodes and the data is stored in the linked nodes. The proposed method thus saves the space of unused entries by integrating nodes, and it can support the concept hierarchy because a node is linked rather than divided. Experimental results show that the proposed method saves both space and aggregation search cost with a building cost similar to that of other methods.

  • PDF
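The two structural moves the abstract describes, merging sparse siblings into one shared node and chaining overflow nodes instead of splitting, can be sketched with toy list-based nodes. The node capacity and helper names here are assumptions for illustration only.

```python
# Toy sketch of the paper's two ideas: sparse sibling nodes whose live
# entries fit in one node are merged and shared by their parents; a dense
# node grows a chain of linked nodes instead of splitting, so the index
# level (concept hierarchy) is preserved.

NODE_CAPACITY = 4

def merge_sparse_siblings(siblings):
    """Merge sibling nodes (lists of entries) if the total fits one node."""
    total = [e for node in siblings for e in node]
    if len(total) <= NODE_CAPACITY:
        return [total]              # one shared node for all parents
    return siblings                 # too full: keep them separate

def insert_dense(chain, entry):
    """Append an entry, growing a linked chain instead of splitting."""
    if len(chain[-1]) >= NODE_CAPACITY:
        chain.append([])            # link a new overflow node
    chain[-1].append(entry)
    return chain

print(merge_sparse_siblings([[1], [2], [3]]))   # -> [[1, 2, 3]]
chain = [[10, 11, 12, 13]]
insert_dense(chain, 14)
print(chain)                                    # -> [[10, 11, 12, 13], [14]]
```

Merging reclaims the unused entries of sparse nodes; chaining keeps a dense region at a single hierarchy level instead of pushing a split up the tree.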

Visualization of 4-Dimensional Scattered Data Linear Interpolation Based on Data Dependent Tetrahedrization (4차원 산포된 자료 선형 보간의 가시화 -자료 값을 고려한 사면체 분할법에 의한-)

  • Lee, Kun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.6
    • /
    • pp.1553-1567
    • /
    • 1996
  • The numerous applications of surface interpolation include the modeling and visualization of phenomena. A tetrahedrization is one of the pre-processing steps for 4-D space interpolation. The quality of a piecewise linear interpolation in 4-D space depends not only on the distribution of the data points in $R^3$, but also on the data values. We show that the quality of the approximation can be improved by a data-dependent tetrahedrization, through visualization of the 4-D space. This paper discusses the Delaunay tetrahedrization method (sphere criterion) and one of the data-dependent tetrahedrization methods (least-squares fitting criterion). It also discusses new data-dependent criteria: 1) gradient difference, and 2) jump in the normal-direction derivative.

  • PDF
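The sphere criterion the paper starts from says a tetrahedron is Delaunay when no other sample point lies inside its circumsphere. A sketch of that test, computing the circumcenter by Cramer's rule to sidestep sign-convention pitfalls, could look like this (a minimal illustration, not the paper's implementation):

```python
# Delaunay "sphere criterion" check for a single tetrahedron: solve the
# 3x3 linear system |x-a|^2 = |x-b|^2 = |x-c|^2 = |x-d|^2 for the
# circumcenter, then compare distances.

def det(m):
    """Recursive Laplace expansion; fine for tiny matrices."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] *
               det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def circumsphere(a, b, c, d):
    """Circumcenter and squared radius of a tetrahedron (Cramer's rule)."""
    rows, rhs = [], []
    for v in (b, c, d):
        rows.append([2 * (v[i] - a[i]) for i in range(3)])
        rhs.append(sum(v[i] ** 2 - a[i] ** 2 for i in range(3)))
    D = det(rows)
    center = []
    for j in range(3):
        m = [r[:] for r in rows]
        for i in range(3):
            m[i][j] = rhs[i]
        center.append(det(m) / D)
    r2 = sum((center[i] - a[i]) ** 2 for i in range(3))
    return center, r2

def in_circumsphere(a, b, c, d, p):
    center, r2 = circumsphere(a, b, c, d)
    return sum((p[i] - center[i]) ** 2 for i in range(3)) < r2

# Unit tetrahedron: circumcenter (0.5, 0.5, 0.5), squared radius 0.75.
a, b, c, d = (0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(in_circumsphere(a, b, c, d, (0.25, 0.25, 0.25)))  # interior -> True
print(in_circumsphere(a, b, c, d, (5, 5, 5)))           # far away -> False
```

A data-dependent criterion replaces this purely geometric test with a score that also looks at the sample values, e.g. the least-squares fit or the gradient jump across shared faces.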

Developing A Pre- and Post-Processor for Building Analysis (건축구조해석을 위한 선후처리 프로그램의 개발)

  • 이정재
    • Magazine of the Korean Society of Agricultural Engineers
    • /
    • v.36 no.2
    • /
    • pp.31-43
    • /
    • 1994
  • General concepts and overall procedures of an interactive graphical user interface, a pre- and post-processor for building analysis, are introduced. Attention is focused on the data structures and the modeling operators the program should have to ensure the integrity of its database. An example of the model building process is presented to illustrate its capabilities and its facilities for modification and processing.

  • PDF

EEG Signal Characteristic Analysis for Monitoring of Anesthesia Depth Using Bicoherence Analysis Method (바이코히어런스 분석 기법을 이용한 마취 단계별 뇌파의 특성 분석)

  • Park Jun-Mo;Park Jong-Duk;Jeon Gye-Rok;Huh Young
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.55 no.1
    • /
    • pp.35-41
    • /
    • 2006
  • Although researchers have studied the problem for a long time, no criteria for the depth of anesthesia have been established, so anesthetists cannot predict a patient's reaction. Patients therefore face potential risks such as toxic side effects, late awakening, early awakening, and stress reactions. EEG signals were recorded from twenty-five consenting patients during operations under Enflurane anesthesia as the anesthesia progressed. The EEG was divided by anesthesia stage into pre-anesthesia, before skin incision, operation 1, operation 2, awakening, and post-anesthesia. Pre-processing, baseline correction, and linear detrending were applied to obtain more reliable data, and the bicoherence of the EEG data was then calculated. During pre-anesthesia and post-anesthesia, a strong appearance rate of the bicoherence value is observed in the high-frequency range (15~30 Hz); during anesthesia, a strong appearance rate appears in the low-frequency range (0~10 Hz). From the percentage of the appearance rate of the bicoherence, four parameters, Bicpara#1, Bicpara#2, Bicpara#3, and Bicpara#4, were extracted; of these, Bicpara#2 and Bicpara#4 best showed the progress of anesthesia. In addition, the average bicoherence was calculated over two separate frequency areas, denoted BicHz#1 and BicHz#2, and the change in BicHz#1/BicHz#2 was observed; BicHz#1, BicHz#2, and BicHz#1/BicHz#2 also showed the progress of anesthesia effectively. In conclusion, we confirmed the anesthesia progress phases, established the usefulness of the bispectrum and bicoherence parameters, and evaluated the depth of anesthesia. In the future, this approach may support diagnosis and help prevent medical accidents due to anesthesia.
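The quantity tracked here, bicoherence, is the segment-averaged, normalized bispectrum: it approaches 1 at a frequency pair (f1, f2) when the phases at f1, f2 and f1+f2 are quadratically coupled. A minimal estimator (plain DFT, pure Python, synthetic test signals rather than EEG) can be sketched as:

```python
# Segment-averaged bicoherence estimate:
#   b^2(f1,f2) = |sum X(f1)X(f2)X*(f1+f2)|^2
#                / (sum |X(f1)X(f2)|^2 * sum |X(f1+f2)|^2)
import cmath, math, random

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def bicoherence(segments, f1, f2):
    num, den1, den2 = 0, 0.0, 0.0
    for seg in segments:
        X = dft(seg)
        prod = X[f1] * X[f2]
        num += prod * X[f1 + f2].conjugate()
        den1 += abs(prod) ** 2
        den2 += abs(X[f1 + f2]) ** 2
    return abs(num) ** 2 / (den1 * den2)

def make_segment(n, coupled):
    """Components at bins 3, 5, 8; phase at bin 8 coupled or random."""
    p1, p2 = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    p3 = p1 + p2 if coupled else random.uniform(0, 2 * math.pi)
    return [math.cos(2 * math.pi * 3 * t / n + p1)
            + math.cos(2 * math.pi * 5 * t / n + p2)
            + math.cos(2 * math.pi * 8 * t / n + p3) for t in range(n)]

random.seed(0)
n = 64
coupled = [make_segment(n, True) for _ in range(16)]
uncoupled = [make_segment(n, False) for _ in range(16)]
print(bicoherence(coupled, 3, 5) > bicoherence(uncoupled, 3, 5))  # True
```

With phase coupling the per-segment phasors add coherently and the estimate is 1; with independent phases they average out toward 1/16 here.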

Bit-level Array Structure Representation of Weight and Optimization Method to Design Pre-Trained Neural Network (학습된 신경망 설계를 위한 가중치의 비트-레벨 어레이 구조 표현과 최적화 방법)

  • Lim, Guk-Chan;Kwak, Woo-Young;Lee, Hyun-Soo
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.39 no.9
    • /
    • pp.37-44
    • /
    • 2002
  • This paper proposes an efficient digital hardware design method that uses the fixed weights of a pre-trained neural network. For this, the arithmetic operations of the PEs (Processing Elements) are represented as matrix-vector multiplications. The relationship between the fixed weights and the input data is expressed as a bit-level array architecture composed of operation nodes. To minimize the number of operation nodes, this paper proposes a node elimination method and common-node sharing based on the bit patterns of the weights. FPGA simulation results show the efficiency of the method in hardware cost and operation speed at full precision, and the proposed design method makes it possible to implement many PEs on a single chip.
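The core observation can be illustrated in software: with a fixed weight, multiplication reduces to shift-and-add over the weight's set bits, so zero bits contribute no operation node at all, and weights sharing a bit pattern could share one adder tree. This is a toy model of the idea, not the paper's circuit.

```python
# Bit-level view of a fixed-weight multiply: only set bits survive as
# operation nodes (node elimination of zero bits).

def shift_add_nodes(weight):
    """Return the shift amounts (operation nodes) a fixed weight needs."""
    return [i for i in range(weight.bit_length()) if (weight >> i) & 1]

def multiply_fixed(weight_nodes, x):
    """Multiply x by the fixed weight using only shifts and adds."""
    return sum(x << s for s in weight_nodes)

w = 0b1010_0110                    # weight = 166: four set bits -> 4 nodes
nodes = shift_add_nodes(w)
print(nodes)                       # -> [1, 2, 5, 7]
print(multiply_fixed(nodes, 3))    # -> 498 (= 166 * 3)
```

In hardware, each shift is free wiring and each node is an adder, so the cost of a PE scales with the number of set bits rather than the word width.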

ENHANCEMENT AND SMOOTHING OF HYPERSPECTRAL REMOTE SENSING DATA BY ADVANCED SCALE-SPACE FILTERING

  • Konstantinos, Karantzalos;Demetre, Argialas
    • Proceedings of the KSRS Conference
    • /
    • v.2
    • /
    • pp.736-739
    • /
    • 2006
  • While hyperspectral data are very rich in information, their processing poses several challenges such as computational requirements, noise removal, and relevant information extraction. In this paper, the application of advanced scale-space filtering to selected hyperspectral bands was investigated. In particular, a pre-processing tool consisting of anisotropic diffusion and morphological leveling filtering was developed, aiming at an edge-preserving smoothing and simplification of hyperspectral data, procedures which are of fundamental importance for feature extraction and object detection. Two scale-space parameters define the extent of image smoothing (anisotropic diffusion iterations) and image simplification (scale of the morphological levelings). Experimental results demonstrated the effectiveness of the developed scale-space filtering for the enhancement and smoothing of hyperspectral remote sensing data, and its advantages against watershed over-segmentation problems and for edge detection.

  • PDF
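The anisotropic-diffusion half of such a tool is easy to sketch in 1-D: a Perona-Malik conductance that falls off with gradient magnitude smooths flat regions while leaving strong edges nearly untouched, and the iteration count is the first scale parameter the abstract mentions. A minimal sketch (illustrative parameter values, not the paper's):

```python
# 1-D Perona-Malik anisotropic diffusion: g(grad) = 1 / (1 + (grad/kappa)^2)
# shrinks the diffusion flux across strong edges, preserving them.

def anisotropic_diffusion(signal, iterations, kappa=10.0, dt=0.2):
    u = list(signal)
    for _ in range(iterations):
        new = u[:]
        for i in range(1, len(u) - 1):
            east, west = u[i + 1] - u[i], u[i - 1] - u[i]
            g_e = 1.0 / (1.0 + (east / kappa) ** 2)   # conductance, east
            g_w = 1.0 / (1.0 + (west / kappa) ** 2)   # conductance, west
            new[i] = u[i] + dt * (g_e * east + g_w * west)
        u = new
    return u

# Noisy step edge: the ripples flatten, the 0 -> 100 jump stays sharp.
noisy = [0, 2, -1, 1, 0, 100, 99, 101, 100, 100]
smooth = anisotropic_diffusion(noisy, iterations=20)
print([round(v, 1) for v in smooth])
```

The morphological-leveling stage would then simplify the diffused bands at a chosen structuring-element scale, the second scale parameter.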