• Title/Summary/Keyword: 분해기법 (decomposition technique)

A Development on the Fault Prognosis of Bearing with Empirical Mode Decomposition and Artificial Neural Network (경험적 모드 분해법과 인공 신경 회로망을 적용한 베어링 상태 분류 기법)

  • Park, Byeonghui;Lee, Changwoo
    • Journal of the Korean Society for Precision Engineering / v.33 no.12 / pp.985-992 / 2016
  • Bearings are used throughout industrial equipment. Their lifetime is often shorter than anticipated at the time of purchase because of environmental wear and processing and machining errors. Monitoring bearing condition is important, since defects and damage can lead to significant issues in production processes. In this study, we developed a method for diagnosing bearing faults. Fault indicators were determined using the kurtosis, average, and standard deviation of the vibration signal. Intrinsic mode functions were then extracted from the data of the selected axis using empirical mode decomposition, selected according to their frequency content, and used to build the training data of an ANN (Artificial Neural Network), which classified the normal and fault conditions of the bearing.
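
A minimal sketch of the kind of pipeline this abstract describes, not the authors' implementation: it assumes the PyEMD package (EMD-signal) for empirical mode decomposition and scikit-learn's MLPClassifier for the neural network, and it uses synthetic vibration signals in place of real bearing data.

```python
import numpy as np
from scipy.stats import kurtosis
from PyEMD import EMD                        # pip install EMD-signal
from sklearn.neural_network import MLPClassifier

def imf_features(signal, n_imfs=4):
    """EMD-decompose a 1-D vibration signal and return simple statistics
    (mean, std, kurtosis) of the first n_imfs intrinsic mode functions."""
    imfs = EMD().emd(np.asarray(signal, dtype=float))
    feats = []
    for k in range(n_imfs):
        imf = imfs[k] if k < len(imfs) else np.zeros_like(signal)
        feats += [np.mean(imf), np.std(imf), kurtosis(imf)]
    return np.array(feats)

# Toy data: "normal" = noise + tone, "fault" = the same plus impulsive bursts.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2048)
normal = [rng.normal(0, 0.1, t.size) + 0.2 * np.sin(2 * np.pi * 30 * t)
          for _ in range(20)]
fault = [rng.normal(0, 0.1, t.size) + 0.2 * np.sin(2 * np.pi * 30 * t)
         + (rng.random(t.size) > 0.99) * 1.5 for _ in range(20)]

X = np.vstack([imf_features(s) for s in normal + fault])
y = np.array([0] * 20 + [1] * 20)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
print("training accuracy:", clf.score(X, y))
```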

Investigation on the Unsteadiness of a Low Reynolds Number Confined Impinging Jet using POD Analysis (POD 기법을 이용한 저 레이놀즈 수 충돌 제트의 비정상 거동 연구)

  • An, Nam-Hyun;Lee, In-Won
    • Journal of the Korean Society of Visualization / v.6 no.1 / pp.34-40 / 2008
  • The flow characteristics of a confined slot jet impinging on a flat plate were investigated in the low Reynolds number regime (Re$\leq$1,000) using a time-resolved particle image velocimetry technique. The jet Reynolds number was varied from 404 to 1026, the range in which the transitional regime is presumed to exist. It was found that vortical structures in the shear layer develop with increasing Reynolds number and that the jet remains steady at a Reynolds number of 404. The vortical structures and their temporal evolution were identified, and the results were compared with previous numerical studies.
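
A minimal sketch of snapshot POD, the technique named in the title, not tied to the paper's PIV data: velocity snapshots are stacked as columns, the temporal mean is subtracted, and an SVD yields spatial modes ranked by energy. The synthetic data below stand in for the measured flow fields.

```python
import numpy as np

rng = np.random.default_rng(1)
n_points, n_snapshots = 500, 200            # grid points, time steps
# Synthetic snapshots: two coherent "structures" plus noise.
x = np.linspace(0, 2 * np.pi, n_points)[:, None]
t = np.linspace(0, 10, n_snapshots)[None, :]
snapshots = (np.sin(x) * np.cos(2 * np.pi * t)
             + 0.3 * np.sin(3 * x) * np.sin(4 * np.pi * t)
             + 0.05 * rng.normal(size=(n_points, n_snapshots)))

mean_field = snapshots.mean(axis=1, keepdims=True)
fluct = snapshots - mean_field              # POD is applied to the fluctuations

U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
energy = s**2 / np.sum(s**2)
print("energy captured by first 2 modes:", energy[:2].sum())
# U[:, k] is the k-th spatial mode; s[k] * Vt[k] is its temporal coefficient.
```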

Geometrical Feature-Based Detection of Pure Facial Regions (기하학적 특징에 기반한 순수 얼굴영역 검출기법)

  • 이대호;박영태
    • Journal of KIISE:Software and Applications / v.30 no.7_8 / pp.773-779 / 2003
  • Locating the exact positions of facial components is a key preprocessing step for realizing highly accurate and reliable face recognition schemes. In this paper, we propose a simple but powerful method for detecting isolated facial components such as the eyebrows, eyes, and mouth, which are horizontally oriented and have relatively dark gray levels. The method is based on shape-resolving, locally optimum thresholding, which can guarantee isolated detection of each component. We show that pure facial regions can be determined by grouping facial features that satisfy simple geometric constraints derived from the unique facial structure. In a test on over 1000 images in the AR face database, pure facial regions were detected correctly for every face image without glasses. A few errors occurred only for face images with thick-framed glasses, because the eyebrow pairs were occluded. The proposed scheme may be best suited to a later classification stage using either mappings or template matching, because of its ability to handle rotational and translational variations.
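
A rough illustration of the general idea only, not the paper's shape-resolving locally optimum thresholding: threshold dark regions of a grayscale image, label connected components, keep horizontally elongated blobs (candidate eyebrows/eyes/mouth), and group two of them that are roughly level as an eye pair. All thresholds below are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def candidate_components(gray, dark_quantile=0.2):
    """Return bounding slices of dark, horizontally elongated blobs."""
    thresh = np.quantile(gray, dark_quantile)
    labels, _ = ndimage.label(gray < thresh)
    boxes = []
    for sl in ndimage.find_objects(labels):
        h = sl[0].stop - sl[0].start
        w = sl[1].stop - sl[1].start
        if w > 1.5 * h:                      # horizontally oriented component
            boxes.append(sl)
    return boxes

def find_eye_pair(boxes, max_dy=10):
    """Pick two components whose vertical centers nearly coincide."""
    centers = [((b[0].start + b[0].stop) / 2, (b[1].start + b[1].stop) / 2)
               for b in boxes]
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            if abs(centers[i][0] - centers[j][0]) < max_dy:
                return boxes[i], boxes[j]
    return None

# Toy usage on a synthetic "face": two dark bars on a bright background.
img = np.full((100, 100), 200.0)
img[30:35, 20:40] = 10                       # left eye
img[31:36, 60:80] = 10                       # right eye
print("eye pair found:", find_eye_pair(candidate_components(img)) is not None)
```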

Proposing the Methods for Accelerating Computational Time of Large-Scale Commute Time Embedding (대용량 컴뮤트 타임 임베딩을 위한 연산 속도 개선 방식 제안)

  • Hahn, Hee-Il
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.2 / pp.162-170 / 2015
  • Commute time embedding involves computing the spectral decomposition of the graph Laplacian. This requires a computational burden proportional to $O(n^3)$, which is not suitable for large-scale datasets. Many methods have been proposed to accelerate the computation, usually employing the Nyström method to approximate the spectral decomposition of a reduced graph Laplacian; these suffer from a loss of information due to the sampling process. This paper proposes to reduce such errors by approximating the spectral decomposition of the graph Laplacian using that of the affinity matrix. However, this cannot be applied as the data size increases, because it also requires a spectral decomposition. Another method, called approximate commute time embedding, is therefore implemented, which does not require a spectral decomposition. The performance of the proposed algorithms is analyzed by computing the commute time on a patch graph.
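
A minimal sketch of the exact commute-time embedding via the spectral decomposition of the graph Laplacian, i.e. the $O(n^3)$ baseline the paper tries to avoid, not the accelerated methods it proposes. Node coordinates are eigenvectors scaled by inverse square-root eigenvalues; squared Euclidean distances in the embedding equal commute times.

```python
import numpy as np

def commute_time_embedding(W):
    """W: symmetric nonnegative affinity (adjacency) matrix, shape (n, n)."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                       # combinatorial graph Laplacian
    vals, vecs = np.linalg.eigh(L)           # O(n^3) spectral decomposition
    nz = vals > 1e-10                        # drop the (near-)zero eigenvalue
    vol = d.sum()                            # graph volume = 2 * number of edges
    # Rows are node coordinates; squared distances equal commute times.
    return np.sqrt(vol) * vecs[:, nz] / np.sqrt(vals[nz])

# Small example: a 4-node path graph 0-1-2-3.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = commute_time_embedding(W)
print("commute time between nodes 0 and 1:", np.sum((X[0] - X[1]) ** 2))
```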

Reliability-based Design Optimization using Multiplicative Decomposition Method (곱분해기법을 이용한 신뢰성 기반 최적설계)

  • Kim, Tae-Kyun;Lee, Tae-Hee
    • Journal of the Computational Structural Engineering Institute of Korea / v.22 no.4 / pp.299-306 / 2009
  • Design optimization is a method for finding the optimum point that minimizes the objective function while satisfying the design constraints. Conventional optimization does not consider the uncertainty originating from modeling or manufacturing processes, so the optimum point is often located on the boundaries of the constraints. Reliability-based design optimization (RBDO) combines an optimization technique with a reliability analysis that calculates the reliability of the system. Reliability analysis methods can be classified into simulation methods, fast probability integration methods, and moment-based reliability methods. In the most commonly used MPP-based reliability analysis, one of the fast probability integration methods, cost and numerical error can increase during the transformation of the constraints into standard normal space when many MPPs exist. In this paper, the multiplicative decomposition method is used as the reliability analysis for RBDO, and a sensitivity analysis is performed so that a gradient-based optimization algorithm can be applied. Mathematical and engineering examples are presented to illustrate the whole RBDO process.
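
A minimal double-loop RBDO sketch, not the paper's multiplicative decomposition method: the outer loop optimizes the mean design variables, and the reliability of a single toy constraint is approximated by a mean-value first-order reliability index $\beta = g(\mu) / \lVert \nabla g(\mu)\,\sigma \rVert$, with the requirement $\beta \geq \beta_{target}$. The limit-state function, standard deviations, and target are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

sigma = np.array([0.1, 0.1])     # std. dev. of the two random design variables
beta_target = 3.0                # target reliability index (~99.87 %)

def g(x):
    """Limit state: safe when g(x) >= 0 (a common toy constraint)."""
    return x[0] ** 2 * x[1] / 20.0 - 1.0

def grad_g(x):
    return np.array([2.0 * x[0] * x[1] / 20.0, x[0] ** 2 / 20.0])

def beta(mu):
    """Mean-value first-order reliability index of the constraint at design mu."""
    return g(mu) / np.linalg.norm(grad_g(mu) * sigma)

res = minimize(lambda mu: mu[0] + mu[1],         # toy cost: sum of the means
               x0=np.array([4.0, 4.0]),
               bounds=[(0.5, 10.0), (0.5, 10.0)],
               constraints=[{"type": "ineq",
                             "fun": lambda mu: beta(mu) - beta_target}],
               method="SLSQP")
print("optimal means:", res.x)
print("reliability estimate:", norm.cdf(beta(res.x)))
```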

An Analysis of Research Trends in Computational Thinking using Text Mining Technique (텍스트 마이닝 기법을 활용한 컴퓨팅 사고력 연구 동향 분석)

  • Lee, Jaeho;Jang, Junhyung
    • Journal of The Korean Association of Information Education / v.23 no.6 / pp.543-550 / 2019
  • In 2006, Jeannette Wing defined computational thinking, and in 2013 the UK introduced SW education as a formal curriculum. This study collected research papers related to computational thinking, whose importance has recently been growing, and analyzed them using text mining. In the first stage, a CONCOR analysis was conducted with computational thinking as the keyword. In the second stage, text mining was applied to the components of computational thinking as reported in representative domestic and foreign academic journals. As a result of the two analyses, first, abstraction, algorithms, data processing, problem decomposition, and pattern recognition were found to be at the core of research on the components of computational thinking. Second, research on convergence education centered on computational thinking and the science and mathematics subjects has been actively conducted. Third, research on computational thinking has been expanding since 2010. Research on the classification and definition of computational thinking and its components, and on applying them in educational settings, should continue steadily.
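
A rough sketch of the kind of keyword analysis described, not the authors' pipeline: build a keyword co-occurrence matrix from a few toy abstracts and run a simple CONCOR-style step, repeatedly taking correlations of the correlation matrix until entries converge toward ±1 and then splitting the keywords into two blocks. The abstracts and keyword list are made-up examples.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

abstracts = [
    "abstraction and algorithm design for problem decomposition",
    "pattern recognition and data processing in computational thinking",
    "algorithm and abstraction in mathematics education",
    "data processing and pattern recognition with block coding",
]
keywords = ["abstraction", "algorithm", "data", "pattern", "decomposition"]

X = CountVectorizer(vocabulary=keywords).fit_transform(abstracts).toarray()
cooc = X.T @ X                               # keyword-by-keyword co-occurrence

M = np.corrcoef(cooc)
for _ in range(50):                          # iterated correlations (CONCOR idea)
    M = np.corrcoef(M)
block = M[0] > 0                             # split by sign against keyword 0
print("block 1:", [k for k, b in zip(keywords, block) if b])
print("block 2:", [k for k, b in zip(keywords, block) if not b])
```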

Study on the Scenario Earthquake Determining Methods Based on the Probabilistic Seismic Hazard Analysis (확률론적 지진재해도를 이용한 시나리오 지진의 결정기법에 관한 연구)

  • Choi, In-Kil;Nakajima, Masato;Choun, Young-Sun;Yun, Kwan-Hee
    • Journal of the Earthquake Engineering Society of Korea / v.8 no.6 s.40 / pp.23-29 / 2004
  • The design earthquake used for the seismic analysis and design of an NPP (Nuclear Power Plant) is determined by deterministic or probabilistic methods. A probabilistic seismic hazard analysis (PSHA) of the nuclear power plant sites had been completed as part of the probabilistic seismic risk assessment, and the probabilistic approach has become a reasonable way to determine design earthquakes for NPPs. In this study, a method for defining probability-based scenario earthquakes was established and, as a sample calculation, probability-based scenario earthquakes were estimated by de-aggregation of the probabilistic seismic hazard. Using this method, it is possible to define probability-based scenario earthquakes for the seismic design and seismic safety evaluation of structures. Rational seismic source maps and attenuation equations still need to be developed to obtain reasonable scenario earthquakes.
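
A toy de-aggregation sketch, not the paper's hazard model: given hypothetical annual occurrence rates in magnitude-distance bins and a simple log-normal attenuation relation, compute each bin's contribution to the annual rate of exceeding a target PGA and pick the modal bin as the scenario earthquake. All rates and attenuation coefficients below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

mags = np.arange(5.0, 7.6, 0.5)              # magnitude bin centers
dists = np.array([10.0, 30.0, 50.0, 100.0])  # distance bin centers (km)
# Hypothetical annual occurrence rates for each (M, R) bin.
rates = np.outer(10.0 ** (4.0 - mags), np.array([0.1, 0.3, 0.4, 0.2]))

def prob_exceed(pga_target, M, R, sigma_ln=0.6):
    """P[PGA > target | M, R] under a toy log-normal attenuation relation."""
    ln_median = -3.5 + 1.0 * M - 1.3 * np.log(R + 10.0)
    return norm.sf((np.log(pga_target) - ln_median) / sigma_ln)

target = 0.2                                 # target PGA in g
M_grid, R_grid = np.meshgrid(mags, dists, indexing="ij")
contrib = rates * prob_exceed(target, M_grid, R_grid)

frac = contrib / contrib.sum()               # de-aggregation weights
i, j = np.unravel_index(np.argmax(frac), frac.shape)
print(f"modal scenario: M={mags[i]:.1f} at R={dists[j]:.0f} km "
      f"({100 * frac[i, j]:.1f} % of hazard)")
```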

Millimeter-Wave(W-Band) Forward-Looking Super-Resolution Radar Imaging via Reweighted ℓ1-Minimization (재가중치 ℓ1-최소화를 통한 밀리미터파(W밴드) 전방 관측 초해상도 레이다 영상 기법)

  • Lee, Hyukjung;Chun, Joohwan;Song, Sungchan
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.28 no.8 / pp.636-645 / 2017
  • Scanning radars are widely used for applications such as ground surveillance and disaster rescue. However, their range resolution is limited by the transmitted bandwidth and their cross-range resolution by the beam width. In this paper, we propose a method for super-resolution radar imaging. If the distribution of reflectivity is sparse, it can be treated as a sparse signal, so the problem can be formulated as a compressive sensing problem. A 2D super-resolution radar image is generated via reweighted $\ell_1$-minimization. In the simulation results, we compare the images obtained by the proposed method with those of conventional Orthogonal Matching Pursuit (OMP) and Synthetic Aperture Radar (SAR) processing.
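
A minimal sketch of reweighted $\ell_1$-minimization in the Candès-Wakin-Boyd style, not the paper's radar signal model: the inner weighted-LASSO problem is solved with ISTA, and the weights are updated as $w_i = 1/(|x_i| + \delta)$ so that small coefficients are penalized more strongly on the next pass. The measurement matrix, noise level, and regularization parameter are illustrative assumptions.

```python
import numpy as np

def ista_weighted_lasso(A, y, w, lam, n_iter=500):
    """Minimize 0.5*||Ax - y||^2 + lam * sum(w_i * |x_i|) by ISTA."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)  # soft threshold
    return x

def reweighted_l1(A, y, lam=0.05, n_outer=5, delta=1e-3):
    w = np.ones(A.shape[1])
    for _ in range(n_outer):
        x = ista_weighted_lasso(A, y, w, lam)
        w = 1.0 / (np.abs(x) + delta)        # reweighting step
    return x

# Toy compressive-sensing example: recover a sparse reflectivity vector.
rng = np.random.default_rng(2)
n, m, k = 128, 48, 5                         # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
A = rng.normal(0, 1 / np.sqrt(m), (m, n))
y = A @ x_true + 0.01 * rng.normal(size=m)

x_hat = reweighted_l1(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```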

A Study on Signal Sub Spatial Method for Removing Noise and Interference of Mobile Target (이동 물체의 잡음과 간섭제거를 위한 신호 부 공간기법에 대한 연구)

  • Lee, Min-Soo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.8 no.3 / pp.224-228 / 2015
  • In this paper, we study a method for estimating the desired signals received by an array antenna. A subspace-based direction-of-arrival algorithm and an adaptive array antenna are applied in order to remove interference and noise from the received signals. Because the array response vector of the adaptive array is probabilistic, the direction of arrival of the targets must be estimated correctly to update the weight vector. The desired signals are estimated by updating the covariance matrix after removing the interference and noise components from the received signals. The signals are estimated using an eigendecomposition: based on the eigenvalues, the high-resolution direction-of-arrival estimation algorithm divides the observation space into a signal subspace and a noise subspace. Through simulation, we compare the proposed method with the conventional method.
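
A minimal sketch of the signal/noise subspace idea using a MUSIC-style DOA estimator, not the paper's full adaptive-array processing: eigendecompose the sample covariance of the array snapshots, take the eigenvectors of the smallest eigenvalues as the noise subspace, and scan steering vectors for directions nearly orthogonal to it. The array geometry, SNR, and source directions are illustrative assumptions.

```python
import numpy as np
from scipy.signal import find_peaks

def steering(theta_deg, n_elems, d=0.5):
    """Uniform linear array steering vector (element spacing d in wavelengths)."""
    k = np.arange(n_elems)
    return np.exp(-2j * np.pi * d * k * np.sin(np.radians(theta_deg)))

rng = np.random.default_rng(3)
n_elems, n_snap = 8, 200
true_doas = [-20.0, 25.0]                    # degrees

# Simulate snapshots: two uncorrelated sources plus white noise.
A = np.column_stack([steering(t, n_elems) for t in true_doas])
S = (rng.normal(size=(2, n_snap)) + 1j * rng.normal(size=(2, n_snap))) / np.sqrt(2)
N = 0.1 * (rng.normal(size=(n_elems, n_snap))
           + 1j * rng.normal(size=(n_elems, n_snap)))
X = A @ S + N

R = X @ X.conj().T / n_snap                  # sample covariance matrix
vals, vecs = np.linalg.eigh(R)               # eigenvalues in ascending order
En = vecs[:, : n_elems - len(true_doas)]     # noise subspace (smallest eigenvalues)

grid = np.arange(-90.0, 90.1, 0.5)
spectrum = np.array([1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
                     for a in (steering(t, n_elems) for t in grid)])
idx, _ = find_peaks(spectrum)
top = idx[np.argsort(spectrum[idx])[-2:]]    # two strongest spectral peaks
print("estimated DOAs (deg):", np.sort(grid[top]))
```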

A Performance Evaluation of Factors Influencing the ROI Coding Quality in JPEG2000 (JPEG2000에서 ROI 코딩 품질에 영향을 미치는 요소의 성능 평가)

  • Ki Jun-Kang;Kim Hyun-Joo;Lee Jum-Sook
    • Journal of the Korea Society of Computer and Information / v.11 no.4 s.42 / pp.197-206 / 2006
  • One of the most significant features of JPEG2000, the emerging still-image standard, is ROI (Region of Interest) coding. JPEG2000 provides a number of ROI coding mechanisms and ROI parameters, and appropriate values must be selected when applying them to an application. In this paper, we evaluate how the ROI coding mechanisms and ROI parameters affect the ROI quality and the whole-image quality. The ROI coding mechanisms are Maxshift and Implicit, and the parameters are tile size, ROI size, code-block size, number of DWT decomposition levels, and ROI importance. The larger the tile size, the better the quality; the larger the ROI size, the ROI importance, and the number of DWT decomposition levels, the worse the quality. With a code-block size of $32{\times}32$, both Maxshift and Implicit give the best quality.
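
A small, codec-agnostic helper of the kind such an evaluation needs, not tied to any particular JPEG2000 implementation: it computes PSNR separately over the ROI and over the whole image, so the effect of tile size, code-block size, DWT levels, and ROI importance can be compared on the decoded results. The toy "decoded" image below simply simulates light distortion inside the ROI and heavier distortion outside it.

```python
import numpy as np

def psnr(orig, decoded, peak=255.0):
    mse = np.mean((orig.astype(float) - decoded.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def roi_vs_whole_psnr(orig, decoded, roi_slice):
    """roi_slice: e.g. (slice(64, 192), slice(64, 192)) for a 128x128 ROI."""
    return {"roi": psnr(orig[roi_slice], decoded[roi_slice]),
            "whole": psnr(orig, decoded)}

# Toy example: pretend the codec preserved the ROI well and degraded the rest.
rng = np.random.default_rng(4)
orig = rng.integers(0, 256, (256, 256)).astype(np.uint8)
decoded = orig + rng.normal(0, 8, orig.shape)             # heavy distortion
roi = (slice(64, 192), slice(64, 192))
decoded[roi] = orig[roi] + rng.normal(0, 1, (128, 128))   # light distortion in ROI
print(roi_vs_whole_psnr(orig, np.clip(decoded, 0, 255)))
```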
