• Title/Summary/Keyword: Space Partitioning Techniques


Head Pose Estimation Using Error Compensated Singular Value Decomposition for 3D Face Recognition (3차원 얼굴 인식을 위한 오류 보상 특이치 분해 기반 얼굴 포즈 추정)

  • 송환종;양욱일;손광훈
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.6
    • /
    • pp.31-40
    • /
    • 2003
  • Most face recognition systems are based on 2D images and are applied in many applications. However, it is difficult to recognize a face when the pose varies severely. Therefore, head pose estimation is a necessary procedure for improving the recognition rate when a face is not frontal. In this paper, we propose a novel head pose estimation algorithm for 3D face recognition. Given the 3D range image of an unknown face as input, we automatically extract facial feature points based on the face curvature. We propose an Error Compensated Singular Value Decomposition (EC-SVD) method based on the extracted facial feature points. We obtain the initial rotation angle using the SVD method, and then perform a refinement procedure to compensate for the remaining errors. The proposed algorithm exploits the extracted facial features in the normalized 3D face space. In addition, we propose a 3D nearest neighbor classifier to select face candidates for 3D face recognition. Simulation results demonstrate the efficiency and validity of the proposed algorithm.
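The core of any SVD-based pose estimation is recovering the rotation that aligns two corresponding point sets. A minimal sketch of that step follows (the standard Kabsch/SVD alignment, not the paper's full EC-SVD with its error-compensation refinement); the function name and the NumPy formulation are illustrative assumptions:

```python
import numpy as np

def estimate_rotation_svd(src, dst):
    """Estimate the rigid rotation aligning two corresponding point sets
    (Kabsch/SVD method). src, dst: (N, 3) arrays of matched feature points
    with dst ~ src @ R.T. Returns the 3x3 rotation matrix R."""
    src_c = src - src.mean(axis=0)          # remove translation
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    D = np.diag([1.0, 1.0, d])
    return Vt.T @ D @ U.T                   # rotation R
```

The refinement stage described in the abstract would then compensate for the residual angular error; that step is specific to EC-SVD and is not sketched here.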

The YIQ Model of Computed Tomography Color Image Variable Block with Fractal Image Coding (전산화단층촬영 칼라영상의 YIQ모델을 가변블록 이용한 프랙탈 영상 부호화)

  • Park, Jae-Hong;Park, Cheol-Woo
    • Journal of the Korean Society of Radiology
    • /
    • v.10 no.4
    • /
    • pp.263-270
    • /
    • 2016
  • This paper suggests techniques to reduce the coding time, a known drawback of traditional fractal compression, and to improve the fidelity of reconstructed images by determining the fractal coefficients through adaptive selection of a block approximation formula. First, to reduce coding time, we construct a linear list of domain blocks characterized by their luminance and variance, and then control the block search time according to a first permissible threshold value. Next, when employing three-level block partitioning, if a range block at the minimum partition level cannot find a domain block with a satisfactory approximation error, the fractal coefficients are determined by adaptively selecting the block approximation formula. These techniques were applied to 24-bpp color image compression. Encoding in the YIQ color space caused almost no loss of image quality and achieved a compression rate and image quality comparable to those of RGB images.
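The luminance/variance pre-filtering of domain blocks described above can be sketched as follows; the flat block layout, function names, and the single shared threshold are illustrative assumptions, not the paper's exact formulation:

```python
def block_stats(block):
    """Mean luminance and variance of a block given as a flat list of pixels."""
    n = len(block)
    mean = sum(block) / n
    var = sum((p - mean) ** 2 for p in block) / n
    return mean, var

def search_domain(range_block, domain_blocks, threshold):
    """Return the index of the first domain block whose mean luminance and
    variance both lie within `threshold` of the range block's, or None.
    Cheap statistics prune the search before any expensive block matching."""
    r_mean, r_var = block_stats(range_block)
    for i, d in enumerate(domain_blocks):
        d_mean, d_var = block_stats(d)
        if abs(d_mean - r_mean) <= threshold and abs(d_var - r_var) <= threshold:
            return i
    return None
```

Only candidates passing this statistical test would proceed to the costly affine-matching step, which is how the permissible threshold controls the block search time.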

Failure Analysis on Scale Formation of Thermostat Housing and Development of Accelerated Test Methodology (써모스타트 하우징의 침전물 생성에 관한 고장분석 및 가속시험법 개발)

  • Cho, In-Hee;Hyung, Sin-Jong;Choi, Kil-Yeong;Weon, Jong-Il
    • Applied Chemistry for Engineering
    • /
    • v.20 no.2
    • /
    • pp.177-185
    • /
    • 2009
  • The failure analysis of scales deposited on automotive thermostat housings has been carried out. Observations using energy dispersive spectroscopy and an electron probe micro-analyzer indicate that the main components of the scales are some of the additives of the coolant used. For a detailed investigation of the organic matter, pyrolysis-GC/MS is employed. The results show that the main organic component is benzoic acid; in addition, small amounts of acetophenone, benzene, and phenyl groups are detected. Based on the results of the failure analysis, the scales on the thermostat housing appear to be due to the deposition of coolant components, followed by crevice corrosion, in the gap between the housing and the rubber hose. A new accelerated test methodology, which can mimic the scale formation and crevice corrosion on the thermostat housing, is developed based on these results. In order to reproduce real operating conditions, the accelerating factors, i.e. temperature and humidity, are varied and programmed. The reproducibility of the proposed accelerated test is confirmed by analyzing the scales obtained from it.

An Efficient Zone Reconstruction Method on the Zone-Structured Disk (Zone 구조 디스크에서 효율적 구역 재구성 방법)

  • Kim, Jong-Hui;Choe, Gyeong-Hui;Jeong, Gi-Hyeon
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.2
    • /
    • pp.274-281
    • /
    • 1999
  • Many recent high-speed, high-capacity disks use a zone structure in which the disk consists of multiple zones, and the bandwidth and the number of sectors per track differ from zone to zone. Previous studies modeled disks on the assumption that all tracks have the same number of sectors. When these models are applied to a zone-structured disk and video data stored in a round-robin manner are read out with the SCAN technique, the transfer rate of the whole disk is fixed at that of the innermost zone, and excessive disk space is wasted as well. To resolve this problem, this paper proposes a method for reconstructing the physical zones into logical zones by split and merge operations and develops a method for determining the optimum transfer block size.

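The split-and-merge reconstruction of physical zones into logical zones can be illustrated with the merge half; the greedy balancing rule below is a hypothetical simplification, not the paper's exact algorithm:

```python
def merge_zones(physical_zones, target_sectors):
    """Greedily merge adjacent physical zones (given as sector counts)
    into logical zones of at least `target_sectors` sectors each.
    Splitting an oversized physical zone would be the analogous inverse."""
    logical, current = [], 0
    for sectors in physical_zones:
        current += sectors          # accumulate adjacent zones
        if current >= target_sectors:
            logical.append(current) # close the current logical zone
            current = 0
    if current:                     # leftover sectors form a final zone
        logical.append(current)
    return logical
```

Balancing the sector counts of the logical zones is what lets the transfer rate be provisioned above that of the innermost physical zone alone.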

Drought Analysis and Assessment Using Land Surface Model on South Korea (지표수문해석모형을 이용한 국내 가뭄해석 및 평가)

  • Son, Kyung-Hwan;Lee, Moon-Hwan;Bae, Deg-Hyo
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2011.05a
    • /
    • pp.53-53
    • /
    • 2011
  • Drought results from an imbalance in the water budget caused by precipitation deficits and rising temperatures; because it progresses gradually and its damage is more widespread than that of floods, effective countermeasures are difficult to prepare. In Korea, drought indices, a non-structural management measure, are currently used to measure the degree of water shortage in a region in time and space and to evaluate its magnitude and intensity quantitatively or qualitatively. However, most assessments are based only on precipitation and temperature data, and the indices that are provided use different criteria for defining drought, causing considerable confusion for users. For effective drought management, therefore, national or regional drought monitoring based on long-range meteorological information is needed, and drought information should be produced not only from meteorological data but also from hydrological variables (streamflow, soil moisture, etc.) that reflect the water balance between the atmosphere and the land surface. In this study, a land surface model capable of global hydrological simulation is used to perform a drought assessment for South Korea based on hydrological components. First, meteorological and terrain data for all of South Korea were constructed and applied to the land surface model to produce hydrological components for each grid cell. These components must be converted into information suitable for drought assessment; here, a frequency-analysis technique was applied to quantify the frequency and magnitude of drought. That is, frequency analysis was performed on the hydrological variables computed by the model to determine an appropriate probability distribution, and probability values for the period in question were computed to judge drought conditions relative to the past. To evaluate the resulting index, historical drought records in Korea were surveyed, and the existing drought indices SPI and PDSI were used for comparison. The evaluation consisted of time-series and regional analyses and basin-scale water balance analyses, focusing mainly on drought severity during drought periods and on the reproduction of drought onset and recovery. The results showed that the index adequately reflected the recorded onset and recovery periods and the affected regions, and that it reproduced drought more reliably than the existing drought indices. The land surface model-based drought assessment therefore appears highly applicable, and these results will serve as basic data for future drought monitoring and outlooks in Korea and East Asia.

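The frequency-analysis step that converts model output into a drought indicator can be sketched with an empirical distribution; the fitted probability distribution used in the actual study is replaced here by a plain percentile for illustration:

```python
def drought_percentile(history, value):
    """Empirical non-exceedance probability of `value` against a historical
    series of a hydrological variable (e.g. simulated soil moisture or runoff).
    Low percentiles indicate conditions drier than most of the record."""
    below = sum(1 for h in history if h <= value)
    return below / len(history)
```

A threshold on this probability (e.g. below 0.1) would then classify a grid cell as being in drought relative to its own past.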

Rebuilding of Image Compression Algorithm Based on the DCT (discrete cosine transform) (이산코사인변환 기반 이미지 압축 알고리즘에 관한 재구성)

  • Nam, Soo-Tai;Jin, Chan-Yong
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.23 no.1
    • /
    • pp.84-89
    • /
    • 2019
  • JPEG is the most widely used standard image compression technology. This research introduces the JPEG image compression algorithm and describes each step of compression and decompression. Image compression is the application of data compression to digital images. The DCT (discrete cosine transform) is a technique for transforming a signal from the spatial domain to the frequency domain. First, the image is divided into 8 by 8 pixel blocks. Second, working from left to right and top to bottom, the DCT is applied to each block. Third, each block is compressed through quantization. Fourth, the matrix of compressed blocks that makes up the image is stored in a greatly reduced amount of space. Finally, if desired, the image is reconstructed through decompression, a process that uses the IDCT (inverse discrete cosine transform). The purpose of this research is to review all the processes of image compression and decompression using the discrete cosine transform method.
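The transform and quantization steps described above can be sketched directly: a type-II DCT applied to the rows and then the columns of an 8x8 block, followed by uniform quantization. The single step size `q` is a simplifying assumption standing in for JPEG's full quantization matrix:

```python
import math

def dct_1d(x):
    """Type-II DCT with orthonormal scaling."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(c * s)
    return out

def dct_2d(block):
    """2D DCT of an 8x8 block: DCT over rows, then over columns."""
    rows = [dct_1d(r) for r in block]
    cols = [dct_1d(c) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def quantize(coeffs, q):
    """Uniform quantization of DCT coefficients by step size q; the many
    coefficients that round to zero are what make the block compressible."""
    return [[round(v / q) for v in row] for row in coeffs]
```

For a uniform block, all energy lands in the DC coefficient and every AC coefficient quantizes to zero, which illustrates why smooth regions compress so well.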

Implementation of an Algorithm that Generates Minimal Spanning Ladders and Exploration on its relevance with Computational Thinking (최소생성사다리를 생성하는 알고리즘 구현 및 컴퓨팅 사고력과의 관련성 탐구)

  • Jun, Youngcook
    • The Journal of Korean Association of Computer Education
    • /
    • v.21 no.6
    • /
    • pp.39-47
    • /
    • 2018
  • This paper investigates the number of minimal spanning ladders originating from the ladder game, their properties, and the related computational-thinking aspects. The author modified filtering techniques to enhance a Mathematica project in which a new type of graph was generated by an algorithm that repeatedly applies an independent ladder operator to a subsequence of a ladder sequence, starting from a generator, the first minimal spanning graph found. The newly produced YC graphs had recursive and hierarchical structures and were edge-symmetric. As the computational complexity increased, the author divided the whole search space by the floors of the newly generated minimal spanning graphs for the (5, 10) YC graph and the higher-order (6, 15) YC graph. Computational-thinking capabilities such as data visualization, abstraction, and parallel computing with Mathematica contributed to enumerating the new YC graphs in order to investigate their structures and properties.

Simulation Study on Search Strategies for the Reconnaissance Drone (정찰 드론의 탐색 경로에 대한 시뮬레이션 연구)

  • Choi, Min Woo;Cho, Namsuk
    • Journal of the Korea Society for Simulation
    • /
    • v.28 no.1
    • /
    • pp.23-39
    • /
    • 2019
  • The use of drone-bots is in demand at a time of shrinking military manpower, the spread of life-oriented values, and the adoption of innovative technology in defense through the fourth industrial revolution. In particular, drone surveillance and reconnaissance are expected to play a big role on the future battlefield. However, there are few cases in which the concept of operations has been studied scientifically. In this study, we propose search algorithms for a reconnaissance drone through simulation analysis. In the simulation, the drone and the target move linearly in continuous space, and the target moves according to a random-walk concept to reflect the uncertainty of the battlefield. The research investigates the effectiveness of existing search methods such as Parallel and Spiral Search, and analyzes the effect of detector radius and speed on the detection probability. In particular, new detection algorithms that can be used when an enemy moves toward a specific goal, PS (Probability Search) and HS (Hamiltonian Search), are introduced. The results of this study will be applicable to planning paths for reconnaissance operations using drone-bots.
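The simulation setup described above, a random-walk target against a sweeping searcher, can be sketched as a Monte Carlo loop. The start positions, step sizes, and the crude left-to-right sweep standing in for a full parallel search are all illustrative assumptions, not the paper's parameters:

```python
import math
import random

def simulate_detection(steps, radius, drone_speed, seed=0):
    """One simulation run: a target does a unit-step 2D random walk while a
    drone sweeps left to right along a fixed track. Returns True if the
    target ever comes within the detection radius of the drone."""
    rng = random.Random(seed)
    tx, ty = 50.0, 50.0              # target start (illustrative)
    dx, dy = 0.0, 50.0               # drone start on the sweep track
    for _ in range(steps):
        ang = rng.uniform(0.0, 2.0 * math.pi)
        tx += math.cos(ang)          # random-walk step for the target
        ty += math.sin(ang)
        dx += drone_speed            # drone advances along its sweep
        if math.hypot(tx - dx, ty - dy) <= radius:
            return True
    return False
```

Averaging the result over many seeds estimates the detection probability as a function of detector radius and speed, which is the quantity the study analyzes.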

Detection of Settlement Areas from Object-Oriented Classification using Speckle Divergence of High-Resolution SAR Image (고해상도 SAR 위성영상의 스페클 divergence와 객체기반 영상분류를 이용한 주거지역 추출)

  • Song, Yeong Sun
    • Journal of Cadastre & Land InformatiX
    • /
    • v.47 no.2
    • /
    • pp.79-90
    • /
    • 2017
  • Urban environments are among the most dynamic regions on earth. As in other countries, forests, green areas, and agricultural lands in South Korea are rapidly changing into residential or industrial areas. Monitoring such rapid changes in land use requires rapid data acquisition, and satellite imagery can be an effective means of meeting this demand. In general, SAR (Synthetic Aperture Radar) satellites acquire images with an active system, so the brightness of the image is determined by the surface roughness. Water areas therefore appear dark due to their low reflection intensity, while in residential areas, where artificial structures are distributed, the brightness value is higher than elsewhere due to strong reflection intensity. Using these characteristics of SAR images, settlement areas can be extracted efficiently. In this study, extraction of settlement areas was performed using TerraSAR-X, a German high-resolution X-band SAR satellite, and KOMPSAT-5 of South Korea, applying an object-oriented image classification method based on image segmentation. In addition, to improve the accuracy of image segmentation, the speckle divergence was first calculated to adjust the reflection intensity of settlement areas. To evaluate the accuracy for the two satellite images, settlement areas were also classified with a pixel-based K-means image classification method. As a result, for TerraSAR-X the accuracy of the object-oriented image classification was 88.5% and that of the pixel-based classification was 75.9%, while for KOMPSAT-5 they were 87.3% and 74.4%, respectively.
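The pixel-based K-means baseline can be sketched in one dimension over backscatter intensities; the two-cluster min/max initialization and the 1D simplification are assumptions made for illustration, not the study's configuration:

```python
def kmeans_1d(pixels, k=2, iters=20):
    """Minimal 1D k-means over pixel intensities (e.g. bright settlement
    returns vs. darker background). Returns the sorted cluster centers."""
    centers = [min(pixels), max(pixels)] if k == 2 else pixels[:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in pixels:
            # assign each pixel to its nearest center
            i = min(range(k), key=lambda c: abs(p - centers[c]))
            clusters[i].append(p)
        # recompute each center as the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)
```

Thresholding pixels against the brighter center would yield the pixel-level settlement mask that the object-oriented segmentation approach is compared against.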

An Electric Load Forecasting Scheme with High Time Resolution Based on Artificial Neural Network (인공 신경망 기반의 고시간 해상도를 갖는 전력수요 예측기법)

  • Park, Jinwoong;Moon, Jihoon;Hwang, Eenjun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.6 no.11
    • /
    • pp.527-536
    • /
    • 2017
  • With the recent development of the smart grid industry, the need for an efficient EMS (Energy Management System) has increased. In particular, in order to reduce electric load and energy cost, sophisticated electric load forecasting and an efficient smart grid operation strategy are required. In this paper, for more accurate electric load forecasting, we extend the data collected at demand time to a high time resolution and construct an artificial neural network-based forecasting model appropriate for the high-time-resolution data. Furthermore, to improve the accuracy of electric load forecasting, time series data in sequence form are transformed into continuous data in a two-dimensional space, to solve the problem that machine learning methods cannot reflect the periodicity of time series data. In addition, to incorporate external factors such as temperature and humidity at the same time resolution, we estimate their values at each time step using linear interpolation. Finally, we apply the PCA (Principal Component Analysis) algorithm to the feature vector composed of external factors to remove data that have little correlation with the power data, and evaluate our model through 5-fold cross-validation. The results show that forecasting based on a higher time resolution improves accuracy, with the best error rate of 3.71% achieved at the 3-minute resolution.
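The linear-interpolation step for upsampling external factors can be sketched as follows; the function name and the hourly-to-finer-resolution framing (e.g. a factor of 20 for hourly to 3-minute values) are illustrative assumptions:

```python
def interpolate_series(values, factor):
    """Linearly interpolate a coarse time series to a finer resolution,
    producing `factor` evenly spaced points per original interval and
    keeping the final original sample."""
    out = []
    for a, b in zip(values, values[1:]):
        for i in range(factor):
            out.append(a + (b - a) * i / factor)  # fraction i/factor along the interval
    out.append(values[-1])
    return out
```

This gives temperature and humidity values aligned with the 3-minute load samples before PCA is applied to the combined feature vector.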