• Title/Abstract/Keyword: robust to image translation

38 results found (search time: 0.016 s)

위치이동에 무관한 지문인식 정합 알고리즘에 관한 연구 (A Study on the Translation Invariant Matching Algorithm for Fingerprint Recognition)

  • 김은희;조성원;김재민
    • 대한전기학회논문지:시스템및제어부문D / Vol. 51, No. 2 / pp.61-68 / 2002
  • This paper presents a new matching algorithm for fingerprint recognition that is robust to image translation. The basic idea is to estimate the translation vector of an input fingerprint image using N minutiae at which the gradient of the ridge direction field is large. Using the estimated translation vector, we select minutiae that are irrelevant to the translation. We show experimentally that the presented algorithm performs well even in the presence of large translations and pseudo-minutiae.
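The translation-vector idea in this abstract can be sketched with a small toy: recover a global shift between two minutiae point sets by voting over pairwise displacements. The voting scheme, function names, and `bin_size` are illustrative assumptions, not the paper's actual estimator (which uses N high-gradient minutiae).

```python
from collections import Counter

def estimate_translation(ref_minutiae, inp_minutiae, bin_size=4):
    """Vote over displacements between every reference/input minutia pair;
    the most frequent (quantized) displacement approximates the global shift."""
    votes = Counter()
    for rx, ry in ref_minutiae:
        for ix, iy in inp_minutiae:
            votes[(round((ix - rx) / bin_size), round((iy - ry) / bin_size))] += 1
    (bx, by), _ = votes.most_common(1)[0]
    return bx * bin_size, by * bin_size

ref = [(10, 20), (40, 55), (70, 30), (25, 80)]
inp = [(x + 12, y - 8) for x, y in ref]   # the same minutiae shifted by (12, -8)
print(estimate_translation(ref, inp))     # → (12, -8)
```

Because the true displacement is voted for once per genuine minutia pair while spurious pairs scatter across bins, the estimate tolerates a moderate number of pseudo-minutiae.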

인간 시각의 MTF에 기반한 견고한 디지털 영상 워터마킹 (Robust Digital Image Watermarking Based on MTF of HVS)

  • 홍수기;조상현;최흥문
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2000년도 하계종합학술대회 논문집(4) / pp.114-117 / 2000
  • In this paper, we propose robust digital image watermarking based on the modulation transfer function (MTF) of the human visual system (HVS). The proposed method enables watermarking that is robust both to common image processing operations, such as cropping and lossy compression, and to geometric transforms, such as rotation, scaling, and translation, because it embeds the watermark and template signal maximally using the MTF of the HVS. Experimental results show that the proposed watermarking method is more robust to several common image processing operations and geometric transforms.

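The frequency-dependent weighting that MTF-based embedding relies on can be sketched as follows. The curve is the well-known Mannos-Sakrison contrast-sensitivity approximation, not necessarily the exact MTF used in the paper, and the additive embedding rule (the `alpha` scale and the 1-63 cycles/degree range) is an illustrative assumption.

```python
import math

def csf(f):
    """Mannos-Sakrison-style contrast sensitivity of the HVS at spatial
    frequency f (cycles/degree); it peaks at mid frequencies (~8 cpd)."""
    return 2.6 * (0.0192 + 0.114 * f) * math.exp(-(0.114 * f) ** 1.1)

def embed(coeffs, bits, alpha=1.0):
    """Additively embed +/-1 watermark bits into frequency coefficients,
    putting more watermark energy where the eye is less sensitive."""
    peak = max(csf(f) for f in range(1, 64))
    return [c + alpha * (1.0 - csf(f) / peak) * b
            for f, (c, b) in enumerate(zip(coeffs, bits), start=1)]

marked = embed([0.0] * 10, [1] * 10)
# the perturbation is near zero at the sensitivity peak (f = 8)
# and largest at the least-sensitive frequencies
```

The design intent mirrors the abstract: embed as much watermark energy as invisibility allows by shaping the embedding strength inversely to visual sensitivity.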

A Robust Fingerprint Matching System Using Orientation Features

  • Kumar, Ravinder;Chandra, Pravin;Hanmandlu, Madasu
    • Journal of Information Processing Systems / Vol. 12, No. 1 / pp.83-99 / 2016
  • The latest research on image-based fingerprint matching approaches indicates that they are less complex than minutiae-based approaches when dealing with low-quality images. However, most approaches in the literature are not robust to fingerprint rotation and translation. In this paper, we develop a robust fingerprint matching system by extracting a circular region of interest (ROI) with a radius of 50 pixels centered at the core point. The two fingerprints to be matched are aligned by maximizing their orientation correlation. A modified Euclidean distance computed between the extracted orientation features of the sample and query images is used for matching. Extensive experiments were conducted over four benchmark fingerprint datasets of FVC2002 and two proprietary databases, RFVC 2002 and the AITDB. The experimental results show the superiority of the proposed method over well-known image-based approaches in the literature.
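The orientation-correlation alignment step can be illustrated with a minimal sketch. Orientation fields live modulo pi, so similarity is measured through the cosine of the doubled angle difference. A full alignment would also rotate the sample positions of the field; this toy only searches the angular offset, and all names and the step count are assumptions, not the paper's implementation.

```python
import math

def orientation_similarity(a, b):
    """Mean cos(2*delta) between two ridge-orientation fields: 1.0 when the
    orientations agree, -1.0 when they are orthogonal (angles are mod pi)."""
    return sum(math.cos(2 * (x - y)) for x, y in zip(a, b)) / len(a)

def align_rotation(ref, query, steps=360):
    """Brute-force search for the angular offset of the query field that
    maximizes its orientation correlation with the reference field."""
    k = max(range(steps), key=lambda i: orientation_similarity(
        ref, [q + i * math.pi / steps for q in query]))
    return k * math.pi / steps

ref = [0.1, 0.8, 1.3, 2.0, 0.5, 2.9]
query = [r - 0.3 for r in ref]                 # same field rotated by 0.3 rad
print(round(align_rotation(ref, query), 3))    # → 0.297
```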

3차원 메쉬 모델을 위한 강인한 워터마킹 기법 (Robust Watermarking Algorithm for 3D Mesh Models)

  • 송한새;조남익;김종원
    • 방송공학회논문지 / Vol. 9, No. 1 / pp.64-73 / 2004
  • This paper proposes a robust watermarking algorithm for 3D mesh models. In the proposed algorithm, the watermark is embedded into a 2D image extracted from the 3D model. Each pixel value of this 2D image is the distance from predefined reference points to the surface of the 3D model; we call this the distance image. The watermark is embedded into the distance image, and the watermarked 3D model is obtained by modifying the vertex coordinates of the model according to the watermarked distance image. For extraction, the distance image is computed from the watermarked model and the watermark is extracted from that image. Extraction requires the original model and registration against it. Experiments confirm that the proposed algorithm is robust against rotation, translation, scaling, Gaussian noise, mesh simplification, and vertex quantization.
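The distance-image representation can be sketched crudely: one value per reference point, here approximated by the distance to the nearest mesh vertex rather than to the actual surface. The function name and the nearest-vertex shortcut are illustrative assumptions, not the paper's construction.

```python
import math

def distance_image(ref_points, vertices):
    """One 'pixel' per reference point: distance from that point to the
    nearest mesh vertex (a crude stand-in for point-to-surface distance)."""
    return [min(math.dist(p, v) for v in vertices) for p in ref_points]

# unit cube as a toy 'mesh'; the reference point sits at its center
cube = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
print(distance_image([(0.5, 0.5, 0.5)], cube))   # distance to nearest corner, sqrt(3)/2
```

Because the representation is a set of distances measured after registration, it is unchanged by any rigid motion applied to the model and reference points together, which is what makes the scheme a plausible carrier for a rotation/translation-robust watermark.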

잡음과 회전에 강인한 SIFT 기반 PCB 영상 정렬 알고리즘 개발 (Robust PCB Image Alignment using SIFT)

  • 김준철;최학남;박은수;최효훈;김학일
    • 제어로봇시스템학회논문지 / Vol. 16, No. 7 / pp.695-702 / 2010
  • This paper presents an image alignment algorithm based on SIFT for AOI (Automatic Optical Inspection) applications. Since the correspondences obtained with the SIFT descriptor include many false matches, this paper modifies and classifies the matched points with five measures, collectively called the CCFMR (Cascade Classifier for False Matching Reduction). After the false matches are reduced, rotation and translation are estimated by a point selection method. Experimental results show that the proposed method produces fewer matching failures than the commercial software MIL 8.0, in particular less than half as many on data sets from a well-controlled environment (such as an AOI system). The rotation and translation accuracy is more robust than MIL's on noisy data sets; the errors are higher on data sets with rotation variation, although the results are still meaningful for a practical system. In addition, the computation time of the proposed method is four times shorter than that of MIL, which increases linearly with noise.
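Once false matches are filtered out, the rotation and translation can be estimated in closed form from the surviving correspondences. The sketch below is the standard 2-D Procrustes/Kabsch least-squares solution, offered as a plausible stand-in for the paper's point-selection estimator, which is not detailed in the abstract.

```python
import math

def estimate_rigid_2d(src, dst):
    """Least-squares rotation + translation mapping src points onto dst
    (2-D Procrustes/Kabsch), usable once false matches are removed."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    sxx = sxy = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay, bx, by = x - csx, y - csy, u - cdx, v - cdy
        sxx += ax * bx + ay * by        # proportional to cos(theta)
        sxy += ax * by - ay * bx        # proportional to sin(theta)
    theta = math.atan2(sxy, sxx)
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, (tx, ty)

# synthetic correspondences: rotate by 0.2 rad, then translate by (5, -3)
src = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0), (2.0, 1.0)]
c, s = math.cos(0.2), math.sin(0.2)
dst = [(5.0 + c * x - s * y, -3.0 + s * x + c * y) for x, y in src]
```

On clean correspondences the estimator recovers the transform exactly; with residual false matches a robust wrapper (e.g. RANSAC-style sampling) would normally be layered on top.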

유손실 압축, 잘라내기 및 신축에 대해 견고한 HVS 모델 기반 워터마킹 (HVS Model-based Watermarking Robust to Lossy Compression, Cropping, and Scaling)

  • 홍수기;조상현;최흥문
    • 대한전자공학회논문지SP / Vol. 38, No. 5 / pp.548-555 / 2001
  • This paper proposes a digital image watermarking method, based on a human visual system (HVS) model, that is robust not only to rotation and translation but also to lossy compression, cropping, and scaling. Rotation and translation invariance is achieved with the conventional Fourier transform and log-polar mapping, while the watermark energy is maximized within the limits of invisibility by taking the spatial-frequency sensitivity of the HVS model into account; the watermark bits are randomly shuffled and spread over the entire domain. This further improves watermark robustness against image processing operations such as cropping and lossy compression and against geometric transforms such as rotation, translation, and scaling. In addition, the watermark and the template are embedded so that they do not overlap, preventing the loss of invisibility and robustness that their overlap would cause. Experiments confirm that the proposed watermarking method improves robustness against lossy compression, cropping, and scaling by about 30-75% over conventional methods.

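The log-polar mapping used above for rotation/translation invariance can be sketched with a nearest-neighbour toy on a list-of-lists grayscale image: image rotation becomes a cyclic shift along the angle axis, and scaling becomes a shift along the log-radius axis. The function name and sampling choices are illustrative assumptions, not the paper's implementation.

```python
import math

def to_log_polar(img, size):
    """Nearest-neighbour resampling of a square grayscale image onto a
    size x size log-polar grid (rows = angle, columns = log-radius)."""
    n = len(img)
    cx = cy = (n - 1) / 2.0
    log_rmax = math.log(n / 2.0)
    out = []
    for ti in range(size):                       # angle axis
        theta = 2.0 * math.pi * ti / size
        row = []
        for ri in range(size):                   # log-radius axis
            r = math.exp(log_rmax * (ri + 1) / size)
            x = int(round(cx + r * math.cos(theta)))
            y = int(round(cy + r * math.sin(theta)))
            row.append(img[y][x] if 0 <= x < n and 0 <= y < n else 0)
        out.append(row)
    return out

img = [[(x + y) % 7 for x in range(32)] for y in range(32)]
lp = to_log_polar(img, 16)
```

In the Fourier-magnitude domain (which already discards translation), this remapping turns rotation and scaling into shifts, so a shift-invariant detector covers all three transforms.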

CLASSIFIED EIGEN BLOCK: LOCAL FEATURE EXTRACTION AND IMAGE MATCHING ALGORITHM

  • Hochul Shin;Kim, Seong-Dae
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2003년도 하계종합학술대회 논문집 Ⅳ / pp.2108-2111 / 2003
  • This paper introduces a new local feature extraction method and an image matching method for the localization and classification of targets. The proposed method is based on block-by-block projection associated with the directional pattern of each block. Each pattern has its own eigen-vectors, called CEBs (Classified Eigen-Blocks). The proposed block-based image matching method is also robust to translation and occlusion. The performance of the proposed feature extraction and matching methods is verified by face localization and FLIR-vehicle-image classification tests.

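The block-by-block projection step can be illustrated in miniature: a flattened image block is described by its projections onto a small basis. In the paper the basis (the CEBs) is chosen per directional-pattern class; here the basis is simply given, and the 2x2 block and basis vectors are illustrative.

```python
def project_block(block, basis):
    """Local feature = projections of a flattened image block onto a set
    of basis vectors (eigen-blocks in the paper's terminology)."""
    return [sum(p * e for p, e in zip(block, vec)) for vec in basis]

# 2x2 blocks flattened row-major; a DC vector and a top-vs-bottom contrast vector
basis = [[0.5, 0.5, 0.5, 0.5],      # mean / DC component
         [0.5, 0.5, -0.5, -0.5]]    # horizontal-edge response
print(project_block([1, 1, 9, 9], basis))  # → [10.0, -8.0]
```

Comparing such per-block coefficient vectors, rather than raw pixels, is what gives a block-based matcher its tolerance to translation and partial occlusion.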

Robust Lane Detection Algorithm for Autonomous Trucks in Container Terminal

  • Ngo Quang Vinh;Sam-Sang You;Le Ngoc Bao Long;Hwan-Seong Kim
    • 한국항해항만학회:학술대회논문집 / 한국항해항만학회 2023년도 춘계학술대회 / pp.252-253 / 2023
  • Container terminal automation might offer many potential benefits, such as increased productivity, reduced cost, and improved safety, and autonomous trucks can lead to more efficient container transport. A robust lane detection method is proposed using score-based generative modeling through stochastic differential equations for image-to-image translation. Image processing techniques are combined with Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and a Genetic Algorithm (GA) to ensure robust lane positioning. The proposed method is validated on a dataset collected from port terminals under different environmental conditions, and its robustness is tested against stochastic noise.

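The DBSCAN step named above groups candidate lane pixels into lane clusters while discarding isolated noise. A minimal, self-contained DBSCAN sketch (parameter values and the toy lane data are illustrative, not from the paper):

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: return one cluster label per 2-D point, -1 = noise."""
    labels = [None] * len(points)

    def neighbors(i):
        xi, yi = points[i]
        return [j for j, (xj, yj) in enumerate(points)
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= eps ** 2]

    cluster = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1                  # noise (may become a border point)
            continue
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:                        # expand the cluster
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster         # former noise is a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:          # j is a core point: keep growing
                seeds.extend(jn)
        cluster += 1
    return labels

# two vertical 'lanes' of detected pixels plus one spurious detection
pts = [(0, i) for i in range(5)] + [(10, i) for i in range(5)] + [(50, 50)]
print(dbscan(pts, eps=1.5, min_pts=2))  # → [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, -1]
```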

잡음과 위치이동에 강인한 새로운 홍채인식 기법 (A Novel Iris recognition method robust to noises and translation)

  • 원정우;김재민;조성원;최경삼;최진수
    • 대한전기학회:학술대회논문집 / 대한전기학회 2003년도 학술회의 논문집 정보 및 제어부문 B / pp.392-395 / 2003
  • This paper describes a new iris segmentation and recognition method that is robust to noise. The iris is first segmented by combining statistical classification and elastic boundary fitting. The localized iris image is then smoothed by convolution with a Gaussian function, down-sampled, filtered with a Laplacian operator, and quantized using the Lloyd-Max method. Since the quantized output is sensitive to a small shift of the full-resolution iris image, the outputs of the Laplacian operator are computed for all spatial shifts, and the quantized output with maximum entropy is selected as the final feature representation. An appropriate similarity measure is defined for the classification of the quantized output. We show experimentally that the proposed method produces excellent iris segmentation and recognition performance.

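The max-entropy shift-selection idea can be sketched in one dimension: quantize the down-sampled signal at every phase shift and keep the phase whose output carries the most information. Function names, the toy signal, and the identity quantizer are illustrative assumptions (the paper uses Lloyd-Max quantization of 2-D Laplacian outputs).

```python
import math

def entropy(symbols):
    """Shannon entropy (bits) of a quantized feature sequence."""
    n = len(symbols)
    counts = {}
    for s in symbols:
        counts[s] = counts.get(s, 0) + 1
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def best_phase(signal, factor, quantize):
    """Down-sample at every phase shift, quantize, and keep the phase whose
    output has maximum entropy, i.e. the most informative representation."""
    candidates = [[quantize(v) for v in signal[s::factor]] for s in range(factor)]
    return max(candidates, key=entropy)

# phase 0 carries all the variation; the other phases quantize to constants
signal = [0, 5, 5, 5, 1, 5, 5, 5, 2, 5, 5, 5, 3, 5, 5, 5]
print(best_phase(signal, 4, lambda v: v))  # → [0, 1, 2, 3]
```

Choosing the maximum-entropy phase makes the quantized feature insensitive to small translations of the full-resolution image, which is the robustness property the abstract claims.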

Eyeglass Remover Network based on a Synthetic Image Dataset

  • Kang, Shinjin;Hahn, Teasung
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 4 / pp.1486-1501 / 2021
  • The removal of accessories from the face is one of the essential pre-processing stages in the field of face recognition. However, despite its importance, a robust solution has not yet been provided. This paper proposes a network and dataset construction methodology to remove only the glasses from facial images effectively. To obtain an image with the glasses removed from an image with glasses by the supervised learning method, a network that converts them and a set of paired data for training is required. To this end, we created a large number of synthetic images of glasses being worn using facial attribute transformation networks. We adopted the conditional GAN (cGAN) frameworks for training. The trained network converts the in-the-wild face image with glasses into an image without glasses and operates stably even in situations wherein the faces are of diverse races and ages and having different styles of glasses.