• Title/Summary/Keyword: robust to image translation


A Study on the Translation Invariant Matching Algorithm for Fingerprint Recognition (위치이동에 무관한 지문인식 정합 알고리즘에 관한 연구)

  • Kim, Eun-Hee;Cho, Seong-Won;Kim, Jae-Min
    • The Transactions of the Korean Institute of Electrical Engineers D / v.51 no.2 / pp.61-68 / 2002
  • This paper presents a new matching algorithm for fingerprint recognition that is robust to image translation. The basic idea is to estimate the translation vector of an input fingerprint image using the N minutiae at which the gradient of the ridge direction field is large. Using the estimated translation vector, we select minutiae that are not affected by the translation. We experimentally show that the presented algorithm performs well even in the presence of large translations and pseudo-minutiae.
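
A minimal NumPy sketch of the translation-compensation idea described in the abstract above. The minutiae arrays, the gradient scores, the assumed one-to-one correspondence among the selected minutiae, and the matching tolerance are illustrative assumptions, not the authors' actual formulation.

```python
import numpy as np

def estimate_translation(template, query, grad_scores, n=10):
    """Estimate a translation vector from the n minutiae whose ridge
    direction-field gradient is largest, as the abstract describes."""
    idx = np.argsort(grad_scores)[-n:]          # the n "high-gradient" minutiae
    # Assumes the selected minutiae correspond one-to-one; the paper's own
    # pairing of minutiae is not reproduced here.
    return np.mean(query[idx] - template[idx], axis=0)

def match_minutiae(template, query, translation, tol=8.0):
    """Count query minutiae landing within tol pixels of a template minutia
    after the estimated translation is removed."""
    aligned = query - translation
    d = np.linalg.norm(aligned[:, None, :] - template[None, :, :], axis=2)
    return int(np.sum(d.min(axis=1) < tol))

# Toy example: the same minutiae set shifted by (30, -12) pixels.
rng = np.random.default_rng(0)
template = rng.uniform(0, 300, size=(40, 2))
query = template + np.array([30.0, -12.0])
scores = rng.uniform(0, 1, size=40)

t = estimate_translation(template, query, scores, n=10)
print("estimated translation:", t)                       # ~ [30, -12]
print("matched minutiae:", match_minutiae(template, query, t))
```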

Robust Digital Image Watermarking Based on MTF of HVS (인간 시각의 MTF에 기반한 견고한 디지털 영상 워터마킹)

  • 홍수기;조상현;최흥문
    • Proceedings of the IEEK Conference / 2000.06d / pp.114-117 / 2000
  • In this paper, we propose a robust digital image watermarking method based on the modulation transfer function (MTF) of the human visual system (HVS). With the proposed method, robust watermarking is possible both under common image processing operations such as cropping and lossy compression and under geometrical transforms such as rotation, scaling, and translation, because the watermark and template signal can be embedded maximally using the MTF of the HVS. Experimental results show that the proposed watermarking method is more robust to several common image processing operations and geometrical transforms.
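
A minimal sketch of MTF-weighted embedding in the frequency domain, in the spirit of the abstract above: watermark energy is scaled up where the contrast sensitivity of the eye is low. The Mannos-Sakrison CSF approximation, the strength constant, and the frequency normalization are illustrative assumptions, not the authors' exact design.

```python
import numpy as np

def csf(f):
    """Mannos-Sakrison contrast sensitivity approximation (f in cycles/degree)."""
    return 2.6 * (0.0192 + 0.114 * f) * np.exp(-(0.114 * f) ** 1.1)

def embed_watermark(image, watermark, alpha=2.0, max_cpd=30.0):
    """Add a +/-1 watermark to the DFT coefficients, weighted so that
    frequencies the eye is less sensitive to carry more energy."""
    F = np.fft.fftshift(np.fft.fft2(image.astype(float)))
    h, w = image.shape
    fy = np.fft.fftshift(np.fft.fftfreq(h))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(w))[None, :]
    f = np.hypot(fx, fy) * 2 * max_cpd                 # map to ~cycles/degree
    sens = csf(f)
    weight = alpha * (1.0 - sens / sens.max())         # low sensitivity -> strong embedding
    F_marked = F * (1.0 + weight * watermark)
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_marked)))

rng = np.random.default_rng(1)
img = rng.uniform(0, 255, (256, 256))
wm = rng.choice([-1.0, 1.0], size=img.shape)
marked = embed_watermark(img, wm)
print("mean absolute pixel change:", np.abs(marked - img).mean())
```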


A Robust Fingerprint Matching System Using Orientation Features

  • Kumar, Ravinder;Chandra, Pravin;Hanmandlu, Madasu
    • Journal of Information Processing Systems / v.12 no.1 / pp.83-99 / 2016
  • The latest research on image-based fingerprint matching approaches indicates that they are less complex than minutiae-based approaches when dealing with low-quality images. Most of the approaches in the literature are not robust to fingerprint rotation and translation. In this paper, we develop a robust fingerprint matching system by extracting a circular region of interest (ROI) with a radius of 50 pixels centered at the core point. The two fingerprints to be matched are aligned by maximizing their orientation correlation. A modified Euclidean distance computed between the orientation features extracted from the sample and query images is used for matching. Extensive experiments were conducted over four benchmark fingerprint datasets of FVC2002 and two other proprietary databases, RFVC 2002 and the AITDB. The experimental results show the superiority of the proposed method over well-known image-based approaches in the literature.
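
A minimal NumPy sketch of matching block-wise orientation features inside a circular ROI around the core point, as the abstract above describes. The core coordinates, block size, ROI radius in blocks, and the pi-periodic distance are illustrative assumptions; core detection and the rotational alignment search are omitted.

```python
import numpy as np

def orientation_field(img, block=8):
    """Least-squares ridge orientation per block, from image gradients."""
    gy, gx = np.gradient(img.astype(float))
    gxx, gyy, gxy = gx * gx, gy * gy, gx * gy
    H, W = img.shape[0] // block, img.shape[1] // block
    theta = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            s = (slice(i * block, (i + 1) * block), slice(j * block, (j + 1) * block))
            theta[i, j] = 0.5 * np.arctan2(2 * gxy[s].sum(), (gxx[s] - gyy[s]).sum())
    return theta

def roi_features(theta, core_rc, radius_blocks):
    """Orientation values inside a circular ROI centered at the core point."""
    rr, cc = np.indices(theta.shape)
    mask = (rr - core_rc[0]) ** 2 + (cc - core_rc[1]) ** 2 <= radius_blocks ** 2
    return theta[mask]

def orientation_distance(f1, f2):
    """Euclidean-style distance on pi-periodic angular differences."""
    d = np.angle(np.exp(2j * (f1 - f2))) / 2.0
    return float(np.sqrt(np.mean(d ** 2)))

rng = np.random.default_rng(2)
img1 = rng.uniform(0, 255, (256, 256))
img2 = img1 + rng.normal(0, 5, img1.shape)        # noisy copy of the same print
t1, t2 = orientation_field(img1), orientation_field(img2)
core = (16, 16)                                    # assumed core location, in blocks
d = orientation_distance(roi_features(t1, core, 6), roi_features(t2, core, 6))
print("orientation distance:", d)                  # small value -> likely a match
```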

Robust Watermarking Algorithm for 3D Mesh Models (3차원 메쉬 모델을 위한 강인한 워터마킹 기법)

  • 송한새;조남익;김종원
    • Journal of Broadcast Engineering / v.9 no.1 / pp.64-73 / 2004
  • A robust watermarking algorithm is proposed for 3D mesh models. The watermark is inserted into a 2D image extracted from the target 3D model. Each pixel value of the extracted 2D image represents the distance from the predefined reference points to a face of the given 3D model. This extracted image is defined as the "range image" in this paper. The watermark is embedded into the range image, and the watermarked 3D mesh is then obtained by modifying the vertices using the watermarked range image. The extraction procedure requires the original model. After registration between the original and the watermarked models, two range images are extracted from the respective 3D models, and the embedded watermark is extracted from these images. Experimental results show that the proposed algorithm is robust against attacks such as rotation, translation, uniform scaling, mesh simplification, AWGN, and quantization of vertex coordinates.
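
A minimal sketch of the range-image idea from the abstract above, simplified to a single reference point and vertex-wise sampling: vertex distances are binned over an angular grid, a watermark is added to that range image, and each vertex is pushed radially by its cell's watermark value. The spherical parameterization, grid size, and strength are illustrative assumptions; the paper's face-based sampling and the registration step are omitted.

```python
import numpy as np

def to_spherical(vertices, ref):
    v = vertices - ref
    r = np.linalg.norm(v, axis=1)
    theta = np.arccos(np.clip(v[:, 2] / np.maximum(r, 1e-12), -1.0, 1.0))
    phi = np.arctan2(v[:, 1], v[:, 0])
    return r, theta, phi

def embed_in_range_image(vertices, ref, watermark, alpha=0.01, grid=(32, 64)):
    """Modify vertex radii according to the watermark bit of the range-image
    cell (theta, phi) that each vertex falls into."""
    r, theta, phi = to_spherical(vertices, ref)
    ti = np.clip((theta / np.pi * grid[0]).astype(int), 0, grid[0] - 1)
    pi_ = np.clip(((phi + np.pi) / (2 * np.pi) * grid[1]).astype(int), 0, grid[1] - 1)
    r_marked = r * (1.0 + alpha * watermark[ti, pi_])
    direction = (vertices - ref) / np.maximum(r, 1e-12)[:, None]
    return ref + direction * r_marked[:, None]

rng = np.random.default_rng(3)
verts = rng.normal(0, 1, (1000, 3))
verts /= np.linalg.norm(verts, axis=1, keepdims=True)   # toy unit-sphere "mesh"
wm = rng.choice([-1.0, 1.0], size=(32, 64))
marked = embed_in_range_image(verts, np.zeros(3), wm)
print("max vertex displacement:", np.abs(marked - verts).max())
```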

Robust PCB Image Alignment using SIFT (잡음과 회전에 강인한 SIFT 기반 PCB 영상 정렬 알고리즘 개발)

  • Kim, Jun-Chul;Cui, Xue-Nan;Park, Eun-Soo;Choi, Hyo-Hoon;Kim, Hak-Il
    • Journal of Institute of Control, Robotics and Systems / v.16 no.7 / pp.695-702 / 2010
  • This paper presents an image alignment algorithm for AOI (Automatic Optical Inspection) applications based on SIFT. Since the correspondences obtained with the SIFT descriptor include many false matches for alignment, this paper classifies and removes such points using five measures, collectively called the CCFMR (Cascade Classifier for False Matching Reduction). After the false matches are reduced, the rotation and translation are estimated by a point selection method. Experimental results show that the proposed method produces fewer matching failures than the commercial software MIL 8.0, in particular fewer than half as many on data sets from a well-controlled environment (such as an AOI system). The rotation and translation accuracy is more robust than that of MIL on the noisy data sets, although the errors are higher on the rotation-variation data sets; the results are still meaningful for a practical system. In addition, the computational time consumed by the proposed method is four times shorter than that of MIL, which increases linearly with noise.
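
A minimal OpenCV sketch of SIFT-based estimation of rotation and translation between two PCB images. Lowe's ratio test plus RANSAC stands in for the paper's CCFMR false-match filter, whose five measures are not reproduced here; the file names are placeholders.

```python
import cv2
import numpy as np

def align(ref_path, test_path):
    ref = cv2.imread(ref_path, cv2.IMREAD_GRAYSCALE)
    test = cv2.imread(test_path, cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(ref, None)
    k2, d2 = sift.detectAndCompute(test, None)

    # Ratio test to discard ambiguous correspondences (substitute for CCFMR).
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Similarity transform (rotation + translation + scale) with RANSAC.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    angle = np.degrees(np.arctan2(M[1, 0], M[0, 0]))
    return angle, (M[0, 2], M[1, 2]), int(inliers.sum())

# angle, t, n = align("pcb_reference.png", "pcb_test.png")   # hypothetical files
# print(f"rotation {angle:.2f} deg, translation {t}, {n} inlier matches")
```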

HVS Model-based Watermarking Robust to Lossy Compression, Cropping, and Scaling (유손실 압축, 잘라내기 및 신축에 대해 견고한 HVS 모델 기반 워터마킹)

  • Hong, Su-Gi;Jo, Sang-Hyeon;Choe, Heung-Mun
    • Journal of the Institute of Electronics Engineers of Korea SP / v.38 no.5 / pp.548-555 / 2001
  • In this paper, we propose an HVS (human visual system) model-based digital image watermarking method that is not only invariant to rotation and translation but also more robust to lossy compression, cropping, and scaling than the conventional methods. A Fourier transform and log-polar mapping are used to make the proposed algorithm invariant to rotation and translation, and in addition, the watermark energy is embedded maximally based on the spatial frequency sensitivity of the HVS without degrading invisibility. As a result, the robustness of the watermarking is improved both under general image processing operations such as cropping, low-pass filtering, and lossy compression and under geometrical transforms such as rotation, translation, and scaling. Moreover, by embedding the watermark and the template disjointly, without intersection, degradation of invisibility and robustness is prevented. Experimental results show that the proposed watermarking is about 30~75% more robust than the conventional methods.
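
A minimal NumPy sketch of the invariance argument behind the abstract above: the DFT magnitude is unchanged by (circular) translation, and a log-polar resampling of that magnitude turns rotation and scaling into cyclic shifts that a template can absorb. The grid sizes and nearest-neighbor sampling are illustrative assumptions.

```python
import numpy as np

def logpolar_magnitude(img, n_r=64, n_theta=128):
    """Log-polar resampling of the centered DFT magnitude spectrum."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img.astype(float))))
    h, w = mag.shape
    cy, cx = h // 2, w // 2
    rho = np.exp(np.linspace(0, np.log(min(cy, cx)), n_r))     # log-spaced radii
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr = np.clip((cy + np.outer(rho, np.sin(theta))).astype(int), 0, h - 1)
    cc = np.clip((cx + np.outer(rho, np.cos(theta))).astype(int), 0, w - 1)
    return mag[rr, cc]                                          # shape (n_r, n_theta)

rng = np.random.default_rng(4)
img = rng.uniform(0, 1, (128, 128))
shifted = np.roll(img, (20, -15), axis=(0, 1))                  # circular translation
diff = np.abs(logpolar_magnitude(img) - logpolar_magnitude(shifted)).max()
print("max feature difference after translation:", diff)        # ~0 (invariant)
```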


CLASSIFIED EIGEN BLOCK: LOCAL FEATURE EXTRACTION AND IMAGE MATCHING ALGORITHM

  • Hochul Shin;Kim, Seong-Dae
    • Proceedings of the IEEK Conference / 2003.07e / pp.2108-2111 / 2003
  • This paper introduces a new local feature extraction method and image matching method for the localization and classification of targets. The proposed method is based on block-by-block projection associated with the directional pattern of each block. Each pattern has its own eigenvectors, called CEBs (Classified Eigen-Blocks). The proposed block-based image matching method is also robust to translation and occlusion. The performance of the proposed feature extraction and matching methods is verified by face localization and FLIR vehicle image classification tests.
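
A minimal NumPy sketch of one plausible reading of classified eigen-blocks: image blocks are grouped by a directional pattern (here the dominant gradient orientation, as an assumption), PCA is run inside each group, and each block is described by its projection onto its group's top eigenvectors. The block size, number of direction classes, and feature length are illustrative.

```python
import numpy as np

def blocks_and_directions(img, b=8, n_dirs=4):
    """Split the image into b x b blocks and label each by a coarse
    dominant-orientation class."""
    gy, gx = np.gradient(img.astype(float))
    ang = np.arctan2(gy, gx) % np.pi
    H, W = img.shape[0] // b, img.shape[1] // b
    blocks, labels = [], []
    for i in range(H):
        for j in range(W):
            s = (slice(i * b, (i + 1) * b), slice(j * b, (j + 1) * b))
            blocks.append(img[s].astype(float).ravel())
            labels.append(int(np.median(ang[s]) / np.pi * n_dirs) % n_dirs)
    return np.array(blocks), np.array(labels)

def classified_eigenblocks(blocks, labels, k=8):
    """Per-class eigenvectors (CEB-like) and per-block projection features."""
    feats = np.zeros((len(blocks), k))
    for c in np.unique(labels):
        X = blocks[labels == c]
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        k_c = min(k, Vt.shape[0])                      # guard very small classes
        feats[labels == c, :k_c] = Xc @ Vt[:k_c].T     # project onto top eigenvectors
    return feats

rng = np.random.default_rng(5)
img = rng.uniform(0, 255, (128, 128))
B, L = blocks_and_directions(img)
print("feature matrix:", classified_eigenblocks(B, L).shape)
```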


Robust Lane Detection Algorithm for Autonomous Trucks in Container Terminal

  • Ngo Quang Vinh;Sam-Sang You;Le Ngoc Bao Long;Hwan-Seong Kim
    • Proceedings of the Korean Institute of Navigation and Port Research Conference / 2023.05a / pp.252-253 / 2023
  • Container terminal automation might offer many potential benefits, such as increased productivity, reduced cost, and improved safety, and autonomous trucks can lead to more efficient container transport. A robust lane detection method is proposed using score-based generative modeling through stochastic differential equations for image-to-image translation. Image processing techniques are combined with Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and a Genetic Algorithm (GA) to ensure robust lane positioning. The proposed method is validated on a dataset collected from port terminals under different environmental conditions, and the robustness of the lane detection method is tested under stochastic noise.
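
A minimal scikit-learn sketch of the lane-positioning step: candidate lane pixels are clustered with DBSCAN and a line is fitted per cluster. The score-based image-to-image translation and the GA tuning stage from the abstract are omitted; the eps, min_samples, and toy data are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def fit_lanes(lane_points, eps=15.0, min_samples=10):
    """lane_points: (N, 2) array of (x, y) pixels flagged as lane markings."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(lane_points)
    lanes = []
    for c in set(labels) - {-1}:                       # label -1 is DBSCAN noise
        pts = lane_points[labels == c]
        slope, intercept = np.polyfit(pts[:, 1], pts[:, 0], 1)   # x as a function of y
        lanes.append((slope, intercept, len(pts)))
    return lanes

# Toy data: two noisy lane stripes plus scattered clutter.
rng = np.random.default_rng(6)
y = rng.uniform(0, 480, 400)
left = np.c_[200 + 0.1 * y + rng.normal(0, 2, 400), y]
right = np.c_[440 - 0.1 * y + rng.normal(0, 2, 400), y]
clutter = rng.uniform(0, 640, (50, 2))
for slope, intercept, n in fit_lanes(np.vstack([left, right, clutter])):
    print(f"lane: x = {slope:.2f}*y + {intercept:.1f}  ({n} points)")
```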


A Novel Iris recognition method robust to noises and translation (잡음과 위치이동에 강인한 새로운 홍채인식 기법)

  • Won, Jung-Woo;Kim, Jae-Min;Cho, Sung-Won;Choi, Kyung-Sam;Choi, Jin-Su
    • Proceedings of the KIEE Conference / 2003.11c / pp.392-395 / 2003
  • This paper describes a new iris segmentation and recognition method that is robust to noise. The iris is first segmented by combining statistical classification and elastic boundary fitting. The localized iris image is then smoothed by convolution with a Gaussian function, down-sampled, filtered with a Laplacian operator, and quantized using the Lloyd-Max method. Since the quantized output is sensitive to a small shift of the full-resolution iris image, the outputs of the Laplacian operator are computed for all spatial shifts, and the quantized output with maximum entropy is selected as the final feature representation. An appropriate similarity measure is defined for the classification of the quantized output. We experimentally show that the proposed method produces superb performance in iris segmentation and recognition.
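
A minimal SciPy sketch of the shift-selection idea in the abstract above: Laplacian-of-Gaussian features are computed for every sub-sampling phase of the full-resolution iris image, quantized, and the phase whose quantized output has maximum entropy is kept. A quantile-based quantizer stands in for Lloyd-Max, and the down-sampling factor and smoothing scale are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace

def quantize(x, levels=4):
    edges = np.quantile(x, np.linspace(0, 1, levels + 1)[1:-1])
    return np.digitize(x, edges)                      # integer code in [0, levels)

def entropy(code, levels=4):
    p = np.bincount(code.ravel(), minlength=levels) / code.size
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def best_shift_code(iris, factor=4, sigma=2.0, levels=4):
    """Try every sub-sampling phase and keep the max-entropy quantized output."""
    smooth = gaussian_filter(iris.astype(float), sigma)
    best = None
    for dy in range(factor):
        for dx in range(factor):
            feat = laplace(smooth[dy::factor, dx::factor])
            code = quantize(feat, levels)
            h = entropy(code, levels)
            if best is None or h > best[0]:
                best = (h, (dy, dx), code)
    return best                                        # (entropy, shift, code)

rng = np.random.default_rng(7)
iris = rng.uniform(0, 255, (64, 256))                  # toy "unwrapped" iris strip
h, shift, code = best_shift_code(iris)
print(f"selected shift {shift} with entropy {h:.3f}, code shape {code.shape}")
```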


Eyeglass Remover Network based on a Synthetic Image Dataset

  • Kang, Shinjin;Hahn, Teasung
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.4 / pp.1486-1501 / 2021
  • The removal of accessories from the face is one of the essential pre-processing stages in the field of face recognition. However, despite its importance, a robust solution has not yet been provided. This paper proposes a network and a dataset construction methodology to effectively remove only the glasses from facial images. To obtain an image with the glasses removed from an image with glasses by supervised learning, a network that converts between them and a set of paired training data are required. To this end, we created a large number of synthetic images of glasses being worn using facial attribute transformation networks. We adopted the conditional GAN (cGAN) framework for training. The trained network converts an in-the-wild face image with glasses into an image without glasses and operates stably even when the faces are of diverse races and ages and wear different styles of glasses.
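
A minimal PyTorch sketch of the conditional-GAN (pix2pix-style) objective used for paired glasses-to-no-glasses translation. The tiny networks, the L1 weight, and the random tensors are illustrative assumptions; the paper's actual architectures and synthetic paired dataset are not reproduced here.

```python
import torch
import torch.nn as nn

gen = nn.Sequential(                  # toy generator: face with glasses -> clean face
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())
disc = nn.Sequential(                 # toy patch discriminator on (input, output) pairs
    nn.Conv2d(6, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1))

bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)

x = torch.rand(4, 3, 64, 64)          # synthetic faces with glasses (paired input)
y = torch.rand(4, 3, 64, 64)          # the same faces without glasses (paired target)

# Discriminator step: real pairs vs. generated pairs.
fake = gen(x)
d_real = disc(torch.cat([x, y], dim=1))
d_fake = disc(torch.cat([x, fake.detach()], dim=1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: fool the discriminator while staying close to the paired target.
d_fake = disc(torch.cat([x, fake], dim=1))
loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100.0 * l1(fake, y)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
print(f"loss_d={loss_d.item():.3f}  loss_g={loss_g.item():.3f}")
```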