• Title/Abstract/Keyword: single-image detection

Search results: 358 items

Block and Fuzzy Techniques Based Forensic Tool for Detection and Classification of Image Forgery

  • Hashmi, Mohammad Farukh;Keskar, Avinash G.
    • Journal of Electrical Engineering and Technology / Vol. 10 No. 4 / pp.1886-1898 / 2015
  • In today's era of advanced technological development, threats to the authenticity and integrity of digital images, and hence to the image forensics research community, have increased proportionately, because widely available image processing tools have made forgery easy even for non-expert forgers. Such image forgery poses a serious problem for judicial authorities in any context of trade and commerce. Block-matching-based image cloning detection has been widely researched over the last two to three decades, but it has been held back by high computational complexity and long running times at the algorithm level, so various dimension reduction techniques have been employed to reduce the time required. Since no single technique can cope with all transformations, such as noise addition, blurring, and intensity variation, we apply multiple techniques to a single image and use a fuzzy logic approach for decision making to obtain a global response from all of them, since their individual outputs depend on various parameters. This paper therefore proposes a fuzzy-based cloning detection and classification system. Experimental results across various transformations of the digital image are encouraging: the detection system achieves a classification accuracy of 94.12%, and for an 81×81 copied region the maximum detection accuracy rate (DAR) reaches 99.17% under transformations such as blurring, intensity variation, and Gaussian noise addition.
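For orientation, the sketch below illustrates the generic block-matching idea that underlies copy-move (cloning) detectors of this kind: overlapping blocks are reduced to small feature vectors, sorted lexicographically, and near-identical blocks with a consistent spatial offset are flagged. It is a minimal illustration, not the paper's fuzzy fusion system; the block size, feature choice, and thresholds are assumptions made for the example.

```python
# Minimal block-matching sketch for copy-move detection (illustrative only).
import numpy as np

def copy_move_candidates(gray, block=8, sim_thresh=2.0, min_offset=16):
    """Return pairs of block coordinates that look like cloned regions."""
    h, w = gray.shape
    half = block // 2
    feats, coords = [], []
    for y in range(0, h - block + 1, 2):               # stride 2 keeps the sketch cheap
        for x in range(0, w - block + 1, 2):
            b = gray[y:y + block, x:x + block].astype(np.float32)
            # crude dimension reduction: mean intensity of the four quadrants
            feats.append([b[:half, :half].mean(), b[:half, half:].mean(),
                          b[half:, :half].mean(), b[half:, half:].mean()])
            coords.append((y, x))
    feats = np.asarray(feats)
    coords = np.asarray(coords)
    order = np.lexsort(feats.T[::-1])                  # lexicographic sort of feature vectors
    feats, coords = feats[order], coords[order]
    pairs = []
    for i in range(len(feats) - 1):                    # similar blocks end up adjacent after sorting
        if np.linalg.norm(feats[i] - feats[i + 1]) < sim_thresh:
            dy, dx = np.abs(coords[i] - coords[i + 1])
            if dy + dx > min_offset:                   # ignore trivially neighbouring blocks
                pairs.append((tuple(coords[i]), tuple(coords[i + 1])))
    return pairs
```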

자율주행 차량 영상 기반 객체 인식 인공지능 기술 현황 (Overview of Image-based Object Recognition AI technology for Autonomous Vehicles)

  • 임헌국
    • 한국정보통신학회논문지 / Vol. 25 No. 8 / pp.1117-1123 / 2021
  • Object recognition means analyzing a given input image to determine the location and class of specific objects in it. One of the fields in which object recognition technology is being most actively applied is autonomous vehicles, and this paper describes image-based object recognition AI technology for autonomous vehicles. Image-based object detection algorithms have recently converged on two approaches, single-stage detection and two-stage detection, and this paper analyzes and summarizes the field around that distinction. The advantages and disadvantages of the two approaches are presented, and the YOLO/SSD algorithms, which belong to the single-stage approach, and the R-CNN/Faster R-CNN algorithms, which belong to the two-stage approach, are analyzed and described. We hope this will allow an algorithm suited to each object recognition application needed for autonomous driving to be selected and developed.
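As a concrete point of reference for the single-stage versus two-stage distinction, the sketch below runs one pretrained detector from each family using torchvision (assumed to be a reasonably recent version); the image file name is a placeholder, and this is an illustrative sketch rather than code from the paper.

```python
# Compare a single-stage (SSD) and a two-stage (Faster R-CNN) detector on one image.
import torch
from torchvision.io import read_image
from torchvision.models import detection
from torchvision.transforms.functional import convert_image_dtype

img = convert_image_dtype(read_image("road_scene.jpg"), torch.float)     # hypothetical input file

one_stage = detection.ssd300_vgg16(weights="DEFAULT").eval()              # single pass, dense predictions
two_stage = detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()   # region proposals, then classification

with torch.no_grad():
    for name, model in [("SSD", one_stage), ("Faster R-CNN", two_stage)]:
        out = model([img])[0]                          # dict with 'boxes', 'labels', 'scores'
        keep = out["scores"] > 0.5
        print(name, int(keep.sum()), "objects above 0.5 confidence")
```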

임베디드 보드에서 영상 처리 및 딥러닝 기법을 혼용한 돼지 탐지 정확도 개선 (Accuracy Improvement of Pig Detection using Image Processing and Deep Learning Techniques on an Embedded Board)

  • 유승현;손승욱;안한세;이세준;백화평;정용화;박대희
    • 한국멀티미디어학회논문지 / Vol. 25 No. 4 / pp.583-599 / 2022
  • Although object detection accuracy in a single image has improved significantly with advances in deep learning, detection accuracy for pig monitoring is still challenged by occlusion caused by the complex structures of a pig room, such as feeding facilities. These single-image detection difficulties can be mitigated by using video data. In this study, we propose a pig detection method for a video monitoring environment with a static camera. By using both image processing and deep learning techniques, we recognize the complex structures of the pig room, and this information is used to improve the accuracy of pig detection in the monitored room. Furthermore, we reduce the execution time overhead by applying a pruning technique for real-time video monitoring on an embedded board. Based on experiments with a video data set obtained from a commercial pig farm, we confirmed that pigs can be detected more accurately in real time, even on an embedded board.
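As a loose illustration only (not the authors' pipeline), the sketch below shows one way scene-structure information from a static camera could be combined with a detector: a temporal-median background is estimated from sampled frames, and detections whose boxes barely differ from that background (for example, boxes lying on fixed facilities) can be suppressed. All function names, file names, and thresholds are assumptions.

```python
# Temporal-median background from a fixed camera, used to score detections (illustrative only).
import cv2
import numpy as np

def static_background(video_path, samples=25):
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    frames = []
    for idx in np.linspace(0, total - 1, samples, dtype=int):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
    cap.release()
    return np.median(np.stack(frames), axis=0).astype(np.uint8)

def foreground_ratio(gray_frame, background, box, diff_thresh=25):
    """Share of pixels inside a detection box that differ from the static background."""
    x1, y1, x2, y2 = box
    diff = cv2.absdiff(gray_frame[y1:y2, x1:x2], background[y1:y2, x1:x2])
    return float((diff > diff_thresh).mean())

# Usage idea: keep a detection only if foreground_ratio(...) exceeds, say, 0.2,
# so boxes lying entirely on fixed structures are suppressed.
```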

Multi-spectral Vehicle Detection based on Convolutional Neural Network

  • Choi, Sungil;Kim, Seungryong;Park, Kihong;Sohn, Kwanghoon
    • 한국멀티미디어학회논문지 / Vol. 19 No. 12 / pp.1909-1918 / 2016
  • This paper presents a unified framework for joint Convolutional Neural Network (CNN) based vehicle detection that leverages multi-spectral image pairs. Observing that under challenging conditions such as night vision and limited light sources, vehicle detection from a single color image becomes more tractable when an additional far-infrared (FIR) image is used, we design a joint CNN architecture for RGB and FIR image pairs. The score map obtained by applying the joint CNN to the whole image is treated as the confidence of vehicle existence. To deal with the various scales of vehicle candidates, multi-scale images are first generated by scaling the input image according to the possible scale ratios of vehicles. Vehicle candidates are then detected at local maxima of each score map, and overlapping candidates are removed by non-maximum suppression across the multi-scale score maps. The experimental results show that our framework outperforms conventional methods, with the joint use of multi-spectral image pairs reducing the false positives produced by conventional vehicle detection frameworks that use only a single color image.
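The candidate-generation and suppression steps described above can be sketched as follows; the joint RGB/FIR CNN that produces the per-scale score maps is assumed and not shown, and the box size and thresholds are illustrative.

```python
# Local-maxima candidates on multi-scale score maps, followed by non-maximum suppression.
import numpy as np
from scipy.ndimage import maximum_filter

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def candidates_from_score_map(score, scale, box_size=64, thresh=0.5):
    """Boxes (in original-image coordinates) at local maxima of one scale's score map."""
    peaks = (score == maximum_filter(score, size=5)) & (score > thresh)
    boxes = []
    for y, x in zip(*np.nonzero(peaks)):
        cx, cy, half = x / scale, y / scale, box_size / (2 * scale)
        boxes.append([cx - half, cy - half, cx + half, cy + half, float(score[y, x])])
    return boxes

def nms(boxes, iou_thresh=0.4):
    """Greedy non-maximum suppression over candidates pooled from all scales."""
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = []
    for b in boxes:
        if all(iou(b, k) < iou_thresh for k in kept):
            kept.append(b)
    return kept
```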

컨볼루션 멀티블럭 HOG를 이용한 퍼지신경망 보행자 검출 방법 (A Neuro-Fuzzy Pedestrian Detection Method Using Convolutional Multiblock HOG)

  • 명근우;곡락도;임준식
    • 전기학회논문지 / Vol. 66 No. 7 / pp.1117-1122 / 2017
  • Pedestrian detection is a very important and valuable part of artificial intelligence and computer vision, with applications in areas such as automated driving and video analysis. Much work has been done on pedestrian detection, and accuracy on images containing multiple pedestrians has reached a high level that is not easily improved further. This paper proposes a new structure based on HOG features and convolutional filters for pedestrian detection in single-pedestrian images; higher accuracy on single-pedestrian detection can in turn be used to increase overall detection accuracy. We use multiblock HOG and pixel magnitude as features, apply convolutional filters for feature extraction, and then use NEWFM as the classifier for training and testing. Single-pedestrian images from the INRIA data set are used as the data set. The results show that the proposed convolutional multiblock HOG achieves better performance, with a miss rate of 0.015 at a false-positive rate of 10⁻⁴, than other detection methods such as HOGLBP (0.03 miss rate) and ChnFtrs (0.075 miss rate).
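For context, the sketch below shows only the plain HOG feature-extraction side with a stand-in linear classifier, under the standard 64×128 pedestrian-window assumption; the convolutional multiblock variant and the NEWFM neuro-fuzzy classifier used in the paper are not reproduced.

```python
# Plain HOG features for single-pedestrian windows with a stand-in linear classifier.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def window_features(window_64x128):
    # 9-bin HOG over 8x8 cells, normalised over 2x2 cell blocks (Dalal-Triggs style)
    return hog(window_64x128, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm="L2-Hys")

def train(X_windows, y):
    """X_windows: iterable of 64x128 grayscale crops; y: 1 = pedestrian, 0 = background (hypothetical data)."""
    X = np.stack([window_features(w) for w in X_windows])
    return LinearSVC().fit(X, y)
```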

Anomaly detection of isolating switch based on single shot multibox detector and improved frame differencing

  • Duan, Yuanfeng;Zhu, Qi;Zhang, Hongmei;Wei, Wei;Yun, Chung Bang
    • Smart Structures and Systems / Vol. 28 No. 6 / pp.811-825 / 2021
  • High-voltage isolating switches play a paramount role in ensuring the safety of power supply systems. However, their exposure to outdoor environmental conditions may cause serious physical defects, which pose great risks to power supply systems and society. Image processing-based methods have been used for anomaly detection, but their accuracy is affected by numerous uncertainties arising from manually extracted features, so anomaly detection of isolating switches remains challenging. In this paper, a vision-based anomaly detection method for isolating switches is proposed that uses the rotational angle of the switch system for more accurate and direct anomaly detection, with the help of deep learning (DL) and image processing methods: the Single Shot Multibox Detector (SSD), an improved frame differencing method, and the Hough transform. The SSD is a deep learning method for object classification and localization; the improved frame differencing method is introduced for better feature extraction; and the Hough transform is adopted for calculating the rotational angle. A number of experiments on anomaly detection of single and multiple switches are conducted using video frames. The results demonstrate that the SSD outperforms the You-Only-Look-Once network. The effectiveness and robustness of the proposed method are verified under various conditions, such as different illumination levels and camera locations, using 96 videos from the experiments.
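The angle-measurement idea (frame differencing followed by a Hough line fit) can be sketched as below; the SSD stage that localizes the switch region is assumed and not shown, and the thresholds are illustrative.

```python
# Estimate the moving arm's orientation from two consecutive (cropped) grayscale frames.
import cv2
import numpy as np

def arm_angle(prev_gray, curr_gray, diff_thresh=30):
    diff = cv2.absdiff(curr_gray, prev_gray)                       # frame differencing
    _, motion = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(motion, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=5)
    if lines is None:
        return None                                                # no motion detected
    # take the longest detected line segment as the arm
    x1, y1, x2, y2 = max(lines[:, 0, :], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    return float(np.degrees(np.arctan2(y2 - y1, x2 - x1)))         # orientation in degrees
```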

Remote Distance Measurement from a Single Image by Automatic Detection and Perspective Correction

  • Layek, Md Abu;Chung, TaeChoong;Huh, Eui-Nam
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13 No. 8 / pp.3981-4004 / 2019
  • This paper proposes a novel method for locating objects in real space from a single remote image and measuring the actual distances between them through automatic detection and perspective transformation; the dimensions of the real space are known in advance. First, the corner points of the region of interest are detected from the image using deep learning. Then, based on the corner points, the region of interest (ROI) is extracted and made proportional to the real space by applying a warp-perspective transformation. Finally, the objects are detected and mapped to their real-world locations. Removing distortion from the image using camera calibration improves the accuracy in most cases. The deep learning framework Darknet is used for detection, with the necessary modifications made to integrate perspective transformation, camera calibration, undistortion, etc. Experiments are performed with two types of cameras, one with barrel distortion and the other with pincushion distortion. The results show that the differences between the calculated distances and those measured in real space with measuring tapes are very small, approximately 1 cm on average. Furthermore, automatic corner detection allows the system to be used with any camera that has a fixed pose or is in motion; using more points significantly enhances the accuracy of real-world mapping even without camera calibration. Perspective transformation also increases object detection efficiency by normalizing the sizes of all objects.
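The measurement step lends itself to a short sketch: given the four detected corner points of a region whose real dimensions are known, image points are mapped into real-space coordinates with a perspective transform and distances are measured there. The corner detector and the Darknet object detector are assumed and not shown; the coordinates and dimensions below are made-up examples.

```python
# Map image points to real-space coordinates via a perspective transform and measure distances.
import cv2
import numpy as np

corners_img = np.float32([[105, 80], [520, 95], [540, 400], [90, 385]])   # detected corners (pixels)
width_m, height_m = 5.0, 3.0                                              # known real-space size (example)
corners_real = np.float32([[0, 0], [width_m, 0], [width_m, height_m], [0, height_m]])

H = cv2.getPerspectiveTransform(corners_img, corners_real)                # image -> real-space homography

def to_real(pt_img):
    """Map one (x, y) image point into real-space coordinates (metres)."""
    return cv2.perspectiveTransform(np.float32([[pt_img]]), H)[0, 0]

def distance_m(p_img, q_img):
    return float(np.linalg.norm(to_real(p_img) - to_real(q_img)))

# e.g., centres of two detected objects in the image:
print(distance_m((220, 210), (430, 300)))
```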

단일 자연 영상에서 그림자 검출을 위한 그림자 특징 요소들의 정의와 분석 (Definition and Analysis of Shadow Features for Shadow Detection in Single Natural Image)

  • 박기홍;이양선
    • 디지털콘텐츠학회 논문지 / Vol. 19 No. 1 / pp.165-171 / 2018
  • Shadows are a physical phenomenon observed in natural images and negatively affect various image processing systems, such as intelligent video surveillance, traffic monitoring, and aerial image analysis. Shadow detection should therefore be considered as a preprocessing step across computer vision. In this paper, we define and analyze various features for shadow detection in a single natural image, without the need for a reference image. The shadow features described include image brightness, chromaticity, illumination invariance, color invariance, and an entropy image representing the uncertainty of information; the analysis shows that the chromaticity and illumination-invariant images are effective for shadow detection and restoration. In future work, we plan to define a fusion map of the various shadow features and to continue research on shadow detection that adapts to various illumination levels and on shadow removal using chromaticity and illumination-invariant images.
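Two of the cues discussed above, chromaticity and local entropy, can be computed from a single RGB image roughly as sketched below; the fusion of the cues into an actual shadow mask is not shown, and the neighborhood size is an assumption.

```python
# Per-pixel shadow cues from a single RGB image: brightness, chromaticity, and local entropy.
import numpy as np
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.color import rgb2gray
from skimage import img_as_ubyte

def shadow_cues(rgb):                       # rgb: float image in [0, 1], shape H x W x 3
    s = rgb.sum(axis=2, keepdims=True) + 1e-6
    chroma = rgb / s                        # normalised chromaticity, roughly illumination-invariant
    brightness = rgb.mean(axis=2)           # dark pixels are shadow candidates
    ent = entropy(img_as_ubyte(rgb2gray(rgb)), disk(5))   # local information content
    return brightness, chroma, ent
```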

얼굴 영상에서 유전자 알고리즘 기반 형판정합을 이용한 눈동자 검출 (Detection of Pupil using Template Matching Based on Genetic Algorithm in Facial Images)

  • 이찬희;장경식
    • 한국정보통신학회논문지 / Vol. 13 No. 7 / pp.1429-1436 / 2009
  • This paper proposes a method for quickly detecting pupils in a single face image under various illumination conditions using a genetic algorithm and template matching. Existing pupil detection methods based on genetic algorithms are sensitive to the positions of the initial population, which results in low eye detection rates and inconsistent results. To solve these problems, local minima are extracted from the face image, and the initial population is formed from the individuals with the highest fitness to the template. Each individual consists of the geometric transformation parameters of the template, and the pupils are detected by template matching. Experiments confirmed that the proposed eye candidate detection achieves accurate eye localization and a high detection rate even on a single image.
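The candidate-seeding and matching steps can be sketched roughly as below: dark local minima in the face image serve as candidate pupil positions, and each candidate is scored by normalized cross-correlation against an eye template. The genetic search over geometric transformations of the template, which the paper adds on top of this, is not reproduced; the template and kernel sizes are assumptions.

```python
# Seed pupil candidates from dark local minima and score them against an eye template.
import cv2
import numpy as np

def pupil_candidates(face_gray, eye_template, top_k=10):
    th, tw = eye_template.shape
    blurred = cv2.GaussianBlur(face_gray, (9, 9), 0)
    # grayscale erosion gives the local minimum in each neighbourhood
    minima = (blurred == cv2.erode(blurred, np.ones((15, 15), np.uint8)))
    scores = []
    for y, x in zip(*np.nonzero(minima)):
        y0, x0 = y - th // 2, x - tw // 2
        if y0 < 0 or x0 < 0:
            continue                                   # candidate too close to the border
        patch = face_gray[y0:y0 + th, x0:x0 + tw]
        if patch.shape != eye_template.shape:
            continue
        res = cv2.matchTemplate(patch, eye_template, cv2.TM_CCOEFF_NORMED)
        scores.append((float(res[0, 0]), (x, y)))      # fitness = correlation with the template
    return [p for _, p in sorted(scores, reverse=True)[:top_k]]
```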

Change Detection in Bitemporal Remote Sensing Images by using Feature Fusion and Fuzzy C-Means

  • Wang, Xin;Huang, Jing;Chu, Yanli;Shi, Aiye;Xu, Lizhong
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12 No. 4 / pp.1714-1729 / 2018
  • Change detection of remote sensing images is a profound challenge in the field of remote sensing image analysis. This paper proposes a novel change detection method for bitemporal remote sensing images based on feature fusion and fuzzy c-means (FCM). Different from the state-of-the-art methods that mainly utilize a single image feature for difference image construction, the proposed method investigates the fusion of multiple image features for the task. The subsequent problem is regarded as the difference image classification problem, where a modified fuzzy c-means approach is proposed to analyze the difference image. The proposed method has been validated on real bitemporal remote sensing data sets. Experimental results confirmed the effectiveness of the proposed method.
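The classification stage can be illustrated with a plain fuzzy c-means clustering of difference-image pixels into "changed" and "unchanged", as sketched below; the multi-feature fusion that builds the difference image and the paper's modification of FCM are not reproduced, and the inputs are assumed to be two co-registered grayscale images of equal size.

```python
# Plain fuzzy c-means on a difference image as a stand-in for the classification stage.
import numpy as np

def fcm(x, c=2, m=2.0, iters=100, eps=1e-5, seed=0):
    """Fuzzy c-means on a 1-D array x; returns cluster centres and memberships."""
    rng = np.random.default_rng(seed)
    u = rng.random((c, x.size))
    u /= u.sum(axis=0)                                    # memberships per pixel sum to 1
    for _ in range(iters):
        um = u ** m
        centres = (um @ x) / um.sum(axis=1)               # weighted cluster centres
        d = np.abs(x[None, :] - centres[:, None]) + 1e-9  # distances to centres
        new_u = 1.0 / (d ** (2 / (m - 1)))
        new_u /= new_u.sum(axis=0)                        # standard FCM membership update
        if np.abs(new_u - u).max() < eps:
            u = new_u
            break
        u = new_u
    return centres, u

def change_map(img_t1, img_t2):
    diff = np.abs(img_t1.astype(np.float32) - img_t2.astype(np.float32)).ravel()
    centres, u = fcm(diff)
    changed = int(np.argmax(centres))                     # cluster with the larger mean difference
    return (np.argmax(u, axis=0) == changed).reshape(img_t1.shape)
```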