• Title/Summary/Keyword: camera image


Motion-Based Background Image Extraction for Traffic Environment Analysis (교통 환경 분석을 위한 움직임 기반 배경영상 추출)

  • Oh, Jeong-Su
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.8
    • /
    • pp.1919-1925
    • /
    • 2013
  • This paper proposes a background image extraction algorithm for traffic environment analysis in a school zone. The proposed algorithm solves two problems that occur frequently in traffic environments: level changes and stationary objects. For the former, it rapidly updates the background image toward the current frame using a fast Sigma-Delta algorithm; for the latter, it excludes stationary objects from the background image by detecting dynamic regions using the immediately previous frame and a background image averaged over a long time. Experimental results show that the proposed algorithm adapts quickly to level changes and reduces the SAD in the background region by about 40~80% compared with conventional algorithms.
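The Sigma-Delta update named in the abstract can be sketched as a per-pixel nudge of the background toward the current frame. This is a minimal version only; the paper's fast renewal rule and its dynamic-region exclusion are not specified here, so the fixed `step` parameter is an assumption.

```python
import numpy as np

def sigma_delta_update(background, frame, step=1):
    """One Sigma-Delta iteration: move each background pixel toward the
    current frame by at most `step` gray levels, so moving objects leave
    little trace while gradual level changes are tracked."""
    bg = background.astype(np.int32)
    fr = frame.astype(np.int32)
    direction = np.sign(fr - bg)          # -1, 0, or +1 per pixel
    return np.clip(bg + step * direction, 0, 255).astype(np.uint8)
```

Iterating this over a video stream makes the background converge to the stationary scene; a larger `step` would give the faster adaptation to level changes that the paper targets.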

Detection of Surface Cracks in Eggshell by Machine Vision and Artificial Neural Network (기계 시각과 인공 신경망을 이용한 파란의 판별)

  • 이수환;조한근;최완규
    • Journal of Biosystems Engineering
    • /
    • v.25 no.5
    • /
    • pp.409-414
    • /
    • 2000
  • A machine vision system was built to obtain a single stationary image of an egg. The system includes a CCD camera, an image processing board, and a lighting system. A computer program was written to acquire and enhance an image and compute its histogram. To minimize evaluation time, an artificial neural network fed with the image histogram was used for eggshell evaluation. Various artificial neural networks with different parameters were trained and tested. The best networks (64-50-1 and 128-10-1) showed an accuracy of 87.5% in evaluating eggshells. A comparison of the processing time per egg between this method (image processing and artificial neural network) and the previous method (image processing only) revealed that it was reduced to about a half (5.5 s from 10.6 s) for cracked eggs and to about one fifth (5.5 s from 21.1 s) for normal eggs. This indicates that a fast eggshell evaluation system can be developed using machine vision and an artificial neural network.
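As a rough illustration of the histogram input described above, a normalized 64-bin gray-level histogram (matching the input size of the 64-50-1 network) might be computed as follows; the exact binning and normalization used in the paper are assumptions.

```python
import numpy as np

def histogram_feature(gray_image, bins=64):
    """Flattened, normalized gray-level histogram used as the input
    vector to a small classifier network (e.g. 64-50-1)."""
    hist, _ = np.histogram(gray_image, bins=bins, range=(0, 256))
    return hist.astype(np.float64) / max(gray_image.size, 1)
```

Feeding a compact histogram instead of raw pixels is what makes the network small and the per-egg evaluation fast.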


A Development of Video Tracking System on Real Time Using MBR (MBR을 이용한 실시간 영상추적 시스템 개발)

  • Kim, Hee-Sook
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.7 no.6
    • /
    • pp.1243-1248
    • /
    • 2006
  • Object tracking in real-time images has been one of the most interesting subjects in computer vision and many practical application fields over the past couple of years. However, existing systems sometimes fail to find the object because they mistake background noise for the object. This paper proposes a method for object detection and tracking using an adaptive background image in real time. To detect objects unaffected by illumination and to remove noise in the background image, the system generates an adaptive background image through real-time background updating. The system detects the object using the difference between the background image and the input image from the camera. After setting up the MBR (minimum bounding rectangle) using the internal points of the detected object, the system tracks the object through this MBR. In addition, this paper evaluates the performance of the proposed method in comparison with an existing tracking algorithm.
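The background-difference and MBR steps might be sketched as below; the difference `threshold` is an illustrative value, not a parameter from the paper.

```python
import numpy as np

def detect_mbr(background, frame, threshold=30):
    """Difference the frame against the background and return the
    minimum bounding rectangle (x0, y0, x1, y1) of the changed pixels,
    or None when nothing moved."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    ys, xs = np.nonzero(diff > threshold)
    if ys.size == 0:
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

Tracking then amounts to recomputing (or locally searching for) this rectangle in each new frame while the adaptive background suppresses illumination changes.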


Modified Seam Finding Algorithm based on Saliency Map to Generate 360 VR Image (360 VR 영상 제작을 위한 Saliency Map 기반 Seam Finding 알고리즘)

  • Han, Hyeon-Deok;Han, Jong-Ki
    • Journal of Broadcast Engineering
    • /
    • v.24 no.6
    • /
    • pp.1096-1112
    • /
    • 2019
  • The cameras that generate 360 VR images are too expensive for general use. To overcome this problem, we propose using smartphones instead of a VR camera, where more than 100 pictures taken by a smartphone are stitched into a 360 VR image. In this scenario, when moving objects appear in some of the pictures, the stitched 360 VR image suffers from various degradations, such as ghost effects and misalignment. In this paper, we propose an algorithm that modifies seam finding algorithms, where a saliency map in the ROI is generated to check whether each pixel belongs to a visually salient object. Various simulation results show that the proposed algorithm is effective in increasing the quality of the generated 360 VR image.
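A much-simplified stand-in for a saliency-weighted seam cost: photometric difference in the overlap plus a penalty for cutting through salient pixels, with a greedy per-row minimum instead of the paper's full seam search. Both the `weight` parameter and the greedy search are assumptions for illustration.

```python
import numpy as np

def greedy_seam(overlap_diff, saliency, weight=10.0):
    """Per-pixel seam cost = photometric difference + saliency penalty;
    returns, for every row of the overlap, the cheapest column to cut."""
    cost = overlap_diff.astype(np.float64) + weight * saliency.astype(np.float64)
    return cost.argmin(axis=1)
```

The key idea carried over from the abstract is only that salient pixels are made expensive, so the seam is pushed away from visually important objects and ghosting there is avoided.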

Absolute Depth Estimation Based on a Sharpness-assessment Algorithm for a Camera with an Asymmetric Aperture

  • Kim, Beomjun;Heo, Daerak;Moon, Woonchan;Hahn, Joonku
    • Current Optics and Photonics
    • /
    • v.5 no.5
    • /
    • pp.514-523
    • /
    • 2021
  • Methods for absolute depth estimation have received considerable interest, and most algorithms focus on minimizing the difference between an input defocused image and an estimated defocused image. These approaches can increase algorithmic complexity, since the defocused image must be calculated from an estimate of the focused image. In this paper, we present a new method to recover scene depth based on a sharpness-assessment algorithm. The proposed algorithm estimates scene depth by calculating the sharpness of images deconvolved with a specific point-spread function (PSF). While most depth estimation studies evaluate depth only behind the focal plane, the proposed method evaluates a broad depth range both nearer and farther than the focal plane. This is accomplished using an asymmetric aperture, so the PSF at a position nearer than the focal plane differs from that at a position farther than the focal plane. From an image taken with a focal plane at 160 cm, object depth over the broad range from 60 to 350 cm is estimated at 10 cm resolution. With an asymmetric aperture, we demonstrate the feasibility of the sharpness-assessment algorithm for recovering the absolute depth of a scene from a single defocused image.
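The abstract does not state which sharpness measure is used; a common choice, the variance of the Laplacian, is enough to illustrate the select-the-sharpest-deconvolution idea.

```python
import numpy as np

def laplacian_sharpness(gray):
    """Variance of a 4-neighbour Laplacian: a common sharpness score.
    Higher means the (deconvolved) image is more in focus."""
    g = gray.astype(np.float64)
    lap = (-4 * g[1:-1, 1:-1] + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())

def estimate_depth(deconvolved_stack, depths):
    """Pick the candidate depth whose deconvolved image is sharpest."""
    scores = [laplacian_sharpness(img) for img in deconvolved_stack]
    return depths[int(np.argmax(scores))]
```

Deconvolving the input once per candidate PSF (one per 10 cm depth step) and keeping the sharpest result is what lets a single defocused image yield an absolute depth.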

Satellite Image Resolution Enhancement Technique using Diagonal Information of Image (영상의 대각선 정보를 이용한 위성영상 해상도 향상 기법)

  • Choi, SeokWeon;Jeong, JaeHeon;Seo, DooChun;Lee, DongHan
    • Journal of Space Technology and Applications
    • /
    • v.1 no.1
    • /
    • pp.41-48
    • /
    • 2021
  • In this paper, we discuss a technique that can increase resolution by a factor of 1.4 without distortion or performance degradation of the original image, using the diagonal information of the image. The method uses the image information of four adjacent points, without actually rotating the image by 45 degrees, and enlarges and rearranges it according to the characteristics of the camera, so that the same physical effect as an actual 45-degree rotation is obtained. This is a concrete realization method that improves resolution by a factor of 1.4 without performance degradation, together with a demonstration of the result.
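One plausible reading of the four-adjacent-point scheme is a quincunx-style enlargement, sketched below: original pixels land on the even lattice of a larger grid and each diagonal midpoint is the mean of its four neighbours, giving the sqrt(2) ≈ 1.4 density gain. The paper's exact rearrangement is not specified, so this is an assumption; a full implementation would also fill the remaining lattice holes with a second interpolation pass.

```python
import numpy as np

def diagonal_upsample(img):
    """Place source pixels on the even lattice of a (2H-1)x(2W-1) grid
    and fill each diagonal midpoint with the mean of the four adjacent
    source pixels (the 45-degree 'rotated' samples)."""
    g = img.astype(np.float64)
    h, w = g.shape
    out = np.zeros((2 * h - 1, 2 * w - 1))
    out[::2, ::2] = g
    out[1::2, 1::2] = (g[:-1, :-1] + g[:-1, 1:] + g[1:, :-1] + g[1:, 1:]) / 4.0
    return out
```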

An Explainable Deep Learning-Based Classification Method for Facial Image Quality Assessment

  • Kuldeep Gurjar;Surjeet Kumar;Arnav Bhavsar;Kotiba Hamad;Yang-Sae Moon;Dae Ho Yoon
    • Journal of Information Processing Systems
    • /
    • v.20 no.4
    • /
    • pp.558-573
    • /
    • 2024
  • Considering factors such as illumination, camera quality variations, and background-specific variations, identifying a face using a smartphone-based facial image capture application is challenging. Face image quality assessment refers to the process of taking a face image as input and producing some form of "quality" estimate as output. Typically, quality assessment techniques use deep learning methods to categorize images, but deep learning models are often treated as black boxes, which raises the question of their trustworthiness. Several explainability techniques have gained importance in building this trust. Explainability techniques provide visual evidence of the active regions within an image on which the deep learning model bases its prediction. Here, we developed a technique for reliable prediction of facial images before medical analysis and security operations. A combination of gradient-weighted class activation mapping and local interpretable model-agnostic explanations was used to explain the model. This approach has been applied to the preselection of facial images for skin feature extraction, which is important in critical medical science applications. We demonstrate that the use of combined explanations provides better visual explanations for the model, where both the saliency-map and perturbation-based explainability techniques verify predictions.

Implementation of a Self Controlled Mobile Robot with Intelligence to Recognize Obstacles (장애물 인식 지능을 갖춘 자율 이동로봇의 구현)

  • 류한성;최중경
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.40 no.5
    • /
    • pp.312-321
    • /
    • 2003
  • In this paper, we implement a robot with the ability to recognize obstacles and move automatically to a destination. We present two results: the hardware implementation of an image processing board and the software implementation of a visual feedback algorithm for a self-controlled robot. In the first part, the mobile robot depends on commands from a control board that performs the image processing. We have studied this self-controlled mobile robot system, equipped with a CCD camera, for a long time. The robot system consists of an image processing board implemented with DSPs, a stepping motor, and a CCD camera. We propose an algorithm in which commands are delivered for the robot to move along the planned path. The distance the robot is supposed to move is calculated on the basis of the absolute coordinates and the coordinates of the target spot, and the image signal acquired by the CCD camera mounted on the robot is captured at every sampling time so that the robot can automatically avoid obstacles and finally reach the destination. The image processing board consists of a DSP (TMS320VC33), ADV611, SAA7111, ADV7176A, CPLD (EPM7256ATC144), and SRAM memories. In the second part, the visual feedback control has two types of vision algorithms: obstacle avoidance and path planning. The first algorithm operates on cells, parts of the image divided by blob analysis. Image preprocessing is performed to improve the input image, consisting of filtering, edge detection, NOR conversion, and thresholding; the main image processing includes labeling, segmentation, and pixel density calculation. In the second algorithm, after an image frame goes through preprocessing (edge detection, conversion, thresholding), the histogram is measured vertically (in the y-axis direction). The binary histogram of the image then shows waveforms with only black and white variations. Here we use the fact that, since obstacles appear as sectional diagrams as if they were walls, there is no variation in the histogram. The intensities of the line histogram are measured vertically at intervals of 20 pixels, so we can find uniform and nonuniform regions of the waveforms and define a run of uniform waveforms as an obstacle region. The results show that the algorithm is very useful for the robot to move while avoiding obstacles.
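The 20-pixel line-histogram test for wall-like obstacles could be sketched as follows; the uniformity tolerance `tol` is an assumed parameter, and the edge image is taken as an already-thresholded binary array.

```python
import numpy as np

def obstacle_regions(binary_edges, interval=20, tol=2):
    """Sample vertical line histograms (edge-pixel counts per column)
    every `interval` columns and flag runs of near-uniform counts,
    which the method treats as wall-like obstacle regions.
    Returns a list of (start_column, end_column) pairs."""
    cols = np.arange(0, binary_edges.shape[1], interval)
    counts = binary_edges[:, cols].sum(axis=0)
    uniform = np.abs(np.diff(counts)) <= tol   # neighbouring samples alike?
    regions, start = [], None
    for i, u in enumerate(uniform):
        if u and start is None:
            start = i
        elif not u and start is not None:
            regions.append((int(cols[start]), int(cols[i])))
            start = None
    if start is not None:
        regions.append((int(cols[start]), int(cols[-1])))
    return regions
```

A wall seen head-on produces nearly the same edge count at every sampled column, so it shows up as one long uniform run; free space and clutter vary and break the run.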

A Novel Segment Extraction and Stereo Matching Technique using Color, Motion and Initial Depth from Depth Camera (컬러, 움직임 정보 및 깊이 카메라 초기 깊이를 이용한 분할 영역 추출 및 스테레오 정합 기법)

  • Um, Gi-Mun;Park, Ji-Min;Bang, Gun;Cheong, Won-Sik;Hur, Nam-Ho;Kim, Jin-Woong
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.12C
    • /
    • pp.1147-1153
    • /
    • 2009
  • We propose a novel image segmentation and segment-based stereo matching technique that uses color, depth, and motion information. The proposed technique first splits the reference images into foreground and background regions using depth information from a depth camera. Each region is then segmented into small segments using color information. Moreover, the segments extracted in the current frame are tracked in the next frame in order to maintain depth consistency between frames. The initial depth from the depth camera is also used to set the depth search range for stereo matching. The proposed segment-based stereo matching technique was compared with a conventional technique without foreground/background separation and another without motion tracking of segments. Simulation results showed improved segment extraction and depth estimation consistency compared with the conventional techniques, especially in static background regions.
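Using the depth-camera initialization to restrict the stereo search might look like the minimal sketch below; the window `margin` and the winner-takes-all matching over a precomputed cost vector are assumptions standing in for the paper's segment-based cost.

```python
import numpy as np

def search_range(initial_disparity, margin=3, d_min=0, d_max=64):
    """Clamp the disparity search window around the depth-camera
    initialisation instead of scanning the full range."""
    lo = max(d_min, int(initial_disparity) - margin)
    hi = min(d_max, int(initial_disparity) + margin)
    return lo, hi

def match_disparity(cost_per_disparity, initial_disparity, margin=3):
    """Winner-takes-all over the restricted window only."""
    lo, hi = search_range(initial_disparity, margin,
                          0, len(cost_per_disparity) - 1)
    window = cost_per_disparity[lo:hi + 1]
    return lo + int(np.argmin(window))
```

Restricting the window both speeds up matching and keeps the estimate near the sensor's initial depth, which is one way the initialization stabilizes results in static background regions.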

High-quality Texture Extraction for Point Clouds Reconstructed from RGB-D Images (RGB-D 영상으로 복원한 점 집합을 위한 고화질 텍스쳐 추출)

  • Seo, Woong;Park, Sang Uk;Ihm, Insung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.24 no.3
    • /
    • pp.61-71
    • /
    • 2018
  • When triangular meshes are generated from point clouds in global space, reconstructed through camera pose estimation against captured RGB-D streams, the quality of the resulting meshes improves as more triangles are used. However, 3D reconstructed models beyond a certain size suffer from unpleasant artifacts due to the insufficient precision of RGB-D sensors, as well as from significant burdens in memory requirements and rendering cost. In this paper, for the generation of 3D models appropriate for real-time applications, we propose an effective technique that extracts high-quality textures for moderate-sized meshes from the captured colors associated with the reconstructed point sets. In particular, we show that, via a simple method based on the mapping between the 3D global space resulting from camera pose estimation and the 2D texture space, textures can be generated effectively for 3D models reconstructed from captured RGB-D image streams.
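The 3D-to-2D mapping underlying such texture extraction is essentially a pinhole projection of each reconstructed point through the estimated camera pose; a generic sketch (not the paper's specific pipeline) is:

```python
import numpy as np

def project_point(p_world, R, t, fx, fy, cx, cy):
    """Project a reconstructed 3-D point into a captured RGB frame using
    the estimated pose (rotation R, translation t) and the pinhole
    intrinsics (fx, fy, cx, cy); returns image coordinates (u, v)."""
    p_cam = R @ p_world + t          # world -> camera coordinates
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v
```

Sampling the captured color at (u, v) for each texel of the moderate-sized mesh is what transfers the point-set colors into a 2D texture.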