• Title/Summary/Keyword: Image Edge


Three-Dimensional Reconstruction using the Dense Correspondences from Sequence Images (연속된 영상으로부터 조밀한 대응점을 이용한 3차원 재구성)

  • Seo Yung-Ho;Kim Sang-Hoon;Choi Jong-Soo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.30 no.8C
    • /
    • pp.775-782
    • /
    • 2005
  • In 3D reconstruction from dense data in uncalibrated image sequences, we encounter the problem of searching for many correspondences and the associated computational cost. In this paper, we propose a key-frame selection method for uncalibrated images and an efficient 3D reconstruction method using the key frames; that is, reconstruction can be performed on a smaller number of views in the image sequence. We extract correspondences from the selected key frames, and camera calibration is performed from the extracted correspondences. We use the edge image to find dense correspondences between the selected key frames. The proposed method for finding dense correspondences can be used to recover the 3D structure of the scene more efficiently.
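The abstract's use of edge images to seed dense matching can be illustrated with a minimal gradient-based edge detector: pixels with strong Sobel response become the candidate points for correspondence search. This is a generic sketch, not the authors' implementation; the function name and threshold are our own.

```python
# Hypothetical sketch: mark edge pixels as candidate points for dense matching.
def sobel_edges(img, thresh=4):
    """Return the set of (row, col) positions whose Sobel gradient
    magnitude (L1 approximation |gx| + |gy|) exceeds thresh."""
    h, w = len(img), len(img[0])
    edges = set()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            if abs(gx) + abs(gy) > thresh:
                edges.add((y, x))
    return edges

# A vertical step edge: only the columns next to the step respond.
img = [[0, 0, 0, 9, 9, 9],
       [0, 0, 0, 9, 9, 9],
       [0, 0, 0, 9, 9, 9],
       [0, 0, 0, 9, 9, 9]]
```

On this test image only the two interior columns adjacent to the intensity step are flagged; flat regions produce no candidates, which is what keeps the subsequent matching stage tractable.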

An Efficient Video Dehazing Algorithm Based on Spectral Clustering

  • Zhao, Fan;Yao, Zao;Song, Xiaofang;Yao, Yi
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.7
    • /
    • pp.3239-3267
    • /
    • 2018
  • Image and video dehazing is a popular topic in the field of computer vision and digital image processing. A fast, optimized dehazing algorithm was recently proposed that enhances contrast and reduces flickering artifacts in a dehazed video sequence by minimizing a cost function that makes transmission values spatially and temporally coherent. However, its fixed-size block partitioning leads to block effects. The temporal cost function also suffers from the temporal non-coherence of newly appearing objects in a scene. Further, the weak edges in a hazy image are not addressed. Hence, a video dehazing algorithm based on well-designed spectral clustering is proposed. To avoid block artifacts, the spectral clustering is customized to segment static scenes to ensure the same target has the same transmission value. Assuming that edge images dehazed with optimized transmission values have richer detail than before restoration, an edge intensity function is added to the spatial consistency cost model. Atmospheric light is estimated using a modified quadtree search. Different temporal transmission models are established for newly appearing objects, static backgrounds, and moving objects. The experimental results demonstrate that the new method provides higher dehazing quality and lower time complexity than the previous technique.
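The restoration step that follows transmission estimation is not spelled out in the abstract, but dehazing methods of this family conventionally invert the haze imaging model I = J·t + A·(1 − t). A minimal per-pixel sketch under that assumption, with a hypothetical `dehaze_pixel` helper and a floor t0 on the transmission to limit noise amplification:

```python
# Standard haze model: observed intensity I = J*t + A*(1 - t), where J is the
# scene radiance, A the atmospheric light, and t the transmission.
# Inverting it recovers J. Names and the floor t0 are illustrative.
def dehaze_pixel(I, A, t, t0=0.1):
    """Recover scene radiance from hazy intensity I, atmospheric light A,
    and transmission t, flooring t at t0 so (I - A)/t stays bounded."""
    return (I - A) / max(t, t0) + A
```

For a fully hazy pixel (t small) the floor dominates, which is why the paper's effort goes into estimating spatially and temporally coherent transmission values rather than into this inversion itself.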

The Development of a Marker Detection Algorithm that Improves the Lighting-Environment and Occlusion Problems of an Augmented Reality System (증강현실 시스템의 조명환경과 가림현상 문제를 개선한 마커 검출 알고리즘 개발)

  • Lee, Gyeong Ho;Kim, Young Seop
    • Journal of the Semiconductor & Display Technology
    • /
    • v.11 no.1
    • /
    • pp.79-83
    • /
    • 2012
  • We use an adaptive method to determine the threshold coefficient so that the algorithm can choose a suitable binarization threshold for detecting a marker; this resolves the influence of lighting on shadowed and dark regions. To reduce computation and improve speed, we created an integral image. After receiving the noise-removed picture, the algorithm detects the outline of the image using Canny edge detection so that damaged or occluded markers can be handled. Line segments of the outline are extracted by the Hough transform, and candidate regions corresponding to the corner coordinates are extracted. Using the straight-line equations, the algorithm then finds the corner coordinates of the extracted markers. As a result, even if all corners are obscured, the algorithm can find all of them, which was proved through experiments.
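The integral image mentioned for speeding up adaptive thresholding admits a compact illustration: once the summed-area table is built in one pass, the sum over any rectangular block (and hence its local mean for thresholding) costs only four lookups. A generic sketch, with names that are ours rather than the paper's:

```python
def integral_image(img):
    """Summed-area table with a zero border row/column, so that
    S[y][x] = sum of img[0:y][0:x]."""
    h, w = len(img), len(img[0])
    S = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            S[y+1][x+1] = img[y][x] + S[y][x+1] + S[y+1][x] - S[y][x]
    return S

def block_sum(S, y0, x0, y1, x1):
    """Sum of img[y0:y1][x0:x1] in O(1) via four table lookups."""
    return S[y1][x1] - S[y0][x1] - S[y1][x0] + S[y0][x0]
```

Dividing `block_sum` by the block area gives the local mean used by a typical adaptive binarization rule, without rescanning pixels for every window position.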

A Study on Development of Laser Welding System for Bellows Outside Edge Using Vision Sensor (시각센서를 이용한 벨로우즈 외부 모서리 레이저 용접 시스템의 개발에 관한 연구)

  • 이승기;유중돈;나석주
    • Journal of Welding and Joining
    • /
    • v.17 no.3
    • /
    • pp.71-78
    • /
    • 1999
  • The welded metal bellows is commonly manufactured by welding pairs of washer-shaped discs of thin sheet metal stamped from strip stock in thicknesses from 0.025 to 0.254 mm. The discs, or diaphragms, are formed with mating circumferential corrugations. In this study, the diaphragms were welded with a CW Nd:YAG laser to form metal bellows. The bellows was fixed on a jig and compressed axially, while Cu-rings were installed between the bellows edges for intimate edge contact. The difference between the inner diameter of the bellows and the jig shaft causes an eccentricity, while the tolerance between the motor shaft and the jig shaft causes a wobble-type motion. A vision sensor based on optical triangulation was used for seam tracking. An image processing algorithm was developed that can distinguish the image of the bellows edge from that of the Cu-ring. The geometric relationship describing the eccentricity and the wobble-type motion was modeled. Seam tracking using the image processing algorithm and the geometric model was performed successfully.
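The optical triangulation the vision sensor relies on reduces, in the idealized pinhole case, to a similar-triangles relation between the laser spot's image offset and range. A one-line hypothetical sketch, not the paper's calibration model (which additionally handles eccentricity and wobble):

```python
# Textbook optical-triangulation range equation (similar triangles):
# depth = focal_length * baseline / image_offset. Illustrative only.
def triangulation_depth(f, baseline, offset):
    """Range to the laser spot given focal length f, the camera-to-laser
    baseline, and the spot's offset in the image plane."""
    return f * baseline / offset
```

The inverse dependence on image offset means small offsets (distant surfaces) carry large range uncertainty, which is one reason the seam tracker needs a robust edge-versus-Cu-ring image classification.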


Omni-directional Surveillance and Motion Detection using a Fish-Eye Lens (어안 렌즈를 이용한 전방향 감시 및 움직임 검출)

  • Cho, Seog-Bin;Yi, Un-Kun;Baek, Kwang-Ryul
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.5 s.305
    • /
    • pp.79-84
    • /
    • 2005
  • In this paper, we developed an omni-directional surveillance and motion detection method. The fish-eye lens provides a wide-field-of-view image. Using this image, the equi-distance model for the fish-eye lens is applied to obtain perspective and panorama images. Generally, we must consider the trade-off between the resolution and the field of view of a camera image. To enhance the resolution of the resulting images, interpolation methods are applied. The moving-edge method is then used to detect moving objects for object tracking.
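The equi-distance model mentioned above maps the incidence angle θ linearly to image radius (r = f·θ), whereas a perspective lens maps it as r = f·tan θ; remapping between the two is what produces the perspective and panorama views from the fish-eye image. A minimal sketch with hypothetical function names:

```python
import math

# Equi-distance fish-eye model: image radius grows linearly with the
# incidence angle theta, so even a hemispherical field of view stays finite.
def equidistance_radius(f, theta):
    return f * theta

# Pinhole (perspective) model for comparison: r = f * tan(theta),
# which diverges as theta approaches 90 degrees.
def perspective_radius(f, theta):
    return f * math.tan(theta)
```

Undistorting a fish-eye pixel amounts to recovering θ = r/f from its radius and re-projecting it with the perspective formula, interpolating between source pixels as the abstract notes.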

A Study on Measurement of Repetitive Work using Digital Image Processing (영상처리를 이용한 반복적 작업의 측정에 관한 연구)

  • Lee, Jeong-Cheol;Sim, Eok-Su;Kim, Nam-Joo;Park, Chan-Kwon;Park, Jin-Woo
    • IE interfaces
    • /
    • v.14 no.1
    • /
    • pp.95-105
    • /
    • 2001
  • Previous work-measurement methods require much time and effort from time-study analysts because required times must be measured through direct observation. In this study, we propose a method that efficiently measures standard times without the involvement of human analysts, using digital image processing techniques. The method consists of two main steps: a motion representation step and a cycle segmentation step. In the motion representation step, we first detect the motion of any object distinct from its background by differencing two consecutive images separated by a constant time interval. The images thus obtained then pass through an edge-detector filter. Finally, the mean coordinates of the significant pixels of the edge image are obtained. Through these processes, the motions of the observed worker are represented by two time series of the worker's location along the horizontal and vertical axes. In the second step, the cycle segmentation step, we extract the frames that have maximum or minimum coordinates in one cycle, store them in a stack, and calculate each cycle time using these frames. In this step we also consider how to detect work delays due to unexpected events such as the operator's leaving the work area, or interruptions. To conclude, the experimental results show that the proposed method is very cost-effective and useful for measuring time standards in various work environments.
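The motion representation step (frame differencing followed by averaging the coordinates of significant pixels) can be sketched as follows; the threshold and function name are our own, not the paper's:

```python
def motion_centroid(prev, curr, thresh=10):
    """Mean (row, col) of pixels whose absolute frame difference exceeds
    thresh; returns None when no motion is detected."""
    pts = [(y, x)
           for y, row in enumerate(curr)
           for x, v in enumerate(row)
           if abs(v - prev[y][x]) > thresh]
    if not pts:
        return None
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
```

Running this over consecutive frame pairs yields exactly the two time series (horizontal and vertical worker position) that the cycle segmentation step then scans for per-cycle extrema.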


Finger Vein Recognition Based on Multi-Orientation Weighted Symmetric Local Graph Structure

  • Dong, Song;Yang, Jucheng;Chen, Yarui;Wang, Chao;Zhang, Xiaoyuan;Park, Dong Sun
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.10
    • /
    • pp.4126-4142
    • /
    • 2015
  • Finger vein recognition is a biometric technology that uses finger veins to authenticate a person; owing to its high degree of uniqueness, liveness, and safety, it is widely used. The traditional Symmetric Local Graph Structure (SLGS) method only considers the relationship between image pixels as a dominating set and uses the relevant theory to extract image features. To better extract finger vein features by taking into account the location and direction information between image pixels, this paper presents a novel finger vein feature extraction method, Multi-Orientation Weighted Symmetric Local Graph Structure (MOW-SLGS), which assigns a weight to each edge according to the positional relationship between the edge and the target pixel. In addition, we use the Extreme Learning Machine (ELM) classifier to train and classify the vein features extracted by the MOW-SLGS method. Experiments show that the proposed method performs better than traditional methods.

Sub-Frame Analysis-based Object Detection for Real-Time Video Surveillance

  • Jang, Bum-Suk;Lee, Sang-Hyun
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.11 no.4
    • /
    • pp.76-85
    • /
    • 2019
  • We introduce a vision-based object detection method for real-time video surveillance systems in low-end edge computing environments. Recently, the accuracy of object detection has been improved by approaches based on deep learning algorithms such as the Region Convolutional Neural Network (R-CNN), which uses a two-stage inference pipeline. On the other hand, one-stage detection algorithms such as single-shot detection (SSD) and you only look once (YOLO) have been developed at the expense of some accuracy and can be used for real-time systems. However, high-performance hardware such as general-purpose computing on graphics processing units (GPGPU) is still required to achieve excellent object detection performance and speed. To address this hardware requirement, which is burdensome for low-end edge computing environments, we propose a sub-frame analysis method for object detection. Specifically, we divide a whole image frame into smaller ones and then run inference on them with a Convolutional Neural Network (CNN)-based image detection network, which is much faster than a conventional network designed for full-frame images. The proposed method significantly reduces the computational requirement without losing throughput or object detection accuracy.
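The core tiling idea (cutting a frame into sub-frames before inference) can be sketched independently of any particular CNN. A hypothetical helper that returns tile bounding boxes for an h-by-w frame, with the grid size as a parameter:

```python
def split_into_subframes(h, w, rows, cols):
    """Return (y0, x0, y1, x1) boxes tiling an h-by-w frame into
    rows * cols sub-frames, covering every pixel exactly once."""
    boxes = []
    ys = [round(i * h / rows) for i in range(rows + 1)]
    xs = [round(j * w / cols) for j in range(cols + 1)]
    for i in range(rows):
        for j in range(cols):
            boxes.append((ys[i], xs[j], ys[i + 1], xs[j + 1]))
    return boxes
```

Each box is then cropped and passed to the detector; real tilers usually add a small overlap between boxes so objects straddling tile borders are not missed, a detail this exact-partition sketch omits.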

A Study on Modified Switching Filter for Edge Preservation in Mixed Noise Environments (복합잡음 환경에서 에지 보존을 위한 변형된 스위칭 필터에 관한 연구)

  • Kwon, Se-Ik;Kim, Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2016.10a
    • /
    • pp.393-396
    • /
    • 2016
  • Digital image processing is the technical area of processing and analyzing images in intelligent and efficient ways, and it has been commercialized in a variety of applications. However, noise occurs in image data for multiple reasons, and various studies have been performed to eliminate it. Generally, the types of noise vary in cause and form, and composite noise is the representative case. Hence, a modified switching filter that processes each type of noise separately is proposed to effectively eliminate composite noise from the image while showing excellent edge-preservation characteristics.
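A common minimal instance of such a switching filter applies a median only where the center pixel looks like an impulse (a salt-and-pepper extreme) and leaves other pixels, and hence edges, untouched. This is a generic illustration of the switching idea, not the authors' filter:

```python
def switching_filter_3x3(img):
    """If the center pixel is an impulse extreme (0 or 255), replace it
    with the median of its 3x3 window; otherwise keep it unchanged,
    so genuine edges are preserved."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if img[y][x] in (0, 255):
                win = [img[y + dy][x + dx]
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
                out[y][x] = sorted(win)[4]  # median of 9 values
    return out
```

A filter handling true composite (impulse plus Gaussian) noise would add a second branch, e.g. a weighted mean for non-impulse pixels; the switch itself is the part this sketch shows.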


A Double-channel Four-band True Color Night Vision System

  • Jiang, Yunfeng;Wu, Dongsheng;Liu, Jie;Tian, Kuo;Wang, Dan
    • Current Optics and Photonics
    • /
    • v.6 no.6
    • /
    • pp.608-618
    • /
    • 2022
  • By analyzing the signal-to-noise ratio (SNR) theory of the conventional true color night vision system, we found that the output image SNR is limited by the wavelength range of the system response, λ1 and λ2. We therefore built a double-channel four-band true color night vision system that expands the system response to improve the output image SNR. In addition, we proposed an image fusion method based on principal component analysis (PCA) and the nonsubsampled shearlet transform (NSST) to obtain true color night vision images. In experiments, a method based on edge extraction of the targets and spatial-dimension decorrelation was proposed to calculate the SNR of the obtained images, and we calculated the correlation coefficient (CC) between the edge graphs of the obtained and reference images. The results showed that the SNRs of the images of four scenes obtained by our system were 125.0%, 145.8%, 86.0%, and 51.8% higher, respectively, than those of the conventional tri-band system, and the CC was also higher, demonstrating that our system can obtain true color images of better quality.
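The correlation coefficient (CC) used for evaluating the edge graphs is, conventionally, the ordinary Pearson correlation; a minimal sketch over flattened edge maps (the function name is ours, and the abstract does not specify the exact formula):

```python
def correlation_coefficient(a, b):
    """Pearson correlation between two equal-length sequences,
    e.g. flattened edge maps of the obtained and reference images."""
    n = len(a)
    ma = sum(a) / n
    mb = sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)
```

Values near 1 indicate that the fused image preserves the reference's edge structure, which is the sense in which the paper reports "higher CC" alongside higher SNR.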