• Title/Summary/Keyword: background edge (배경 에지)


Two-step Boundary Extraction Algorithm with Model (모델 정보를 이용한 2단계 윤곽선 추출 기법)

  • Choe, Hae-Cheol;Lee, Jin-Seong;Jo, Ju-Hyeon;Sin, Ho-Cheol;Kim, Seung-Dae
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.1
    • /
    • pp.49-60
    • /
    • 2002
  • We propose an algorithm for extracting the boundary of a desired object using shape information obtained from sample images. Considering the global shape obtained from sample images and edge orientation as well as edge magnitude, the proposed two-step method finds the boundary of an object. The first step is an approximate segmentation that extracts a rough boundary using a probability map and an edge map. The second step is a detailed segmentation that finds a more accurate boundary based on the SEEL (seed-point extraction and edge linking) algorithm. Experimental results on IR images show robustness to low-quality images and better performance than conventional segmentation methods.
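
The abstract scores candidate boundary pixels with both edge magnitude and edge orientation, weighted by model information learned from sample images. The sketch below shows only that generic first ingredient, a Sobel-based magnitude/orientation map combined with a hypothetical shape probability map; the SEEL step and the actual model prior are the paper's own and are not reproduced.

```python
# Sketch: edge magnitude/orientation map weighted by a shape prior.
# "prob_map" (a per-pixel shape probability learned from sample images)
# is an assumed stand-in for the paper's model information.
import cv2
import numpy as np

def edge_evidence(gray, prob_map):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)                  # edge strength
    orientation = np.arctan2(gy, gx)              # edge direction in radians
    # Weight edge strength by how likely a boundary is at each pixel.
    evidence = (magnitude / (magnitude.max() + 1e-6)) * prob_map
    return evidence, orientation

gray = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
prob_map = np.ones_like(gray)  # placeholder prior; the paper learns this from samples
evidence, orientation = edge_evidence(gray, prob_map)
```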

Detecting and Tracking Vehicles at Local Region by using Segmented Regions Information (분할 영역 정보를 이용한 국부 영역에서 차량 검지 및 추적)

  • Lee, Dae-Ho;Park, Young-Tae
    • Journal of KIISE:Software and Applications
    • /
    • v.34 no.10
    • /
    • pp.929-936
    • /
    • 2007
  • A novel vision-based scheme for extracting traffic parameters in real time is proposed in this paper. Vehicles are detected and tracked within local regions placed by an operator. Each local region is divided into segmented regions using edges and frame differences, and the segmented regions are classified into vehicle, road, shadow, and headlight by statistical and geometrical features. Vehicles are detected from the result of this classification. Traffic parameters such as velocity, length, occupancy, and distance are estimated by tracking with template matching in the local region. Because no background image is used, the method can be applied under various conditions such as weather, time of day, and location. It performs well, with a 90.16% detection rate on various databases. If the camera direction, angle, and iris are adjusted to the operating conditions, we expect the method to serve as the core of traffic monitoring systems.
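
The detection step segments an operator-defined local region using edges and frame differences before classifying the segments. A minimal sketch of that segmentation stage follows; the region coordinates, thresholds, and the later statistical/geometrical classification are assumptions, not the paper's actual parameters.

```python
# Sketch: segment a local region with frame difference + edges (assumed thresholds).
import cv2
import numpy as np

def segment_local_region(prev_frame, curr_frame, roi):
    x, y, w, h = roi                              # operator-defined local region
    prev = cv2.cvtColor(prev_frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    curr = cv2.cvtColor(curr_frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(curr, prev)                # frame difference
    _, moving = cv2.threshold(diff, 20, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(curr, 50, 150)              # edge map of the current frame
    # Candidate vehicle pixels: moving pixels supported by nearby edges.
    support = cv2.dilate(edges, np.ones((5, 5), np.uint8))
    candidates = cv2.bitwise_and(moving, support)
    n, labels = cv2.connectedComponents(candidates)
    return labels, n    # segments to be classified as vehicle / road / shadow / headlight
```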

Character Region Detection Using Structural Features of Hangul & English Characters in Natural Image (자연영상에서 한글 및 영문자의 구조적 특징을 이용한 문자영역 검출)

  • Oh, Myoung-Kwan;Park, Jong-Cheon
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.15 no.3
    • /
    • pp.1718-1723
    • /
    • 2014
  • We propose a method for detecting Hangul and English character regions in natural images using the structural features of Hangul and English characters. First, edge features are extracted from the natural image; features that do not satisfy heuristic rules for character features are filtered out, and candidate character regions are selected. Next, candidate Hangul character regions are merged into single Hangul characters using a Hangul character merging algorithm, and the final character regions are determined by a Hangul character class decision algorithm. English character regions are detected from the edge features of English characters. Experimental results show that the proposed method detects character regions effectively in images with complex backgrounds and under various conditions, and the performance evaluation shows improved detection of Hangul and English character regions in natural images.
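
The filtering step keeps only edge components that satisfy heuristic rules for character shape. The sketch below illustrates one common form of such rules (size and aspect-ratio limits on connected edge components); the specific thresholds, as well as the Hangul merging and class-decision algorithms, are the paper's own and are not reproduced here.

```python
# Sketch: edge-based candidate character regions filtered by heuristic rules
# (thresholds below are illustrative assumptions, not the paper's values).
import cv2

def candidate_character_regions(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 80, 160)
    # Close small gaps so character strokes form connected components.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        aspect = w / float(h)
        # Heuristic rules: plausible character size and aspect ratio.
        if 8 <= h <= 200 and 0.2 <= aspect <= 2.5:
            candidates.append((x, y, w, h))
    return candidates
```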

Stereoscopic Conversion of Monoscopic Video using Edge Direction Histogram (에지 방향성 히스토그램을 이용한 2차원 동영상의 3차원 입체변환기법)

  • Kim, Jee-Hong;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.8C
    • /
    • pp.782-789
    • /
    • 2009
  • In this paper, we propose an algorithm for creating stereoscopic video from monoscopic video. When parallel straight lines in 3D space are projected onto a 2D image plane, they appear to converge with distance and finally meet at a single point called the vanishing point. The vanishing point, the point farthest from the viewer's viewpoint, serves as a depth perception cue from which the viewer infers the depth of objects and their surroundings. The viewer estimates the vanishing point from geometrical features in monoscopic images and perceives depth from the relationship between the position of the vanishing point and the viewpoint. We propose a method that estimates the vanishing point of a general monoscopic image using an edge direction histogram and creates a depth map according to the position of the vanishing point. Experimental results show that the proposed method achieves stable stereoscopic conversion of a given monoscopic video.
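
The core cue is a histogram of edge directions, from which dominant line directions (and hence the vanishing point) are inferred. A minimal sketch of the histogram itself is shown below; the bin count and magnitude threshold are assumptions, and the paper's mapping from histogram to vanishing point position and depth map is not reproduced.

```python
# Sketch: edge direction histogram over strong-gradient pixels (bin count assumed).
import cv2
import numpy as np

def edge_direction_histogram(gray, bins=36, mag_thresh=40.0):
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    direction = np.degrees(np.arctan2(gy, gx)) % 180.0   # edge orientation, 0..180 deg
    strong = magnitude > mag_thresh                       # ignore weak edges
    hist, _ = np.histogram(direction[strong], bins=bins, range=(0.0, 180.0))
    return hist

gray = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
hist = edge_direction_histogram(gray)
dominant_bin = int(np.argmax(hist))  # dominant edge direction feeds vanishing point estimation
```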

A Scheme of Extracting Forward Vehicle Area Using the Acquired Lane and Road Area Information (차선과 도로영역 정보를 이용한 전방 차량 영역의 추출 기법)

  • Yu, Jae-Hyung;Han, Young-Joon;Hahn, Hern-Soo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.18 no.6
    • /
    • pp.797-807
    • /
    • 2008
  • This paper proposes a new algorithm for extracting forward vehicle areas using the acquired lane and road area information in road images with complex backgrounds, to improve the efficiency of vehicle detection. In the first stage, lanes are detected by considering the connectivity among edges determined with a chain-code method. Once the lanes running in the same direction as the vehicle are detected, neighboring roadways are found from the width and vanishing point of the acquired roadway of the running vehicle. Finally, vehicle areas where forward vehicles are located are extracted from the road area covering the center and neighboring roadways. The proposed scheme improves the vehicle detection rate in road images with complex backgrounds and is highly efficient because vehicles are detected only within the confines of the acquired road area. The superiority of the proposed algorithm is verified by vehicle detection experiments on road images with complex backgrounds.
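
Lane edges are grouped by checking connectivity with a chain-code method. A small 8-connectivity chain-code tracer over a binary edge map is sketched below as an illustration of that idea; the lane-direction tests, roadway-width reasoning, and vanishing-point step are the paper's own and are not shown.

```python
# Sketch: 8-connectivity chain-code tracing on a binary edge map (illustrative only).
import numpy as np

# Freeman chain-code neighbour offsets: 0 = east, numbered counter-clockwise.
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def trace_chain(edges, start, visited):
    """Follow connected edge pixels from 'start', recording the chain code."""
    chain, point = [], start
    visited[start] = True
    while True:
        for code, (dr, dc) in enumerate(OFFSETS):
            r, c = point[0] + dr, point[1] + dc
            if 0 <= r < edges.shape[0] and 0 <= c < edges.shape[1] \
                    and edges[r, c] and not visited[r, c]:
                visited[r, c] = True
                chain.append(code)
                point = (r, c)
                break
        else:
            return chain          # no unvisited neighbour: the chain ends

def edge_chains(edges, min_length=20):
    visited = np.zeros(edges.shape, dtype=bool)
    chains = []
    for r, c in zip(*np.nonzero(edges)):
        if not visited[r, c]:
            chain = trace_chain(edges, (r, c), visited)
            if len(chain) >= min_length:   # long, connected chains are lane candidates
                chains.append(chain)
    return chains
```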

Small Target Detection Using Bilateral Filter Based on Edge Component (에지 성분에 기초한 양방향 필터 (Bilateral Filter)를 이용한 소형 표적 검출)

  • Bae, Tae-Wuk;Kim, Byoung-Ik;Lee, Sung-Hak;Kim, Young-Choon;Ahn, Sang-Ho;Sohng, Kyu-Ik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.9C
    • /
    • pp.863-870
    • /
    • 2009
  • The bilateral filter (BF) is a nonlinear filter for sharpness enhancement and noise removal. It performs this function with two Gaussian filters, a domain filter and a range filter. To apply the BF to infrared (IR) small target detection, the standard deviations of the two Gaussian filters need to be changed adaptively between background regions and target regions. This paper presents a new BF whose standard deviations are adapted based on an analysis of the edge component within the local window, and whose filter size is also variable. This makes the BF perform better and become more suitable for small target detection. Experimental results demonstrate that the proposed method is more robust and efficient than conventional methods.
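
A common way to use a bilateral filter for IR small target detection is to treat the BF output as a background prediction and take the difference as the target response; the paper additionally adapts the two standard deviations and the window size from the local edge component. The sketch below shows only the simpler fixed-sigma form with OpenCV's bilateral filter, as a hedged illustration rather than the paper's adaptive filter.

```python
# Sketch: bilateral filter as a background predictor for small target detection.
# Fixed sigmas are used here; the paper adapts them (and the window size) from the
# local edge component so that targets are smoothed away while real edges are kept.
import cv2

def target_response(ir_gray, d=9, sigma_color=75.0, sigma_space=9.0, thresh=30):
    background = cv2.bilateralFilter(ir_gray, d, sigma_color, sigma_space)
    # Wherever the filter smooths a point-like target into the background
    # prediction, the difference image responds strongly at that target.
    response = cv2.subtract(ir_gray, background)
    _, detections = cv2.threshold(response, thresh, 255, cv2.THRESH_BINARY)
    return response, detections

ir = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)
response, detections = target_response(ir)
```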

Localizing Head and Shoulder Line Using Statistical Learning (통계학적 학습을 이용한 머리와 어깨선의 위치 찾기)

  • Kwon, Mu-Sik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.32 no.2C
    • /
    • pp.141-149
    • /
    • 2007
  • Associating the shoulder line with the head location of the human body is useful for verifying, localizing, and tracking persons in an image. Since the head line and the shoulder line, which we call the ${\Omega}$-shape, move together in a consistent way within a limited range of deformation, we can build a statistical shape model using the Active Shape Model (ASM). However, when the conventional ASM is applied to ${\Omega}$-shape fitting, it is very sensitive to background edges and clutter because it relies only on the local edge or gradient. Even though appearance is a good alternative feature for matching the target object to the image, it is difficult to learn the appearance of the ${\Omega}$-shape because of the significant differences among people's skin, hair, and clothes, and because appearance does not remain the same throughout an entire video. Therefore, instead of learning appearance or updating it as it changes, we model a discriminative appearance in which each pixel is classified into head, torso, and background classes, and we update the classifier to obtain the appropriate discriminative appearance in the current frame. Accordingly, we use two features in fitting the ${\Omega}$-shape: the edge gradient, which is used for localization, and the discriminative appearance, which contributes to the stability of the tracker. Simulation results show that the proposed method is very robust to pose change, occlusion, and illumination change in tracking the head and shoulder line of people. Another advantage is that the proposed method operates in real time.
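
The "discriminative appearance" is a per-pixel classifier into head, torso, and background that is updated each frame. A minimal stand-in is sketched below using per-class Gaussian colour models fit from labelled pixels; the paper's actual classifier and its online update are not specified here, so treat this purely as an illustration of the idea.

```python
# Sketch: per-pixel discriminative appearance via per-class Gaussian colour models
# (a simplified stand-in for the classifier described in the abstract).
import numpy as np

class PixelAppearanceModel:
    def __init__(self):
        self.means, self.inv_covs, self.log_dets = {}, {}, {}

    def fit(self, pixels_by_class):
        """pixels_by_class: dict like {'head': (N,3) array, 'torso': ..., 'background': ...}"""
        for name, px in pixels_by_class.items():
            cov = np.cov(px, rowvar=False) + 1e-3 * np.eye(3)
            self.means[name] = px.mean(axis=0)
            self.inv_covs[name] = np.linalg.inv(cov)
            self.log_dets[name] = np.linalg.slogdet(cov)[1]

    def classify(self, image):
        """Return a label map; each pixel gets the class with the highest log-likelihood."""
        h, w, _ = image.shape
        flat = image.reshape(-1, 3).astype(np.float64)
        names = list(self.means)
        scores = np.empty((len(names), flat.shape[0]))
        for i, name in enumerate(names):
            d = flat - self.means[name]
            maha = np.einsum("ij,jk,ik->i", d, self.inv_covs[name], d)
            scores[i] = -0.5 * (maha + self.log_dets[name])
        return np.array(names)[scores.argmax(axis=0)].reshape(h, w)
```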

Facial Image Recognition Based on Wavelet Transform and Neural Networks (웨이브렛 변환과 신경망 기반 얼굴 인식)

  • 임춘환;이상훈;편석범
    • Journal of the Institute of Electronics Engineers of Korea TE
    • /
    • v.37 no.3
    • /
    • pp.104-113
    • /
    • 2000
  • In this study, we propose facial image recognition based on the wavelet transform and a neural network. The algorithm proceeds as follows. First, two gray-level images are captured under constant illumination; after removing input image noise with a Gaussian filter, a difference image between the background image and the face input image is obtained and processed by erosion and dilation. Second, a mask is made from the dilated image, and the face is separated from the background by projecting the mask onto the face input image. Then, a rectangular characteristic area containing the eyes, nose, mouth, eyebrows, and cheeks is detected by searching the edges of the segmented face image. Finally, characteristic vectors are extracted by applying the discrete wavelet transform (DWT) to this characteristic area and are normalized; the normalized vectors become the input vectors of the neural network, and recognition is performed based on neural network learning. Simulation results show a recognition rate of 100% for learned images and 92% for unlearned images.
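
The pipeline above is: difference image between background and input, morphological cleanup, mask-based segmentation, then DWT features for a neural network. A compact sketch of the front end is given below using OpenCV and PyWavelets; the kernel sizes, wavelet choice, and threshold are assumptions rather than the paper's settings.

```python
# Sketch: difference-image mask + DWT feature vector (parameters are assumptions).
import cv2
import numpy as np
import pywt

def face_mask(background_gray, input_gray, thresh=30):
    bg = cv2.GaussianBlur(background_gray, (5, 5), 0)
    img = cv2.GaussianBlur(input_gray, (5, 5), 0)
    diff = cv2.absdiff(img, bg)                         # difference image
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.dilate(cv2.erode(mask, kernel), kernel)  # erosion then dilation
    return mask

def dwt_feature_vector(characteristic_area):
    # One-level 2-D DWT; the approximation band serves as the feature source.
    cA, (cH, cV, cD) = pywt.dwt2(characteristic_area.astype(np.float32), "haar")
    features = cA.flatten()
    return features / (np.linalg.norm(features) + 1e-8)  # normalised NN input vector
```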


An Algorithm for Converting 2D Face Image into 3D Model (얼굴 2D 이미지의 3D 모델 변환 알고리즘)

  • Choi, Tae-Jun;Lee, Hee-Man
    • Journal of the Korea Society of Computer and Information
    • /
    • v.20 no.4
    • /
    • pp.41-48
    • /
    • 2015
  • Recently, the spread of 3D printers has been increasing the demand for 3D models. However, creating 3D models normally requires a trained specialist using specialized software. This paper presents an algorithm that produces a 3D model from a single two-dimensional frontal face photograph, so that ordinary people can easily create 3D models. The background and the foreground are separated in the photograph, and a predetermined number of vertices are placed on the separated foreground 2D image at regular intervals. The arranged vertex locations are extended into three dimensions using the gray level of the pixel at each vertex and the characteristics of the eyebrows and nose of a typical human face. The foreground and background are separated using the edge information of the silhouette. The AdaBoost algorithm with Haar-like features is also employed to find the locations of the eyes and nose. The 3D models obtained with this algorithm are good enough for 3D printing, even though a little manual treatment might be required. The algorithm will be useful for providing 3D content in conjunction with the spread of 3D printers.
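
Two concrete pieces of this pipeline can be sketched generically: locating the eyes with an OpenCV Haar cascade (an AdaBoost classifier over Haar-like features) and lifting a regular vertex grid into 3D using the gray level at each vertex. The cascade file, grid spacing, and depth scaling below are assumptions; the paper's nose/eyebrow heuristics and silhouette-based foreground separation are not reproduced.

```python
# Sketch: Haar-cascade eye detection + gray-level-driven vertex heights (illustrative).
import cv2
import numpy as np

gray = cv2.imread("front_face.png", cv2.IMREAD_GRAYSCALE)

# AdaBoost / Haar-like eye detector shipped with OpenCV (cascade path is an assumption).
eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Regular vertex grid over the (assumed already foreground-only) image.
step, depth_scale = 8, 0.2
h, w = gray.shape
vertices = []
for y in range(0, h, step):
    for x in range(0, w, step):
        z = float(gray[y, x]) * depth_scale   # brighter pixel -> closer to camera (assumption)
        vertices.append((x, y, z))
vertices = np.array(vertices, dtype=np.float32)
```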

Vehicle Detection and Tracking Using Difference Frame Image for Traffic Measurement System (교통량 측정 시스템에서의 프레임간 차영상을 이용한 차량 검출 및 추적)

  • Kim, Hyung-Soo;Hwang, Gi-Hyeon
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.17 no.1
    • /
    • pp.32-39
    • /
    • 2016
  • An Intelligent Transportation System (ITS) uses advanced technology to induce an ideal flow of vehicles by determining road conditions and taking appropriate action. To measure and manage information about the traffic situation at various points in time, computer-based image processing is mainly used. Image processing with a computer is an easy way to collect various traffic parameters in real time, and the technology has continued to develop. Vehicle detection is fundamentally a very important technology for obtaining the traffic parameters of an intelligent transportation system. Detection methods using background images of the road and contour extraction methods using edges have been used; however, problems have been raised regarding the accuracy of their detection rates.