• Title/Summary/Keyword: Background Edge

Text Region Extraction from Videos using the Harris Corner Detector (해리스 코너 검출기를 이용한 비디오 자막 영역 추출)

  • Kim, Won-Jun; Kim, Chang-Ick
    • Journal of KIISE: Software and Applications / v.34 no.7 / pp.646-654 / 2007
  • In recent years, the use of text inserted into TV content has grown to provide viewers with better visual understanding. In this paper, video text is defined as superimposed text located at the bottom of the video frame. Video text extraction is the first step in video information retrieval and video indexing. Most previous video text detection and extraction methods are based on text color, contrast between text and background, edges, character filters, and so on. However, video text extraction remains difficult due to the low resolution of video and complex backgrounds. To solve these problems, we propose a method to extract text from videos using the Harris corner detector. The proposed algorithm consists of four steps: corner map generation using the Harris corner detector, extraction of text candidates based on corner density, text region determination using labeling, and post-processing. The proposed algorithm is language independent and can be applied to text of various colors. Text region updating between frames is also exploited to reduce processing time. Experiments on diverse videos confirm the efficiency of the proposed method.
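The corner-map step described above can be sketched in pure Python. This is a minimal illustration only: the 3x3 window, the value of `k`, and the function names are my assumptions, not the paper's implementation.

```python
def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k*trace(M)^2 per pixel.
    img: 2D list of floats; gradient products are summed over a 3x3 window."""
    h, w = len(img), len(img[0])
    Ix = [[0.0] * w for _ in range(h)]
    Iy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            Ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
            Iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
    R = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            sxx = syy = sxy = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    gx, gy = Ix[y + dy][x + dx], Iy[y + dy][x + dx]
                    sxx += gx * gx
                    syy += gy * gy
                    sxy += gx * gy
            # det(M) - k*trace(M)^2: large and positive only at corners
            R[y][x] = sxx * syy - sxy * sxy - k * (sxx + syy) ** 2
    return R

def corner_map(img, thresh):
    """Binary corner map: 1 where the Harris response exceeds thresh."""
    R = harris_response(img)
    return [[1 if v > thresh else 0 for v in row] for row in R]
```

Text candidates would then be the regions where the corner map is dense, since superimposed captions produce many closely spaced corners.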

Moving Object Segmentation using Space-oriented Object Boundary Linking and Background Registration (공간기반 객체 외곽선 연결과 배경 저장을 사용한 움직이는 객체 분할)

  • Lee Ho Suk
    • Journal of KIISE: Software and Applications / v.32 no.2 / pp.128-139 / 2005
  • The moving object boundary is very important for moving object segmentation, but the extracted boundary is often broken. We introduce a novel space-oriented boundary linking algorithm to connect the broken boundary. The algorithm forms a quadrant around a terminating pixel of the broken boundary and searches for another terminating pixel to link within a given radius, guaranteeing shortest-distance linking. We also register the background from the image sequence. We construct two object masks, one from the result of boundary linking and the other from the registered background, and use these two complementary masks together for moving object segmentation. We also suppress moving cast shadows using the Roberts gradient operator. The major advantages of the proposed algorithms are more accurate moving object segmentation and the ability to segment objects that have holes in their regions by using the two masks. We evaluate the algorithms on the standard MPEG-4 test sequences and a real video sequence. The proposed algorithms are very efficient, processing QCIF images at more than 48 fps and CIF images at more than 19 fps on a 2.0 GHz Pentium 4 computer.
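The endpoint-linking idea can be sketched as follows. This is not the paper's quadrant search: the radius, the exactly-one-neighbour endpoint test, and the straight-line fill are my simplifications.

```python
import math

def endpoints(mask):
    """Terminating pixels: boundary pixels with exactly one 8-connected neighbour."""
    h, w = len(mask), len(mask[0])
    pts = []
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            n = sum(mask[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy or dx) and 0 <= y + dy < h and 0 <= x + dx < w)
            if n == 1:
                pts.append((y, x))
    return pts

def link_boundary(mask, radius=8):
    """Link each endpoint to its nearest other endpoint within `radius`
    by filling a straight line of pixels (shortest-distance linking)."""
    pts = endpoints(mask)
    for i, (y0, x0) in enumerate(pts):
        best, bd = None, radius + 1
        for j, (y1, x1) in enumerate(pts):
            if i == j:
                continue
            d = math.hypot(y1 - y0, x1 - x0)
            if d <= radius and d < bd:
                best, bd = (y1, x1), d
        if best:
            y1, x1 = best
            steps = max(abs(y1 - y0), abs(x1 - x0))
            for s in range(steps + 1):
                mask[round(y0 + (y1 - y0) * s / steps)][round(x0 + (x1 - x0) * s / steps)] = 1
    return mask
```

A limitation of this naive version is that an endpoint may link back to the far end of its own segment when the segment is shorter than the gap; the paper's quadrant-restricted forward search avoids that.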

Implementation of a Context-based Intelligent Image Surveillance System (컨텍스트 기반의 지능형 영상 감시 시스템 구현에 관한 연구)

  • Moon, Sung-Ryong; Shin, Seong
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.3 / pp.11-22 / 2010
  • This paper presents an implementation of an intelligent image surveillance system that uses context information and addresses the temporal-spatial constraints that make real-time processing difficult. We propose a scene analysis algorithm that runs in real time in various environments on low-resolution video (320x240) at 30 frames per second. The proposed algorithm discards the background and meaningless frames among continuous frames, and uses the wavelet transform and an edge histogram to detect shot boundaries. Next, a representative key frame within each shot is selected by a key-frame selection parameter, and the edge histogram and mathematical morphology are used to detect only the motion region. We define four basic contexts according to the angles of feature points, obtained by applying the vertical-to-horizontal ratio to the motion region of the detected object: standing, lying, sitting, and walking. Finally, we perform scene analysis with a simple context model composed of a general context and an emergency context, estimating the connection status of each context, and configure a system to verify real-time processing. The proposed system achieves a recognition rate of 92.5% on low-resolution video with an average processing speed of 0.74 seconds per frame, confirming that real-time processing is feasible.
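The shot-boundary step, comparing edge histograms of consecutive frames, might be sketched like this. The orientation-binned histogram, the L1 distance, and the thresholds are my choices for illustration, not the paper's exact formulation.

```python
import math

def edge_histogram(frame, bins=4, thresh=0.5):
    """Histogram of gradient orientations over pixels whose gradient
    magnitude exceeds `thresh` (a crude stand-in for an edge histogram)."""
    h, w = len(frame), len(frame[0])
    hist = [0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = frame[y][x + 1] - frame[y][x - 1]
            gy = frame[y + 1][x] - frame[y - 1][x]
            if math.hypot(gx, gy) > thresh:
                ang = math.atan2(gy, gx) % math.pi   # orientation in [0, pi)
                hist[min(int(ang / math.pi * bins), bins - 1)] += 1
    return hist

def shot_boundaries(frames, cut_thresh):
    """Indices i where the L1 distance between consecutive edge
    histograms exceeds cut_thresh, i.e. a likely shot change."""
    hists = [edge_histogram(f) for f in frames]
    return [i for i in range(1, len(hists))
            if sum(abs(a - b) for a, b in zip(hists[i - 1], hists[i])) > cut_thresh]
```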

Extraction of Attentive Objects Using Feature Maps (특징 지도를 이용한 중요 객체 추출)

  • Park Ki-Tae; Kim Jong-Hyeok; Moon Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.5 s.311 / pp.12-21 / 2006
  • In this paper, we propose a technique for extracting attentive objects in images using feature maps, regardless of image complexity and object position. The proposed method uses feature maps with edge and color information to extract attentive objects. We also propose a reference map created by integrating feature maps. To create the reference map, feature maps representing visually attentive regions of the image are constructed; three are used: an edge map, a CbCr map, and an H map. These maps capture boundary regions through differences in intensity or color. A combination map representing meaningful boundaries is then created by integrating the reference map and the feature maps. Since the combination map represents only object boundaries, we extract candidate object regions containing meaningful boundaries from it using the convex hull algorithm. By applying a segmentation algorithm within the candidate regions to separate objects from background, real object regions are extracted from the candidates. Experimental results show that the proposed method extracts attentive regions and objects efficiently, with an 84.3% precision rate and an 81.3% recall rate.
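The candidate-region step relies on the convex hull of the boundary pixels. A standard monotone-chain implementation (my own sketch, not the paper's code) looks like this:

```python
def convex_hull(points):
    """Andrew's monotone chain: convex hull of 2D points, counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a-o) x (b-o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # concatenate, dropping each chain's last point (it repeats)
    return lower[:-1] + upper[:-1]
```

In the paper's pipeline, the hull of the meaningful-boundary pixels would delimit a candidate region that is then refined by segmentation into object and background.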

Object Segmentation for Image Transmission Services and Facial Characteristic Detection based on Knowledge (화상전송 서비스를 위한 객체 분할 및 지식 기반 얼굴 특징 검출)

  • Lim, Chun-Hwan; Yang, Hong-Young
    • Journal of the Korean Institute of Telematics and Electronics T / v.36T no.3 / pp.26-31 / 1999
  • In this paper, we propose a knowledge-based facial feature detection algorithm and an object segmentation method for image communication. Under constant illumination and at a fixed distance from the video camera to the face, we capture 256 $\times$ 256 input images with 256 gray levels and remove noise using a Gaussian filter. Two images are captured with the video camera: one contains the human face, the other only the background without a face. We then compute the difference image between the two. After removing noise from the difference image by erosion and dilation, the facial region is separated from the background. We locate the eyes, ears, nose, and mouth by searching for edge components in the facial image. Simulation results verify the efficiency of the proposed algorithm.
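The difference-then-clean-up stage above is a common pattern; a minimal sketch, with the threshold and the 3x3 structuring element as my assumptions:

```python
def binary_diff(face, background, thresh):
    """Threshold the absolute difference of two grayscale images."""
    return [[1 if abs(a - b) > thresh else 0 for a, b in zip(ra, rb)]
            for ra, rb in zip(face, background)]

def morph(mask, op):
    """3x3 binary erosion ('erode': all neighbours set) or
    dilation ('dilate': any neighbour set)."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [mask[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            out[y][x] = int(all(vals)) if op == "erode" else int(any(vals))
    return out
```

Erosion followed by dilation (an opening) removes isolated noise pixels from the difference mask while largely preserving the facial region.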

Autonomous Battle Tank Detection and Aiming Point Search Using Imagery (영상정보에 기초한 전차 자율탐지 및 조준점탐색 연구)

  • Kim, Jong-Hwan; Jung, Chi-Jung; Heo, Mira
    • Journal of the Korea Society for Simulation / v.27 no.2 / pp.1-10 / 2018
  • This paper presents autonomous detection and aiming point computation for a battle tank using RGB images. The maximally stable extremal regions (MSER) algorithm was implemented to find features of the tank, which are matched against images extracted from streaming video to determine the region of interest where the tank is present. A median filter was applied to remove noise in the region of interest and reduce the tank's camouflage effects. For tank segmentation, k-means clustering was used to autonomously distinguish the tank from its background. Erosion and dilation morphology operations were applied to extract the tank shape without noise and generate a binary image with 1 for the tank and 0 for the background. Sobel edge detection was then used to measure the outline of the tank, from which the aiming point at the center of the tank was calculated. For performance measurement, accuracy, precision, recall, and F-measure were computed from the confusion matrix, yielding 91.6%, 90.4%, 85.8%, and 88.1%, respectively.
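The k-means segmentation step can be illustrated with a two-class Lloyd's iteration on pixel intensities; this is a sketch under my own initialization and stopping choices, not the paper's implementation (which clusters RGB values):

```python
def kmeans_2(values, iters=20):
    """Two-class Lloyd's k-means on scalar pixel intensities.
    Returns (low_centroid, high_centroid, labels), labels[i] in {0, 1}."""
    c0, c1 = min(values), max(values)     # initialize at the extremes
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [0 if abs(v - c0) <= abs(v - c1) else 1 for v in values]
        g0 = [v for v, l in zip(values, labels) if l == 0]
        g1 = [v for v, l in zip(values, labels) if l == 1]
        if g0:
            c0 = sum(g0) / len(g0)
        if g1:
            c1 = sum(g1) / len(g1)
    return c0, c1, labels

def segment(img, iters=20):
    """Binary mask: 1 for pixels assigned to the brighter cluster."""
    flat = [v for row in img for v in row]
    _, _, labels = kmeans_2(flat, iters)
    w = len(img[0])
    return [labels[y * w:(y + 1) * w] for y in range(len(img))]
```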

Facial Image Recognition Based on Wavelet Transform and Neural Networks (웨이브렛 변환과 신경망 기반 얼굴 인식)

  • 임춘환; 이상훈; 편석범
    • Journal of the Institute of Electronics Engineers of Korea TE / v.37 no.3 / pp.104-113 / 2000
  • In this study, we propose facial image recognition based on the wavelet transform and a neural network. The algorithm proceeds as follows. First, two gray-level images are captured under constant illumination; after removing noise with a Gaussian filter, a difference image is computed between the background and the face input image, followed by erosion and dilation. Second, a mask is made from the dilated image, and the facial region is separated from the background by projecting the mask onto the face input image. A rectangular characteristic area containing the eyes, nose, mouth, eyebrows, and cheeks is then detected by searching the edges of the segmented face image. Finally, characteristic vectors are extracted by applying the discrete wavelet transform (DWT) to this characteristic area and normalized; the normalized vectors become the neural network input vectors, and recognition is performed through neural network learning. Simulation results show a recognition rate of 100% for learned images and 92% for unlearned images.
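A single level of the 2D Haar DWT, the simplest wavelet the characteristic vectors could be built from, can be written in a few lines. The Haar basis and averaging normalization here are my assumptions; the paper does not state which wavelet it used.

```python
def haar2d(img):
    """One level of the 2D Haar wavelet transform.
    Returns (LL, LH, HL, HH) quarter-size subbands for an even-sized image."""
    def rows_pass(m):
        lo, hi = [], []
        for row in m:
            lo.append([(row[2 * i] + row[2 * i + 1]) / 2 for i in range(len(row) // 2)])
            hi.append([(row[2 * i] - row[2 * i + 1]) / 2 for i in range(len(row) // 2)])
        return lo, hi

    def transpose(m):
        return [list(c) for c in zip(*m)]

    L, H = rows_pass(img)                 # filter along rows
    LL, LH = (transpose(x) for x in rows_pass(transpose(L)))   # then columns
    HL, HH = (transpose(x) for x in rows_pass(transpose(H)))
    return LL, LH, HL, HH
```

A feature vector for the network could then be formed, for instance, from the LL approximation plus subband energies, normalized before training.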

An Algorithm for Converting a 2D Face Image into a 3D Model (얼굴 2D 이미지의 3D 모델 변환 알고리즘)

  • Choi, Tae-Jun; Lee, Hee-Man
    • Journal of the Korea Society of Computer and Information / v.20 no.4 / pp.41-48 / 2015
  • Recently, the spread of 3D printers has increased the demand for 3D models. However, creating a 3D model normally requires a trained specialist using specialized software. This paper describes an algorithm that produces a 3D model from a single two-dimensional frontal face photograph, so that ordinary people can easily create 3D models. The background and the foreground are separated from the photo, and a predetermined number of vertices are placed on the separated foreground 2D image at regular intervals. The vertex locations are extended into three dimensions using the gray level of the pixel at each vertex and the characteristic shape of the eyebrows and nose of a normal human face. The foreground/background separation uses the edge information of the silhouette. The AdaBoost algorithm with Haar-like features is employed to find the locations of the eyes and nose. The 3D models obtained by this algorithm are good enough for 3D printing, although some manual touch-up may be required. The algorithm should be useful for providing 3D content in conjunction with the spread of 3D printers.
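The gray-level-to-depth vertex placement can be sketched very simply. The grid spacing, depth scale, and linear gray-to-depth mapping are my illustrative assumptions; the paper additionally adjusts depths around the eyebrows and nose, which this sketch omits.

```python
def depth_vertices(gray, spacing=2, z_scale=0.1):
    """Place vertices on a regular grid over a grayscale portrait and
    extrude each along z in proportion to its gray level, so brighter
    (typically closer) regions protrude. Returns a list of (x, y, z)."""
    verts = []
    for y in range(0, len(gray), spacing):
        for x in range(0, len(gray[0]), spacing):
            verts.append((x, y, gray[y][x] * z_scale))
    return verts
```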

Facial Contour Extraction in PC Camera Images using Active Contour Models (동적 윤곽선 모델을 이용한 PC 카메라 영상에서의 얼굴 윤곽선 추출)

  • Kim Young-Won; Jun Byung-Hwan
    • Proceedings of the Korea Contents Association Conference / 2005.11a / pp.633-638 / 2005
  • Face extraction is a very important part of human interfaces, biometrics, and security. In this paper, we apply a DCM (Dilation of Color and Motion) filter and active contour models to extract the facial outline. First, the DCM filter is made by applying morphological dilation to the combination of the facial color image and a previously dilated differential image; this filter removes complex backgrounds and detects the facial outline. Because active contour models are highly sensitive to the initial curve, we compute the rotation angle using the geometric ratio of the face, eyes, and mouth. We use edgeness and intensity as the image energy in order to extract the outline in areas of weak edges. We acquired various head-pose images with both eyes visible from five people in an indoor space with a complex background. In experiments on a total of 125 images (25 per person), the average extraction rate of the facial outline is 98.1% and the average processing time is 0.2 s.
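A common greedy approximation of an active contour moves each control point within a small neighbourhood to lower its energy. The energy terms below (a squared-distance continuity term plus a negative image term), the weight `alpha`, and the 3x3 search are my simplifications, not the paper's edgeness/intensity formulation; a real snake also adds a curvature or spacing term so the points do not cluster.

```python
def greedy_snake(edge_map, points, alpha=0.1, iters=20):
    """Greedy active-contour iteration: each control point moves within
    its 3x3 neighbourhood to minimise
        alpha * squared distance to the previous point  -  edge strength."""
    h, w = len(edge_map), len(edge_map[0])
    pts = list(points)
    for _ in range(iters):
        for i, (y, x) in enumerate(pts):
            py, px = pts[i - 1]          # previous point (closed contour)
            best, be = (y, x), None
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w):
                        continue
                    e = alpha * ((ny - py) ** 2 + (nx - px) ** 2) - edge_map[ny][nx]
                    if be is None or e < be:
                        best, be = (ny, nx), e
            pts[i] = best
    return pts
```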

X-ray Image Processing for Korean Red Ginseng Inner Hole Detection (II) - Results of inner hole detection - (홍삼 내공검출을 위한 X-선 영상처리기술 (II) - 내공검출결과 -)

  • 손재룡; 최규홍; 이강진; 최동수; 김기영
    • Journal of Biosystems Engineering / v.28 no.1 / pp.45-52 / 2003
  • Red ginsengs are inspected manually by examining them in a dark room with back-light illumination. Manual inspection is often influenced by the physical condition of the inspectors: sometimes even the best grade, 'heaven', contains inner holes after inspection by a specialist. To resolve this problem, this study developed an image processing algorithm to detect inner holes in x-ray images of ginseng. Because the gray-value difference between background and ginseng in the image is small, simple thresholding was not appropriate; a modified watershed algorithm was used to differentiate the inner holes from the background and normal ginseng body. The inner hole edge region detected by the watershed algorithm consists of many blobs, including normal portions. Line profile analysis, scanning one line at a time from the starting point, showed two peaks at both ends of each blob; setting the threshold at the lower peak value enabled us to obtain the inner hole image. Repeating this procedure up to the finishing point completes inner hole detection for one blob, and applying it to all blobs completes inner hole detection for the whole ginseng. Detection results for various sizes of red ginseng were good, with only a small detection variation of 6.2% depending on the position of the x-ray tube.
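The per-line two-peak thresholding rule can be sketched as follows. The local-maximum peak test and the strict "below the lower peak" cut are my reading of the abstract, not the paper's exact procedure.

```python
def peaks(profile):
    """Indices of local maxima in a 1D line profile."""
    return [i for i in range(1, len(profile) - 1)
            if profile[i] > profile[i - 1] and profile[i] >= profile[i + 1]]

def inner_hole_pixels(profile):
    """The profile across a blob shows a peak at each end; threshold at
    the lower of the two peak values and keep the pixels between the
    peaks that fall below it (candidate inner-hole pixels)."""
    p = peaks(profile)
    if len(p) < 2:
        return []
    left, right = p[0], p[-1]
    thresh = min(profile[left], profile[right])
    return [i for i in range(left + 1, right) if profile[i] < thresh]
```

Applying this line by line from a blob's starting row to its finishing row, and then over all blobs, reproduces the scanning procedure the abstract describes.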