• Title/Summary/Keyword: color segmentation


Automatic segmentation of a tongue area and oriental medicine tongue diagnosis system using the learning of the area features (영역 특징 학습을 이용한 혀의 자동 영역 분리 및 한의학적 설진 시스템)

  • Lee, Min-taek; Lee, Kyu-won
    • Journal of the Korea Institute of Information and Communication Engineering / v.20 no.4 / pp.826-832 / 2016
  • In this paper, we propose a tongue diagnosis system that determines the presence of cracks in specific taste areas, as a first step toward a digital tongue diagnosis system that anyone can use easily without special or expensive diagnostic equipment. A training DB was built with Haar-like features and Adaboost learning from 261 pictures collected in Oriental medicine practice. Tongue candidate regions are detected in the input image with the learned classifier, and the average value of the HUE component is computed to separate only the tongue area within the detected candidate regions. The tongue area is then isolated from the detected tongue contour through Connected Component Labeling. The taste areas are divided according to the relative width and height of the separated tongue region. The image of each taste area is converted to gray scale and binarized with its average brightness value. Finally, the presence or absence of a crack is determined by applying Connected Component Labeling to the binary images.
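
A minimal sketch (not the authors' code) of the hue-based separation step described above: the average HUE of a detected candidate box serves as a reference, pixels close to it are kept, and Connected Component Labeling retains the largest blob as the tongue area. The `hue_tol` threshold is an assumed illustrative value.

```python
import cv2
import numpy as np

def segment_tongue(candidate_bgr, hue_tol=10):
    """candidate_bgr: cropped BGR region returned by a Haar/Adaboost detector."""
    hsv = cv2.cvtColor(candidate_bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0].astype(np.float32)
    mean_hue = hue.mean()                                   # average HUE of the candidate box
    mask = (np.abs(hue - mean_hue) < hue_tol).astype(np.uint8) * 255
    # Connected Component Labeling; keep the largest blob as the tongue area
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    if n <= 1:
        return mask
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return (labels == largest).astype(np.uint8) * 255
```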

Development of a Ubiquitous Vision System for Location-awareness of Multiple Targets by a Matching Technique for the Identity of a Target;a New Approach

  • Kim, Chi-Ho; You, Bum-Jae; Kim, Hag-Bae
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2005.06a / pp.68-73 / 2005
  • Various techniques have been proposed for the detection and tracking of targets in order to develop real-world computer vision systems such as visual surveillance systems and intelligent transport systems (ITSs). In particular, the idea of a distributed vision system is needed to realize these techniques over a wide area. In this paper, we develop a ubiquitous vision system for location-awareness of multiple targets. Each vision sensor composing the system can perform exact segmentation of a target using color and motion information, as well as real-time visual tracking of multiple targets. We construct the ubiquitous vision system as a multi-agent system by regarding each vision sensor as an agent (a vision agent), and we solve the problem of matching a target's identity during handover with a protocol-based approach. For this we propose the identified contract net (ICN) protocol, which is independent of the number of vision agents and requires no calibration between them, thereby improving the speed, scalability, and modularity of the system. We applied the ICN protocol to the ubiquitous vision system we constructed; several experiments show reliable results and confirm that the ICN protocol operates successfully.
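
The per-agent segmentation step ("exact segmentation for a target by color and motion information") could look roughly like the sketch below. This is a generic reconstruction, not the paper's implementation; the HSV target range is a made-up example, and the ICN handover protocol itself is not reproduced.

```python
import cv2
import numpy as np

# motion cue: a standard background subtractor stands in for the paper's method
bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def segment_target(frame_bgr, hsv_lo=(100, 80, 60), hsv_hi=(130, 255, 255)):
    motion = bg_subtractor.apply(frame_bgr)                         # moving pixels
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    color = cv2.inRange(hsv, np.array(hsv_lo, dtype=np.uint8),
                        np.array(hsv_hi, dtype=np.uint8))           # target-colored pixels
    return cv2.bitwise_and(motion, color)                           # keep pixels where both cues agree
```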


Design and Implementation of Automated Detection System of Personal Identification Information for Surgical Video De-Identification (수술 동영상의 비식별화를 위한 개인식별정보 자동 검출 시스템 설계 및 구현)

  • Cho, Youngtak; Ahn, Kiok
    • Convergence Security Journal / v.19 no.5 / pp.75-84 / 2019
  • Recently, the value of video as important data in medical information technology has been increasing because of its rich clinical information. At the same time, video, like other medical images, must be de-identified, but existing methods are mainly specialized for structured data and still images, which makes them difficult to apply to video. To solve this problem, we propose an automated system that indexes candidate elements of personal identification information on a frame-by-frame basis. The proposed system performs indexing using text and person detection after preprocessing based on scene segmentation and a color-knowledge-based method. The generated index information is provided as metadata according to the purpose of use. To verify the effectiveness of the proposed system, the indexing speed was measured with a prototype implementation on real surgical video. The indexing ran more than twice as fast as the playing time of the input video, and a case study on producing surgical education contents confirmed that the index supports the required decision making.
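
A rough sketch of the frame-based indexing idea, assuming OpenCV's stock HOG person detector and a histogram-based shot-change test as stand-ins for the paper's detectors and scene segmentation; the output dictionary plays the role of the index metadata. The sampling step and threshold are assumptions.

```python
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def index_video(path, step=30):
    """Return {frame_index: {'persons': [...], 'shot_change': bool}} metadata."""
    cap = cv2.VideoCapture(path)
    index, prev_hist, frame_no = {}, None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_no % step == 0:
            hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                                [0, 256, 0, 256, 0, 256])
            cv2.normalize(hist, hist)
            shot = prev_hist is not None and \
                   cv2.compareHist(prev_hist, hist, cv2.HISTCMP_CORREL) < 0.7
            persons, _ = hog.detectMultiScale(frame)        # candidate person regions
            index[frame_no] = {'persons': [tuple(p) for p in persons],
                               'shot_change': bool(shot)}
            prev_hist = hist
        frame_no += 1
    cap.release()
    return index
```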

Text Region Detection Method in Mobile Phone Video (휴대전화 동영상에서의 문자 영역 검출 방법)

  • Lee, Hoon-Jae; Sull, Sang-Hoon
    • Journal of the Institute of Electronics Engineers of Korea SP / v.47 no.5 / pp.192-198 / 2010
  • With the popularization of mobile phones with built-in cameras, much effort has gone into providing useful information to users by detecting and recognizing text in video captured by the phone's camera, so there is a need to detect text regions in such video. In this paper, we propose a method to detect text regions in mobile phone video. We employ a morphological operation as preprocessing and obtain a binarized image using modified k-means clustering. Candidate text regions are then obtained by applying connected component analysis and an analysis of general text characteristics. In addition, we increase the precision of text detection by examining how frequently candidate regions recur across frames. Experimental results show that the proposed method detects text regions in mobile phone video with high precision and recall.
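
A hedged sketch of the described pipeline: morphological preprocessing, a two-cluster k-means binarization standing in for the paper's modified k-means, and connected component analysis with simple, assumed text-shape constraints. The frame-frequency check is omitted.

```python
import cv2
import numpy as np

def detect_text_regions(gray):
    # morphological top-hat as a preprocessing step to emphasize text strokes
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (9, 9))
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)

    # binarize by clustering pixel intensities into two groups with k-means
    samples = tophat.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(samples, 2, None, criteria, 3,
                                    cv2.KMEANS_PP_CENTERS)
    bright = int(np.argmax(centers))
    binary = (labels.reshape(gray.shape) == bright).astype(np.uint8) * 255

    # connected component analysis with loose size/aspect constraints for text
    n, _, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    boxes = []
    for i in range(1, n):
        x, y, w, h, area = stats[i]
        if 8 < h < gray.shape[0] // 2 and 0.1 < w / float(h) < 15:
            boxes.append((x, y, w, h))
    return boxes
```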

Enhancement of Haze Removal using Transmission Rate Compensation (전달량 보정을 통한 영상의 안개제거 개선)

  • Ahn, Jinu; Cha, Hyung-Tai
    • Journal of Broadcast Engineering / v.18 no.2 / pp.159-166 / 2013
  • In this paper, we propose a transmission rate compensation method that removes haze from an image by using the edge information of the hazy image and image segmentation. A hazy image makes it difficult not only to recognize objects in the image but also to apply further image processing. The well-known Dark Channel Prior (DCP) defogging algorithm predicts the haze transmission rate from dark areas of the image and removes the fog accordingly. However, there is a high possibility of computing a wrong transmission rate when an area with high RGB values is larger than the reference patch. The proposed method therefore excludes color-distorted areas when calculating the transmission rate and obtains a natural, clean image from the hazy input.
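
For reference, a minimal sketch of the standard Dark Channel Prior transmission estimate that the paper starts from, with t(x) = 1 - omega * dark(I/A); the proposed edge- and segmentation-based compensation of distorted areas is not reproduced here, and `omega` and the patch size are conventional values, not taken from the paper.

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(img.min(axis=2), kernel)       # per-pixel min over RGB, then local min

def estimate_transmission(img_float, omega=0.95, patch=15):
    """img_float: HxWx3 image scaled to [0, 1]."""
    dark = dark_channel(img_float, patch)
    # atmospheric light A: mean color of the brightest 0.1% dark-channel pixels
    n_top = max(1, dark.size // 1000)
    idx = np.argsort(dark.ravel())[-n_top:]
    A = img_float.reshape(-1, 3)[idx].mean(axis=0)
    # t(x) = 1 - omega * dark_channel(I / A)
    return 1.0 - omega * dark_channel(img_float / A, patch)
```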

Spatiotemporal Saliency-Based Video Summarization on a Smartphone (스마트폰에서의 시공간적 중요도 기반의 비디오 요약)

  • Lee, Won Beom; Williem, Williem; Park, In Kyu
    • Journal of Broadcast Engineering / v.18 no.2 / pp.185-195 / 2013
  • In this paper, we propose a video summarization technique for smartphones based on spatiotemporal saliency. The proposed technique detects scene changes by computing color histogram differences, which is robust to camera and object motion. The similarity between adjacent frames, face regions, and per-frame saliency are then computed to analyze the spatiotemporal saliency of a video clip. An over-segmented hierarchical tree is created from the scene changes and updated iteratively using merging and maintenance energies computed during the analysis. In the updated hierarchical tree, summary frames are extracted by applying a greedy algorithm to nodes with high saliency, subject to the reduction ratio and minimum interval requested by the user. Experimental results show that the proposed method summarizes a 2-minute video in about 10 seconds on a commercial smartphone, with summarization quality superior to that of the commercial video editing software Muvee.
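
A small sketch of just the first stage, scene-change detection from color-histogram differences between adjacent frames; the saliency analysis, hierarchical tree, and greedy selection are not reproduced, and the Bhattacharyya threshold is an assumed value.

```python
import cv2

def detect_scene_changes(path, threshold=0.5):
    cap = cv2.VideoCapture(path)
    changes, prev_hist, i = [], None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [32, 32], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Bhattacharyya distance is large when adjacent histograms differ
            if cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA) > threshold:
                changes.append(i)
        prev_hist, i = hist, i + 1
    cap.release()
    return changes
```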

The Slope Extraction and Compensation Based on Adaptive Edge Enhancement to Extract Scene Text Region (장면 텍스트 영역 추출을 위한 적응적 에지 강화 기반의 기울기 검출 및 보정)

  • Back, Jaegyung; Jang, Jaehyuk; Seo, Yeong Geon
    • Journal of Digital Contents Society / v.18 no.4 / pp.777-785 / 2017
  • In the modern real world, a great deal of information can be obtained by extracting and recognizing text in a scene, so techniques for extracting and recognizing text regions from scene images are constantly evolving. They can largely be divided into texture-based methods, connected component methods, and hybrids of the two. Texture-based methods find and extract text based on the fact that text and non-text differ in values such as color and brightness. Connected component methods group similar adjacent pixels into connected elements and then decide which are text using geometric properties. In this paper, we propose a method that adaptively enhances edges to improve the accuracy of text region extraction and that detects and corrects the slope of the image using edges and image segmentation. By correcting the slope of the image, the method extracts only the exact area containing the text, achieving an extraction rate 15% more accurate than MSER and 10% more accurate than EEMSER.
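
A generic sketch of edge-based slope detection and correction (Canny edges, a probabilistic Hough transform, and a rotation by the median line angle); this is not the paper's adaptive edge enhancement, only an illustration of the deskewing idea, and all thresholds are assumptions.

```python
import cv2
import numpy as np

def deskew_by_edges(gray):
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=gray.shape[1] // 4, maxLineGap=10)
    if lines is None:
        return gray
    angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1))
              for x1, y1, x2, y2 in lines[:, 0]]
    # use the median near-horizontal angle as the estimated text slope
    angles = [a for a in angles if abs(a) < 45]
    if not angles:
        return gray
    angle = float(np.median(angles))
    h, w = gray.shape
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(gray, M, (w, h), flags=cv2.INTER_LINEAR)
```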

Text extraction from camera based document image (카메라 기반 문서영상에서의 문자 추출)

  • 박희주; 김진호
    • Journal of Korea Society of Industrial Information Systems / v.8 no.2 / pp.14-20 / 2003
  • This paper presents a text extraction method for camera-based document images. Camera-based document images are more difficult to recognize than scanner-based images because of segmentation problems caused by variable lighting conditions and diverse fonts. Both document binarization and character extraction are important steps in recognizing camera-based document images. After converting the color image into a gray-level image, gray-level normalization is used to extract the character regions independently of the lighting conditions and background. A local adaptive binarization method is then used to separate characters from the background after noise removal. In the character extraction step, horizontal and vertical projection profiles and connected component information are used to extract text lines, word regions, and character regions. To evaluate the proposed method, we experimented with documents from the ETRI database mixing Hangul, English, symbols, and digits, and obtained encouraging binarization and character extraction results.
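
A hedged sketch of the extraction steps described above: gray-level normalization, local adaptive binarization, and a horizontal projection profile to locate text lines. Parameter values are illustrative assumptions, and the word- and character-level splitting is omitted.

```python
import cv2
import numpy as np

def extract_text_lines(bgr):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    norm = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)   # lighting normalization
    binary = cv2.adaptiveThreshold(norm, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, blockSize=25, C=10)
    # horizontal projection: rows with many foreground pixels belong to text lines
    profile = binary.sum(axis=1)
    in_line, lines, start = False, [], 0
    for y, v in enumerate(profile):
        if v > 0 and not in_line:
            in_line, start = True, y
        elif v == 0 and in_line:
            in_line = False
            lines.append((start, y))
    return binary, lines
```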


A Vehicle Classification Method in Thermal Video Sequences using both Shape and Local Features (형태특징과 지역특징 융합기법을 활용한 열영상 기반의 차량 분류 방법)

  • Yang, Dong Won
    • Journal of IKEEE / v.24 no.1 / pp.97-105 / 2020
  • A thermal imaging sensor receives the energy radiating from targets and background, so it has been widely used for the detection, tracking, and classification of targets at night for military purposes. When recognizing targets automatically from thermal images, accurate object edges yield classification results with high accuracy. However, because thermal images have lower spatial resolution and more blurred edges than color images, classification accuracy can decrease. To overcome this problem, we propose a new hierarchical classifier that uses both shape and local features weighted by segmentation reliability, together with a class/pose updating method for vehicle classification. The proposed method was validated on thermal video sequences of more than 20,000 images covering four types of military vehicles: main battle tank, armored personnel carrier, military truck, and estate car. The experimental results show that the proposed method outperforms state-of-the-art methods in classification accuracy.
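
The abstract does not give the classifier's details, so the sketch below only illustrates the general idea of weighting a shape-based score against a local-feature score by a segmentation-reliability estimate; the reliability measure, both score vectors, and the class names' ordering are hypothetical placeholders.

```python
import numpy as np

def fuse_scores(shape_scores, local_scores, seg_reliability):
    """shape_scores, local_scores: per-class score vectors in [0, 1];
    seg_reliability: 0..1, how trustworthy the segmented silhouette is."""
    w = np.clip(seg_reliability, 0.0, 1.0)
    # trust the shape (silhouette) classifier more when segmentation is reliable
    return w * np.asarray(shape_scores) + (1.0 - w) * np.asarray(local_scores)

classes = ["main battle tank", "armored personnel carrier", "military truck", "estate car"]
fused = fuse_scores([0.7, 0.2, 0.05, 0.05], [0.4, 0.4, 0.1, 0.1], seg_reliability=0.8)
print(classes[int(np.argmax(fused))])
```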

A Smoke Detection Method based on Video for Early Fire-Alarming System (조기 화재 경보 시스템을 위한 비디오 기반 연기 감지 방법)

  • Truong, Tung X.; Kim, Jong-Myon
    • The KIPS Transactions: Part B / v.18B no.4 / pp.213-220 / 2011
  • This paper proposes an effective four-stage, video-based smoke detection method that provides emergency response to unexpected hazards in early fire-alarming systems. In the first stage, an approximate median method is used to segment moving regions in the current video frame. In the second stage, color segmentation of smoke is performed to select candidate smoke regions from those moving regions. In the third stage, a feature extraction algorithm extracts five smoke feature parameters by analyzing characteristics of the candidate smoke regions such as area randomness and motion. In the fourth stage, the five extracted parameters are fed to a K-nearest neighbor (KNN) classifier to decide whether each candidate region is smoke or non-smoke. Experimental results indicate that the proposed method outperforms other algorithms in smoke detection, providing a low false alarm rate and high reliability in open and large spaces.
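
A minimal sketch of two of the four stages, assuming scikit-learn for the KNN step: the approximate median background update used for moving-region segmentation, and a KNN decision over five-dimensional feature vectors. The specific smoke features, color segmentation, and training data are not reproduced.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def update_approx_median(background, frame, step=1):
    """Approximate median: nudge the background toward each new frame by +/- step."""
    background = background.astype(np.int16)
    background += step * np.sign(frame.astype(np.int16) - background)
    return np.clip(background, 0, 255).astype(np.uint8)

def moving_mask(background, frame, thresh=25):
    # pixels that differ strongly from the background model are "moving"
    return np.abs(frame.astype(np.int16) - background.astype(np.int16)) > thresh

# KNN over 5-dimensional smoke feature vectors (training data is hypothetical)
knn = KNeighborsClassifier(n_neighbors=5)
# knn.fit(train_features, train_labels)      # features: area randomness, motion, ...
# is_smoke = knn.predict(candidate_features)
```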