• Title/Summary/Keyword: histogram-based segmentation


A Computational Improvement of Otsu's Algorithm by Estimating Approximate Threshold (근사 임계값 추정을 통한 Otsu 알고리즘의 연산량 개선)

  • Lee, Youngwoo;Kim, Jin Heon
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.2
    • /
    • pp.163-169
    • /
    • 2017
  • There are various algorithms for evaluating a threshold for image segmentation. Among them, Otsu's algorithm sets a threshold based on the histogram: it computes the between-class variance over all gray levels and selects the level with the largest value as the optimal threshold, so the algorithm requires a considerable amount of computation. In this paper, we reduce the computational cost by estimating Otsu's threshold rather than evaluating all threshold candidates. The proposed algorithm is compared with the original one in terms of computation and accuracy. We confirm that the proposed algorithm is about 29 times faster than the conventional method on a single processor and about 4 times faster on a parallel processing machine.
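
The exhaustive search this paper speeds up can be sketched as follows; this is a generic textbook formulation of Otsu's method, not the paper's estimation scheme:

```python
import numpy as np

def otsu_threshold(hist):
    """Exhaustive Otsu: return the gray level that maximizes the
    between-class variance of the two classes split at that level."""
    hist = np.asarray(hist, dtype=np.float64)
    total = hist.sum()
    sum_all = np.dot(np.arange(len(hist)), hist)  # sum of level * count
    w0 = sum0 = 0.0
    best_t, best_var = 0, 0.0
    for t in range(len(hist)):
        w0 += hist[t]           # weight of the background class
        if w0 == 0:
            continue
        w1 = total - w0         # weight of the foreground class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Evaluating this loop for every candidate level is exactly the cost the paper avoids by estimating an approximate threshold first.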

The Moving Object Segmentation By Using Multistage Merging (다단계 결합을 이용한 이동 물체 분리 알고리즘에 관한 연구)

  • 안용학;이정헌;채옥삼
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.21 no.10
    • /
    • pp.2552-2562
    • /
    • 1996
  • In this paper, we propose a segmentation algorithm that can reliably separate moving objects from a noisy background in an image sequence received from a camera at a fixed position. The proposed algorithm consists of three processes: generation of the difference image between the input image and the reference image, multilevel quantization of the difference image, and multistage merging in the quantized image. The quantization process requantizes the difference image based on multiple threshold values determined by histogram analysis. The merging starts from a seed region created using the highest threshold value and ends when the termination conditions are met. The proposed method has been tested with various real image sequences containing intruders. The test results show that the proposed algorithm can detect moving objects such as intruders very effectively in a noisy environment.
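
The first two processes above (difference image, then multilevel quantization) can be sketched as below; the thresholds are assumed to come from the paper's histogram analysis, and this is only an illustrative reading of those steps:

```python
import numpy as np

def quantize_difference(frame, reference, thresholds):
    """Requantize |frame - reference| into levels separated by the given
    thresholds: a pixel gets label k when its difference falls in bin k,
    so the highest label marks candidate seed regions for merging."""
    diff = np.abs(frame.astype(np.int16) - reference.astype(np.int16))
    return np.digitize(diff, sorted(thresholds))
```

The multistage merging would then grow outward from the highest-label regions through progressively lower labels.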


Pulmonary Vessels Segmentation and Refinement On the Chest CT Images (흉부 CT 영상에서 폐 혈관 분할 및 정제)

  • Kim, Jung-Chul;Cho, Joon-Ho;Hwang, Hyung-Soo
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.11
    • /
    • pp.188-194
    • /
    • 2013
  • In this paper, we propose a new method for segmenting and refining pulmonary vessels in chest CT images. The proposed method consists of the following five steps. First, a threshold is estimated by polynomial regression analysis of the histogram variation rate of the pulmonary image. Second, the pulmonary vessel object is segmented by a density-based segmentation method using the threshold estimated in the first step. Third, 2D connected component labeling is applied to the segmented pulmonary vessels, and the seed points of both diaphragms are determined by the eccentricity and size of the components. Fourth, the diaphragm is extracted by 3D region growing from the determined seed points. Finally, noise in the pulmonary vessel image is removed by 3D connected component labeling. The experimental results show accurate segmentation of the pulmonary vessels, extraction of the diaphragm, and removal of noise from the pulmonary vessel image.
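
One hypothetical reading of the first step is sketched below; the polynomial degree and the flattest-valley criterion are assumptions for illustration, not the paper's exact regression model:

```python
import numpy as np

def estimate_threshold(hist, degree=4):
    """Fit a polynomial to the histogram's variation rate (first difference)
    and take the gray level where the fitted rate is closest to zero, i.e.
    the flattest valley between the background and vessel modes."""
    rate = np.diff(np.asarray(hist, dtype=np.float64))
    x = np.arange(rate.size)
    fitted = np.polyval(np.polyfit(x, rate, degree), x)
    return int(np.argmin(np.abs(fitted)))
```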

An Illumination and Background-Robust Hand Image Segmentation Method Based on the Dynamic Threshold Values (조명과 배경에 강인한 동적 임계값 기반 손 영상 분할 기법)

  • Na, Min-Young;Kim, Hyun-Jung;Kim, Tae-Young
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.5
    • /
    • pp.607-613
    • /
    • 2011
  • In this paper, we propose a hand image segmentation method using dynamic threshold values on input images with various lighting and background conditions. First, a moving hand silhouette is extracted using camera input difference images. Next, based on an R, G, B histogram analysis of the extracted hand silhouette area, a threshold interval for each of R, G, and B is calculated at run time. Finally, the hand area is segmented by thresholding, and then a morphology operation, a connected component analysis, and a flood-fill operation are performed for noise removal. Experimental results on various input images showed that our hand segmentation method provides a high level of accuracy and relatively fast, stable results without the need for fixed threshold values. The proposed method can be used in the user interface of mixed reality applications.
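
The run-time interval computation could look like the sketch below; the percentile cut-offs are placeholders, since the paper derives its intervals from its own histogram analysis:

```python
import numpy as np

def channel_intervals(image, mask, low_pct=5, high_pct=95):
    """For each of R, G, B, take a percentile interval of the pixel values
    inside the extracted silhouette mask, giving per-channel dynamic
    thresholds for this particular lighting and background."""
    intervals = []
    for c in range(3):
        vals = image[..., c][mask]
        intervals.append((np.percentile(vals, low_pct),
                          np.percentile(vals, high_pct)))
    return intervals

def segment_hand(image, intervals):
    """Keep pixels whose R, G, and B values all fall inside their intervals."""
    keep = np.ones(image.shape[:2], dtype=bool)
    for c, (lo, hi) in enumerate(intervals):
        ch = image[..., c]
        keep &= (ch >= lo) & (ch <= hi)
    return keep
```

Because the intervals are recomputed per frame, the segmentation adapts to lighting changes instead of relying on fixed thresholds.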

Multi-Object Detection Using Image Segmentation and Salient Points (영상 분할 및 주요 특징 점을 이용한 다중 객체 검출)

  • Lee, Jeong-Ho;Kim, Ji-Hun;Moon, Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.2
    • /
    • pp.48-55
    • /
    • 2008
  • In this paper, we propose a novel method for an image retrieval system using image segmentation and salient points. The proposed method consists of four steps. In the first step, images are segmented into several regions by the JSEG algorithm. In the second step, dominant colors and the corresponding color histogram are constructed for the segmented regions; using the dominant colors and the color histogram, we identify candidate regions where objects may exist. In the third step, real object regions are detected from the candidate regions by SIFT matching. In the final step, we measure the similarity between the query image and a DB image using the color correlogram technique, where the color correlogram is computed on the query image and the object region of the DB image. Experimental results show that the proposed method detects multiple objects very well and provides better retrieval performance than object-based retrieval systems.
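
A minimal stand-in for the histogram comparison in the candidate-region step is shown below; histogram intersection is one common choice, the paper's actual dominant-color matching rule is not specified here:

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity of two color histograms: normalize each to sum to 1,
    then sum the bin-wise minima (1.0 = identical, 0.0 = disjoint)."""
    h1 = np.asarray(h1, dtype=np.float64)
    h2 = np.asarray(h2, dtype=np.float64)
    return np.minimum(h1 / h1.sum(), h2 / h2.sum()).sum()
```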

Makeup transfer by applying a loss function based on facial segmentation combining edge with color information (에지와 컬러 정보를 결합한 안면 분할 기반의 손실 함수를 적용한 메이크업 변환)

  • Lim, So-hyun;Chun, Jun-chul
    • Journal of Internet Computing and Services
    • /
    • v.23 no.4
    • /
    • pp.35-43
    • /
    • 2022
  • Makeup is the most common way to improve a person's appearance. However, since makeup styles are very diverse, applying makeup oneself involves considerable time and cost, so the need for makeup automation is increasing. Makeup transfer, which applies a makeup style to a face image without makeup, is being studied for this purpose. Makeup transfer methods can be divided into traditional image-processing-based methods and deep-learning-based methods; among the latter, many studies based on Generative Adversarial Networks have been performed. However, both approaches have disadvantages: the resulting image can be unnatural, the result of the makeup transfer is not clear, and it is smeared or heavily influenced by the makeup-style face image. In order to express a clear makeup boundary and to reduce the influence of the makeup-style face image, this study segments the makeup area and computes a loss function using HoG (Histogram of Gradients). HoG extracts image features from the magnitude and orientation of the edges present in the image. Based on this, we propose a makeup transfer network that learns robustly on edges. By comparing images generated by the proposed model with images generated by BeautyGAN, used as the base model, we confirmed that the proposed model performs better, and we suggest the use of additional facial information as future work.
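
The HoG feature the loss is built on can be sketched for a single cell as below; the bin count, unsigned gradients, and central-difference kernel are generic HoG conventions assumed here, not the paper's exact configuration:

```python
import numpy as np

def hog_cell_histogram(patch, n_bins=9):
    """Histogram of Gradients for one cell: bin each pixel's gradient
    orientation over [0, 180) degrees, weighting the vote by the
    gradient magnitude."""
    gx = np.zeros_like(patch, dtype=np.float64)
    gy = np.zeros_like(patch, dtype=np.float64)
    # central differences (borders left at zero for simplicity)
    gx[:, 1:-1] = patch[:, 2:].astype(np.float64) - patch[:, :-2]
    gy[1:-1, :] = patch[2:, :].astype(np.float64) - patch[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())
    return hist
```

Comparing such histograms between the generated and reference faces penalizes blurred or displaced makeup edges, which is the role the HoG loss plays in the network.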

Video Object Extraction Using Contour Information (윤곽선 정보를 이용한 동영상에서의 객체 추출)

  • Kim, Jae-Kwang;Lee, Jae-Ho;Kim, Chang-Ick
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.1
    • /
    • pp.33-45
    • /
    • 2011
  • In this paper, we present a method for extracting video objects efficiently using a modified graph cut algorithm based on contour information. First, objects are extracted in the first frame by an automatic object extraction algorithm or by user interaction. To estimate the objects' contours in the current frame, the motion of the objects' contours in the previous frame is analyzed. Block-based histogram back-projection is conducted along the estimated contour points, and color models of the objects and the background are generated from the back-projection images. The probabilities of links between neighboring pixels are determined by a logarithmic distance transform map obtained from the estimated contour image. The energy of the graph is defined by the predefined color models and the logarithmic distance transform map, and the object is finally extracted by minimizing this energy. Experimental results on various test images show that our algorithm works more accurately than other methods.
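
The back-projection step can be illustrated for a single gray channel as below; the bin count and max-normalization are assumptions, and the paper applies this block-wise along the contour rather than globally:

```python
import numpy as np

def back_projection(image, model_hist, n_bins=16):
    """Histogram back-projection: replace each pixel with the normalized
    model-histogram value of its intensity bin, yielding a per-pixel
    likelihood of belonging to the modeled object."""
    model = np.asarray(model_hist, dtype=np.float64)
    model = model / model.max()                 # likelihood in [0, 1]
    bins = (image.astype(np.int32) * n_bins) // 256
    return model[bins]
```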

Shot Boundary Detection of Video Sequence Using Hierarchical Hidden Markov Models (계층적 은닉 마코프 모델을 이용한 비디오 시퀀스의 셧 경계 검출)

  • Park, Jong-Hyun;Cho, Wan-Hyun;Park, Soon-Young
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.27 no.8A
    • /
    • pp.786-795
    • /
    • 2002
  • In this paper, we present a histogram- and moment-based video scene change detection technique using hierarchical Hidden Markov Models (HMMs). The proposed method extracts histograms from the low-frequency subband and moments of edge components from the high-frequency subbands of wavelet-transformed images. Each HMM is then trained using histogram differences and directional moment differences, respectively, extracted from manually labeled video. The video segmentation process consists of two steps. A histogram-based HMM is first used to segment the input video sequence into three categories: shot, cut, and gradual scene change. In the second stage, a moment-based HMM is used to further segment the gradual changes into fades and dissolves. The experimental results show that the proposed technique is more effective in partitioning video frames than previous threshold-based methods.
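
The observation sequence the histogram-based HMM classifies can be sketched as below; the bin count and L1 distance are assumptions, and the paper computes the histograms on the wavelet low-frequency subband rather than the raw frames:

```python
import numpy as np

def histogram_differences(frames, n_bins=32):
    """Per-frame-pair histogram difference: small values suggest a
    continuing shot, a spike suggests a cut, and a sustained ramp
    suggests a gradual change (fade or dissolve)."""
    hists = [np.histogram(f, bins=n_bins, range=(0, 256))[0] / f.size
             for f in frames]
    return [np.abs(hists[i + 1] - hists[i]).sum()
            for i in range(len(hists) - 1)]
```

The hierarchical part of the method then asks a second, moment-based HMM to disambiguate the gradual-change segments.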

Caption Detection and Recognition for Video Image Information Retrieval (비디오 영상 정보 검색을 위한 문자 추출 및 인식)

  • 구건서
    • Journal of the Korea Computer Industry Society
    • /
    • v.3 no.7
    • /
    • pp.901-914
    • /
    • 2002
  • In this paper, we propose an efficient automatic caption detection and localization method, with caption recognition using an FE-MCBP (Feature Extraction based Multichained BackPropagation) neural network, for content-based video retrieval. Frames are selected from the video at a fixed time interval, and key frames are selected by a gray-scale histogram method. For each key frame, segmentation is performed, caption lines are detected using a line scan method, and finally the individual characters are separated. This work improves speed and efficiency through color segmentation using a local maximum analysis method before line scanning. Caption detection is the first stage of multimedia database organization, and the detected captions are used as input to a text recognition system; the recognized captions can then be searched by content-based retrieval methods.


Skin Segmentation Using YUV and RGB Color Spaces

  • Al-Tairi, Zaher Hamid;Rahmat, Rahmita Wirza;Saripan, M. Iqbal;Sulaiman, Puteri Suhaiza
    • Journal of Information Processing Systems
    • /
    • v.10 no.2
    • /
    • pp.283-299
    • /
    • 2014
  • Skin detection is used in many applications, such as face recognition, hand tracking, and human-computer interaction. Many skin color detection algorithms extract human skin regions using a thresholding technique, since it is simple and fast to compute. The efficiency of each color space depends on its robustness to changes in lighting and on its ability to distinguish skin color pixels in images with a complex background. For more accurate skin detection, we propose a new threshold based on the RGB and YUV color spaces. The proposed approach starts by converting the RGB color space to the YUV color model. It then separates the Y channel, which represents the intensity of the color model, from the U and V channels to eliminate the effects of luminance. After that, the threshold values are selected by testing the boundaries of skin colors with the help of the color histogram. Finally, the threshold is applied to the input image to extract the skin parts. The detected skin regions were quantitatively compared to the actual skin parts in the input images to measure accuracy, and the results of our threshold were compared with those of other thresholds to demonstrate the efficiency of our approach. The experimental results show that the proposed threshold is more robust to complex backgrounds and lighting conditions than the others.
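
The conversion and thresholding pipeline can be sketched as below using standard BT.601-style YUV weights; the U and V ranges here are illustrative placeholders, not the values the paper derives from its color-histogram boundary tests:

```python
import numpy as np

def rgb_to_uv(rgb):
    """Convert RGB to the U and V chrominance channels (BT.601-style
    weights), dropping Y to reduce sensitivity to luminance."""
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)
    v = 0.877 * (r - y)
    return u, v

def skin_mask(rgb, u_range=(-40.0, 10.0), v_range=(5.0, 75.0)):
    """Keep pixels whose U and V both fall inside the skin intervals;
    the intervals are hypothetical, for illustration only."""
    u, v = rgb_to_uv(rgb)
    return ((u >= u_range[0]) & (u <= u_range[1]) &
            (v >= v_range[0]) & (v <= v_range[1]))
```

Because Y is discarded before thresholding, a skin pixel under bright or dim light maps to roughly the same (U, V) point, which is the luminance robustness the paper targets.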