• Title/Summary/Keyword: gray histogram analysis

37 search results

Technique of Seam-Line Extraction for Automatic Image Mosaic Generation (자동 모자이크 영상제작을 위한 접합선 추출기법에 관한 연구)

  • Song, Nak-Hyeon;Lee, Sung-Hun;Oh, Kum-Hui;Cho, Woo-Sug
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.25 no.1 / pp.47-53 / 2007
  • Satellite image mosaicking is essential for image interpretation and analysis, especially over a large area such as the Korean Peninsula. This paper proposes a technique for automatic seam-line extraction and a method for creating image mosaics in an automated fashion. A seam-line that minimizes artificial discontinuity was extracted using the Minimum Absolute Gray Difference Sum algorithm, with a constraint on the search-area width, together with the Canny edge detection algorithm. To maintain radiometric balance among images acquired at different time epochs, the Match Cumulative Frequency method was used. Experimental results showed that the edge detection algorithm extracted seam-lines well along linear features such as roads and rivers.
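The minimum-absolute-gray-difference idea in this abstract can be sketched as follows. This is a minimal greedy sketch, not the paper's MAGDS formulation (which also involves the Canny step); the function name, the per-row search, and the band/step parameters are illustrative assumptions.

```python
import numpy as np

def extract_seam_line(img_a, img_b, band=(0, None), max_step=1):
    """Pick, per row, the column inside the overlap band whose absolute
    gray difference |A - B| is smallest, constraining the seam to move
    at most `max_step` columns between neighbouring rows (a stand-in
    for the search-area-width constraint)."""
    diff = np.abs(img_a.astype(int) - img_b.astype(int))
    lo = band[0]
    hi = band[1] if band[1] is not None else diff.shape[1]
    seam = [lo + int(np.argmin(diff[0, lo:hi]))]
    for r in range(1, diff.shape[0]):
        c0 = max(lo, seam[-1] - max_step)
        c1 = min(hi, seam[-1] + max_step + 1)
        seam.append(c0 + int(np.argmin(diff[r, c0:c1])))
    return seam
```

Along a good seam the two images agree, so stitching either side of it leaves no visible discontinuity.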

Robust Method of Video Contrast Enhancement for Sudden Illumination Changes (급격한 조명 변화에 강건한 동영상 대조비 개선 방법)

  • Park, Jin Wook;Moon, Young Shik
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.11 / pp.55-65 / 2015
  • Contrast enhancement methods designed for single images may cause flickering artifacts when applied to videos, because they do not consider temporal continuity. On the other hand, methods that do consider continuity can reduce flickering but may cause unnecessary fade-in/out artifacts when the intensity of the video changes abruptly. In this paper, we propose a video contrast enhancement method that is robust to sudden illumination changes. The proposed method enhances each frame by Fast Gray-Level Grouping (FGLG) and enforces temporal continuity with an exponential smoothing filter. The smoothing factor of the filter is calculated with a sigmoid function and applied to each frame to reduce unnecessary fade-in/out effects. In the experiments, six measures are used to compare the performance of the proposed method with traditional methods. The results show that the proposed method achieves the best quantitative performance in MSSIM and flickering score, and visual comparison confirms adaptive enhancement under sudden illumination changes.
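The sigmoid-controlled exponential smoothing described in this abstract can be sketched as follows. This is a minimal numpy sketch under assumptions: the sigmoid parameters `k` and `center`, the use of mean brightness as the jump measure, and all function names are illustrative, not the paper's values.

```python
import numpy as np

def smoothing_factor(frame_diff, k=0.5, center=20.0):
    """Sigmoid of the mean-brightness jump between consecutive frames:
    small jumps -> alpha near 0 (heavy temporal smoothing, no flicker);
    sudden illumination changes -> alpha near 1 (follow the new frame,
    avoiding fade-in/out lag)."""
    return 1.0 / (1.0 + np.exp(-k * (frame_diff - center)))

def temporally_smooth(enhanced_frames):
    """Exponential smoothing over per-frame enhanced images, with a
    per-frame sigmoid smoothing factor."""
    out = [enhanced_frames[0].astype(float)]
    for f in enhanced_frames[1:]:
        f = f.astype(float)
        alpha = smoothing_factor(abs(f.mean() - out[-1].mean()))
        out.append(alpha * f + (1.0 - alpha) * out[-1])
    return out
```

The key property is that the filter adapts: continuity is enforced only while the scene brightness is stable.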

Caption Detection and Recognition for Video Image Information Retrieval (비디오 영상 정보 검색을 위한 문자 추출 및 인식)

  • 구건서
    • Journal of the Korea Computer Industry Society / v.3 no.7 / pp.901-914 / 2002
  • In this paper, we propose an efficient automatic caption detection and localization method, with caption recognition using an FE-MCBP (Feature Extraction based Multichained BackPropagation) neural network, for content-based video retrieval. Frames are sampled from the video at a fixed time interval, and key frames are selected by a gray-scale histogram method. For each key frame, segmentation is performed and caption lines are detected using a line-scan method; finally, individual characters are separated. This research improves speed and efficiency by applying color segmentation with a local maximum analysis method before line scanning. Caption detection is the first stage of multimedia database organization, and the detected captions are fed to a text recognition system. Recognized captions can then be searched by content-based retrieval methods.
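The gray-scale-histogram key-frame selection mentioned in this abstract can be sketched as follows. This is a minimal sketch under assumptions: the L1 histogram distance, the threshold value, and the function name are illustrative; the paper does not specify its exact criterion.

```python
import numpy as np

def select_key_frames(frames, threshold=0.3, bins=64):
    """Mark a frame as a key frame when its normalized gray-level
    histogram differs from the previous key frame's histogram by more
    than `threshold` (L1 distance)."""
    def hist(f):
        h, _ = np.histogram(f, bins=bins, range=(0, 256))
        return h / max(1, f.size)

    keys = [0]
    ref = hist(frames[0])
    for i, f in enumerate(frames[1:], start=1):
        h = hist(f)
        if np.abs(h - ref).sum() > threshold:
            keys.append(i)
            ref = h
    return keys
```

Only the selected key frames are then passed to the segmentation and line-scan stages, which keeps the caption pipeline fast.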


Distribution Mapping and Local Analysis of Ciliary Beat Frequencies (세포의 섬모 운동 변화 분석을 위한 CBF 분포도 구성 및 국소적 분포 분석에 관한 연구)

  • Yi, W.J.;Park, K.S.;Min, Y.G.;Sung, M.W.;Lee, K.S.
    • Proceedings of the KOSOMBE Conference / v.1997 no.11 / pp.154-160 / 1997
  • By their rapid and periodic action, the cilia of the human respiratory tract play an important role in clearing inhaled noxious particles. Using an automated image-processing technique, we studied ciliary beat frequency (CBF) objectively and quantitatively. Microscopic ciliary images were digitized into gray-level images through an image grabber, and CBF signals were extracted from them. By means of an FFT, the maximum peak frequency was detected as the CBF in each partitioned block of the entire digitized field. From these CBFs we composed distribution maps visually showing the spatial distribution of CBF. Through these maps, overall CBF changes of cells and CBF differences between neighboring cells can be easily measured and detected. Histogram statistics calculated over a user-defined polygonal window show the locally dominant frequency, presumed to be the CBF of a cell or cluster the region includes.
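The per-block FFT step can be sketched as follows. This is a minimal numpy sketch; the block size, sampling rate, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def block_cbf(signal, fs):
    """Estimate the CBF for one block: FFT the mean-gray time signal
    and return the frequency of the largest non-DC peak."""
    spec = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[1:][np.argmax(spec[1:])]

def cbf_map(frames, fs, block=8):
    """CBF distribution map: partition the field into blocks and
    estimate one CBF per block from its mean-gray time series."""
    _, H, W = frames.shape
    return np.array(
        [[block_cbf(frames[:, r:r + block, c:c + block].mean(axis=(1, 2)), fs)
          for c in range(0, W, block)]
         for r in range(0, H, block)])
```

Plotting `cbf_map` as an image gives exactly the kind of spatial CBF distribution map the abstract describes.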


Implementation of the System Converting Image into Music Signals based on Intentional Synesthesia (의도적인 공감각 기반 영상-음악 변환 시스템 구현)

  • Bae, Myung-Jin;Kim, Sung-Ill
    • Journal of IKEEE / v.24 no.1 / pp.254-259 / 2020
  • This paper presents the implementation of a system that converts images into music based on intentional synesthesia. The color, texture, and shape of the input image were converted into the melody, harmony, and rhythm of the music, respectively. Depending on the color histogram, melody notes were selected probabilistically to form the melody. The texture of the image was expressed as harmony and a minor key using seven characteristics of the GLCM, a statistical texture-feature extraction method. Finally, the shape of the image was extracted from the edge image; line components were detected using the Hough transform, and the rhythm was selected according to the distribution of line angles to produce the music.

Color Component Analysis For Image Retrieval (이미지 검색을 위한 색상 성분 분석)

  • Choi, Young-Kwan;Choi, Chul;Park, Jang-Chun
    • The KIPS Transactions:PartB / v.11B no.4 / pp.403-410 / 2004
  • Recently, studies of image analysis as a preprocessing stage for medical image analysis or image retrieval have been actively carried out. This paper proposes a way of utilizing color components for image retrieval. Retrieval is based on color components, and for color analysis the CLCM (Color Level Co-occurrence Matrix) and statistical techniques are used. The proposed CLCM projects color components onto 3D space through a geometric rotation transform and then interprets the distribution arising from their spatial relationships. The CLCM is a 2D histogram built in a color model created by geometrically rotating the original color model, and it is analyzed with statistical techniques. Like the CLCM, the GLCM (Gray Level Co-occurrence Matrix) [1] and invariant moments [2,3] use 2D distribution charts and basic statistical techniques to interpret 2D data. However, even though the GLCM and invariant moments are optimized in their own domains, they cannot fully interpret irregular data in spatial coordinates; because they rely only on basic statistics, the reliability of the extracted features is low. To interpret the spatial relationships and weights of the data, this study uses Principal Component Analysis [4,5], a multivariate statistical method. To increase accuracy, the color components are projected onto 3D space, rotated, and features are extracted from all angles.
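The two building blocks of this abstract, a co-occurrence histogram and PCA over its coordinates, can be sketched as follows. This shows a plain gray-level GLCM plus histogram-weighted PCA, not the paper's CLCM with its 3D rotation; the quantization and function names are assumptions.

```python
import numpy as np

def cooccurrence(gray, levels=8):
    """GLCM-style 2D histogram over horizontally adjacent pixel pairs,
    normalized to sum to 1."""
    q = gray.astype(int) * levels // 256
    pairs = np.stack([q[:, :-1].ravel(), q[:, 1:].ravel()], axis=1)
    m = np.zeros((levels, levels))
    for i, j in pairs:
        m[i, j] += 1
    return m / m.sum()

def principal_axes(hist2d):
    """Interpret a 2D distribution with PCA: eigen-decompose the
    covariance of the (i, j) bin coordinates weighted by the histogram.
    Returns (variances, axes), largest variance first."""
    idx = np.indices(hist2d.shape).reshape(2, -1).T.astype(float)
    w = hist2d.ravel()
    mean = (idx * w[:, None]).sum(axis=0)
    centered = idx - mean
    cov = (w[:, None, None] * centered[:, :, None] * centered[:, None, :]).sum(axis=0)
    vals, vecs = np.linalg.eigh(cov)
    return vals[::-1], vecs[:, ::-1]
```

The eigenvalues and eigenvectors describe the spread and orientation of the distribution, which is the kind of feature the basic GLCM statistics alone cannot capture.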

Bar Code Location Algorithm Using Pixel Gradient and Labeling (화소의 기울기와 레이블링을 이용한 효율적인 바코드 검출 알고리즘)

  • Kim, Seung-Jin;Jung, Yoon-Su;Kim, Bong-Seok;Won, Jong-Un;Won, Chul-Ho;Cho, Jin-Ho;Lee, Kuhn-Il
    • The KIPS Transactions:PartD / v.10D no.7 / pp.1171-1176 / 2003
  • In this paper, we propose an effective bar code detection algorithm using feature analysis and labeling. After computing the direction of each pixel with four line operators, we obtain a block-wise histogram of pixel directions. We calculate the difference between the maximum and minimum values of each histogram and take the block with the largest difference as a block of the bar code region. From the selected block we get a line passing through the bar code region, and then detect further blocks of interest to obtain a more accurate line. The largest difference value is used to decide the threshold for binarizing the image. After obtaining the binary image, we label it, which yields the blocks of interest in the bar code region. From these blocks we calculate the gradient and the center of the bar code, obtain the line passing through it, and detect the bar code. By reading the gray levels along this line, we decode the information in the bar code.
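The block-selection step, scoring each block by the spread of its direction histogram, can be sketched as follows. This sketch substitutes a simple gradient-angle quantization for the paper's four line operators; the bin layout, block size, and function names are assumptions.

```python
import numpy as np

def direction_histogram(block):
    """Quantize each pixel's gradient direction into 4 bins
    (0, 45, 90, 135 degrees), weighting by gradient magnitude;
    a stand-in for the paper's four line operators."""
    gy, gx = np.gradient(block.astype(float))
    ang = (np.degrees(np.arctan2(gy, gx)) + 180.0) % 180.0
    bins = np.floor((ang + 22.5) / 45.0).astype(int) % 4
    mag = np.hypot(gx, gy)
    h = np.zeros(4)
    for b in range(4):
        h[b] = mag[bins == b].sum()
    return h

def best_barcode_block(gray, size=16):
    """Score each block by (max - min) of its direction histogram.
    A bar code block has one dominant stripe direction, so its spread
    is largest; return the top-left corner of the best block."""
    best, best_score = None, -1.0
    for r in range(0, gray.shape[0] - size + 1, size):
        for c in range(0, gray.shape[1] - size + 1, size):
            h = direction_histogram(gray[r:r + size, c:c + size])
            score = h.max() - h.min()
            if score > best_score:
                best, best_score = (r, c), score
    return best
```

Flat or isotropic texture spreads energy across all four bins, while bar code stripes concentrate it in one, which is exactly what the max-minus-min score rewards.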

VirtualDub as a Useful Program for Video Recording in Real-time TEM Analysis (실시간 TEM 분석에 유용한 영상 기록 프로그램, VirtualDub)

  • Kim, Jin-Gyu;Oh, Sang-Ho;Song, Kyung;Yoo, Seung-Jo;Kim, Young-Min
    • Applied Microscopy / v.40 no.1 / pp.47-51 / 2010
  • The capability of real-time observation in TEM is quite useful for studying dynamic phenomena of materials in a given ambience. In such experiments, the choice of video recording program is an important factor in obtaining high-quality movie streaming. Windows Movie Maker (WMM) is generally recommended as the default recording program when using the "DV Capture" function in DigitalMicrograph$^{TM}$ (DM) software. However, the image quality often does not satisfy the requirements of high-resolution microscopic analysis, since severe information loss occurs during the conversion process. VirtualDub is highly recommended as a good candidate to overcome this problem, since information loss can be minimized throughout the streaming process. In this report, we demonstrate how well VirtualDub performs in high-resolution movie recording. A quantitative comparison of the information quality between images recorded by the two programs, WMM and VirtualDub, was carried out based on histogram analysis. The image recorded by VirtualDub was improved by ~13% in brightness and ~122% in contrast compared with the image obtained by WMM under the same imaging conditions. Remarkably, the gray gradation (the amount of information) widened by up to ~115% relative to the WMM result.
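The histogram statistics behind this comparison can be sketched as follows. This is a minimal sketch under assumptions: the specific definitions (mean for brightness, standard deviation for contrast, occupied-bin count for gradation) and the function name are plausible readings of the abstract, not the paper's stated formulas.

```python
import numpy as np

def recording_stats(gray):
    """Histogram statistics for comparing two recordings of the same
    scene: mean gray level (brightness), standard deviation (contrast),
    and the number of occupied gray levels (gradation, i.e. how much of
    the 0-255 range actually carries information)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    return {"brightness": float(gray.mean()),
            "contrast": float(gray.std()),
            "gradation": int(np.count_nonzero(hist))}
```

Computing these statistics on frames captured by each program makes the kind of percentage comparison reported above straightforward.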

Automatic Leather Quality Inspection and Grading System by Leather Texture Analysis (텍스쳐 분석에 의한 피혁 등급 판정 및 자동 선별시스템에의 응용)

  • 권장우;김명재;길경석
    • Journal of Korea Multimedia Society / v.7 no.4 / pp.451-458 / 2004
  • Leather quality inspection by the naked eye is known to be unreliable because of biological factors such as accumulated fatigue and optical illusion. It is therefore desirable to automate leather quality inspection with computer vision techniques. In this paper, we present an automatic leather quality classification system that extracts information from the leather surface. Leather is usually graded by information such as texture density and the types and distribution of defects. The presented algorithm analyzes leather information such as texture density and defects from gray-level images obtained by a digital camera. The density is computed from the ratio of the distribution area, width, and height of the Fourier spectrum magnitude, and defect information on the leather surface is obtained from the histogram distribution of pixels windowed from preprocessed images. This information over the entire piece of leather can serve as a standard for grading its quality. The proposed machine-vision inspection system can also be applied to other fields to replace human visual inspection.


Performance Improvement of Human Detection in Thermal Images using Principal Component Analysis and Blob Clustering (주성분 분석과 Blob 군집화를 이용한 열화상 사람 검출 시스템의 성능 향상)

  • Jo, Ahra;Park, Jeong-Sik;Seo, Yong-Ho;Jang, Gil-Jin
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.13 no.2 / pp.157-163 / 2013
  • In this paper, we propose a human detection technique using a thermal imaging camera. The proposed method is useful at night or in rainy weather, where visible-light cameras cannot detect human activity. Based on the observation that a human is usually brighter than the background in thermal images, we estimate preliminary human regions using statistical confidence measures on the gray-level brightness histogram. Afterwards, Gaussian filtering and blob labeling are applied to remove unwanted noise, gather the scattered pixel distributions, and compute the centers of gravity of the blobs. In the final step, the aspect ratio and area of the unified object region, together with a number of principal components extracted from the region image, are used to determine whether the detected object is a human. Experimental results show that the proposed method is effective in environments where visible-light cameras are not applicable.
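The brightness-threshold and blob-labeling stages can be sketched as follows. This is a minimal sketch under assumptions: the mean-plus-k-standard-deviations threshold stands in for the paper's histogram confidence measure, and the labeling is a plain 4-connected flood fill rather than the paper's pipeline; the Gaussian filtering and PCA verification steps are omitted.

```python
import numpy as np

def human_candidates(thermal, k=1.5, min_area=4):
    """Threshold pixels brighter than mean + k*std (humans are warmer
    than the background), then run 4-connected flood-fill labeling and
    return (area, centroid) for each blob of at least `min_area` pixels."""
    mask = thermal > thermal.mean() + k * thermal.std()
    labels = np.zeros(mask.shape, dtype=int)
    blobs, next_label = [], 0
    for r in range(mask.shape[0]):
        for c in range(mask.shape[1]):
            if mask[r, c] and labels[r, c] == 0:
                next_label += 1
                labels[r, c] = next_label
                stack, pixels = [(r, c)], []
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                                and mask[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = next_label
                            stack.append((ny, nx))
                if len(pixels) >= min_area:
                    ys, xs = zip(*pixels)
                    blobs.append((len(pixels), (sum(ys) / len(ys), sum(xs) / len(xs))))
    return blobs
```

Each blob's area and centroid feed the final verification stage, where aspect ratio, area, and principal components decide whether the candidate is a human.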