• Title/Summary/Keyword: Photometric Invariance

Moving Object Tracking using Cumulative Similarity Transform (누적 유사도 변환을 이용한 물체 추적)

  • Choo, Moon-Won
    • The Journal of the Korea Contents Association / v.3 no.1 / pp.58-63 / 2003
  • In this paper, an object tracking system in a known environment is proposed. It extracts moving areas corresponding to objects in video sequences and determines the tracks of the moving objects. Color invariance features are exploited to extract plausible object blocks, and the degree of radial homogeneity is utilized as a local block feature to find block correspondences. Experimental results are given.

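The color-invariant block matching described in the abstract can be sketched minimally in normalized rgb chromaticity, which is largely insensitive to uniform illumination scaling. This is a generic illustration of the idea, not the paper's exact feature; the function names and the mean-chromaticity similarity are assumptions for the sketch.

```python
import numpy as np

def normalized_rgb(block):
    """Map an RGB block to normalized rgb chromaticity (r, g, b sum to 1
    per pixel), which cancels a uniform illumination scale factor."""
    block = block.astype(np.float64)
    s = block.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0  # avoid division by zero on black pixels
    return block / s

def block_similarity(a, b):
    """Negative L1 distance between the mean chromaticities of two
    blocks: higher means more similar under this invariant feature."""
    da = normalized_rgb(a).mean(axis=(0, 1))
    db = normalized_rgb(b).mean(axis=(0, 1))
    return -np.abs(da - db).sum()
```

Under this feature, a block and a dimmed copy of itself (a pure illumination change) score as identical, while a block of a different color scores lower — the property the abstract relies on for finding block correspondences.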

Multiple Object Tracking using Color Invariants (색상 불변값을 이용한 물체 괘적 추적)

  • Choo, Moon Won;Choi, Young Mie;Hong, Ki-Cheon
    • Proceedings of the Korea Multimedia Society Conference / 2002.11b / pp.101-109 / 2002
  • In this paper, a multiple-object tracking system in a known environment is proposed. It extracts moving areas corresponding to objects in video sequences and detects the tracks of the moving objects. Color-invariant co-occurrence matrices are exploited to extract plausible object blocks and to establish correspondences between adjacent video frames. Measures of class separability derived from the co-occurrence matrix features are used to improve tracking performance. Experimental results are presented.

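The co-occurrence matrices mentioned above can be sketched as a standard gray-level co-occurrence computation applied to a quantized color-invariant channel (e.g. one normalized-rgb plane): count how often each pair of quantized levels appears at a fixed pixel offset. This is the generic GLCM construction, not necessarily the paper's exact formulation; the level count and offset are illustrative.

```python
import numpy as np

def cooccurrence(channel, levels=8, dx=1, dy=0):
    """Co-occurrence matrix of a 2-D channel with values in [0, 1].
    Entry (i, j) counts pixel pairs where the reference pixel is at
    level i and the pixel at offset (dy, dx) is at level j."""
    q = np.minimum((channel * levels).astype(int), levels - 1)
    m = np.zeros((levels, levels), dtype=np.int64)
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m
```

Texture features (contrast, homogeneity, etc.) and class-separability measures can then be derived from the normalized matrix.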

Automatic crack detection using quantum-inspired firefly algorithm with deep learning techniques

  • K.A. Vinodhini;K.R. Aswin Sidhaarth;K.A. Varun Kumar
    • Advances in Concrete Construction / v.18 no.2 / pp.147-155 / 2024
  • Detecting and quantifying cracks in bituminous (asphalt) road surfaces plays a crucial role in maintaining road infrastructure integrity and enabling cost-effective maintenance strategies. However, traditional manual inspections are laborious, time-intensive, and susceptible to inconsistencies due to factors like human fatigue, varying expertise levels, and subjective assessments. To address these challenges, this research proposes CrackNet, an innovative deep learning framework that harnesses state-of-the-art computer vision and object detection techniques for accurate and computationally efficient automated crack detection in bituminous road imagery. CrackNet introduces a novel hybrid neural network architecture that seamlessly integrates a cutting-edge Vision Transformer backbone with multi-scale convolutional feature fusion modules. The Vision Transformer component excels at capturing long-range structural dependencies and global contextual information, while the multi-scale fusion modules adeptly combine fine-grained crack details across various spatial resolutions. This unique design enables CrackNet to holistically model intricate crack topologies while preserving localized characteristics and intricate details. To further bolster robustness and generalization capabilities across diverse real-world scenarios, CrackNet incorporates self-supervised pre-training techniques that leverage unlabeled data and unsupervised pretext tasks. These strategies allow CrackNet to learn rich visual representations tailored specifically for crack detection. Additionally, an extensive data augmentation pipeline is employed, encompassing geometric, photometric, and adversarial transformations, to enhance model invariance to varying imaging conditions and environmental factors. The accuracy achieved by the newly proposed approach surpasses that of current state-of-the-art methodologies, reaching an impressive 97.8%.
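The photometric branch of the augmentation pipeline described above can be sketched as random brightness and contrast jitter, a common way to make a detector invariant to varying imaging conditions. The function and its parameter ranges are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def photometric_jitter(img, rng, brightness=0.2, contrast=0.2):
    """Random photometric perturbation of an 8-bit-range image:
    a multiplicative contrast gain and an additive brightness shift,
    clipped back to the valid [0, 255] range."""
    img = img.astype(np.float64)
    b = rng.uniform(-brightness, brightness) * 255.0  # additive shift
    c = 1.0 + rng.uniform(-contrast, contrast)        # multiplicative gain
    return np.clip(c * img + b, 0.0, 255.0)
```

Applying such transforms with fresh random draws on each training sample exposes the model to a spread of lighting conditions while leaving the crack geometry untouched (geometric and adversarial transforms would be handled separately).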

Comparisons of Color Spaces for Shadow Elimination (그림자 제거를 위한 색상 공간의 비교)

  • Lee, Gwang-Gook;Uzair, Muhammad;Yoon, Ja-Young;Kim, Jae-Jun;Kim, Whoi-Yul
    • Journal of Korea Multimedia Society / v.11 no.5 / pp.610-622 / 2008
  • Moving object segmentation is an essential technique for various video surveillance applications. Its results often contain shadow regions caused by the color difference of shadow pixels, so moving object segmentation is usually followed by a shadow elimination process to remove these false detections. The common assumption adopted in previous works is that, under illumination variation, the values of the chromaticity components are preserved while the value of the intensity component changes. Hence, color transforms that separate the luminance component from the chromaticity components are usually utilized to remove shadow pixels. In this paper, various color spaces (YCbCr, HSI, normalized rgb, Yxy, Lab, c1c2c3) are examined to find the most appropriate color space for shadow elimination. There have been some research efforts to compare the influence of various color spaces on shadow elimination, but previous efforts are somewhat insufficient for comparing color distortion under illumination change across diverse color spaces, since they used a specific shadow elimination scheme or different thresholds for different color spaces. To relieve these limitations, in this paper (1) the gradient magnitudes at shadow boundaries cast on uniformly colored regions are examined, for the chromaticity components only, to compare color distortion under illumination change, and (2) the accuracy of background subtraction is analyzed via ROC curves to compare different color spaces without the problem of threshold selection. Through experiments on real video sequences, the YCbCr and normalized rgb color spaces showed the best results for shadow elimination among the color spaces tested.

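The chromaticity-invariance assumption at the heart of this comparison can be sketched as a per-pixel shadow test in normalized rgb: a cast shadow darkens the background pixel while leaving its chromaticity nearly unchanged. The thresholds below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def is_shadow(bg_pixel, cur_pixel, chroma_tol=0.03, dark_range=(0.4, 0.95)):
    """Classify a foreground pixel as cast shadow: its intensity drops
    into a plausible shadow range while its normalized-rgb chromaticity
    stays close to the background model's."""
    bg = np.asarray(bg_pixel, dtype=np.float64)
    cur = np.asarray(cur_pixel, dtype=np.float64)
    bg_sum, cur_sum = bg.sum(), cur.sum()
    if bg_sum == 0:
        return False
    ratio = cur_sum / bg_sum
    if not (dark_range[0] <= ratio <= dark_range[1]):
        return False  # not darker, or too dark to be a plausible shadow
    chroma_diff = np.abs(cur / max(cur_sum, 1e-9) - bg / bg_sum).sum()
    return chroma_diff <= chroma_tol
```

A uniformly dimmed pixel passes the test, while a pixel whose color actually changed (a true foreground object) fails on the chromaticity term; the paper's ROC analysis effectively measures how well each color space supports this kind of separation.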