• Title/Abstract/Keyword: Saliency Pixel

13 search results

The Method to Estimate Saliency Values using Gauss Weight

  • 유영중
    • 한국정보통신학회논문지, Vol. 17, No. 4, pp. 965-970, 2013
  • Extracting salient regions from an image is an important preliminary step for a variety of subsequent image processing tasks. This paper introduces an improved method for estimating the saliency value of each pixel in an image. The proposed method improves on a previously studied approach that estimates saliency values using color and a statistical framework. First, the saliency value of each pixel is computed from the color relationships among the pixels in the image, and a central salient pixel is estimated from these values. Based on the estimated central salient pixel, a Gaussian weight is applied to re-estimate the saliency value of each pixel, and each pixel is classified as salient or not to provide the initial probabilities for the statistical saliency estimation. Finally, the saliency value of each pixel is computed using Bayes' rule. Experimental results show that the proposed method outperforms the previous method on images whose salient regions are of moderate size.
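
A minimal sketch of the pipeline described in this abstract is given below, assuming a color image as a NumPy array. The global-mean color contrast, the Gaussian sigma, the mean-based threshold, and the gray-level histogram likelihoods are simplifying assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def gaussian_weighted_saliency(img, sigma=50.0):
    """Sketch: color-contrast saliency re-weighted by a Gaussian centered on
    the most salient pixel, then refined with Bayes' rule."""
    h, w, _ = img.shape
    pixels = img.reshape(-1, 3).astype(np.float64)

    # 1) Per-pixel color contrast against the global mean color
    #    (a simple stand-in for pixel-to-pixel color relationships).
    mean_color = pixels.mean(axis=0)
    contrast = np.linalg.norm(pixels - mean_color, axis=1).reshape(h, w)

    # 2) Estimate the central salient pixel as the contrast maximum.
    cy, cx = np.unravel_index(np.argmax(contrast), contrast.shape)

    # 3) Re-estimate saliency with a Gaussian weight around that center.
    ys, xs = np.mgrid[0:h, 0:w]
    weight = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))
    saliency = contrast * weight

    # 4) Threshold to get initial salient / non-salient labels, then apply
    #    Bayes' rule with simple histogram likelihoods over gray levels.
    labels = saliency > saliency.mean()
    gray = img.mean(axis=2).astype(int)
    p_sal = labels.mean()
    hist_sal = np.bincount(gray[labels], minlength=256) + 1.0
    hist_bg = np.bincount(gray[~labels], minlength=256) + 1.0
    lik_sal = (hist_sal / hist_sal.sum())[gray]
    lik_bg = (hist_bg / hist_bg.sum())[gray]
    posterior = lik_sal * p_sal / (lik_sal * p_sal + lik_bg * (1 - p_sal))
    return posterior
```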

Object Detection and 3D Position Estimation based on Stereo Vision

  • 손행선;이선영;민경원;서성진
    • 한국정보전자통신기술학회논문지, Vol. 10, No. 4, pp. 318-324, 2017
  • This paper presents a method for image-based detection of flying objects and estimation of the 3D position of the detected objects using a stereo camera mounted on an aircraft. To detect small, distant objects that may be present between clouds, a PCT-based saliency map is generated, and the detected objects are matched between the left and right stereo images to extract the stereo disparity. To obtain an accurate disparity, the cost aggregation region is made variable so that it adapts to the detected object; in this paper, the object region detected in the saliency map is used for this purpose. For a more precise disparity, a sub-pixel interpolation technique is used to extract a real-valued disparity at the sub-pixel level. Camera parameters are then applied to compute the 3D spatial coordinates of the detected flying object and thus estimate its position in space. This approach is expected to be useful for image-based object detection and collision avoidance systems in future autonomous aerial vehicles.
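
The geometric part of this pipeline can be illustrated as follows. Three-point parabolic sub-pixel interpolation and pinhole-camera triangulation are standard techniques; the matching-cost array, focal lengths, principal point, and baseline are assumed inputs rather than values from the paper.

```python
import numpy as np

def subpixel_disparity(costs, d_int):
    """Parabolic (three-point) interpolation around the integer disparity
    d_int that minimizes the matching cost; returns a real-valued disparity.
    Assumes d_int is not at the first or last index of `costs`."""
    c0, c1, c2 = costs[d_int - 1], costs[d_int], costs[d_int + 1]
    denom = c0 - 2.0 * c1 + c2
    offset = 0.0 if denom == 0 else 0.5 * (c0 - c2) / denom
    return d_int + offset

def triangulate(u, v, disparity, fx, fy, cx, cy, baseline):
    """Pinhole-camera triangulation: recover (X, Y, Z) in the left-camera
    frame from pixel (u, v) and the stereo disparity."""
    Z = fx * baseline / disparity
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return X, Y, Z
```

A larger disparity therefore maps to a closer object, and the precision of Z depends directly on how finely the disparity is resolved, which is why the sub-pixel refinement matters for distant targets.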

Triqubit-State Measurement-Based Image Edge Detection Algorithm

  • Wang, Zhonghua;Huang, Faliang
    • Journal of Information Processing Systems, Vol. 14, No. 6, pp. 1331-1346, 2018
  • To address the problem that gradient-based edge detection operators are sensitive to noise and produce pseudo edges, a triqubit-state measurement-based edge detection algorithm is presented in this paper. Combining local and global image structure information, triqubit superposition states are used to represent pixel features and thereby locate image edges. Our algorithm consists of three steps. First, an improved partial differential method is used to smooth the defect image. Second, the triqubit state is characterized by three elements, pixel saliency, edge statistical characteristics, and gray-scale contrast, to map the defect image from gray space to quantum space. Third, the edge image is output according to quantum measurement, local gradient maximization, and neighborhood chain-code searching. Simulation experiments indicate that, compared with other methods, our algorithm produces fewer pseudo edges and achieves higher edge detection accuracy.
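
The quantum-state encoding is only loosely sketched below, under strong simplifying assumptions: three illustrative per-pixel features stand in for the paper's saliency, edge-statistics, and gray-contrast elements, and the product of single-"qubit" measurement probabilities stands in for the triqubit measurement; the chain-code search step is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def quantum_inspired_edge_map(gray):
    """Loose sketch of encoding three pixel features as qubit-like amplitudes
    and 'measuring' an edge probability. Feature choices and the probability
    model are assumptions, not the paper's construction."""
    gray = gray.astype(np.float64)

    # Three per-pixel features (stand-ins for saliency, edge statistics,
    # and gray-scale contrast).
    f1 = np.abs(gray - gaussian_filter(gray, 3))              # local saliency
    f2 = np.hypot(sobel(gray, axis=0), sobel(gray, axis=1))   # gradient stat
    f3 = np.abs(gray - gray.mean())                           # global contrast

    # Normalize each feature to [0, 1] and treat it as a qubit angle:
    # |q_i> = cos(theta_i)|0> + sin(theta_i)|1>, theta_i in [0, pi/2].
    probs = []
    for f in (f1, f2, f3):
        theta = 0.5 * np.pi * (f - f.min()) / (np.ptp(f) + 1e-12)
        probs.append(np.sin(theta) ** 2)      # probability of measuring |1>

    # 'Measurement' of the three-qubit product state |111>: a pixel is
    # edge-like when all three features agree.
    p_edge = probs[0] * probs[1] * probs[2]
    return p_edge > p_edge.mean() + p_edge.std()
```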

Efficient Image Segmentation Algorithm Based on Improved Saliency Map and Superpixel

  • 남재현;김병규
    • 한국멀티미디어학회논문지, Vol. 19, No. 7, pp. 1116-1126, 2016
  • Image segmentation is widely used in the pre-processing stage of image analysis, so its accuracy is important for the performance of any image-based analysis system. An efficient image segmentation method is proposed, consisting of a filtering process for superpixels, improved saliency map information, and a merge process. The proposed algorithm removes regions that are of unequal or small size, based on a comparison of the areas of the smoothed superpixels, in order to keep the generated superpixels at a similar size. In addition, applying a bilateral filter to an existing saliency map, which models human visual attention, improves the separation between objects and background. Finally, a segmentation result is obtained through the suggested merging process without any prior knowledge or information. The performance of the proposed algorithm is verified experimentally.
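
A rough sketch of how such a pipeline could be wired together is shown below. The SLIC superpixels, the bilateral-filter parameters, the 20% size threshold, and the mean-saliency merge rule are illustrative assumptions, not the paper's settings; the saliency map is assumed to be on a 0-255 scale.

```python
import cv2
import numpy as np
from skimage.segmentation import slic

def segment_with_saliency(img_bgr, saliency, n_segments=400):
    """Sketch: SLIC superpixels filtered by size, a bilateral-filtered
    saliency map, and a simple threshold-based merge into object/background."""
    # Smooth the saliency map while keeping object boundaries sharp.
    sal = cv2.bilateralFilter(saliency.astype(np.float32), 9, 75, 75)

    rgb = np.ascontiguousarray(img_bgr[:, :, ::-1])
    labels = slic(rgb, n_segments=n_segments, compactness=10)

    seg = np.zeros_like(labels, dtype=np.uint8)
    sizes = np.bincount(labels.ravel())
    mean_size = sizes[sizes > 0].mean()
    for lab in np.unique(labels):
        region = labels == lab
        # Drop superpixels much smaller than the average area.
        if sizes[lab] < 0.2 * mean_size:
            continue
        # Merge rule: a superpixel joins the object if its mean saliency
        # is above the global mean saliency.
        if sal[region].mean() > sal.mean():
            seg[region] = 255
    return seg
```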

The Method to Measure Saliency Values for Salient Region Detection from an Image

  • Park, Seong-Ho;Yu, Young-Jung
    • Journal of information and communication convergence engineering, Vol. 9, No. 1, pp. 55-58, 2011
  • In this paper, we introduce an improved method to measure the saliency values of pixels in an image. The proposed saliency measure is formulated using local color features and a statistical framework. In the preprocessing step, rough salient pixels are determined from the local contrast of an image region with respect to its neighborhood at various scales. Then, the saliency value of each pixel is calculated by Bayes' rule using the rough salient pixels. Experiments show that our approach outperforms the existing Bayes' rule-based method.
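
The preprocessing step, local contrast against the neighborhood at several scales, might look like the sketch below; the box-filter neighborhoods and the scale set are illustrative choices, and the Bayes refinement itself is sketched under the first entry above.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def multiscale_contrast(img, scales=(5, 9, 17)):
    """Sketch: local contrast of each pixel against box-filtered neighborhoods
    at several scales, averaged over the scales."""
    img = img.astype(np.float64)
    contrast = np.zeros(img.shape[:2])
    for s in scales:
        blurred = np.dstack([uniform_filter(img[:, :, c], size=s)
                             for c in range(img.shape[2])])
        contrast += np.linalg.norm(img - blurred, axis=2)
    return contrast / len(scales)
```

Pixels whose multi-scale contrast exceeds a threshold would then serve as the "rough salient pixels" that seed the Bayes' rule computation.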

Building Change Detection Using Deep Learning for Remote Sensing Images

  • Wang, Chang;Han, Shijing;Zhang, Wen;Miao, Shufeng
    • Journal of Information Processing Systems, Vol. 18, No. 4, pp. 587-598, 2022
  • To increase building change recognition accuracy, we present a deep learning-based building change detection method using remote sensing images. In the proposed approach, the difference image (DI) is created by merging pixel-level and object-level information from multitemporal remote sensing images, and a frequency-domain significance technique is used to generate the DI saliency map. The fuzzy C-means clustering technique pre-classifies the coarse change detection map by thresholding the DI saliency map. We then extract the neighborhood features of the unchanged and changed (building) pixels from the pixel-level and object-level feature images, and these are used as valid deep neural network (DNN) training samples. The trained DNNs are then utilized to identify changes in the DI. The suggested strategy was evaluated and compared with current detection methods on two datasets. The results suggest that the proposed technique can detect more building change information and improve change detection accuracy.
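
The pre-classification step can be illustrated with a small fuzzy C-means routine over the DI saliency values. The two-cluster setup, the fuzzifier m = 2, and the 0.5 membership cut are assumptions; the actual method goes on to train DNNs on the samples this step produces.

```python
import numpy as np

def fcm_two_class(values, m=2.0, iters=50, eps=1e-6):
    """Minimal fuzzy C-means on 1-D saliency values with two clusters
    (changed / unchanged)."""
    x = values.ravel().astype(np.float64)
    centers = np.array([x.min(), x.max()])          # initial cluster centers
    for _ in range(iters):
        # Membership update: u_ik = d_ik^(-2/(m-1)) / sum_j d_jk^(-2/(m-1))
        d = np.abs(x[:, None] - centers[None, :]) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
        # Center update: weighted mean with weights u^m
        new_centers = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
        if np.abs(new_centers - centers).max() < eps:
            centers = new_centers
            break
        centers = new_centers
    changed = u[:, np.argmax(centers)] > 0.5        # high-value cluster = changed
    return changed.reshape(values.shape)

# The pixel-level difference image itself can be as simple as an absolute
# difference of two co-registered acquisitions: DI = |I_t2 - I_t1|.
```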

Location-Based Saliency Maps from a Fully Connected Layer using Multi-Shapes

  • Kim, Hoseung;Han, Seong-Soo;Jeong, Chang-Sung
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 15, No. 1, pp. 166-179, 2021
  • Recently, with the development of technology, computer vision research based on the human visual system has been actively conducted. Saliency maps have been used to highlight visually interesting areas within an image, but they can suffer from low performance due to external factors such as an indistinct background or light source. In this study, existing color, brightness, and contrast feature maps are passed through multiple shape and orientation filters and then connected to a fully connected layer that determines pixel intensities within the image based on location-based weights. The proposed method shows better performance in separating the background from the area of interest, in terms of color and brightness, in the presence of external elements and noise. Location-based weight normalization is also effective in removing high-intensity pixels that fall outside the area of interest or in non-interest regions. Our method also demonstrates that multi-filter normalization can be processed faster using parallel processing.
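
One plausible reading of the multi-shape/orientation filtering combined with location-based weights is sketched below, using Gabor filters and a simple centre bias. The Gabor parameters and the linear distance weighting are assumptions, and the paper's fully connected layer is not reproduced here.

```python
import cv2
import numpy as np

def oriented_feature_maps(gray, ksize=21, orientations=(0, 45, 90, 135)):
    """Sketch: Gabor filters at several orientations applied to a feature map,
    combined and attenuated by a centre-biased location weight."""
    gray = gray.astype(np.float32)
    responses = []
    for deg in orientations:
        kern = cv2.getGaborKernel((ksize, ksize), sigma=4.0,
                                  theta=np.deg2rad(deg), lambd=10.0,
                                  gamma=0.5, psi=0)
        responses.append(np.abs(cv2.filter2D(gray, cv2.CV_32F, kern)))
    combined = np.maximum.reduce(responses)

    # Location-based weight: pixels far from the image centre are attenuated,
    # which suppresses strong responses outside the region of interest.
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(ys - h / 2.0, xs - w / 2.0)
    weight = 1.0 - dist / dist.max()
    return combined * weight
```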

Multi-scale Diffusion-based Salient Object Detection with Background and Objectness Seeds

  • Yang, Sai;Liu, Fan;Chen, Juan;Xiao, Dibo;Zhu, Hairong
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 12, No. 10, pp. 4976-4994, 2018
  • Diffusion-based salient object detection methods have shown excellent detection results and more efficient computation in recent years. However, current diffusion-based methods still have difficulty detecting objects that appear at the image boundaries or at different scales. To address these issues, this paper proposes a multi-scale diffusion-based salient object detection algorithm with background and objectness seeds. Specifically, the image is first over-segmented at several scales. Second, the background and objectness saliency of each superpixel are calculated and fused at each scale. Third, the manifold ranking method is chosen to propagate the Bayesian fusion of background and objectness saliency to the whole image. Finally, the pixel-level saliency map is constructed by a weighted summation of the saliency values at different scales. We evaluate our salient object detection algorithm against 24 state-of-the-art methods on four public benchmark datasets, i.e., ASD, SED1, SED2, and SOD. The results show that the proposed method performs favorably against the 24 state-of-the-art approaches in terms of the popular PR curve and F-measure metrics, and the visual comparison results also show that our method highlights the salient objects more effectively.
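
The diffusion step corresponds to closed-form manifold ranking on a superpixel graph. The sketch below assumes an affinity matrix W and a seed indicator vector are already available from the over-segmentation and seed-selection stages; alpha = 0.99 is a conventional choice rather than the paper's value.

```python
import numpy as np

def manifold_ranking(W, seeds, alpha=0.99):
    """Sketch of the diffusion step: closed-form manifold ranking
    f = (I - alpha * S)^(-1) y on a superpixel affinity matrix W, where
    S = D^(-1/2) W D^(-1/2) and y marks the background/objectness seeds."""
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    y = seeds.astype(np.float64)
    n = W.shape[0]
    f = np.linalg.solve(np.eye(n) - alpha * S, y)
    # Normalize to [0, 1] so that saliency maps from different scales can be
    # fused by weighted summation afterwards.
    return (f - f.min()) / (f.max() - f.min() + 1e-12)
```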

Modified Seam Finding Algorithm based on Saliency Map to Generate 360 VR Image

  • 한현덕;한종기
    • 방송공학회논문지, Vol. 24, No. 6, pp. 1096-1112, 2019
  • Cameras that can produce 360 VR images are currently quite expensive, so they are not easily accessible to ordinary users. To address this, we capture roughly 100 photographs over 360° with a mobile phone camera and obtain a 360 VR image through image stitching. Whereas dedicated equipment captures the full 360° at once, shooting with a mobile phone introduces parallax between the images. As a result, when a moving object is present, it can undesirably appear in several of the images, and a seam may pass through the object, producing an unnatural stitched result. In this paper, we confirm that an improved result image can be obtained with a seam finding algorithm based on a saliency map, which identifies visually prominent objects.
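
The seam-finding idea can be sketched as an ordinary dynamic-programming seam search over a cost map that adds a saliency penalty to the photometric difference in the overlap region; the penalty weight and the cost definition are assumptions, not the paper's formulation.

```python
import numpy as np

def find_vertical_seam(cost):
    """Dynamic-programming vertical seam through a cost map (one column index
    per row), minimizing the accumulated cost."""
    h, w = cost.shape
    acc = cost.astype(np.float64)
    for y in range(1, h):
        left = np.roll(acc[y - 1], 1);   left[0] = np.inf
        right = np.roll(acc[y - 1], -1); right[-1] = np.inf
        acc[y] += np.minimum(np.minimum(left, acc[y - 1]), right)
    seam = np.zeros(h, dtype=int)
    seam[-1] = int(np.argmin(acc[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(acc[y, lo:hi]))
    return seam

def seam_cost(overlap_a, overlap_b, saliency, weight=5.0):
    """Saliency-aware cost for the overlap of two images to be stitched:
    photometric difference plus a penalty that keeps the seam away from
    visually prominent (high-saliency) objects. The weight is an assumption."""
    diff = np.abs(overlap_a.astype(np.float64) - overlap_b.astype(np.float64))
    if diff.ndim == 3:
        diff = diff.mean(axis=2)
    return diff + weight * saliency
```

When compositing, pixels on one side of the returned seam would be taken from the first image and the rest from the second, so pushing the seam away from salient objects keeps them intact in the stitched panorama.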

Salient Object Detection Based on Regional Contrast and Relative Spatial Compactness

  • Xu, Dan;Tang, Zhenmin;Xu, Wei
    • KSII Transactions on Internet and Information Systems (TIIS), Vol. 7, No. 11, pp. 2737-2753, 2013
  • In this study, we propose a novel salient object detection strategy based on regional contrast and relative spatial compactness. Our algorithm consists of four basic steps. First, we learn color names offline using the probabilistic latent semantic analysis (PLSA) model to find the mapping between basic color names and pixel values; the color names can be used for image segmentation and region description. Second, image pixels are assigned to specific color names according to their values, forming different color clusters. The saliency measure for every cluster is evaluated by its spatial compactness relative to the other clusters rather than by the intra-cluster variance alone. Third, every cluster is divided into local regions that are described with color name descriptors, and the regional contrast is evaluated by computing the color distance between different regions across the entire image. Last, the final saliency map is constructed by incorporating each color cluster's spatial compactness measure and the corresponding regional contrast. Experiments show that our algorithm outperforms several existing salient object detection methods, with higher precision and better recall rates, when evaluated on public datasets.
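
The two cues can be sketched as follows: spatial compactness of a color cluster measured from the spread of its pixel coordinates relative to the other clusters, and regional contrast as a size-weighted color distance to all other regions. The exact normalizations and the handling of empty clusters are assumptions; the PLSA color-name learning is not reproduced.

```python
import numpy as np

def cluster_spatial_compactness(labels, n_clusters):
    """Sketch: spatial compactness of each color cluster from the spread of
    its pixel coordinates; a tightly grouped cluster scores high relative to
    the other clusters."""
    h, w = labels.shape
    ys, xs = np.mgrid[0:h, 0:w]
    spread = np.zeros(n_clusters)
    for k in range(n_clusters):
        mask = labels == k
        if not mask.any():
            spread[k] = np.hypot(h, w)   # empty cluster: worst compactness
            continue
        cy, cx = ys[mask].mean(), xs[mask].mean()
        spread[k] = np.sqrt(((ys[mask] - cy) ** 2 + (xs[mask] - cx) ** 2).mean())
    rel = spread.max() - spread          # small spread -> high compactness
    return rel / (rel.max() + 1e-12)

def regional_contrast(region_colors, region_sizes):
    """Regional contrast: for each region, the size-weighted color distance to
    every other region in the image (a simplified global contrast cue)."""
    diff = np.linalg.norm(region_colors[:, None, :] - region_colors[None, :, :],
                          axis=2)
    return (diff * region_sizes[None, :]).sum(axis=1) / region_sizes.sum()
```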