• Title/Summary/Keyword: Multi-level Cues


PROPAGATION OF MULTI-LEVEL CUES WITH ADAPTIVE CONFIDENCE FOR BILAYER SEGMENTATION OF CONSISTENT SCENE IMAGES

  • Lee, Soo-Chahn;Yun, Il-Dong;Lee, Sang-Uk
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.148-153
    • /
    • 2009
  • Few methods have dealt with segmenting multiple images with analogous content. Concurrent images of a scene and gathered images of a similar foreground are examples of these images, which we term consistent scene images. In this paper, we present a method to segment these images based on manual segmentation of one image, by iteratively propagating information via multi-level cues with adaptive confidence. The cues are classified as low-, mid-, and high-level according to whether they pertain to pixels, patches, or shapes. Propagated cues are used to compute potentials in an MRF framework, and segmentation is done by energy minimization. Through this process, the proposed method attempts to maximize the amount of extracted information and the consistency of the segmentation. We demonstrate the effectiveness of the proposed method on several sets of consistent scene images and provide a comparison with results based only on mid-level cues [1].


Bilayer Segmentation of Consistent Scene Images by Propagation of Multi-level Cues with Adaptive Confidence (다중 단계 신호의 적응적 전파를 통한 동일 장면 영상의 이원 영역화)

  • Lee, Soo-Chahn;Yun, Il-Dong;Lee, Sang-Uk
    • Journal of Broadcast Engineering
    • /
    • v.14 no.4
    • /
    • pp.450-462
    • /
    • 2009
  • So far, many methods for segmenting single images or video have been proposed, but few have dealt with multiple images with analogous content. These images, which we term consistent scene images, include concurrent images of a scene and gathered images of a similar foreground, and may be collectively utilized to describe a scene or as input images for multi-view stereo. In this paper, we present a method to segment these images with minimum user input, specifically, manual segmentation of one image, by iteratively propagating information via multi-level cues with adaptive confidence depending on the nature of the images. Propagated cues are used as the bases to compute multi-level potentials in an MRF framework, and segmentation is done by energy minimization. Both cues and potentials are classified as low-, mid-, and high-level according to whether they pertain to pixels, patches, or shapes. A major aspect of our approach is utilizing mid-level cues to compute low- and mid-level potentials, and high-level cues to compute low-, mid-, and high-level potentials, thereby making full use of the inherent information. Through this process, the proposed method attempts to maximize the amount of both extracted and utilized information in order to maximize the consistency of the segmentation. We demonstrate the effectiveness of the proposed method on several sets of consistent scene images and provide a comparison with results based only on mid-level cues [1].
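The energy-minimization step described in the abstract follows the standard MRF formulation: a per-pixel unary (data) term plus a pairwise smoothness term over neighbours. Below is a minimal generic sketch of that formulation, using a Potts smoothness penalty and simple iterated conditional modes (ICM) as the minimizer; it does not implement the paper's multi-level cue propagation, and all function names and parameter values are illustrative.

```python
import numpy as np

def mrf_energy(labels, unary, beta):
    """Total energy: per-pixel unary potentials plus a Potts penalty
    (beta) for every pair of disagreeing 4-neighbours."""
    h, w = labels.shape
    e = unary[np.arange(h)[:, None], np.arange(w), labels].sum()
    e += beta * (labels[:, 1:] != labels[:, :-1]).sum()
    e += beta * (labels[1:, :] != labels[:-1, :]).sum()
    return e

def icm_segment(unary, beta=1.0, iters=5):
    """Greedy bilayer (0/1) labelling by iterated conditional modes.
    unary has shape (H, W, 2): the cost of each label at each pixel."""
    labels = unary.argmin(axis=2)  # start from the unary-only labelling
    h, w = unary.shape[:2]
    for _ in range(iters):
        for y in range(h):
            for x in range(w):
                best, best_e = labels[y, x], None
                for lab in (0, 1):
                    e = unary[y, x, lab]
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            e += beta * (lab != labels[ny, nx])
                    if best_e is None or e < best_e:
                        best, best_e = lab, e
                labels[y, x] = best
    return labels
```

The smoothness term is what lets a pixel whose unary evidence is ambiguous be pulled toward the label of its neighbours; real systems typically minimize the same energy exactly with graph cuts rather than ICM.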

Images Automatic Annotation: Multi-cues Integration (영상의 자동 주석: 멀티 큐 통합)

  • Shin, Seong-Yoon;Ahn, Eun-Mi;Rhee, Yang-Won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2010.05a
    • /
    • pp.589-590
    • /
    • 2010
  • All these images constitute a considerable database. What's more, the semantic meanings of images are well presented by the surrounding text and links. But only a small minority of these images have precisely assigned keyphrases, and manually assigning keyphrases to existing images is very laborious. It is therefore highly desirable to automate the keyphrase extraction process. In this paper, we first introduce WWW image annotation methods based on low-level features, page tags, overall word frequency, and local word frequency. We then put forward our method of multi-cue integration for image annotation. Finally, an experiment shows that the multi-cue image annotation method is superior to the other methods.


Dual-stream Co-enhanced Network for Unsupervised Video Object Segmentation

  • Hongliang Zhu;Hui Yin;Yanting Liu;Ning Chen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.4
    • /
    • pp.938-958
    • /
    • 2024
  • Unsupervised Video Object Segmentation (UVOS) is a highly challenging problem in computer vision, as the annotation of the target object in the testing video is entirely unknown. The main difficulty is to effectively handle the complicated and changeable motion state of the target object and the confusion caused by similar background objects in the video sequence. In this paper, we propose a novel deep Dual-stream Co-enhanced Network (DC-Net) for UVOS via bidirectional motion-cue refinement and multi-level feature aggregation, which can fully exploit motion cues and effectively integrate features at different levels to produce high-quality segmentation masks. DC-Net is a dual-stream architecture in which the two streams co-enhance each other. One is a motion stream with a Motion-cues Refine Module (MRM), which learns from bidirectional optical flow images and produces a fine-grained and complete distinctive motion saliency map; the other is an appearance stream with a Multi-level Feature Aggregation Module (MFAM) and a Context Attention Module (CAM), designed to integrate the different-level features effectively. Specifically, the motion saliency map obtained by the motion stream is fused with each stage of the decoder in the appearance stream to improve the segmentation, and in turn the segmentation loss in the appearance stream feeds back into the motion stream to enhance the motion refinement. Experimental results on three datasets (DAVIS 2016, VideoSD, SegTrack-v2) demonstrate that DC-Net achieves results comparable to state-of-the-art methods.
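The fusion the abstract describes, injecting a motion saliency map into each decoder stage of the appearance stream, can be illustrated in miniature. The snippet below is a generic gated residual fusion over NumPy arrays, not DC-Net's actual modules; the function name, shapes, and the alpha parameter are assumptions made for illustration.

```python
import numpy as np

def fuse_motion_appearance(appearance_feat, motion_saliency, alpha=0.5):
    """Gate appearance features by a motion saliency map, then add the
    gated signal back as a residual so appearance cues are never lost.
    appearance_feat: (C, H, W) feature map; motion_saliency: (H, W) in [0, 1]."""
    gated = appearance_feat * motion_saliency[None, :, :]  # broadcast over channels
    return appearance_feat + alpha * gated
```

The residual form means regions the motion stream marks as salient are amplified, while regions with zero motion saliency pass through unchanged.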

Implementation of the Perception Process in Human‐Vehicle Interactive Models(HVIMs) Considering the Effects of Auditory Peripheral Cues (청각 주변 자극의 효과를 고려한 효율적 차량-운전자 상호 연동 모델 구현 방법론)

  • Rah, Chong-Kwan;Park, Min-Yong
    • Journal of the Ergonomics Society of Korea
    • /
    • v.25 no.3
    • /
    • pp.67-75
    • /
    • 2006
  • HVIMs consist of simulated driver models, implemented as series of mathematical functions, and computerized vehicle dynamics models. To effectively model the perception process as a part of the driver models, psychophysical nonlinearity should be considered not only for single-modal stimuli but also for stimuli of multiple modalities and the interactions among them. A series of human-factors experiments was conducted using the primary sensory modalities of vision and audition to determine the effects of auditory cues in visual velocity-estimation tasks. Variations of the auditory cues were found to enhance or reduce the perceived intensity of velocity as their level changed. These results indicate that conventional psychophysical power functions cannot be applied to the perception process of HVIMs with multi-modal stimuli. 'Ruled surfaces' in a 3-D coordinate system (with the intensities of both kinds of stimuli and the ratio of enhancement on the respective coordinates) were suggested to model the realistic perception process of multi-modal HVIMs.
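The 'ruled surface' idea, a power-law (Stevens-style) response whose gain is interpolated between enhancement ratios measured at discrete auditory levels, can be sketched as below. The exponent, scale, and enhancement ratios are made-up placeholders, not values from the study.

```python
import numpy as np

def perceived_velocity(visual_i, auditory_i, k=1.0, a=0.8,
                       aud_levels=(0.0, 1.0), ratios=(1.0, 1.2)):
    """Stevens-style power law k * I^a for the visual stimulus, modulated
    by an auditory enhancement ratio linearly interpolated between the
    measured auditory levels -- a 'ruled surface' over the two intensities."""
    r = np.interp(auditory_i, aud_levels, ratios)  # gain along the ruling
    return k * visual_i ** a * r
```

A single-modality power function would be the special case where the ratio is constant at 1.0; the interpolated gain is what encodes the cross-modal enhancement or reduction the experiments observed.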

A WWW Images Automatic Annotation Based On Multi-cues Integration (멀티-큐 통합을 기반으로 WWW 영상의 자동 주석)

  • Shin, Seong-Yoon;Moon, Hyung-Yoon;Rhee, Yang-Won
    • Journal of the Korea Society of Computer and Information
    • /
    • v.13 no.4
    • /
    • pp.79-86
    • /
    • 2008
  • With the rapid development of the Internet, images embedded in HTML web pages have become predominant. Because of their power to describe content and attract attention, images have become substantially important in web pages. All these images constitute a considerable database. What's more, the semantic meanings of images are well presented by the surrounding text and links. But only a small minority of these images have precisely assigned keyphrases, and manually assigning keyphrases to existing images is very laborious. It is therefore highly desirable to automate the keyphrase extraction process. In this paper, we first introduce WWW image annotation methods based on low-level features, page tags, overall word frequency, and local word frequency. We then put forward our method of multi-cue integration for image annotation. Finally, an experiment shows that the multi-cue image annotation method is superior to the other methods.

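Multi-cue integration of the kind this paper proposes reduces, at its simplest, to combining per-keyword scores from each cue source by weighted sum and ranking the result. A minimal sketch follows; the cue names, weights, and scores are illustrative, not the paper's.

```python
def integrate_cues(cue_scores, weights):
    """Combine per-keyword scores from several cues by weighted sum.
    cue_scores: {cue_name: {keyword: score}}; weights: {cue_name: weight}.
    Returns (keyword, combined_score) pairs, best first."""
    combined = {}
    for cue, scores in cue_scores.items():
        w = weights.get(cue, 0.0)  # cues without a weight contribute nothing
        for kw, s in scores.items():
            combined[kw] = combined.get(kw, 0.0) + w * s
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)
```

A keyword supported by several cues (tags plus surrounding text, say) outranks one supported strongly by a single cue, which is the point of integrating cues rather than using any one alone.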

SPATIAL EXPLANATIONS OF SPEECH PERCEPTION: A STUDY OF FRICATIVES

  • Choo, Won;Mark Huckvale
    • Proceedings of the KSPS conference
    • /
    • 1996.10a
    • /
    • pp.399-403
    • /
    • 1996
  • This paper addresses issues of perceptual constancy in speech perception through the use of a spatial metaphor for speech sound identity as opposed to a more conventional characterisation with multiple interacting acoustic cues. This spatial representation leads to a correlation between phonetic, acoustic and auditory analyses of speech sounds which can serve as the basis for a model of speech perception based on the general auditory characteristics of sounds. The correlations between the phonetic, perceptual and auditory spaces of the set of English voiceless fricatives /f θ s ʃ h/ are investigated. The results show that the perception of fricative segments may be explained in terms of a 2-dimensional auditory space in which each segment occupies a region. The dimensions of the space were found to be the frequency of the main spectral peak and the 'peakiness' of spectra. These results support the view that perception of a segment is based on its occupancy of a multi-dimensional parameter space. In this way, final perceptual decisions on segments can be postponed until higher level constraints can also be met.

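The region-occupancy view of the abstract suggests a simple decision rule: place each fricative as a prototype point in the two-dimensional space (main spectral peak frequency, spectral 'peakiness') and label an incoming token by its nearest prototype. A sketch under that reading follows; the prototype coordinates and the frequency scaling are hypothetical, not measurements from the study.

```python
def classify_fricative(peak_hz, peakiness, prototypes):
    """Nearest-prototype labelling in the 2-D auditory space.
    prototypes: {label: (peak_frequency_hz, peakiness)}.
    Frequency is scaled to kHz so both axes contribute comparably
    (an assumed normalisation, not one from the paper)."""
    def dist(label):
        f, q = prototypes[label]
        return ((peak_hz - f) / 1000.0) ** 2 + (peakiness - q) ** 2
    return min(prototypes, key=dist)
```

Extending each prototype to a region with its own radius (or a per-class covariance) would let decisions be deferred for tokens near region boundaries, matching the paper's point about postponing decisions until higher-level constraints apply.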

Higher-order Factor Structure of Consumer Dissatisfaction with Clothing -Off-line Purchase and Usage- (의복 불만족의 고차요인구조 -오프라인 의복구매 및 사용-)

  • Ahn, Soo-Kyoung
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.35 no.5
    • /
    • pp.561-574
    • /
    • 2011
  • This study investigates the ultimate factor structure of consumer dissatisfaction with the off-line purchase and usage of clothing. It identifies the determinant dimensions of consumer dissatisfaction with clothing purchase and usage and investigates the hierarchical structure of consumer dissatisfaction by assessing and comparing the effectiveness of five alternative factor-structure models. A total of 300 women were surveyed online to assess their level of dissatisfaction, based on dissatisfying experiences with clothing purchases and usage, in terms of product quality, price, salesperson's attitude, and store environment. The exploratory factor analysis identified the underlying dimensions of dissatisfaction: Handling, Aesthetics, Salesperson, Size, Price, Product Quality, Service, and Environment. Through first-order and higher-order confirmatory factor analyses, consumer dissatisfaction was confirmed to have a hierarchical structure with three second-order constructs: Intrinsic instrument, manifested by handling, quality, and size; Intrinsic expression, consisting of service, salesperson, and environment; and Extrinsic circumstance, comprising aesthetics and price. By empirically demonstrating the multi-dimensional constructs of consumer dissatisfaction and identifying its hierarchical structure, the study provides theoretical and practical insights for comprehending consumer purchase and post-purchase behavior. Specifically, it affords researchers an empirical platform to extend the scope of research with condensed concepts of dissatisfaction. It also enables marketers to take a broader view of consumer dissatisfaction by providing cues about potential problems and identifying their sources.