• Title/Abstract/Keyword: visual model


A Model-Based Image Steganography Method Using Watson's Visual Model

  • Fakhredanesh, Mohammad;Safabakhsh, Reza;Rahmati, Mohammad
    • ETRI Journal / Vol. 36, No. 3 / pp. 479-489 / 2014
  • This paper presents a model-based image steganography method based on Watson's visual model. Model-based steganography assumes a model for cover image statistics. This approach, however, has some weaknesses, including perceptual detectability. We propose to use Watson's visual model to improve perceptual undetectability of model-based steganography. The proposed method prevents visually perceptible changes during embedding. First, the maximum acceptable change in each discrete cosine transform coefficient is extracted based on Watson's visual model. Then, a model is fitted to a low-precision histogram of such coefficients and the message bits are encoded to this model. Finally, the encoded message bits are embedded in those coefficients whose maximum possible changes are visually imperceptible. Experimental results show that changes resulting from the proposed method are perceptually undetectable, whereas model-based steganography retains perceptually detectable changes. This perceptual undetectability is achieved while the perceptual quality - based on the structural similarity measure - and the security - based on two steganalysis methods - do not show any significant changes.
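The coefficient-selection step described in this abstract can be illustrated with a rough sketch. Everything quantitative here is an assumption: the flat 8×8 threshold table stands in for Watson's luminance/contrast-masked JND matrix, and the per-coefficient change bound is invented for the example.

```python
import numpy as np
from scipy.fft import dctn

# Placeholder perceptual thresholds: Watson's model uses an 8x8 table of
# just-noticeable differences per DCT frequency; a flat table is assumed here.
THRESHOLD = np.full((8, 8), 4.0)

def embeddable_mask(block: np.ndarray, threshold=THRESHOLD) -> np.ndarray:
    """Mask DCT coefficients whose assumed maximum embedding change stays
    below the perceptual threshold, so embedding there is imperceptible."""
    coeffs = dctn(block.astype(float), norm="ortho")
    max_change = np.abs(coeffs) * 0.1 + 1.0   # assumed change bound, not Watson's
    mask = max_change < threshold
    mask[0, 0] = False                        # never touch the DC coefficient
    return mask

block = np.arange(64).reshape(8, 8) % 16      # toy 8x8 pixel block
mask = embeddable_mask(block)
```

In the paper's method the surviving coefficients would then carry the arithmetic-coded message bits; this sketch only shows the selection.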

비쥬얼 롭을 사용한 다수표적 탐색의 수행도 예측 (Predicting Human Performance of Multiple-Target Search Using a Visual Lobe)

  • 홍승권
    • 대한인간공학회지 / Vol. 28, No. 3 / pp. 55-62 / 2009
  • This study is concerned with predicting human search performance using a visual lobe. Most previous studies of human performance in visual search have been limited to single-target search. This study extends visual search research to multiple-target search, including targets of different types as well as targets of the same type. A model for predicting visual search performance was proposed and validated against human search data. Additionally, this study found that human subjects did not always use a constant fraction of the whole visual lobe for each target type during the search process: the more conspicuous the target, the larger the fraction of the visual lobe the subjects used. A model that can predict human performance in multiple-target search may facilitate visual inspection planning in manufacturing.
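One common reading of visual-lobe models is a random-search approximation in which the per-fixation detection probability is proportional to the effective lobe area, which shrinks for less conspicuous targets. The sketch below follows that reading; the functional form and all numbers are illustrative assumptions, not the paper's model.

```python
def detection_prob(lobe_area: float, ratio: float, field_area: float,
                   n_fixations: int) -> float:
    """P(detect within n fixations) under an independent random-search model.
    `ratio` is the fraction of the full visual lobe effectively used for this
    target type (more conspicuous target -> larger ratio, per the abstract)."""
    p = min(1.0, ratio * lobe_area / field_area)   # per-fixation detection prob.
    return 1.0 - (1.0 - p) ** n_fixations

# A more conspicuous target (larger effective lobe ratio) is found sooner.
p_high = detection_prob(lobe_area=4.0, ratio=0.9, field_area=100.0, n_fixations=20)
p_low = detection_prob(lobe_area=4.0, ratio=0.4, field_area=100.0, n_fixations=20)
```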

도시녹지의 시각적 접근성 측정모델에 관한 연구 (A study on the model of measuring visual accessibility to urban green spaces)

  • 임승빈;허윤정
    • 한국조경학회지 / Vol. 23, No. 3 / pp. 1-14 / 1995
  • Visual accessibility to urban green spaces is an important factor because it contributes to making a pleasant environment by increasing the visual experience of nature in urban settings, yet little effort has been made to consider or measure it. Since the concept of visual accessibility had not been formally defined, it was defined operationally in this study. A model for measuring visual accessibility was then suggested and verified through a case study of neighborhood parks in Seoul. The findings are as follows: 1) The concept of visual accessibility is defined as the opportunity and potentiality to observe green spaces. 2) The model deals not only with the adjacent area but also with the viewshed area. In the adjacent area, the factors considered are the area of road adjacent to the green space and the area of exposed green space. In the viewshed area, the factors considered are the area of road located in the viewshed, the area of exposed green space, and a weight according to observing distance. 3) The final model of measuring visual accessibility suggested in this study is as follows.

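Since the paper's final formula is not reproduced in the abstract, the following is only a hypothetical sketch of the two-part structure it describes: an adjacent-area term plus distance-weighted viewshed terms. The multiplicative combination and all numbers are assumptions.

```python
def visual_accessibility(adj_road: float, adj_green: float,
                         viewshed_terms) -> float:
    """Hypothetical two-part measure: (adjacent road area x exposed green
    area) plus distance-weighted (road area x exposed green area) terms for
    the viewshed. The paper's actual functional form may differ."""
    adjacent = adj_road * adj_green
    viewshed = sum(w * road * green for road, green, w in viewshed_terms)
    return adjacent + viewshed

# Toy inputs: (road area, exposed green area, distance weight) per viewshed zone.
va = visual_accessibility(adj_road=0.2, adj_green=0.5,
                          viewshed_terms=[(0.3, 0.4, 0.8), (0.1, 0.2, 0.3)])
```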

Bag of Visual Words Method based on PLSA and Chi-Square Model for Object Category

  • Zhao, Yongwei;Peng, Tianqiang;Li, Bicheng;Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 9, No. 7 / pp. 2633-2648 / 2015
  • The problems of visual-word synonymy and ambiguity always exist in conventional bag-of-visual-words (BoVW) based object category methods. Besides, noisy visual words, so-called "visual stop-words", degrade the semantic resolution of the visual dictionary. In view of this, a novel bag-of-visual-words method based on PLSA and a chi-square model for object categorization is proposed. Firstly, Probabilistic Latent Semantic Analysis (PLSA) is used to analyze the semantic co-occurrence probability of visual words, infer the latent semantic topics in images, and obtain the latent topic distributions induced by the words. Secondly, the KL divergence is adopted to measure the semantic distance between visual words, which yields semantically related homoionyms. Then, an adaptive soft-assignment strategy is combined with this to realize a soft mapping between SIFT features and the homoionyms. Finally, the chi-square model is introduced to eliminate the "visual stop-words" and reconstruct the visual vocabulary histograms. Moreover, an SVM (Support Vector Machine) is applied to accomplish object classification. Experimental results indicate that the synonymy and ambiguity problems of visual words are overcome effectively, and that the distinguishability of visual semantic resolution as well as the object classification performance are substantially boosted compared with traditional methods.
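The semantic-distance step can be sketched as a symmetrized KL divergence between the per-word topic distributions that PLSA would produce. The toy word-topic matrix below is invented, and the paper's exact symmetrization and clustering thresholds are not given in the abstract.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence D(p || q) with smoothing to avoid log(0)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Toy word-topic distributions (rows: visual words, cols: latent PLSA topics).
# Words 0 and 1 share topic mass, so their symmetric KL distance is small:
# they would be grouped as semantically related homoionyms.
wt = np.array([[0.8, 0.1, 0.1],
               [0.7, 0.2, 0.1],
               [0.1, 0.1, 0.8]])

d01 = kl(wt[0], wt[1]) + kl(wt[1], wt[0])   # symmetric KL, assumed form
d02 = kl(wt[0], wt[2]) + kl(wt[2], wt[0])
```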

국소 집단 최적화 기법을 적용한 비정형 해저면 환경에서의 비주얼 SLAM (Visual SLAM using Local Bundle Optimization in Unstructured Seafloor Environment)

  • 홍성훈;김진환
    • 로봇학회논문지 / Vol. 9, No. 4 / pp. 197-205 / 2014
  • As computer vision algorithms are developed on a continuous basis, the visual information from vision sensors has been widely used in the context of simultaneous localization and mapping (SLAM), called visual SLAM, which utilizes relative motion information between images. This research addresses a visual SLAM framework for online localization and mapping in an unstructured seabed environment that can be applied to a low-cost unmanned underwater vehicle equipped with a single monocular camera as a major measurement sensor. Typically, an image motion model with a predefined dimensionality can be corrupted by errors due to the violation of the model assumptions, which may lead to performance degradation of the visual SLAM estimation. To deal with the erroneous image motion model, this study employs a local bundle optimization (LBO) scheme when a closed loop is detected. The results of comparison between visual SLAM estimation with LBO and the other case are presented to validate the effectiveness of the proposed methodology.
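As a loose one-dimensional stand-in for the local bundle optimization step, the sketch below jointly refines a short chain of poses against noisy relative-motion measurements plus a loop-closure constraint. Real LBO optimizes 6-DOF poses and 3D landmarks over reprojection errors, which this toy deliberately omits; all measurement values are fabricated.

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed noisy odometry between consecutive poses; the vehicle ends up back
# where it started, so the loop-closure constraint pins the last pose to 0.
rel = np.array([1.1, 0.9, 1.05, -3.2])
loop = 0.0

def residuals(x):
    poses = np.concatenate(([0.0], x))       # pose 0 fixed at the origin
    r = (poses[1:] - poses[:-1]) - rel       # relative-motion residuals
    return np.append(r, poses[-1] - loop)    # loop-closure residual

x0 = np.cumsum(rel)                          # dead-reckoning initial guess
sol = least_squares(residuals, x0)
poses = np.concatenate(([0.0], sol.x))       # optimized local window
```

Dead reckoning alone leaves a drift of -0.15 at the final pose; the joint optimization spreads that error across the window, which is the effect the loop-detection-triggered LBO is after.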

Modeling the Visual Target Search in Natural Scenes

  • Park, Daecheol;Myung, Rohae;Kim, Sang-Hyeob;Jang, Eun-Hye;Park, Byoung-Jun
    • 대한인간공학회지 / Vol. 31, No. 6 / pp. 705-713 / 2012
  • Objective: The aim of this study is to predict human visual target search in real scene images using the ACT-R cognitive architecture. Background: Humans use bottom-up and top-down processes at the same time, exploiting characteristics of the image itself and knowledge about images; modeling of human visual search therefore needs to include both processes. Method: Visual target search performance in real scene images was analyzed by comparing experimental data with the results of an ACT-R model. Ten students participated in the experiment, and the model was simulated ten times. The experiment was conducted under two conditions, indoor images and outdoor images. An ACT-R model that determines the first saccade region by calculating the saliency map and the spatial layout was established. The proposed model used these as a guide for visual search and adopted visual search strategies according to the guide. Results: No significant difference in performance time between model predictions and empirical data was found. Conclusion: The proposed ACT-R model is able to predict the human visual search process in real scene images using the saliency map and spatial layout. Application: This study is useful for model-based evaluation of visual search, particularly in real images, and can be adopted in diverse image-processing applications such as aids for the visually impaired.
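The first-saccade choice described above can be reduced to a minimal sketch: start the search at the most salient location of a saliency map. The map below is synthetic; a real model would compute it from the image and combine it with the scene's spatial layout.

```python
import numpy as np

# Synthetic saliency map over a coarse grid of image regions; in the model,
# this would come from a saliency algorithm applied to the real scene image.
saliency = np.zeros((6, 8))
saliency[2, 5] = 1.0          # assumed most conspicuous region

# First saccade region: the argmax of the saliency map.
first_saccade = np.unravel_index(np.argmax(saliency), saliency.shape)
```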

생체 기반 시각정보처리 동작인식 모델링 (A Bio-Inspired Modeling of Visual Information Processing for Action Recognition)

  • 김진옥
    • 정보처리학회논문지:소프트웨어 및 데이터공학 / Vol. 3, No. 8 / pp. 299-308 / 2014
  • Research on information-processing computing that mimics the human ability to recognize and classify highly complex biological patterns, such as body movements and facial expressions, has recently appeared in large numbers. In computer vision in particular, one outstanding human cognitive ability, classifying actions in visual sequences without contextual information, is studied to understand spatiotemporal pattern coding and rapid recognition. This study presents a bio-inspired computer vision model for action recognition in video sequences, influenced by the biological visual-cognition process. The proposed model reflects the stages of the neural-network structure of biological visual processing in detecting actions and discriminating visual patterns in image sequences. Experiments show that the bio-inspired action recognition model not only accounts for several properties of human visual-cognitive processing but also exhibits better temporal matching and more robust classification under temporal variation than existing action recognition systems. The proposed model can contribute to building bio-inspired visual information processing systems such as intelligent robot agents.

제품형태 지각에 있어서 시각적 멘탈모델의 영향에 관한 연구 (A Study on the influence of Visual Mental model in human to percept product form)

  • 오해춘
    • 디자인학연구 / Vol. 15, No. 1 / pp. 407-414 / 2002
  • Humans process information more efficiently by using mental models in the course of recognizing an object. If we can identify which mental model people use while making sense of an object, we can tell how they will perceive it. Is a mental model likewise used in perceiving visual objects? If so, we could analyze how users will understand a new design. To determine whether humans understand new objects through visual mental models, this study used the side view of a 2000cc-class car as stimulus material: group A was shown a 100% preliminary stimulus and then an experimental stimulus stretched to 120%, while group B was shown only the 120% experimental stimulus. The research hypothesis that group A would perceive the experimental stimulus as longer than group B did was found to be statistically significant. It was therefore demonstrated that humans use mental models in perceiving visual objects as well. The implication for industrial design is that knowing consumers' visual mental models of existing products enables an accurate understanding of how a newly proposed design alternative will be understood.


실제 이미지에서 현저성과 맥락 정보의 영향을 고려한 시각 탐색 모델 (Visual Search Model based on Saliency and Scene-Context in Real-World Images)

  • 최윤형;오형석;명노해
    • 대한산업공학회지 / Vol. 41, No. 4 / pp. 389-395 / 2015
  • According to much research in cognitive science, the impact of scene context on human visual search in real-world images can be as important as saliency. Therefore, this study proposed an Adaptive Control of Thought-Rational (ACT-R) model of visual search in real-world images based on saliency and scene context. The modeling method uses the utility system of ACT-R to describe the influences of saliency and scene context in real-world images. The model was then validated by comparing its data with eye-tracking data from experiments on a simple task in which subjects searched for targets in indoor bedroom images. Results show that the model data fit the eye-tracking data quite well. In conclusion, the modeling method proposed in this study should be used in order to provide an accurate model of human performance in visual search tasks in real-world images.
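The saliency/scene-context mixing inside ACT-R's utility system might look like the following linear sketch. The equal weights and the candidate regions are assumptions for illustration, not values from the paper.

```python
def utility(saliency: float, context: float, w_s: float = 0.5,
            w_c: float = 0.5) -> float:
    """Hypothetical linear utility combining bottom-up saliency with a
    scene-context prior; the weights are assumptions, not the paper's."""
    return w_s * saliency + w_c * context

# Candidate regions for, say, an alarm clock in a bedroom image:
# (saliency, scene-context plausibility). A bright ceiling lamp is salient
# but contextually implausible, so the context term pulls gaze elsewhere.
regions = {"nightstand": (0.4, 0.9), "ceiling": (0.8, 0.1)}
best = max(regions, key=lambda r: utility(*regions[r]))
```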

Object Classification based on Weakly Supervised E2LSH and Saliency map Weighting

  • Zhao, Yongwei;Li, Bicheng;Liu, Xin;Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 10, No. 1 / pp. 364-380 / 2016
  • The most popular approach to object classification is based on the bag-of-visual-words model, which has several fundamental problems that restrict its performance, such as low time efficiency, the synonymy and polysemy of visual words, and the lack of spatial information between visual words. In view of this, an object classification method based on weakly supervised E2LSH and saliency-map weighting is proposed. Firstly, E2LSH (Exact Euclidean Locality Sensitive Hashing) is employed to generate a group of weakly randomized visual dictionaries by clustering SIFT features of the training dataset, and the selection of hash functions is effectively supervised, inspired by random forest ideas, to reduce the randomness of E2LSH. Secondly, the graph-based visual saliency (GBVS) algorithm is applied to detect the saliency map of each image and weight the visual words according to the saliency prior. Finally, a saliency-map-weighted visual language model is used to accomplish object classification. Experimental results on the Pascal 2007 and Caltech-256 datasets indicate that the distinguishability of objects is effectively improved and that our method is superior to state-of-the-art object classification methods.
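The saliency-prior weighting step can be sketched as scaling each visual word's vote by the saliency of the keypoint it came from before normalizing the histogram. The word assignments and saliency values below are fabricated toy data; a real pipeline would take them from SIFT quantization and GBVS respectively.

```python
import numpy as np

def weighted_histogram(word_ids, saliencies, vocab_size: int) -> np.ndarray:
    """Accumulate each visual word's vote scaled by its keypoint's saliency,
    then L1-normalize, giving a saliency-weighted BoVW histogram."""
    hist = np.zeros(vocab_size)
    for w, s in zip(word_ids, saliencies):
        hist[w] += s
    norm = hist.sum()
    return hist / norm if norm > 0 else hist

# Three keypoints: two map to word 0 (one salient, one in the background),
# one maps to word 2 in a highly salient region.
hist = weighted_histogram(word_ids=[0, 0, 2], saliencies=[0.9, 0.1, 1.0],
                          vocab_size=4)
```

The resulting histogram would then be fed to a classifier such as an SVM, with background clutter down-weighted relative to a plain count histogram.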