• Title/Summary/Keyword: Visual Object

Search results: 1,233

Development of a Robot's Visual System for Measuring Distance and Width of Object Algorism (로봇의 시각시스템을 위한 물체의 거리 및 크기측정 알고리즘 개발)

  • Kim, Hoi-In;Kim, Gab-Soon
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.2
    • /
    • pp.88-92
    • /
    • 2011
  • This paper presents the development of a visual system for robots, together with an image processing algorithm that measures the size of an object and the distance from the robot to the object. Robot visual systems typically use a camera to measure object size and distance, but they cannot do so accurately when the position of the system changes or the object is not on the ground. Thus, in this paper, we developed a robot visual system that measures the size of an object and the distance to it using two cameras and a two-degree-of-freedom robot mechanism. We also developed the image processing algorithm for this visual system and, finally, carried out characterization tests of the developed system. The results indicate that the developed system can accurately measure the size of an object and the distance to it.
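The two-camera setup described above is the standard stereo configuration, in which depth follows from disparity and metric width from back-projection. A minimal sketch of that geometry (the focal length, baseline, and pixel coordinates below are illustrative assumptions, not the authors' calibration):

```python
# Minimal stereo sketch: distance from disparity, then metric object
# width from its pixel extent at that distance.

def stereo_distance(focal_px, baseline_m, x_left_px, x_right_px):
    """Depth Z = f * B / d, where d is the horizontal disparity (pixels)."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("object must have positive disparity")
    return focal_px * baseline_m / disparity

def object_width(focal_px, distance_m, pixel_width):
    """Back-project a pixel extent to metric width: W = Z * w_px / f."""
    return distance_m * pixel_width / focal_px

# Example: f = 700 px, baseline 0.1 m, disparity 35 px -> Z = 2.0 m
Z = stereo_distance(700.0, 0.1, 400.0, 365.0)
W = object_width(700.0, Z, 70.0)   # a 70 px wide image of the object
```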

Local and Global Information Exchange for Enhancing Object Detection and Tracking

  • Lee, Jin-Seok;Cho, Shung-Han;Oh, Seong-Jun;Hong, Sang-Jin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.6 no.5
    • /
    • pp.1400-1420
    • /
    • 2012
  • Object detection and tracking using visual sensors is a critical component of surveillance systems and presents many challenges. This paper addresses the enhancement of object detection and tracking by combining multiple visual sensors. The method we introduce compensates for missed detections based on the partial detection of objects by multiple visual sensors. When one or more visual sensors detect an object, its local positions are transformed into a global object position. This exchange of local and global information allows a missed local object position to be recovered. However, the exchange may also degrade detection and tracking performance by incorrectly recovering a local object position propagated from a false detection. Furthermore, local object positions corresponding to an identical object can be transformed into nonequivalent global positions because of detection uncertainty such as shadows or other artifacts. We improve performance by preventing the propagation of false object detections, and we also present an evaluation method for the final global object position. The proposed method is analyzed and evaluated using case studies.
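The local/global exchange can be illustrated with planar transforms: each sensor maps its image coordinates into a common ground plane, and a sensor that missed the object recovers a local position by inverting its own map. A sketch under the assumption of known 2D affine transforms (the matrices are made up for illustration; real systems would use calibrated homographies):

```python
import numpy as np

def to_global(T, local_xy):
    """Map a sensor-local position to the shared ground plane."""
    p = np.array([local_xy[0], local_xy[1], 1.0])
    return (T @ p)[:2]

def to_local(T, global_xy):
    """Invert a sensor's transform to recover its local position."""
    p = np.array([global_xy[0], global_xy[1], 1.0])
    return (np.linalg.inv(T) @ p)[:2]

# Two sensors observe the same scene; sensor B missed the object.
T_A = np.array([[1.0,  0.0, 2.0], [0.0, 1.0, 3.0], [0.0, 0.0, 1.0]])
T_B = np.array([[0.0, -1.0, 5.0], [1.0, 0.0, 1.0], [0.0, 0.0, 1.0]])

g = to_global(T_A, (4.0, 2.0))   # global position from sensor A's detection
recovered_b = to_local(T_B, g)   # recover sensor B's missed local position
```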

Object Classification based on Weakly Supervised E2LSH and Saliency map Weighting

  • Zhao, Yongwei;Li, Bicheng;Liu, Xin;Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.1
    • /
    • pp.364-380
    • /
    • 2016
  • The most popular approach to object classification is based on the bag-of-visual-words model, which has several fundamental problems that restrict its performance, such as low time efficiency, the synonymy and polysemy of visual words, and the lack of spatial information between visual words. In view of this, an object classification method based on weakly supervised E2LSH and saliency map weighting is proposed. Firstly, E2LSH (Exact Euclidean Locality Sensitive Hashing) is employed to generate a group of weakly randomized visual dictionaries by clustering SIFT features of the training dataset, and the selection of hash functions is supervised, inspired by random forest ideas, to reduce the randomness of E2LSH. Secondly, the graph-based visual saliency (GBVS) algorithm is applied to detect the saliency map of each image and to weight the visual words according to the saliency prior. Finally, a saliency-map-weighted visual language model is used to accomplish object classification. Experimental results on the Pascal VOC 2007 and Caltech-256 datasets indicate that the distinguishability of objects is effectively improved and that our method is superior to state-of-the-art object classification methods.
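At histogram-building time, the saliency-weighting step reduces to letting each local feature vote for its visual word with a weight taken from the saliency map at its key-point. A sketch with made-up word assignments and saliency values (GBVS itself is not reimplemented here):

```python
import numpy as np

def weighted_histogram(word_ids, saliency, vocab_size):
    """Build a saliency-weighted, L1-normalized visual word histogram."""
    hist = np.zeros(vocab_size)
    for w, s in zip(word_ids, saliency):
        hist[w] += s               # each feature votes with its saliency
    total = hist.sum()
    return hist / total if total > 0 else hist

words = [0, 2, 2, 1]               # visual word id per local feature
sal = [0.5, 1.0, 1.0, 0.5]         # saliency at each key-point (illustrative)
h = weighted_histogram(words, sal, 3)
```

Salient features thus dominate the image representation, while equally weighted voting would recover the plain BoVW histogram.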

Utilization of Visual Context for Robust Object Recognition in Intelligent Mobile Robots (지능형 이동 로봇에서 강인 물체 인식을 위한 영상 문맥 정보 활용 기법)

  • Kim, Sung-Ho;Kim, Jun-Sik;Kweon, In-So
    • The Journal of Korea Robotics Society
    • /
    • v.1 no.1
    • /
    • pp.36-45
    • /
    • 2006
  • In this paper, we introduce visual contexts, in terms of their types and methods of use, for robust object recognition by intelligent mobile robots. Visual object recognition is one of the core technologies for intelligent robots. Robust techniques are strongly required since there are many sources of visual variation, such as geometric and photometric changes and noise. To meet these requirements, we define spatial context, hierarchical context, and temporal context. Depending on the object recognition domain, the appropriate visual contexts can be selected. We also propose a unified framework that can utilize all of these contexts and validate it in a real working environment. Finally, we discuss future research directions for object recognition technologies for intelligent robots.


Robust Position Tracking for Position-Based Visual Servoing and Its Application to Dual-Arm Task (위치기반 비주얼 서보잉을 위한 견실한 위치 추적 및 양팔 로봇의 조작작업에의 응용)

  • Kim, Chan-O;Choi, Sung;Cheong, Joo-No;Yang, Gwang-Woong;Kim, Hong-Seo
    • The Journal of Korea Robotics Society
    • /
    • v.2 no.2
    • /
    • pp.129-136
    • /
    • 2007
  • This paper introduces a position-based robust visual servoing method developed for the operation of a human-like robot with two arms. The proposed method uses the SIFT algorithm for object detection and the CAMSHIFT algorithm for object tracking. While conventional CAMSHIFT has been used mainly for object tracking in a 2D image plane, we extend its use to object tracking in 3D space by combining the CAMSHIFT results from the two image planes of a stereo camera. This approach yields robust and dependable results. Once the robot's task is defined based on the extracted 3D information, the robot is commanded to carry it out. We conduct several position-based visual servoing tasks and compare performance under different conditions. The results show that the proposed visual tracking algorithm is simple but very effective for position-based visual servoing.
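The core of CAMSHIFT is a mean-shift iteration over a histogram back-projection map; CAMSHIFT additionally adapts the window size and orientation, which is omitted here. A sketch of the mean-shift step on a synthetic probability map (the blob stands in for back-projected object pixels):

```python
import numpy as np

def mean_shift(prob, window, iters=10):
    """Move a search window to the centroid of a probability map."""
    x, y, w, h = window
    for _ in range(iters):
        roi = prob[y:y + h, x:x + w]
        m = roi.sum()
        if m == 0:
            break                              # window sees no object pixels
        ys, xs = np.mgrid[0:h, 0:w]
        cx = (xs * roi).sum() / m              # centroid inside the window
        cy = (ys * roi).sum() / m
        x += int(round(cx - (w - 1) / 2))      # shift window onto centroid
        y += int(round(cy - (h - 1) / 2))
    return x, y, w, h

prob = np.zeros((60, 60))
prob[30:40, 35:45] = 1.0                       # bright blob = likely object
win = mean_shift(prob, (28, 25, 14, 14))       # start partially overlapping
```

Running the same step independently on the left and right image planes, as the paper does, gives two track centers whose disparity yields the 3D position.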


Bag of Visual Words Method based on PLSA and Chi-Square Model for Object Category

  • Zhao, Yongwei;Peng, Tianqiang;Li, Bicheng;Ke, Shengcai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.7
    • /
    • pp.2633-2648
    • /
    • 2015
  • The problems of visual word synonymy and ambiguity always exist in conventional bag-of-visual-words (BoVW) based object categorization methods. Besides, noisy visual words, the so-called "visual stop-words", degrade the semantic resolution of the visual dictionary. In view of this, a novel bag-of-visual-words method based on PLSA and a chi-square model is proposed for object categorization. Firstly, Probabilistic Latent Semantic Analysis (PLSA) is used to analyze the semantic co-occurrence probability of visual words, infer the latent semantic topics in images, and obtain the latent topic distributions induced by the words. Secondly, KL divergence is adopted to measure the semantic distance between visual words, which yields semantically related homoionyms. Then, an adaptive soft-assignment strategy is applied to realize a soft mapping between SIFT features and these homoionyms. Finally, the chi-square model is introduced to eliminate the "visual stop-words" and reconstruct the visual vocabulary histograms, and an SVM (Support Vector Machine) is applied to accomplish object classification. Experimental results indicate that the synonymy and ambiguity problems of visual words are overcome effectively, and that both the discriminative power of the visual vocabulary and the object classification performance are substantially boosted compared with traditional methods.
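The KL-divergence step measures the distance between the topic distributions p(z|w) that PLSA assigns to each visual word; words whose topic profiles nearly coincide are treated as semantically related. A sketch with made-up distributions, using the symmetrized form since KL itself is asymmetric (the paper does not specify the symmetrization):

```python
import numpy as np

def sym_kl(p, q, eps=1e-12):
    """Symmetric KL divergence between two discrete topic distributions."""
    p = np.asarray(p, float) + eps   # eps guards against log(0)
    q = np.asarray(q, float) + eps
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

w1 = [0.7, 0.2, 0.1]   # two words dominated by the same latent topic...
w2 = [0.6, 0.3, 0.1]
w3 = [0.1, 0.1, 0.8]   # ...versus a word from a different topic
```

Thresholding this distance over the vocabulary would collect each word's homoionyms for the soft-assignment step.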

A New Feature-Based Visual SLAM Using Multi-Channel Dynamic Object Estimation (다중 채널 동적 객체 정보 추정을 통한 특징점 기반 Visual SLAM)

  • Geunhyeong Park;HyungGi Jo
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.19 no.1
    • /
    • pp.65-71
    • /
    • 2024
  • An indirect visual SLAM system takes raw image data and exploits geometric information such as key-points and line edges. SLAM performance may decrease under various environmental changes; the main problem is caused by dynamic objects, especially in highly crowded environments. In this paper, we propose a robust feature-based visual SLAM system, built on ORB-SLAM, that uses multi-channel dynamic object estimation. An optical-flow-based algorithm and a deep-learning-based object detection algorithm each estimate a different type of dynamic object information. The proposed method combines the two and creates multi-channel dynamic masks, from which information on both actually moving objects and potentially dynamic objects is obtained. Finally, dynamic objects covered by the masks are removed in the feature extraction stage. As a result, the proposed method obtains more precise camera poses. Its superiority over conventional ORB-SLAM was verified in experiments using the KITTI odometry dataset.
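The multi-channel mask idea reduces to a union of per-cue masks followed by key-point filtering before pose estimation. A sketch with synthetic masks and key-points (the optical-flow and detector stages are not reimplemented; real masks would come from those modules):

```python
import numpy as np

# Channel 1: region flagged as actually moving (e.g. by optical flow).
flow_mask = np.zeros((48, 64), dtype=bool)
flow_mask[10:20, 10:20] = True

# Channel 2: region of a potentially dynamic class (e.g. a parked car
# found by an object detector).
detect_mask = np.zeros((48, 64), dtype=bool)
detect_mask[30:40, 40:60] = True

# Multi-channel dynamic mask: a key-point in either channel is dropped.
dynamic = flow_mask | detect_mask

keypoints = [(15, 15), (5, 5), (35, 50), (25, 25)]   # (row, col)
static_kps = [kp for kp in keypoints if not dynamic[kp]]
```

Only the surviving static key-points would feed the tracking and bundle-adjustment stages.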

Small Object Segmentation Based on Visual Saliency in Natural Images

  • Manh, Huynh Trung;Lee, Gueesang
    • Journal of Information Processing Systems
    • /
    • v.9 no.4
    • /
    • pp.592-601
    • /
    • 2013
  • Object segmentation is a challenging task in image processing and computer vision. In this paper, we present a visual attention based segmentation method for small interesting objects in natural images. Unlike traditional methods, we first search for the region of interest using our novel saliency-based method, which relies mainly on band-pass filtering to obtain the appropriate frequency. Secondly, we apply a Gaussian Mixture Model (GMM) to locate the object region. By incorporating visual attention analysis into object segmentation, our approach narrows the search region, so that accuracy is increased and computational complexity is reduced. The experimental results indicate that our approach is efficient for object segmentation in natural images, especially for small objects, and significantly outperforms traditional GMM-based segmentation.
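The GMM localization step can be illustrated with a two-component EM fit separating object from background intensities inside a saliency-selected region. A sketch on synthetic 1D data (the band-pass saliency stage is not reimplemented, and the values are made up):

```python
import numpy as np

def em_gmm_1d(x, iters=50):
    """Fit a 2-component 1D Gaussian mixture with plain EM."""
    x = np.asarray(x, float)
    mu = np.array([x.min(), x.max()])          # spread-apart initialization
    var = np.full(2, x.var()) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each component for each sample
        d = (x[:, None] - mu) ** 2
        lik = pi * np.exp(-d / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        n = r.sum(axis=0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-6
    return pi, mu, var

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.2, 0.02, 200),    # background intensities
                    rng.normal(0.8, 0.02, 200)])   # object intensities
pi, mu, var = em_gmm_1d(x)
```

Thresholding pixels by their responsibility under the "object" component would then delimit the object region.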

VOQL : A Visual Object Query Language (VOQL : 시각적 객체 질의어)

  • Kim, Jeong-Hee;Cho, Wan-Sup;Lee, Suk-Kyoon;Whang, Kyung-Young
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.38 no.5
    • /
    • pp.1-15
    • /
    • 2001
  • Expressing complex query conditions in a concise and intuitive way has been a challenge in the design of visual object-oriented query languages. We propose a visual query language called VOQL (Visual Object-oriented Query Language) for object-oriented databases. By employing the visual notation of graphs and Venn diagrams, the database schema and advanced features of object-oriented queries, such as multi-valued path expressions and quantifiers, can be represented in a simple way. Compared with previous visual object-oriented query languages, VOQL offers a simple and intuitive syntax, well-defined semantics, and excellent expressive power for object-oriented queries.


An Advanced Visual Tracking and Stable Grasping Algorithm for a Moving Object (시각센서를 이용한 움직이는 물체의 추적 및 안정된 파지를 위한 알고리즘의 개발)

  • 차인혁;손영갑;한창수
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.15 no.6
    • /
    • pp.175-182
    • /
    • 1998
  • An advanced visual tracking and stable grasping algorithm for a moving object is proposed. The stable grasping points of a moving 2D polygonal object are obtained through a visual tracking system that uses a Kalman filter and an image prediction technique, improving accuracy and efficiency over other prediction algorithms for object tracking. During visual tracking, the shape predictors construct a parameterized family of shapes, and the grasp planner finds the grasping points of the unknown object through the geometric properties of this family. This algorithm achieves stable grasping and real-time tracking.
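The Kalman-filter prediction used for tracking can be sketched with a constant-velocity model, here in one dimension for brevity (the noise covariances and measurements are illustrative, not the paper's values):

```python
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition: (pos, vel)
H = np.array([[1.0, 0.0]])               # we observe position only
Q = np.eye(2) * 1e-4                     # process noise covariance
R = np.array([[1e-2]])                   # measurement noise covariance

x = np.array([[0.0], [0.0]])             # initial state estimate
P = np.eye(2)                            # initial state covariance

for z in [1.0, 2.0, 3.0, 4.0]:           # object moving ~1 unit per frame
    # Predict step: propagate state and covariance through the model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step: correct with the new measurement via the Kalman gain.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

predicted_next = (F @ x)[0, 0]           # predicted position for next frame
```

This one-frame-ahead prediction is what lets a grasp planner aim the gripper where the object will be, rather than where it was last seen.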
