• Title/Summary/Keyword: Visual Object

Search Results: 1,236

The development of a visual tracking system for the stable grasping of a moving object (움직이는 물체의 안정한 Grasping을 위한 시각추적 시스템 개발)

  • 차인혁;손영갑;한창수
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 1996.10b
    • /
    • pp.543-546
    • /
    • 1996
  • We propose a new visual tracking system for grasping that can find the grasping points of an unknown polygonal object. The system combines an image prediction technique with an Extended Kalman Filter (EKF) algorithm. An SVD-based EKF improves the accuracy and processing time of nonlinear state estimation and resolves the numerical instability that can occur in Kalman-filter-based visual tracking. The image prediction algorithm reduces the effect of noise and the image processing time. During visual tracking, we construct a parameterized family and find the grasping points of the unknown object through the geometric properties of that family.

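The prediction/update cycle behind such a tracker can be sketched as a constant-velocity Kalman filter over image coordinates. The SVD-based covariance factorization and the image-prediction step described in the abstract are not reproduced here; the function name, noise settings, and state layout are illustrative assumptions, not the paper's code.

```python
import numpy as np

def kalman_track(measurements, dt=1.0, q=1e-3, r=0.25):
    """Constant-velocity Kalman tracker over 2-D image points (sketch).

    State is [x, vx, y, vy]; only positions are measured. The paper's
    SVD-based EKF replaces the covariance update with an SVD factorization
    for numerical stability, which this sketch omits.
    """
    F = np.array([[1, dt, 0, 0],
                  [0, 1,  0, 0],
                  [0, 0,  1, dt],
                  [0, 0,  0, 1]], dtype=float)   # motion model
    H = np.array([[1, 0, 0, 0],
                  [0, 0, 1, 0]], dtype=float)    # measure positions only
    Q = q * np.eye(4)                            # process noise (assumed)
    R = r * np.eye(2)                            # measurement noise (assumed)
    x = np.zeros(4)
    x[0], x[2] = measurements[0]                 # initialize at first observation
    P = np.eye(4)
    estimates = []
    for z in measurements[1:]:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z, dtype=float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
        estimates.append((x[0], x[2]))
    return estimates
```

On noiseless constant-velocity input the filter locks onto the track within a few steps, which is the behavior the image-prediction stage exploits to shrink the search window.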

Resolution-enhanced Reconstruction of 3D Object Using Depth-reversed Elemental Images for Partially Occluded Object Recognition

  • Wei, Tan-Chun;Shin, Dong-Hak;Lee, Byung-Gook
    • Journal of the Optical Society of Korea
    • /
    • v.13 no.1
    • /
    • pp.139-145
    • /
    • 2009
  • Computational integral imaging (CII) is a new method for 3D imaging and visualization. However, it suffers from severe degradation of reconstructed image quality as the distance to the reconstructed image plane increases. In this paper, to overcome this problem, we propose a CII method based on a smart pixel mapping (SPM) technique for partially occluded 3D object recognition, in which the object to be recognized is located far from the lenslet array. In SPM-based CII, SPM moves a distant 3D object toward the lenslet array and thereby improves the quality of the reconstructed image. To show the usefulness of the proposed method, we carry out experiments on occluded objects and present the results.
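The reconstruction stage of CII can be illustrated by a shift-and-average scheme: each elemental image is shifted in proportion to its lenslet index and the chosen depth, and overlapping pixels are averaged. This is a generic sketch of computational integral-imaging reconstruction, not the paper's SPM algorithm; the shift model and all names are assumptions.

```python
import numpy as np

def ciir_reconstruct(elemental, pitch, depth):
    """Shift-and-average reconstruction at a given depth (sketch).

    elemental: dict mapping a non-negative lenslet grid index (i, j)
    to a 2-D numpy array. The integer pixel shift per lenslet,
    round(pitch * index / depth), is a simplified stand-in for the
    geometry used in the paper.
    """
    h, w = next(iter(elemental.values())).shape
    shifts = {k: (round(pitch * k[0] / depth), round(pitch * k[1] / depth))
              for k in elemental}
    H = h + max(s[0] for s in shifts.values())
    W = w + max(s[1] for s in shifts.values())
    acc = np.zeros((H, W))          # sum of shifted elemental images
    cnt = np.zeros((H, W))          # overlap count per pixel
    for k, img in elemental.items():
        di, dj = shifts[k]
        acc[di:di + h, dj:dj + w] += img
        cnt[di:di + h, dj:dj + w] += 1
    return acc / np.maximum(cnt, 1)  # average where images overlap
```

Objects lying at the chosen depth reinforce under the averaging, while off-depth content (including occluders) blurs out, which is what makes depth-selective recognition possible.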

Improving Visual Object Query Language (VOQL) by Introducing Visual Elements and Visual Variables (시각 요소와 시각 변수를 통한 시각 객체 질의어(VOQL)의 개선)

  • Lee, Seok-Gyun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.6 no.6
    • /
    • pp.1447-1457
    • /
    • 1999
  • Visual Object Query Language (VOQL), proposed recently, is a visual object-oriented database query language that can effectively represent queries on complex structured data, since schema information is visually included in query expressions. VOQL, a graph-based query language with inductively defined semantics, can concisely represent various text-based path expressions as graphs and clearly convey the semantics of complex path expressions. However, the existing VOQL assumes that all attributes are multi-valued and cannot visualize the binding of object variables. Therefore, its query expressions are not intuitive, and it is difficult to extend the existing VOQL theoretically. In this paper, we propose an improved VOQL that addresses these problems. The improved VOQL visualizes the result of a single-valued attribute as a visual element and that of a multi-valued attribute as a subblob, and specifies the binding of object variables by introducing visual variables, so that it intuitively and clearly represents the semantics of queries.


Development of a Visual Simulation Tool for Object Behavior Chart based on LOTOS Formalism (객체행위챠트를 위한 LOTOS 정형기법 기반 시각적 시뮬레이션 도구의 개발)

  • Lee, Gwang-Yong;O, Yeong-Bae
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.5 no.5
    • /
    • pp.595-610
    • /
    • 1999
  • This paper presents a visual simulation tool for verification and validation (V&V) of the design implications of Object Behavior Charts developed in accordance with the existing real-time object behavior design method. The tool simulates dynamic interactions using an executable simulation machine, an EFSM (Extended Finite State Machine), and can detect various logical and temporal errors in visual object behavior charts before a concrete implementation is made. To this end, a LOTOS prototype specification is automatically generated from the visual Object Behavior Chart and translated into an EFSM. The system is implemented in Visual C++ 4.2 and currently runs on Windows 95. LOTOS was chosen for simulation because of its excellence in specifying communication protocols. Our research contributes to tools that seamlessly integrate methodology-based graphical models with formal simulation techniques, and to the automated V&V of visual models.
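The executable core of such a simulator, an EFSM, can be sketched as a state machine whose transitions carry a guard and an action over context variables; firing an event that no transition accepts, or whose guard fails, surfaces exactly the kind of logical error the tool checks for. This is a generic EFSM model, not the tool's actual API.

```python
class EFSM:
    """Minimal extended finite state machine (illustrative sketch).

    Transitions are keyed by (state, event) and carry a guard predicate
    and an action, both over a dict of context variables.
    """

    def __init__(self, initial, variables):
        self.state = initial
        self.vars = dict(variables)
        self.transitions = {}  # (state, event) -> (guard, action, next_state)

    def add(self, state, event, guard, action, next_state):
        self.transitions[(state, event)] = (guard, action, next_state)

    def fire(self, event):
        """Attempt one step; False signals a rejected event (a potential
        specification error a simulator would report)."""
        key = (self.state, event)
        if key not in self.transitions:
            return False
        guard, action, nxt = self.transitions[key]
        if not guard(self.vars):
            return False
        action(self.vars)       # update context variables
        self.state = nxt
        return True
```

A simulation run is then just a sequence of `fire` calls over an event trace, with rejected events logged as candidate design errors.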

A Study on the Relationship of Space and Time in Visual Tactility (시각과 시촉각에 의한 운동 측면에서 본 공간과 시간의 관계성 연구 - 연경당 외부공간을 중심으로 -)

  • Yook, Ok-Soo
    • Journal of architectural history
    • /
    • v.20 no.1
    • /
    • pp.77-93
    • /
    • 2011
  • Across Western European culture, a dichotomy based on the visual sense has evolved. Eyes and ears, the senses that operate at a distance, were believed to be more developed for recognition than any other human sense. As the precondition of perspective, the eye held a dominant position, and development concentrated on optical sight. But the variety of modern media, by highlighting the importance of the other senses, has expanded this dichotomy into a broader notion of perception beyond the single function of visuality. Recently, Gilles Deleuze and Merleau-Ponty have tried to recover the sense of tactility, segregated in the skin, from a body that keeps its eyes at a distance. As a result, activity in the space between subject and object can arise through connection to the body rather than to the eye. Recognition, in which the body tries to identify the object in space and time, thereby shifts into a phase in which subject and object structure each other. Visual tactility eliminates the distance between subject and object. If visual tactility erases the distance assumed by the visual dichotomy, a tension arises and a new relationship forms in which the subjective point of view moves into the object. Evidence of this is easier to find in Korean architecture, which emphasizes time and space, than in Western architecture. The fact that the architecture of the Lee Dynasty preserved a consistent basic form and style over the centuries suggests that visual tactility was considered alongside the visual sense. This study examines the visual and the tactile inherent in the subject, and how they are connected to movement in space and time.

A Collaborative Visual Language

  • Kim, Kyung-Deok
    • Journal of information and communication convergence engineering
    • /
    • v.1 no.2
    • /
    • pp.74-81
    • /
    • 2003
  • There has been much research on visual languages, but most of it has difficulty supporting the various collaborative interactions of a distributed multimedia environment. This paper therefore suggests a collaborative visual language for interaction between multiple users. The language can describe a conceptual model of collaborative interaction among multiple users. Visual sentences generated with the language consist of object icons and interaction operators. An object icon represents a user responsible for a collaborative activity, carries the user's dynamic attributes, and supports flexible interaction between users. An interaction operator represents an interactive relation between users and supports various collaborative interactions. The merits of the language include support for both asynchronous and synchronous interaction, flexible interaction as users join or leave, and user-oriented modeling. As an example, an application to a workflow system for document approval is illustrated, showing that the language can express a collaborative interaction.
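The icon/operator structure of such visual sentences can be modeled as a small tree: object icons at the leaves, with a sequential operator for asynchronous (one-after-another) interaction and a parallel operator for synchronous (same-round) interaction. The data model, operator names, and scheduling rule below are illustrative assumptions, not the paper's notation.

```python
from dataclasses import dataclass

@dataclass
class ObjectIcon:
    """A user responsible for one collaborative activity (sketch)."""
    user: str
    activity: str

def schedule(sentence):
    """Flatten a visual sentence into rounds of activities.

    'seq' orders its operands one round after another (asynchronous);
    'par' places its operands in the same round (synchronous).
    """
    if isinstance(sentence, ObjectIcon):
        return [[sentence.activity]]
    op, *parts = sentence
    if op == "seq":
        rounds = []
        for p in parts:
            rounds.extend(schedule(p))
        return rounds
    if op == "par":
        merged = []
        for p in parts:
            for rnd in schedule(p):
                merged.extend(rnd)
        return [merged]
    raise ValueError(f"unknown operator: {op}")
```

For the document-approval example from the abstract, a draft step, two parallel reviews, and an approval flatten into three rounds, with both reviews sharing the middle round.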

Reinforced Feature of Dynamic Search Area for the Discriminative Model Prediction Tracker based on Multi-domain Dataset (다중 도메인 데이터 기반 구별적 모델 예측 트레커를 위한 동적 탐색 영역 특징 강화 기법)

  • Lee, Jun Ha;Won, Hong-In;Kim, Byeong Hak
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.16 no.6
    • /
    • pp.323-330
    • /
    • 2021
  • Visual object tracking is a challenging area of computer vision due to many difficult problems, including fast variation of target shape, occlusion, and arbitrary ground-truth object designation. In this paper, we focus on reinforcing the features of the dynamic search area to outperform conventional discriminative model prediction trackers in conditions where accuracy deteriorates because of low feature discrimination. We propose a reinforced input-feature method that acts like a spotlight on the dynamic search area around the tracked target. The method can improve the performance of deep-learning-based discriminative model prediction trackers, as well as other trackers that infer the center of the target during visual object tracking. The proposed method outperforms the baseline trackers, achieving a relative gain of 38% (from 0.433 to 0.601 F-score) in the visual object tracking evaluation.
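The spotlight idea can be illustrated by weighting the search-area feature map with a Gaussian centered on the previous target position, so that features near the expected target are emphasized and distractors at the periphery are suppressed. The Gaussian weighting and all names here are assumptions for illustration; the paper's exact reinforcement is not reproduced.

```python
import numpy as np

def spotlight_reinforce(feature_map, center, sigma):
    """Weight a feature map by a Gaussian 'spotlight' (sketch).

    feature_map: 2-D (H, W) or 3-D (H, W, C) array.
    center: (row, col) of the previous target position.
    sigma: spotlight radius in pixels.
    """
    h, w = feature_map.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = center
    g = np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))
    if feature_map.ndim == 3:
        g = g[..., None]          # broadcast over channels
    return feature_map * g
```

Because the weight is 1 at the previous target position and decays with distance, the reinforced map biases the model-prediction head toward the dynamic search area without discarding peripheral evidence entirely.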

Background memory-assisted zero-shot video object segmentation for unmanned aerial and ground vehicles

  • Kimin Yun;Hyung-Il Kim;Kangmin Bae;Jinyoung Moon
    • ETRI Journal
    • /
    • v.45 no.5
    • /
    • pp.795-810
    • /
    • 2023
  • Unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) require advanced video analytics for tasks such as moving-object detection and segmentation, which has led to increasing demand for these methods. We propose a zero-shot video object segmentation method designed specifically for UAV and UGV applications, focusing on the discovery of moving objects in challenging scenarios. The method employs a background memory model that enables training from sparse annotations along the time axis, using temporal modeling of the background to detect moving objects effectively. It addresses the limitation of existing state-of-the-art methods, which detect salient objects within images regardless of their movement. In particular, our method achieved mean J and F values of 82.7 and 81.2, respectively, on DAVIS'16. We also conducted extensive ablation studies highlighting the contributions of different input compositions and combinations of training datasets. In future work, we will integrate the proposed method with additional systems, such as tracking and obstacle avoidance.
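The principle of a background memory, maintain a temporal model of the background and flag pixels that deviate from it, can be shown with a classical running-average stand-in. The learned memory in the paper is far richer; this toy version only conveys the update-then-compare cycle, and all names and thresholds are illustrative.

```python
import numpy as np

class BackgroundMemory:
    """Running-average background model with change masking (toy sketch).

    A simple stand-in for the paper's learned background memory:
    each frame is compared against the remembered background, then
    blended into it with rate alpha.
    """

    def __init__(self, alpha=0.05, thresh=25.0):
        self.alpha = alpha        # memory update rate
        self.thresh = thresh      # change-detection threshold
        self.bg = None

    def update(self, frame):
        frame = frame.astype(float)
        if self.bg is None:
            self.bg = frame.copy()
        mask = np.abs(frame - self.bg) > self.thresh  # moving-object mask
        self.bg = (1.0 - self.alpha) * self.bg + self.alpha * frame
        return mask
```

Feeding static frames builds up the memory, after which a sudden local change is flagged as foreground; the zero-shot property comes from never needing an annotation of the specific object class.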

A Study On Parameter Measurement for Artificial Intelligence Object Recognition (인공지능 객체인식에 관한 파라미터 측정 연구)

  • Choi, Byung Kwan
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.15 no.3
    • /
    • pp.15-28
    • /
    • 2019
  • Artificial intelligence is evolving rapidly in the ICT field, smart convergence media systems, and the content industry through the Fourth Industrial Revolution, and very rapidly through Big Data. In this paper, we propose a face recognition method based on object recognition through artificial intelligence, and study it experimentally using AI object recognition techniques. In the conventional 3D imaging field, object recognition has been researched widely, including the side effects of visual fatigue and dizziness caused by 3D images. In this study, however, we address the problem caused by quantitative differences in object recognition in human-factor algorithms that measure visual fatigue through cognitive function, morphological analysis, and object recognition. In particular, a new method of computer interaction is presented, and the results are shown through experiments.

Object Tracking with Sparse Representation based on HOG and LBP Features

  • Boragule, Abhijeet;Yeo, JungYeon;Lee, GueeSang
    • International Journal of Contents
    • /
    • v.11 no.3
    • /
    • pp.47-53
    • /
    • 2015
  • Visual object tracking is a fundamental problem in computer vision, as it requires a model that accounts for drastic appearance changes caused by shape, texture, and illumination variations. In this paper, we propose a feature-based visual object tracking method with a sparse representation. Most appearance-based models use the gray-scale pixel values of the input image, but these may be insufficient to describe the target object under a variety of conditions. To obtain proper information about the target object, the following combination of features is exploited: first, features of the target templates are extracted using HOG (histograms of oriented gradients) and LBPs (local binary patterns); second, feature-based sparsity is attained by solving minimization problems, and the target object is represented by the selection with the minimum reconstruction error. The strengths of both features are exploited to enhance the overall performance of the tracker; furthermore, the proposed method is integrated with the particle-filter framework and achieves promising results on challenging tracking videos.
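The minimum-reconstruction-error selection step can be sketched as follows: each candidate's HOG+LBP feature vector is reconstructed from a dictionary of template features, and the candidate with the smallest residual wins. The paper solves an l1-sparse minimization; this sketch substitutes ordinary least squares as a simpler, explicitly non-sparse stand-in, and the function name is an assumption.

```python
import numpy as np

def best_candidate(templates, candidates):
    """Pick the candidate best reconstructed by the template dictionary.

    templates: (d, n) matrix whose columns are HOG+LBP feature vectors
    of the target templates.
    candidates: iterable of length-d candidate feature vectors (e.g. one
    per particle in the particle-filter framework).
    Returns the index of the candidate with minimum reconstruction error.
    Least squares stands in for the paper's sparse (l1) minimization.
    """
    errors = []
    for y in candidates:
        y = np.asarray(y, dtype=float)
        coef, *_ = np.linalg.lstsq(templates, y, rcond=None)
        errors.append(np.linalg.norm(templates @ coef - y))
    return int(np.argmin(errors))
```

A candidate lying in the span of the templates reconstructs with near-zero error and is preferred over background patches, which is the discriminative signal the tracker feeds back into the particle filter.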