• Title/Abstract/Keyword: Visual Object

Search results: 1,240

물체 추적을 위한 강화된 부분공간 표현 (Enhanced Representation for Object Tracking)

  • 윤석민;유한주;최진영
    • 대한전자공학회:학술대회논문집 / 대한전자공학회 2009년도 정보 및 제어 심포지움 논문집 / pp.408-410 / 2009
  • We present an efficient and robust measurement model for visual tracking. This approach builds on and extends prior work on subspace representations for measurement models. Subspace-based tracking algorithms have been part of the visual tracking literature for a decade and show considerable tracking performance due to their robustness in matching. However, the measures used in their measurement models are often restricted to a few approaches. We propose a novel object-matching measure, Angle In Feature Space, which aims to improve the discriminability of matching in the subspace. As a result, our tracking algorithm can distinguish the target from similar background clutter that often causes erroneous drift under the conventional Distance From Feature Space measure. Experiments demonstrate the effectiveness of the proposed tracking algorithm under severely cluttered backgrounds.
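The contrast between the two measures can be sketched with a minimal NumPy illustration of Distance From Feature Space versus an angle-based measure over a PCA subspace (function names and details here are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def fit_subspace(X, k):
    """PCA basis (mean + top-k principal directions) from row-stacked samples."""
    mean = X.mean(axis=0)
    # SVD of centered data; rows of Vt are principal directions
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k].T                      # U: (d, k), orthonormal columns

def dffs(x, mean, U):
    """Distance From Feature Space: norm of the reconstruction residual."""
    c = U.T @ (x - mean)                       # subspace coefficients
    recon = mean + U @ c
    return np.linalg.norm(x - recon)

def aifs(x, mean, U):
    """Angle In Feature Space: angle between the centered sample and its
    subspace projection (zero when x - mean lies inside the subspace)."""
    v = x - mean
    proj = U @ (U.T @ v)
    cos = np.linalg.norm(proj) / (np.linalg.norm(v) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))
```

The angle normalizes out the magnitude of the sample, which is one intuition for why it can separate a target from background clutter that happens to have a small residual.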


Forklift 운전자의 계기판 인지성에 따른 Visual object의 layout과 위치에 관한 분석 (Analysis about visual object's layout and position by forklift driver's instrument cognitivity)

  • 정우근;박범
    • 대한안전경영과학회지 / Vol. 7 No. 5 / pp.97-105 / 2005
  • Task performance can be improved by displays that support cognition and pattern recognition more effectively than displays that force observers to rely on memory, integration, and cognitively controlled processing; this has been demonstrated in several researchers' studies [4][5][7][9]. In this study, we investigated cognition according to the layout of objects on an instrument panel. To determine the panel layout, a cognition value was first established for every location; objects were then assigned to positions of low cognition value following a ranking procedure over the locations. As a result, we conclude that gauge locations should be assigned in order of mechanical importance, that is, the degree to which a failure would cause serious damage while driving a forklift truck.
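The assignment procedure described above, pairing the most safety-critical gauges with the best-perceived panel positions, can be sketched as a simple greedy matching (all names and values below are hypothetical, not the study's data):

```python
def layout_gauges(gauges, positions):
    """Pair high-importance gauges with high-cognition positions.

    gauges:    {gauge name: mechanical importance}
    positions: {position name: measured cognition value}
    Returns    {position: gauge}, most important gauge at best position.
    """
    by_importance = sorted(gauges, key=gauges.get, reverse=True)
    by_cognition = sorted(positions, key=positions.get, reverse=True)
    return dict(zip(by_cognition, by_importance))
```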

로봇시스템에서 작은 마커 인식을 하기 위한 사물 감지 어텐션 모델 (Small Marker Detection with Attention Model in Robotic Applications)

  • 김민재;문형필
    • 로봇학회논문지 / Vol. 17 No. 4 / pp.425-430 / 2022
  • As robots become one of the mainstream drivers of digital transformation, machine vision for robots has become a major area of study, providing the ability to inspect what a robot sees and to make decisions based on it. However, small objects are difficult to find in an image, mainly because most visual recognition networks are convolutional neural networks, which primarily consider local features. We therefore build a model that considers global features as well as local ones. In this paper, we propose a deep-learning method for detecting a small marker on an object, together with an algorithm that captures global features by combining the Transformer's self-attention mechanism with a convolutional neural network. We introduce a self-attention model with a new definition of Query, Key, and Value so that the model can learn global features, and a simplified formulation that removes the position vector and classification token, which make the model heavy and slow. Finally, we show that our model achieves a higher mAP than the state-of-the-art model YOLOR.
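A minimal NumPy sketch of self-attention over CNN feature-map locations, with no positional encoding and no classification token, in the spirit of the simplification described above (the single-head form and weight shapes are assumptions, not the paper's exact model):

```python
import numpy as np

def spatial_self_attention(x, Wq, Wk, Wv):
    """Single-head self-attention over feature-map locations.

    x:  (C, H, W) convolutional feature map
    W*: (C, C) projection weights (the 1x1-conv equivalents for Q, K, V)
    """
    c, h, w = x.shape
    f = x.reshape(c, h * w).T                  # (HW, C): one token per location
    q, k, v = f @ Wq, f @ Wk, f @ Wv
    logits = q @ k.T / np.sqrt(c)              # all-pairs location similarity
    attn = np.exp(logits - logits.max(axis=-1, keepdims=True))
    attn /= attn.sum(axis=-1, keepdims=True)   # row-wise softmax
    out = attn @ v                             # (HW, C), globally mixed context
    return x + out.T.reshape(c, h, w)          # residual keeps local features
```

Because every location attends to every other, even a few-pixel marker can draw on context from the whole image, which is the motivation for adding global features to a convolutional detector.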

언어-기반 제로-샷 물체 목표 탐색 이동 작업들을 위한 인공지능 기저 모델들의 활용 (Utilizing AI Foundation Models for Language-Driven Zero-Shot Object Navigation Tasks)

  • 최정현;백호준;박찬솔;김인철
    • 로봇학회논문지 / Vol. 19 No. 3 / pp.293-310 / 2024
  • In this paper, we propose an agent model for Language-Driven Zero-Shot Object Navigation (L-ZSON) tasks, which takes a free-form language description of an unseen target object and navigates an unfamiliar environment to find it. In general, an L-ZSON agent should be able to visually ground the target object by understanding its free-form language description and recognizing the corresponding visual object in camera images. Moreover, the agent should also be able to build a rich spatial context map of the unknown environment and decide on efficient exploration actions based on the map until the target object appears in its field of view. To address these challenges, we propose AML (Agent Model for L-ZSON), a novel L-ZSON agent model that makes effective use of AI foundation models such as Large Language Models (LLMs) and Vision-Language Models (VLMs). To tackle the visual grounding of the target object description, our agent model employs GLEE, a VLM pretrained to locate and identify arbitrary objects in images and videos in open-world scenarios. To address the exploration policy, the proposed agent model leverages the commonsense knowledge of an LLM to make sequential navigation decisions. Through various quantitative and qualitative experiments on RoboTHOR, a 3D simulation platform, and PASTURE, an L-ZSON benchmark dataset, we show the superior performance of the proposed agent model.
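The overall agent loop — ground the target with a VLM, grow a spatial context map, and let an LLM pick the next exploration frontier — can be sketched as follows. All interfaces here (`observe`, `detect_target`, `describe_scene`, `choose_frontier`, `move`) are hypothetical placeholders for illustration, not the paper's actual components:

```python
def navigate(observe, detect_target, describe_scene, choose_frontier, move,
             max_steps=50):
    """Hypothetical L-ZSON-style control loop.

    observe()         -> (image, pose) from the environment
    detect_target(im) -> bounding box if the target is grounded, else None (VLM)
    describe_scene(im)-> labels of visible objects (VLM, open vocabulary)
    choose_frontier(m)-> next exploration goal from the map (LLM commonsense)
    move(goal)        -> issue a navigation action
    """
    semantic_map = []                          # accumulated spatial context
    for _ in range(max_steps):
        image, pose = observe()
        box = detect_target(image)             # VLM grounds the description
        if box is not None:
            move(box)                          # target visible: go to it
            return True
        semantic_map.append((pose, describe_scene(image)))
        move(choose_frontier(semantic_map))    # commonsense-guided exploration
    return False
```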

A Novel Approach for Object Detection in Illuminated and Occluded Video Sequences Using Visual Information with Object Feature Estimation

  • Sharma, Kajal
    • IEIE Transactions on Smart Processing and Computing / Vol. 4 No. 2 / pp.110-114 / 2015
  • This paper reports a novel object-detection technique for video sequences. The proposed algorithm detects objects in illuminated and occluded videos using object features and a neural network. It consists of two functional modules: region-based object feature extraction and continuous detection of objects in video sequences using region features. The scheme is proposed as an enhancement of Lowe's scale-invariant feature transform (SIFT) object detection method and addresses the high computation time of feature generation in SIFT. The improvement is achieved by region-based feature classification of the objects to be detected; an optimal neural-network-based feature reduction is presented to reduce the object region feature dataset, with winner-pixel estimation between frames of the video sequence. Simulation results show that the proposed scheme achieves better overall performance than other object-detection techniques, and that region-based feature detection is faster than other recent techniques.
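As background, standard SIFT matching with Lowe's ratio test, applied only to descriptors drawn from a candidate object region, illustrates why region-based classification cuts matching cost: fewer descriptors enter the nearest-neighbour search. This is a generic sketch of the standard technique, not the paper's network-based reduction:

```python
import numpy as np

def match_region_features(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test.

    desc_a, desc_b: (N, D) descriptor arrays, ideally pre-filtered to the
    candidate object region so the quadratic matching stays cheap.
    Returns a list of (index in desc_a, index in desc_b) matches.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distances to all of b
        j, k = np.argsort(dists)[:2]                 # best and second best
        if dists[j] < ratio * dists[k]:              # ratio test: unambiguous?
            matches.append((i, j))
    return matches
```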

제품형태 지각에 있어서 시각적 멘탈모델의 영향에 관한 연구 (A Study on the influence of Visual Mental model in human to percept product form)

  • 오해춘
    • 디자인학연구 / Vol. 15 No. 1 / pp.407-414 / 2002
  • Humans process information more efficiently by using mental models in the course of recognizing an object. If we can identify the mental model people use while making sense of an object, we can predict how they will perceive it. Are mental models likewise used in perceiving visual objects? If so, we could analyze how users will understand a new design. To test whether humans understand new objects through visual mental models, this study used the side view of a 2000cc-class automobile as the stimulus: group A was shown a 100% preliminary stimulus followed by an experimental stimulus stretched to 120%, while group B was shown only the 120% experimental stimulus. The hypothesis that group A would perceive the experimental stimulus as longer than group B would was found to be statistically significant. This demonstrates that humans also use mental models when perceiving visual objects. The implication for industrial design is that knowing consumers' visual mental model of existing products makes it possible to anticipate accurately how a proposed design alternative will be understood.


미디어테크놀로지의 발전에 따른 시각언어와 시각테크놀로지의 고찰 (An Observation of the Visual Language and the Visual Technology according to the Media Technology)

  • 신청우
    • 디자인학연구 / Vol. 17 No. 2 / pp.15-22 / 2004
  • Today's complex visual culture, driven by the development of digital technology, is a vastly expanded visual world of images, graphics, photography, film, and television; because it conveys meaning with embedded sound and text as well, it has a multimedia character that transmits information and communicates beyond ordinary spoken or written language. The way we see these diverse images is inseparably tied to language, and the imaginary order of image and vision is constituted in culturally and historically specific ways. Since language differs with the society, culture, and history of its era, visual experience can hardly be universal if it is even partly mediated linguistically; the linguistic order therefore plays a large role in forming and defining the socio-cultural differences among visual systems. Alongside visual language, a variety of visual and optical devices have also exerted great influence throughout history: these visual technologies are concrete material practices that determine how the subject relates to its visible objects within the visible world. Visual language thus combines the dimension of the representation of images with the dimension of visual technology as a series of historical, material, and institutional practices, and this combination has determined the social mode of seeing the object world within a given visual regime. Accordingly, this study, following the concepts and characteristics that have changed with the development of media technology, understands visual language as social and historical in character and explains it on the level of visual language as representation and of visual technology as institutional and material practice. Ultimately, the function of a visual technology and its influence on modes of seeing cannot be explained by its technical elements alone and cannot be separated from the discursive, material, and institutional practices bound up with it; the possibilities contained in a particular technology's technical elements are never realized as such, but always with their effects mediated and constrained by the social context.


Optical flow를 이용한 Object perception system 구성에 대한 연구 (The study on design of object perception system by optical flow)

  • 이형국;정진현
    • 제어로봇시스템학회:학술대회논문집 / 제어로봇시스템학회 1997년도 한국자동제어학술회의논문집; 한국전력공사 서울연수원; 17-18 Oct. 1997 / pp.56-59 / 1997
  • A vision system mainly consists of three parts, among them perception and action. The perception part detects visual targets in the surrounding environment. Block-based motion estimation with compensation is one of the popular approaches, but it lacks accuracy. A hierarchical gradient-based optical flow method is used to reduce the time delay of optical flow computation.
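The gradient method mentioned above can be illustrated with a classic Lucas-Kanade-style flow estimate at a single pixel, which solves the brightness-constancy constraint over a small window (a textbook sketch, not the paper's hierarchical implementation):

```python
import numpy as np

def lucas_kanade_flow(I0, I1, y, x, r=2):
    """Gradient-based optical flow (u, v) at pixel (y, x).

    Solves Ix*u + Iy*v = -It by least squares over a (2r+1)^2 window,
    using central differences for the spatial gradients.
    """
    Ix = (np.roll(I0, -1, 1) - np.roll(I0, 1, 1)) / 2.0   # d/dx
    Iy = (np.roll(I0, -1, 0) - np.roll(I0, 1, 0)) / 2.0   # d/dy
    It = I1 - I0                                          # temporal gradient
    sl = (slice(y - r, y + r + 1), slice(x - r, x + r + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)        # solve A @ [u, v] = b
    return u, v
```

The hierarchical (coarse-to-fine) variant runs this on a pyramid of downsampled images so that large displacements stay within the small-motion assumption of the gradient constraint.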


객체 추적을 위한 보틀넥 기반 Siam-CNN 알고리즘 (Bottleneck-based Siam-CNN Algorithm for Object Tracking)

  • 임수창;김종찬
    • 한국멀티미디어학회논문지 / Vol. 25 No. 1 / pp.72-81 / 2022
  • Visual object tracking is known as one of the most fundamental problems in computer vision: it localizes the region of a target object with a bounding box in a video. In this paper, a custom CNN is created to extract object features carrying strong and varied information. The network is constructed as a Siamese network for use as a feature extractor. The input images pass through convolution blocks composed of bottleneck layers, which emphasize the features. The feature maps of the target object and the search area, extracted by the Siamese network, are fed into a local proposal network, which estimates the object region from the feature maps. The performance of the tracking algorithm was evaluated on the OTB2013 dataset, using success plots and precision plots as evaluation metrics. The experiments achieved 0.611 on the success plot and 0.831 on the precision plot.
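As context for the Siamese feature extractor, a SiamFC-style cross-correlation head — a standard way to score a template feature against a search-area feature — can be sketched as follows (an illustrative stand-in for how Siamese trackers localize the target, not this paper's proposal network):

```python
import numpy as np

def siamese_score_map(template_feat, search_feat):
    """Cross-correlate a template feature over a larger search feature.

    template_feat: (C, th, tw), search_feat: (C, sh, sw), both from the
    same (shared-weight) feature extractor. Each output cell is the inner
    product of the template with one window of the search feature; the
    peak marks the most likely target location.
    """
    c, th, tw = template_feat.shape
    _, sh, sw = search_feat.shape
    out = np.empty((sh - th + 1, sw - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = search_feat[:, i:i + th, j:j + tw]
            out[i, j] = np.sum(window * template_feat)
    return out
```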

Image classification and captioning model considering a CAM-based disagreement loss

  • Yoon, Yeo Chan;Park, So Young;Park, Soo Myoung;Lim, Heuiseok
    • ETRI Journal / Vol. 42 No. 1 / pp.67-77 / 2020
  • Image captioning has received significant interest in recent years, and notable results have been achieved. Most previous approaches have focused on generating visual descriptions from images, whereas a few have exploited visual descriptions for image classification. This study demonstrates that good performance can be achieved on both description generation and image classification through an end-to-end joint learning approach with a loss function that encourages the two tasks to reach a consensus. Given images and visual descriptions, the proposed model learns a multimodal intermediate embedding that can represent both the textual and visual characteristics of an object; sharing this multimodal embedding improves performance on both tasks. Through a novel loss function based on class activation mapping, which localizes the discriminative image region of a model, a higher score is achieved when the captioning and classification models reach a consensus on the key parts of the object. Using the proposed model, we establish substantially improved performance on each task on the UCSD Birds and Oxford Flowers datasets.
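The class activation mapping underlying the proposed loss can be sketched in its standard form (Zhou et al.'s CAM; the min-max normalization step here is an assumption for illustration):

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Standard CAM: weight the final conv feature maps by one class's
    classifier weights to localize its discriminative region.

    feature_maps: (C, H, W) activations of the last conv layer
    fc_weights:   (num_classes, C) weights of the final linear classifier
    Returns an (H, W) map normalized to [0, 1].
    """
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)  # (H, W)
    cam -= cam.min()
    return cam / (cam.max() + 1e-12)
```

A disagreement loss in this spirit would compare the CAMs produced under the captioning and classification heads and penalize mismatching peaks, pushing both tasks to attend to the same object parts.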