• Title/Summary/Keyword: virtual object recognition

Object Recognition and Target Tracking Using Motion Synchronization between Virtual and Real Robots (가상로봇과 실제로봇 사이의 운동 동기화를 통한 물체 인식 및 목표물 추적방안)

  • Ahn, Hyeo Gyeong; Kang, Hyeon Jun; Kim, Jin Beom; Jung, Ji Won; Ok, Seo Won; Kim, Dong Hwan
    • Journal of the Korean Society of Manufacturing Technology Engineers, v.26 no.1, pp.20-29, 2017
  • This paper introduces motion synchronization between a developed real robot and its virtual counterpart for object recognition and target tracking. ASUS's XTION PRO Live is used as the sensor and configured to recognize walls and obstacles and to perceive objects. To create the virtual environment, Unity 3D is associated with the real robot, and the virtual object is controlled with an input device. A Bluetooth serial communication module provides wireless communication between the PC and the real robot: the motion information of the user-controlled virtual object is sent to the robot, which then moves in the same way according to that information. Through this motion synchronization, two scenarios that map the real space and current object information onto virtual objects and space were demonstrated, yielding good agreement between the two spaces.
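
The PC-to-robot link described above (virtual object motion forwarded over a Bluetooth serial connection and mirrored by the real robot) could look roughly like the following sketch. It assumes the pyserial package and a hypothetical comma-separated `x,y,theta` message format; the abstract does not specify the actual protocol or port settings.

```python
# Minimal sketch of the PC-side sender: forward the virtual object's pose
# to the robot over a Bluetooth serial port (message format is assumed).
import serial  # pyserial

def send_pose(port: serial.Serial, x: float, y: float, theta: float) -> None:
    """Encode the virtual object's pose and write it to the serial link."""
    message = f"{x:.3f},{y:.3f},{theta:.3f}\n"   # hypothetical CSV protocol
    port.write(message.encode("ascii"))

if __name__ == "__main__":
    # Port name and baud rate are placeholders for the Bluetooth SPP link.
    with serial.Serial("COM5", baudrate=115200, timeout=1) as bt:
        send_pose(bt, x=0.25, y=-0.10, theta=1.57)  # robot mirrors this pose
```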

Realization of a two dimensional Haptic Interfacing Apparatus for Virtual Object Recognition Experiments (가상물체 인식 실험을 위한 2차원 Haptic 인터페이스 장치의 구현)

  • Lee, Joon-Cheol; Jang, Tae-Jeong
    • Journal of Industrial Technology, v.19, pp.415-421, 1999
  • In this paper, a 2D X-Y table with two symmetrical axes and a force sensing device are constructed, which together comprise a 2D haptic interfacing apparatus. Two DC motors actuate the two axes of the table, and two precision encoders sense the position of each axis. Four PZTs sense the direction and magnitude of the 2D force applied to the force sensing device by the user. The performance of the 2D haptic interface device is tested through 2D virtual object recognition experiments.
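
The abstract does not give the PZT layout, but one common arrangement places an opposing pair of sensors per axis of the handle. Under that assumption only, the direction and magnitude of the applied 2D force could be recovered roughly as in this sketch.

```python
# Sketch: recover a 2D force vector from four PZT readings, assuming the
# sensors sit on the +x, -x, +y and -y sides of the handle (the actual
# sensor layout is not stated in the abstract).
import math

def force_from_pzt(px_pos: float, px_neg: float, py_pos: float, py_neg: float):
    """Return (magnitude, direction in radians) of the applied 2D force."""
    fx = px_pos - px_neg          # net force along x from opposing sensors
    fy = py_pos - py_neg          # net force along y from opposing sensors
    return math.hypot(fx, fy), math.atan2(fy, fx)

if __name__ == "__main__":
    mag, ang = force_from_pzt(0.8, 0.1, 0.3, 0.2)
    print(f"|F| = {mag:.2f}, angle = {math.degrees(ang):.1f} deg")
```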
A study of user performed Virtual Space Storybook (사용자 참여 가상공간 스토리북 구현)

  • Park, Su Jin; Jung, Moon Ryul
    • Journal of the Korea Computer Graphics Society, v.25 no.3, pp.115-122, 2019
  • In this study, we designed and tested an artificial-intelligence-based virtual space storybook. The proposed virtual space concept, a storybook with the characteristics of Augmented Virtuality, proceeds through several steps. First, a user brings a real object into the virtual space, where it is recognized by artificial-intelligence-based object-recognition software. Second, once the object is recognized, a corresponding virtual 3D model is augmented into the virtual space and rendered. Finally, the software projects the virtual space storybook onto a desk, where users can touch and select real objects. The storybook thus realizes a new story-making technique by applying the Augmented Virtuality concept, in which real objects are augmented on the basis of a virtual space. To confirm this, we conducted a user test with the virtual space storybook: users could distinguish between real objects and virtual images and understood the process of putting real objects into the virtual space very well.
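
A minimal sketch of the recognition-to-augmentation step described above, assuming a hypothetical classifier callback, a label-to-asset catalog, and a scene-placement callback; the paper's actual recognition software and 3D asset pipeline are not named in the abstract.

```python
# Sketch: map a recognized real-object label to a 3D model asset that is
# then placed (augmented) in the virtual scene. The classifier, catalog,
# and scene API below are placeholders, not the paper's implementation.
from typing import Callable, Dict, Optional

MODEL_CATALOG: Dict[str, str] = {      # hypothetical label -> asset path
    "teddy_bear": "assets/teddy_bear.fbx",
    "toy_car": "assets/toy_car.fbx",
}

def augment_object(image: object,
                   classify: Callable[[object], str],
                   place_in_scene: Callable[[str], None]) -> Optional[str]:
    """Recognize the real object and insert its 3D counterpart in the scene."""
    label = classify(image)                 # AI-based object recognition
    asset = MODEL_CATALOG.get(label)
    if asset is None:                       # unknown object: no augmentation
        return None
    place_in_scene(asset)                   # render the model in the virtual space
    return asset
```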

YOLO based Optical Music Recognition and Virtual Reality Content Creation Method (YOLO 기반의 광학 음악 인식 기술 및 가상현실 콘텐츠 제작 방법)

  • Oh, Kyeongmin; Hong, Yoseop; Baek, Geonyeong; Chun, Chanjun
    • Smart Media Journal, v.10 no.4, pp.80-90, 2021
  • We propose applying deep-learning-based optical music recognition (OMR) to VR games. A YOLO v5 model detects the musical objects on the score, and a Hough transform is employed to recover objects the detector misses by adjusting the staff size. The output files are analyzed to obtain the BPM, the maximum combo count, and the musical notes used in the VR game, and Object Pooling is applied for resource management to prevent a backlog of notes. With this approach, VR games can be produced from music elements derived by optical music recognition, expanding its utilization while providing VR content.
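
The Object Pooling step mentioned above (reusing note objects instead of allocating and destroying them per spawn) could be sketched as follows; the note class and pool size here are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the Object Pooling pattern used to avoid a backlog of note
# objects: notes are pre-allocated, handed out on demand, and returned
# to the pool instead of being created and destroyed for every spawn.
from collections import deque

class Note:
    def __init__(self) -> None:
        self.active = False
        self.pitch = None

class NotePool:
    def __init__(self, size: int = 64) -> None:          # pool size is illustrative
        self._free = deque(Note() for _ in range(size))

    def acquire(self, pitch: str) -> Note:
        note = self._free.popleft() if self._free else Note()
        note.active, note.pitch = True, pitch
        return note

    def release(self, note: Note) -> None:
        note.active, note.pitch = False, None
        self._free.append(note)

if __name__ == "__main__":
    pool = NotePool()
    n = pool.acquire("C4")   # spawn a note without a new allocation
    pool.release(n)          # recycle it once it leaves the play area
```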

In-Vehicle AR-HUD System to Provide Driving-Safety Information

  • Park, Hye Sun; Park, Min Woo; Won, Kwang Hee; Kim, Kyong-Ho; Jung, Soon Ki
    • ETRI Journal, v.35 no.6, pp.1038-1047, 2013
  • Augmented reality (AR) is currently being applied actively to commercial products, and various types of intelligent AR systems combining both the Global Positioning System and computer-vision technologies are being developed and commercialized. This paper suggests an in-vehicle head-up display (HUD) system that is combined with AR technology. The proposed system recognizes driving-safety information and offers it to the driver. Unlike existing HUD systems, the system displays information registered to the driver's view and is developed for the robust recognition of obstacles under bad weather conditions. The system is composed of four modules: a ground obstacle detection module, an object decision module, an object recognition module, and a display module. The recognition ratio of the driving-safety information obtained by the proposed AR-HUD system is about 73%, and the system has a recognition speed of about 15 fps for both vehicles and pedestrians.
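
The four modules named above could be composed as a simple per-frame pipeline. Only the module order (detection, decision, recognition, display) comes from the abstract; the function names and return types in this sketch are assumptions.

```python
# Sketch: per-frame composition of the four AR-HUD modules listed in the
# abstract. Each stage is a placeholder callable supplied by the caller.
from typing import Any, Callable, List

def process_frame(frame: Any,
                  detect_ground_obstacles: Callable[[Any], List[Any]],
                  decide_objects: Callable[[List[Any]], List[Any]],
                  recognize_objects: Callable[[List[Any]], List[Any]],
                  display: Callable[[List[Any]], None]) -> None:
    candidates = detect_ground_obstacles(frame)   # ground obstacle detection module
    selected = decide_objects(candidates)         # object decision module
    labeled = recognize_objects(selected)         # object recognition (vehicle/pedestrian)
    display(labeled)                              # overlay registered to the driver's view
```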

Research on Intelligent Anomaly Detection System Based on Real-Time Unstructured Object Recognition Technique (실시간 비정형객체 인식 기법 기반 지능형 이상 탐지 시스템에 관한 연구)

  • Lee, Seok Chang; Kim, Young Hyun; Kang, Soo Kyung; Park, Myung Hye
    • Journal of Korea Multimedia Society, v.25 no.3, pp.546-557, 2022
  • Recently, the demand for interpreting image data with artificial intelligence has been rapidly increasing in various fields. Deep-learning-based object recognition and detection techniques are mainly used, and integrated video analysis for recognizing unstructured objects is a particularly important problem. In natural or social disasters, the objects of interest have unstructured shapes, so an object-recognition stage alone is not sufficient. In this paper, we propose an intelligent integrated video-analysis system that recognizes unstructured objects based on video turning points and object detection. We also introduce a method to apply and evaluate object recognition using virtual augmented images generated from 2D to 3D through a GAN.
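
A rough sketch of combining the two cues mentioned above, under the assumption that a "video turning point" corresponds to a large inter-frame histogram change; the abstract does not define the measure, so the metric and threshold here are purely illustrative.

```python
# Sketch: run (expensive) object detection only on frames flagged as video
# turning points, approximated here by a large inter-frame histogram change.
import cv2
import numpy as np

def is_turning_point(prev_gray: np.ndarray, gray: np.ndarray,
                     threshold: float = 0.4) -> bool:
    h1 = cv2.calcHist([prev_gray], [0], None, [64], [0, 256])
    h2 = cv2.calcHist([gray], [0], None, [64], [0, 256])
    cv2.normalize(h1, h1)
    cv2.normalize(h2, h2)
    # Correlation near 1 means similar frames; low values mark a sharp change.
    return cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL) < (1.0 - threshold)

def analyze(video_path: str, detect_objects) -> None:
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY) if ok else None
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if is_turning_point(prev_gray, gray):
            detect_objects(frame)        # run the detector only where it matters
        prev_gray = gray
    cap.release()
```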

Virtual Block Game Interface based on the Hand Gesture Recognition (손 제스처 인식에 기반한 Virtual Block 게임 인터페이스)

  • Yoon, Min-Ho; Kim, Yoon-Jae; Kim, Tae-Young
    • Journal of Korea Game Society, v.17 no.6, pp.113-120, 2017
  • With the development of virtual reality technology, user-friendly hand-gesture interfaces for natural interaction with virtual 3D objects have been studied actively in recent years. Most earlier studies on hand-gesture interfaces use relatively simple hand gestures. In this paper, we suggest an intuitive hand-gesture interface for interaction with 3D objects in virtual reality applications. For hand-gesture recognition, we first preprocess the various hand data and classify them with a binary decision tree. The classified data are re-sampled and converted to a chain code, and hand feature data are then constructed from histograms of the chain code. Finally, the input gesture is recognized from the feature data by MCSVM-based machine learning. To test the proposed hand-gesture interface, we implemented a 'Virtual Block' game. Our experiments showed a recognition ratio of about 99.2% for 16 kinds of command gestures, and the interface proved more intuitive and user-friendly than a conventional mouse interface.
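
The feature pipeline described above (trajectory, 8-direction chain code, histogram, multi-class SVM) could be sketched as follows. The re-sampling step is omitted, the toy gestures and SVM parameters are assumptions, and scikit-learn's SVC stands in for the paper's MCSVM.

```python
# Sketch of the gesture feature pipeline: a 2D point trajectory is converted
# to an 8-direction chain code, summarized as a direction histogram, and
# classified with a multi-class SVM.
import numpy as np
from sklearn.svm import SVC

def chain_code(points: np.ndarray) -> np.ndarray:
    """Map each step between consecutive points to one of 8 directions."""
    deltas = np.diff(points, axis=0)
    angles = np.arctan2(deltas[:, 1], deltas[:, 0])            # -pi..pi
    return np.round(angles / (np.pi / 4)).astype(int) % 8      # codes 0..7

def histogram_feature(points: np.ndarray) -> np.ndarray:
    hist = np.bincount(chain_code(points), minlength=8).astype(float)
    return hist / max(hist.sum(), 1.0)                         # normalized histogram

# Toy trajectories for two gesture classes, just to show the training flow.
rng = np.random.default_rng(0)
right = [np.cumsum(rng.normal([1, 0], 0.1, (30, 2)), axis=0) for _ in range(20)]
up = [np.cumsum(rng.normal([0, 1], 0.1, (30, 2)), axis=0) for _ in range(20)]
X = np.array([histogram_feature(t) for t in right + up])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(kernel="rbf", C=10.0).fit(X, y)            # one-vs-one multi-class SVM
print(clf.predict([histogram_feature(right[0])]))    # expected: [0]
```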

Recognition method of multiple objects for virtual touch using depth information (깊이 정보를 이용한 가상 터치에서 다중 객체 인식 방법)

  • Kwon, Soon-Kak; Lee, Dong-Seok
    • Journal of Korea Society of Industrial Information Systems, v.21 no.1, pp.27-34, 2016
  • In this paper, we propose a method of recognizing multi-touch input in a virtual touch scheme. Compared with physical touch, virtual touch has the advantage that only a simple depth camera needs to be installed and that it can be implemented at low cost, since an object can be extracted exactly from the difference between the depth values of the object and the background. However, the accuracy of multi-touch recognition has been low. This paper presents a method to increase multi-touch accuracy through binarization, labelling, and object-tracking algorithms for multi-object recognition. Simulation results show that the proposed method can support a variety of multi-touch events.
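
The binarization and labelling steps mentioned above could be sketched with OpenCV as follows, assuming a background depth map captured in advance; the thresholds are illustrative and the object-tracking step is omitted.

```python
# Sketch: extract multiple touch objects from a depth image by binarizing
# the object/background depth difference and labelling connected components.
import cv2
import numpy as np

def find_touch_points(depth: np.ndarray, background: np.ndarray,
                      min_diff_mm: int = 30, min_area: int = 50):
    diff = background.astype(np.int32) - depth.astype(np.int32)
    mask = (diff > min_diff_mm).astype(np.uint8) * 255         # binarization
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    points = []
    for i in range(1, n):                                      # label 0 is background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            points.append(tuple(centroids[i]))                 # one touch per blob
    return points

if __name__ == "__main__":
    bg = np.full((240, 320), 1000, dtype=np.uint16)            # synthetic test frame
    frame = bg.copy()
    frame[100:120, 50:70] = 950                                # two "fingers"
    frame[100:120, 200:220] = 950
    print(find_touch_points(frame, bg))                        # two centroids
```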

Explosion Casting: An Efficient Selection Method for Overlapped Virtual Objects in Immersive Virtual Environments (몰입 가상현실 환경에서 겹쳐진 가상객체들의 효율적인 선택을 위한 펼침 시각화를 통한 객체 선택 방법)

  • Oh, JuYoung; Lee, Jun
    • The Journal of the Korea Contents Association, v.18 no.3, pp.11-18, 2018
  • To interact with a virtual object in an immersive virtual environment, the target object should be selected quickly and accurately. Conventional 3D ray casting using the direction of the user's hand or head allows quick selection, but accuracy suffers when the target is occluded by other objects. In this paper, we propose a region-of-interest-based selection method that enables selection among occluded objects by combining gaze tracking and hand-gesture recognition. When a user looks at a group of overlapping objects, the proposed method recognizes the gaze input and sets a region of interest from it. If the user then gives an activation hand gesture, the system relocates and visualizes all objects in the region on a virtual active window, and the user selects one of them with a selection gesture. Our experiment verified that users can select objects correctly and accurately.
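
The relocation step of the method above (spreading the occluded objects over a virtual active window) could be sketched as a simple grid layout; the window origin and spacing here are assumptions not given in the abstract.

```python
# Sketch: lay out the objects inside the gaze-selected region of interest on
# a flat "active window" grid so each one can be picked without occlusion.
import math
from typing import Dict, List, Tuple

def explode_layout(object_ids: List[str],
                   window_origin: Tuple[float, float, float] = (0.0, 1.5, 1.0),
                   spacing: float = 0.3) -> Dict[str, Tuple[float, float, float]]:
    """Return a non-overlapping grid position for every occluded object."""
    cols = max(1, math.ceil(math.sqrt(len(object_ids))))
    ox, oy, oz = window_origin
    layout = {}
    for idx, obj in enumerate(object_ids):
        row, col = divmod(idx, cols)
        layout[obj] = (ox + col * spacing, oy - row * spacing, oz)
    return layout

if __name__ == "__main__":
    print(explode_layout(["cup", "book", "phone", "key"]))
```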

Interaction with Agents in the Virtual Space Combined by Recognition of Face Direction and Hand Gestures (얼굴 방향과 손 동작 인식을 통합한 가상 공간에 존재하는 Agent들과의 상호 작용)

  • Jo, Gang-Hyeon; Kim, Seong-Eun; Lee, In-Ho
    • Journal of the Institute of Electronics Engineers of Korea CI, v.39 no.3, pp.62-78, 2002
  • In this paper, we describe a system for interacting with agents in a virtual space. It consists of an analysis system that analyzes human gestures and an interaction system that uses the analyzed information to interact with the agents. The analysis system extracts the head and hand regions from image sequences of the operator's continuous behavior captured with CCD cameras. In the interaction system, we construct a virtual space containing an avatar that incarnates the operator, an autonomous object (a puppy), and non-autonomous objects such as a table, a door, a window, and a ball. A recognized gesture is transmitted to the avatar in the virtual space, which then transits to the next state according to a state transition diagram, a graph in which each state is represented as a node and states are connected by links. In the virtual space, the agent linked to the avatar can open and close the window and the door, grab or move an object such as the ball, give commands to the puppy, and respond to the puppy's behavior.
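
The state transition diagram described above (gesture inputs driving the avatar from node to node along links) could be represented as a simple transition table; the specific states and gesture names below are illustrative assumptions drawn loosely from the actions named in the abstract.

```python
# Sketch: a gesture-driven state transition table for the avatar. Each state
# is a node, and each (state, gesture) pair links to the next state.
TRANSITIONS = {
    ("idle", "point_door"): "at_door",
    ("at_door", "push"): "door_open",
    ("door_open", "pull"): "at_door",
    ("idle", "point_ball"): "holding_ball",
    ("holding_ball", "release"): "idle",
    ("idle", "call_puppy"): "commanding_puppy",
    ("commanding_puppy", "wave"): "idle",
}

def next_state(state: str, gesture: str) -> str:
    """Follow the link for (state, gesture); stay in the same state if none exists."""
    return TRANSITIONS.get((state, gesture), state)

if __name__ == "__main__":
    state = "idle"
    for gesture in ["point_door", "push", "pull"]:
        state = next_state(state, gesture)
        print(gesture, "->", state)
```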