• Title/Summary/Keyword: virtual sensors

258 search results

Introducing Depth Camera for Spatial Interaction in Augmented Reality (증강현실 기반의 공간 상호작용을 위한 깊이 카메라 적용)

  • Yun, Kyung-Dahm;Woo, Woon-Tack
    • Proceedings of the Korean HCI Society Conference
    • /
    • 2009.02a
    • /
    • pp.62-67
    • /
    • 2009
  • Many interaction methods for augmented reality have attempted to reduce the difficulty of tracking interaction subjects by either allowing only a limited set of three-dimensional inputs or relying on auxiliary devices such as data gloves and paddles with fiducial markers. We propose Spatial Interaction (SPINT), a noncontact passive method that observes the occupancy state of the spaces around target virtual objects to interpret user input. A depth-sensing camera is introduced for constructing the virtual space sensors and then manipulating the augmented space for interaction. The proposed method does not require any wearable device for tracking user input and allows versatile interaction types. The depth perception anomaly caused by incorrect occlusion between real and virtual objects is also minimized for more precise interaction. Exhibits of dynamic contents such as the Miniature AR System (MINARS) could benefit from this fluid 3D user interface.

  • PDF
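The occupancy-based space sensor described above can be sketched roughly as follows: back-project the depth image into camera-space points, then measure what fraction of them fall inside a virtual sensor volume placed around a virtual object. The intrinsics, box bounds, and function names below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Convert a depth image (meters) into a camera-space point cloud
    using the pinhole model with focal lengths fx, fy and center cx, cy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def occupancy(points, box_min, box_max):
    """Fraction of points lying inside an axis-aligned virtual space sensor."""
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    return inside.mean()
```

A gesture recognizer would then threshold this occupancy fraction per sensor box to decide whether the user's hand has entered the space around a virtual object.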

Practical Node Deployment Scheme Based on Virtual Force for Wireless Sensor Networks in Complex Environment

  • Lu, Wei;Yang, Yuwang;Zhao, Wei;Wang, Lei
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.3
    • /
    • pp.990-1013
    • /
    • 2015
  • Deploying sensors into a target region is a key issue in building a wireless sensor network. Various deployment algorithms have been proposed, but most are evaluated under ideal conditions and therefore cannot reflect the real environment encountered during deployment. Moreover, it is almost impossible to evaluate an algorithm through practical deployment, because deploying a sensor network requires many nodes and some deployment areas are dangerous for humans. This paper proposes a deployment approach to solve these problems. Our approach relies on satellite images and the Virtual Force Algorithm (VFA). It first extracts topography and elevation information of the deployment area from high-resolution satellite images, and then deploys nodes on them with an improved VFA. Simulation results show that the coverage rate of our method is approximately 15% higher than that of the classical VFA in complex environments.
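The classical VFA that the improved method builds on can be sketched in a few lines: each node feels a repulsive force from neighbors closer than a threshold distance and an attractive force from those farther away, and moves a small step along the net force each iteration. The weights, threshold, and step size below are illustrative assumptions.

```python
import numpy as np

def vfa_step(positions, d_th, w_a=1.0, w_r=1.0, step=0.1):
    """One iteration of the classical Virtual Force Algorithm: nodes closer
    than d_th repel each other, nodes farther than d_th attract."""
    n = len(positions)
    forces = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            diff = positions[j] - positions[i]
            d = np.linalg.norm(diff)
            if d < 1e-9:
                continue  # coincident nodes: direction undefined
            unit = diff / d
            if d < d_th:
                forces[i] -= w_r * (d_th - d) * unit  # repulsive term
            elif d > d_th:
                forces[i] += w_a * (d - d_th) * unit  # attractive term
    return positions + step * forces
```

Iterating this step spreads clustered nodes apart and pulls isolated ones together, which is what drives the coverage improvement; the paper's contribution adds terrain and elevation constraints from satellite imagery on top of this basic force model.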

Generating a Ball Sport Scene in a Virtual Environment

  • Choi, Jongin;Kim, Sookyun;Kim, Sunjeong;Kang, Shinjin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.11
    • /
    • pp.5512-5526
    • /
    • 2019
  • In sports video games, especially ball games, motion capture techniques are used to reproduce ball-handling performances. The amount of motion data needed to create the different situations in which athletes exchange balls is bound to increase exponentially with resolution. This paper proposes how avatars in virtual worlds can not only imitate professional athletes in ball games, but also create and edit their actions effectively. First, various ball-handling movements are recorded using motion sensors; the actor does not need to control an actual ball, since imitating the motions is enough. Next, a motion is created by specifying a target to pass the ball to, and then performing the ball-handling motion in front of the motion sensor. The ball's holder then passes the ball to the user-specified target through a motion that imitates the user's, and the process is repeated. The proposed method can be used as a convenient user interface for motion-based games in which players handle balls.

Data-driven Adaptive Safety Monitoring Using Virtual Subjects in Medical Cyber-Physical Systems: A Glucose Control Case Study

  • Chen, Sanjian;Sokolsky, Oleg;Weimer, James;Lee, Insup
    • Journal of Computing Science and Engineering
    • /
    • v.10 no.3
    • /
    • pp.75-84
    • /
    • 2016
  • Medical cyber-physical systems (MCPS) integrate sensors, actuators, and software to improve patient safety and quality of healthcare. These systems introduce major challenges to safety analysis because the patient's physiology is complex, nonlinear, unobservable, and uncertain. To cope with the challenge that unidentified physiological parameters may exhibit short-term variance in certain clinical scenarios, we propose a novel run-time predictive safety monitoring technique that leverages a maximal model coupled with online training of a computational virtual subject (CVS) set. The proposed monitor predicts safety-critical events at run-time using only clinically available measurements. We apply the technique to a surgical glucose control case study. Evaluation on retrospective real clinical data shows that the algorithm achieves 96% sensitivity with a low average false alarm rate of 0.5 false alarms per surgery.
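The core monitoring idea, checking every virtual subject's predicted trajectory against a safety threshold, can be shown with a deliberately simplified sketch. The linear glucose model, the sensitivity parameters, and the 70 mg/dL hypoglycemia threshold below are assumptions chosen for illustration; they are not the paper's maximal model.

```python
def predict_glucose(g0, insulin_rate, sensitivity, horizon_min):
    """Toy linear virtual-subject model: glucose drifts down with insulin
    delivery at a subject-specific sensitivity (mg/dL per unit per minute)."""
    return g0 - sensitivity * insulin_rate * horizon_min

def safety_monitor(g0, insulin_rate, cvs_sensitivities, horizon_min, low=70.0):
    """Raise an alarm if ANY virtual subject in the CVS set predicts
    hypoglycemia (glucose below `low`) within the prediction horizon."""
    return any(
        predict_glucose(g0, insulin_rate, s, horizon_min) < low
        for s in cvs_sensitivities
    )
```

The worst-case "any subject" check is what makes the monitor conservative: it alarms whenever even one plausible parameterization of the patient predicts a safety violation.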

A Study on the Standard-interfaced Smart Farm Supporting Non-Standard Sensor and Actuator Nodes (비표준 센서 및 구동기 노드를 지원하는 표준사양 기반 스마트팜 연구)

  • Bang, Dae Wook
    • Journal of Information Technology Services
    • /
    • v.19 no.3
    • /
    • pp.139-149
    • /
    • 2020
  • There are now many different commercial weather sensors suitable for smart farms, and various smart farm devices are being developed and distributed by companies participating in the government-led smart farm expansion project. However, most do not comply with the standard specifications, which limits their use in smart farms. This paper proposes a connection structure for operating non-standard node devices in smart farms that follow the standard specifications for smart greenhouses. The connection structure is proposed in two forms: a virtual node module method and a virtual node wrapper method. In addition, the SoftFarm2.0 system, which complies with the standard specifications and supports non-standard smart farm devices, was experimentally operated to analyze the performance of implementations of the two methods. According to the analysis results, neither method significantly affects the performance of smart farm operation. Therefore, the method suitable for each non-standard smart farm device can be selected considering environmental constraints such as power, space, communication distance between the gateway and the node, and software openness. This will contribute greatly to the spread of smart farms by maximizing deployment cost savings.
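The virtual node wrapper method, as described, amounts to an adapter: the gateway sees a standard node interface while the wrapper translates a vendor-specific protocol underneath. The interface names and the CSV payload format below are hypothetical, chosen only to show the pattern.

```python
class StandardNode:
    """Hypothetical minimal form of the standard smart farm node interface."""
    def read(self) -> dict:
        raise NotImplementedError

class NonStandardSensor:
    """A vendor device that reports a raw CSV string, e.g. '23.5,61.0'
    (temperature in Celsius, relative humidity in percent)."""
    def poll(self) -> str:
        return "23.5,61.0"

class VirtualNodeWrapper(StandardNode):
    """Wraps a non-standard device so the gateway sees a standard node."""
    def __init__(self, device: NonStandardSensor):
        self.device = device

    def read(self) -> dict:
        # Translate the vendor payload into standard named fields.
        temp, hum = (float(v) for v in self.device.poll().split(","))
        return {"temperature": temp, "humidity": hum}
```

The virtual node module method would instead place this translation logic inside the gateway software itself rather than in a separate wrapping component; the trade-off between the two is exactly the power/space/distance/openness question the abstract raises.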

Virtual Sensor Verification Using Neural Network Theory of the Quadruped Robot (보행로봇의 신경망 이론을 이용한 가상센서 검증)

  • Ko, Kwang-Jin;Kim, Wan-Soo;Yu, Seung-Nam;Han, Chang-Soo
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.33 no.11
    • /
    • pp.1326-1331
    • /
    • 2009
  • The sensor data measured by a legged robot are used to recognize the physical environment or to control the robot's posture, so a robot's locomotion can be improved with such sensing information. Precise control of a robot requires highly accurate sensor data, but most sensors are expensive and are exposed to excessive loads in the field. These problems become serious when the prototype's practicality and mass-producibility, which are closely related to the unit costs of production and maintenance, are considered. In this paper, virtual sensor technology is proposed to address these problems, and various ways of applying neural network models, trained on data from an actual sensor and other hardware information, to a walking robot are presented. Finally, the possibility of replacing the ground reaction force sensor of a legged robot is verified.
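The virtual sensor idea reduces to fitting a model that predicts the expensive sensor's output from cheaper, already-available signals (joint angles, motor currents, and similar). The paper trains a neural network; a linear least-squares model stands in below purely to keep the sketch short, and the feature names are assumptions.

```python
import numpy as np

def train_virtual_sensor(X, y):
    """Fit a linear virtual sensor mapping cheap onboard signals X
    (e.g. joint angles, motor currents) to the target sensor reading y."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def virtual_sensor(X, w):
    """Predict the replaced sensor's reading from the cheap signals."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ w
```

Once the model matches the real ground reaction force sensor closely enough on held-out walking data, the physical sensor can be removed from production units, which is the cost argument the abstract makes.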

An Adaptive and Real-Time System for the Analysis and Design of Underground Constructions

  • Gutierrez, Marte
    • Geotechnical Engineering
    • /
    • v.26 no.9
    • /
    • pp.33-47
    • /
    • 2010
  • Underground constructions continue to challenge geotechnical engineers, yet they pose the best opportunities for the development and deployment of advanced technologies for analysis, design, and construction. The reason is that, by the very nature of underground constructions, more data and information on ground characteristics and response become available as construction progresses. However, due to several barriers, these data and information are rarely, if ever, used to modify and improve project design and construction during the construction stage. To enable the use of evolving real-time data and information, and to adaptively modify and improve design and construction, this paper presents an analysis and design system for underground projects called AMADEUS, which stands for Adaptive, real-time and geologic Mapping, Analysis and Design of Underground Space. AMADEUS relies on recent advances in information technology (IT), particularly in digital imaging, data management, visualization, and computation, to significantly improve the analysis, design, and construction of underground projects. Using IT and remote sensors, real-time data on geology and excavation response are gathered during construction with non-intrusive techniques that do not require expensive and time-consuming monitoring. The real-time data are then used to update geological and geomechanical models of the excavation, and to determine the optimal construction sequences and stages and the structural support. Virtual environment (VE) systems are employed to allow virtual walk-throughs inside an excavation, observe geologic conditions, perform virtual construction operations, and investigate the stability of the excavation via computer simulation to steer the next stages of construction.

  • PDF

Research on Cognitive Effects and Responsiveness of Smartphone-based Augmented Reality Navigation (스마트폰 증강현실 내비게이션의 인지능력과 호응도에 관한 연구)

  • Sohn, Min Gook;Lee, Seung Tae;Lee, Jae Yeol
    • Korean Journal of Computational Design and Engineering
    • /
    • v.19 no.3
    • /
    • pp.272-280
    • /
    • 2014
  • Most car navigation systems provide 2D or 3D virtual map-based driving guidance. An important issue is how to reduce the cognitive burden on the driver, who must interpret the abstracted information in terms of real-world driving. Recently, augmented reality (AR)-based navigation has been considered as a new way to reduce cognitive workload by superimposing guidance information onto the real-world scene captured by the camera. In particular, head-up displays (HUDs) are popular for implementing AR navigation. However, HUDs are too expensive to be installed in most cars, so HUD-based AR navigation is currently impractical for navigational assistance. Meanwhile, smartphones with advanced computing capability and various sensors have become widespread and also provide navigational assistance. This paper presents research on the cognitive effects and responsiveness of an AR navigation system through a comparative study with a conventional virtual map-based navigation system on the same smartphone. Both quantitative and qualitative studies were conducted to compare cognitive workload and responsiveness, respectively. The number of eye gazes at the navigation system is used to measure the cognitive effect, and questionnaires are used for qualitative analysis of the responsiveness.

Study on Scent Media Service in Virtual Reality (발향장치를 이용한 가상현실에서의 향 미디어 서비스)

  • Yu, Ok Hwan;Kim, Min Ku;Kim, Jeong-Do
    • Journal of Sensor Science and Technology
    • /
    • v.27 no.6
    • /
    • pp.414-420
    • /
    • 2018
  • To augment emotion and immersion in virtual reality (VR), technological research based on scent displays has increased in recent years. Extensive studies have enabled the development of methods to interface head-mounted displays (HMDs) with scent devices, and the potential of VR applications of this development was identified via several demonstrations in actual VR environments. Despite these efforts, more practical methods and conditions for scent display in VR environments are yet to be developed. To efficiently interface VR and scent, this study proposes three ways to set the position and conditions for scent display. The first is scent display using local positioning in the VR engine, the second is scent display using the relative distance and orientation between the user and an object in the VR environment, and the third is scent display using a time setting. In this study, we developed scent devices using a piezo actuator to validate the proposed methods and successfully conducted demonstrations and experiments.
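The second method (scent display driven by the relative distance and orientation between the user and an object) can be sketched as a simple intensity function: the scent fades with distance and with how far the object lies off the user's facing direction. The distance cutoff and angular falloff below are illustrative assumptions, not the paper's calibration.

```python
import math

def scent_intensity(user_pos, user_yaw_deg, obj_pos, max_dist=3.0):
    """Scale scent emission by distance and facing angle in the VR ground
    plane. user_pos/obj_pos are (x, z) pairs; yaw 0 means facing +z."""
    dx, dz = obj_pos[0] - user_pos[0], obj_pos[1] - user_pos[1]
    dist = math.hypot(dx, dz)
    if dist > max_dist:
        return 0.0  # too far: no scent
    angle = math.degrees(math.atan2(dx, dz)) - user_yaw_deg
    angle = abs((angle + 180) % 360 - 180)   # wrap to [0, 180] degrees
    facing = max(0.0, 1.0 - angle / 90.0)    # zero beyond 90 degrees off-axis
    return facing * (1.0 - dist / max_dist)  # linear distance falloff
```

The piezo actuator's emission rate would then be driven by this value each frame, so an object smelled stronger the closer and more directly the user faces it.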

Autonomous-Driving Vehicle Learning Environments using Unity Real-time Engine and End-to-End CNN Approach (유니티 실시간 엔진과 End-to-End CNN 접근법을 이용한 자율주행차 학습환경)

  • Hossain, Sabir;Lee, Deok-Jin
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.2
    • /
    • pp.122-130
    • /
    • 2019
  • Collecting rich but meaningful training data plays a key role in machine learning and deep learning research for self-driving vehicles. This paper introduces a detailed overview of existing open-source simulators that could be used for training self-driving vehicles. After reviewing the simulators, we propose a new, effective approach to building a synthetic autonomous vehicle simulation platform suitable for learning and training artificial intelligence algorithms. Specifically, we develop a synthetic simulator with various realistic situations and weather conditions that allow the autonomous shuttle to learn more realistic situations and handle unexpected events. The virtual environment mimics the activity of a real shuttle vehicle in the physical world. Instead of conducting the whole training experiment in the real physical world, scenarios in 3D virtual worlds are created to calculate the parameters and train the model. From the simulator, the user can obtain data for various situations and utilize it for training. Flexible options are available to choose sensors, monitor the output, and implement any autonomous driving algorithm. Finally, we verify the effectiveness of the developed simulator by implementing an end-to-end CNN algorithm for training a self-driving shuttle.
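An end-to-end network in this setting maps a camera image directly to a steering command. The toy forward pass below (one convolution, a ReLU, and a linear readout, in plain NumPy) only illustrates that data flow; the actual model would be a deeper CNN trained on the simulator's images, and all layer sizes here are arbitrary assumptions.

```python
import numpy as np

def conv2d(img, kernels, stride=2):
    """Valid strided convolution of a single-channel image with a kernel bank."""
    kh, kw = kernels.shape[1:]
    oh = (img.shape[0] - kh) // stride + 1
    ow = (img.shape[1] - kw) // stride + 1
    out = np.zeros((len(kernels), oh, ow))
    for k, ker in enumerate(kernels):
        for i in range(oh):
            for j in range(ow):
                patch = img[i*stride:i*stride+kh, j*stride:j*stride+kw]
                out[k, i, j] = np.sum(patch * ker)
    return out

def steering_net(img, kernels, w, b):
    """End-to-end forward pass: conv -> ReLU -> flatten -> linear steering angle."""
    feat = np.maximum(conv2d(img, kernels), 0.0).ravel()
    return float(feat @ w + b)
```

Training would fit `kernels`, `w`, and `b` by regressing the recorded steering angles against the simulator's camera frames, which is exactly the image-to-control mapping the abstract verifies.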