• Title/Summary/Keyword: camera interface


Characterization of stacked geotextile tube structure using digital image correlation

  • Dong-Ju Kim;Dong Geon Son;Jong-Sub Lee;Thomas H.-K. Kang;Tae Sup Yun;Yong-Hoon Byun
    • Computers and Concrete
    • /
    • v.31 no.5
    • /
    • pp.385-394
    • /
    • 2023
  • Displacement is an important indicator for evaluating the stability and failure mechanism of hydraulic structures. Digital image correlation (DIC) is a useful technique for measuring a three-dimensional displacement field with two cameras, without any contact with the test material. The objective of this study is to evaluate the behavior of stacked geotextile tubes using the DIC technique. Geotextile tubes are stacked to build a small-scale temporary dam model that excludes water from a specific area. The horizontal and vertical displacements of four stacked geotextile tubes are monitored with a dual-camera system as the upstream water level changes. The geotextile tubes are prepared with two different fill materials, and for each dam model the interface layers between upper and lower geotextile tubes are either unreinforced or reinforced with a cementitious binder. Experimental results show that as the upstream water level increases, the horizontal and vertical displacements at each layer of geotextile tubes initially increase and then remain almost constant until the next water-level increment. The displacement of the stacked geotextile tubes depends on the type of fill material and on the interfacial reinforcement with the cementitious binder. Thus, the proposed DIC technique can be effectively used to evaluate the behavior of a hydraulic structure consisting of geotextile tubes.
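
A minimal 2D sketch of the DIC matching step may help here: it tracks a small pixel subset between a reference and a deformed image using normalized cross-correlation. The paper's system uses a calibrated dual-camera setup to recover 3D fields; the file names, subset size, and search radius below are illustrative assumptions.

```python
# Simplified 2D digital image correlation (DIC): estimate the displacement of a
# square subset between a reference and a deformed grayscale image.
import cv2
import numpy as np

def subset_displacement(ref_img, def_img, center, half=15, search=30):
    """Return the integer-pixel (dx, dy) of the subset centered at `center`."""
    x, y = center
    subset = ref_img[y - half:y + half + 1, x - half:x + half + 1]
    # Search only a window around the subset's original location.
    x0, y0 = x - half - search, y - half - search
    side = 2 * (half + search) + 1
    window = def_img[y0:y0 + side, x0:x0 + side]
    res = cv2.matchTemplate(window, subset, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)      # location of best correlation
    return max_loc[0] - search, max_loc[1] - search

# Hypothetical frames of the tube surface before and after loading.
ref = cv2.imread("tube_ref.png", cv2.IMREAD_GRAYSCALE)
cur = cv2.imread("tube_loaded.png", cv2.IMREAD_GRAYSCALE)
print(subset_displacement(ref, cur, center=(320, 240)))
```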

Implementation of ROS-Based Intelligent Unmanned Delivery Robot System (ROS 기반 지능형 무인 배송 로봇 시스템의 구현)

  • Seong-Jin Kong;Won-Chang Lee
    • Journal of IKEEE
    • /
    • v.27 no.4
    • /
    • pp.610-616
    • /
    • 2023
  • In this paper, we implement an unmanned delivery robot system with a Robot Operating System (ROS)-based mobile manipulator and introduce the technologies employed in its implementation. The robot consists of a mobile base capable of autonomous navigation inside a building, including the use of an elevator, and a Selective Compliance Assembly Robot Arm (SCARA)-type manipulator equipped with a vacuum pump. The robot determines the position and orientation for picking up a package through image segmentation and corner detection using the camera on the manipulator. The proposed system provides a user interface, implemented through a web server linked to the application and ROS, for checking delivery status and the robot's real-time location, and it recognizes the shipment and its address at the delivery station using You Only Look Once (YOLO) and Optical Character Recognition (OCR). The effectiveness of the system is validated through delivery experiments conducted in a four-story building.
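
As a rough illustration of the pick-pose step described above, the sketch below segments the package by color thresholding (a stand-in for the paper's segmentation method) and uses the minimum-area rectangle of the largest contour to estimate a 2D position and grasp orientation for the manipulator; the threshold values and file name are assumptions.

```python
# Estimate a 2D pick pose (center and rotation) of a package from a camera frame.
import cv2

def package_pose(bgr):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Hypothetical cardboard-brown range; a real system would use a learned mask.
    mask = cv2.inRange(hsv, (10, 60, 60), (30, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    (cx, cy), (w, h), angle = cv2.minAreaRect(largest)
    return (cx, cy), angle      # pixel center and orientation in degrees

print(package_pose(cv2.imread("station_view.png")))   # hypothetical frame
```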

Accelerometer-based Gesture Recognition for Robot Interface (로봇 인터페이스 활용을 위한 가속도 센서 기반 제스처 인식)

  • Jang, Min-Su;Cho, Yong-Suk;Kim, Jae-Hong;Sohn, Joo-Chan
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.1
    • /
    • pp.53-69
    • /
    • 2011
  • Vision- and voice-based technologies are commonly utilized for human-robot interaction, but it is widely recognized that their performance deteriorates by a large margin in real-world situations due to environmental and user variance. Human users need to be very cooperative to get reasonable performance, which significantly limits the usability of vision- and voice-based human-robot interaction technologies. As a result, touch screens are still the major medium of human-robot interaction in real-world applications. To improve the usability of robots for various services, alternative interaction technologies should be developed to complement the problems of vision- and voice-based technologies. In this paper, we propose an accelerometer-based gesture interface as one such alternative, because accelerometers are effective in detecting the movements of the human body while their performance is not limited by environmental factors such as lighting conditions or a camera's field of view. Moreover, accelerometers are now widely available in many mobile devices. We tackle the problem of classifying the acceleration signal patterns of the 26 English alphabet letters, an essential repertoire for robot-based education services. Recognizing 26 handwriting patterns from accelerometers is a very difficult task because of the large number of pattern classes and the complexity of each pattern; the closest previously studied problem was recognizing the acceleration patterns of 10 handwritten digits, and most prior studies dealt with sets of 8-10 simple, easily distinguishable gestures for controlling home appliances, computer applications, robots, etc. Good features are essential for successful pattern recognition. To improve discriminative power on the complex alphabet patterns, we extract 'motion trajectories' from the input acceleration signal and use them as the main feature. Investigative experiments showed that trajectory-based classifiers performed 3-5% better than those using raw features, e.g., the acceleration signal itself or statistical summaries. To minimize trajectory distortion, we apply a simple but effective set of smoothing and band-pass filters. It is well known that acceleration patterns for the same gesture vary greatly across performers. To address this, online incremental learning makes our system adaptive to each user's distinctive motion properties. Our system is based on instance-based learning (IBL), in which each training sample is memorized as a reference pattern. Brute-force incremental learning in IBL continuously accumulates reference patterns, which not only slows down classification but also degrades recall performance: as the number of reference patterns grows, some of them contribute increasingly to false-positive classifications. We therefore devised an algorithm that optimizes the reference pattern set based on each pattern's positive and negative contributions, periodically removing patterns with a very low positive contribution or a high negative contribution. Experiments were performed on 6,500 gesture patterns collected from 50 adults aged 30-50. Each letter was performed 5 times per participant using a Nintendo® Wii™ remote, with the acceleration signal sampled at 100 Hz on 3 axes. The mean recall rate over all letters was 95.48%. Some letters recorded very low recall rates and very high pairwise confusion rates; the major confusion pairs were D (88%) and P (74%), I (81%) and U (75%), and N (88%) and W (100%). Though W was recalled perfectly, it contributed heavily to the false-positive classification of N. Compared with major previous results from VTT (96% for 8 control gestures), CMU (97% for 10 control gestures), and Samsung Electronics (97% for 10 digits and a control gesture), our system performs well considering the number of pattern classes and the complexity of the patterns. Using our gesture interaction system, we conducted two case studies of robot-based edutainment services, implemented on various robot platforms and mobile devices including the iPhone™. The participating children exhibited improved concentration and active reactions with our gesture interface. In a test taken after an English teaching service, children who used the gesture-interface-based robot content scored 10% higher than those taught conventionally. We conclude that the accelerometer-based gesture interface is a promising technology for real-world robot-based services and content, complementing the limits of today's conventional interfaces, e.g., touch screens, vision, and voice.
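
A compact sketch of the trajectory-feature idea from this abstract: smooth the 3-axis acceleration, double-integrate it into a motion trajectory, resample to a fixed length, and classify with nearest-neighbor instance-based learning. The paper additionally applies band-pass filtering and reference-set pruning; the window size, resample length, and distance metric below are illustrative assumptions.

```python
# Trajectory features from raw acceleration plus a minimal IBL classifier.
import numpy as np

FS = 100.0  # sampling rate in Hz, as stated in the abstract

def trajectory(acc, win=5):
    """acc: (N, 3) acceleration samples -> (N, 3) integrated trajectory."""
    kernel = np.ones(win) / win
    smooth = np.apply_along_axis(lambda a: np.convolve(a, kernel, "same"), 0, acc)
    vel = np.cumsum(smooth, axis=0) / FS     # integrate acceleration -> velocity
    pos = np.cumsum(vel, axis=0) / FS        # integrate velocity -> position
    return pos - pos.mean(axis=0)            # remove the mean offset

def resample(traj, n=64):
    """Resample a (N, 3) trajectory to a fixed length for comparison."""
    idx = np.linspace(0, len(traj) - 1, n)
    return np.array([np.interp(idx, np.arange(len(traj)), traj[:, d])
                     for d in range(traj.shape[1])]).T

class IBLClassifier:
    def __init__(self):
        self.refs, self.labels = [], []
    def learn(self, acc, label):             # online incremental learning
        self.refs.append(resample(trajectory(acc)))
        self.labels.append(label)
    def classify(self, acc):                 # 1-nearest-neighbor over trajectories
        q = resample(trajectory(acc))
        dists = [np.linalg.norm(q - r) for r in self.refs]
        return self.labels[int(np.argmin(dists))]
```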

A 2D / 3D Map Modeling of Indoor Environment (실내환경에서의 2 차원/ 3 차원 Map Modeling 제작기법)

  • Jo, Sang-Woo;Park, Jin-Woo;Kwon, Yong-Moo;Ahn, Sang-Chul
    • Proceedings of the Korea HCI Society Conference (한국HCI학회 학술대회논문집)
    • /
    • 2006.02a
    • /
    • pp.355-361
    • /
    • 2006
  • In large-scale environments such as airports, museums, warehouses, and department stores, autonomous mobile robots will play an important role in security and surveillance tasks. Robotic security guards will survey such environments and report to a human operator with information such as whether an object is present or a window is open. Both for visualizing this information and as a human-machine interface for remote control, a 3D model can convey much more useful information than the typical 2D maps used in many robotic applications today: it is easier to understand, it makes the user feel present at the robot's location so that remote interaction is more natural, and it shows structures such as windows and doors that cannot be seen in a 2D model. In this paper we present a simple, easy-to-use method to obtain a 3D textured model. For realism, the 3D model must be integrated with real scenes. Most other 3D modeling methods use two data-acquisition devices, one for the 3D geometry and another for realistic textures; typically the former is a 2D laser range-finder and the latter a common camera. Our algorithm consists of building a measurement-based 2D metric map acquired by the laser range-finder, texture acquisition and stitching, and texture-mapping onto the corresponding 3D model. It is implemented with a laser sensor for the 2D/3D metric map and two cameras for gathering texture. Our geometric 3D model consists of planes that model the floor and walls, with the plane geometry extracted from the 2D metric-map data. Textures for the floor and walls are generated from images captured by two IEEE 1394 cameras with wide fields of view; image stitching and cropping produce textured images that correspond to the 3D model. The algorithm is applied to two cases: a corridor and a four-walled, room-like space. The generated 3D map model of the indoor environment is exported in VRML format and can be viewed in a web browser with a VRML plug-in. The proposed algorithm can be applied to a 3D model-based remote surveillance system over the WWW.
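
To make the geometry step concrete, here is a rough sketch that extracts wall segments from a 2D metric-map image (a Hough transform standing in for the paper's plane extraction) and extrudes each segment into a vertical wall written as minimal VRML. The map resolution, wall height, Hough parameters, and file names are all assumptions.

```python
# Extrude wall lines from a 2D metric-map image into VRML wall planes.
import cv2
import numpy as np

RES = 0.05    # meters per pixel (assumption)
HEIGHT = 2.5  # wall height in meters (assumption)

def walls_to_vrml(map_png, out_wrl):
    img = cv2.imread(map_png, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                            minLineLength=30, maxLineGap=5)
    if lines is None:
        lines = np.empty((0, 1, 4), dtype=int)
    with open(out_wrl, "w") as f:
        f.write("#VRML V2.0 utf8\n")
        for x1, y1, x2, y2 in lines[:, 0]:
            # Four corners of a vertical quad: bottom edge, then top edge.
            p = [(x1 * RES, 0, y1 * RES), (x2 * RES, 0, y2 * RES),
                 (x2 * RES, HEIGHT, y2 * RES), (x1 * RES, HEIGHT, y1 * RES)]
            pts = ", ".join(f"{x:.2f} {y:.2f} {z:.2f}" for x, y, z in p)
            header = "Shape { geometry IndexedFaceSet { coord Coordinate { point [ "
            footer = " ] } coordIndex [ 0 1 2 3 -1 ] } }\n"
            f.write(header + pts + footer)

walls_to_vrml("metric_map.png", "indoor.wrl")   # hypothetical file names
```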


Vision-based Motion Control for the Immersive Interaction with a Mobile Augmented Reality Object (모바일 증강현실 물체와 몰입형 상호작용을 위한 비전기반 동작제어)

  • Chun, Jun-Chul
    • Journal of Internet Computing and Services
    • /
    • v.12 no.3
    • /
    • pp.119-129
    • /
    • 2011
  • Vision-based human-computer interaction is an emerging field of science and industry that provides a natural way for humans and computers to communicate. In particular, recent increasing demand for mobile augmented reality requires efficient interaction technologies between the augmented virtual object and users. This paper presents a novel approach to constructing and controlling a marker-less mobile augmented reality object. Replacing a traditional marker, the human hand serves as the interface for the marker-less mobile augmented reality system. To implement the system within the limited resources of a mobile device, compared with desktop environments, we propose a method that extracts an optimal hand region to play the role of the marker and augments the object in real time using the camera attached to the mobile device. Optimal hand-region detection consists of detecting the hand region with a YCbCr skin-color model and extracting the optimal rectangular region with the rotating calipers algorithm; the extracted rectangle takes the role of a traditional marker. The proposed method resolves the problem of losing track of fingertips when the hand is rotated or occluded in hand-marker systems. Experiments show that the proposed framework can effectively construct and control the augmented virtual object in mobile environments.
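
The hand-marker extraction described above can be sketched as follows: threshold skin pixels in YCbCr, take the largest contour, and fit the minimum-area rotated rectangle (OpenCV's minAreaRect, which is based on the rotating-calipers idea). The Cb/Cr bounds are common literature values, not necessarily the paper's.

```python
# Extract a rotated rectangle around the hand to serve as a virtual marker.
import cv2
import numpy as np

def hand_marker(bgr):
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))   # skin in Cr/Cb
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    rect = cv2.minAreaRect(hand)      # ((cx, cy), (w, h), angle)
    return cv2.boxPoints(rect)        # 4 corners usable as a marker region

corners = hand_marker(cv2.imread("frame.png"))  # hypothetical camera frame
```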

Implementation of Smart Shopping Cart using Object Detection Method based on Deep Learning (딥러닝 객체 탐지 기술을 사용한 스마트 쇼핑카트의 구현)

  • Oh, Jin-Seon;Chun, In-Gook
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.7
    • /
    • pp.262-269
    • /
    • 2020
  • Recently, many attempts have been made to reduce the time required for payment in various shopping environments. In addition, in the era of the Fourth Industrial Revolution, artificial intelligence is advancing and Internet of Things (IoT) devices are becoming more compact and cheaper, so integrating these two technologies makes it easier to build unmanned environments that save people time. In this paper, we propose a smart shopping cart system based on low-cost IoT equipment and deep-learning object-detection technology. The proposed smart cart system consists of a camera for real-time product detection, an ultrasonic sensor that acts as a trigger, a weight sensor to determine whether a product is put into or taken out of the shopping cart, a smartphone application that provides a user interface for a virtual shopping cart, and a deep-learning server where learned product data are stored. Modules communicate over Transmission Control Protocol/Internet Protocol (TCP/IP) and a Hypertext Transfer Protocol (HTTP) network, and the server recognizes products with an object-detection system built on the You Only Look Once (YOLO) darknet library. The user can check the list of items placed in the smart cart via the smartphone app and pay for them automatically. The smart cart system proposed in this paper can be applied to unmanned stores with high cost-effectiveness.
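
A hedged sketch of the cart-side flow: when the ultrasonic trigger fires, the current camera frame is posted to the detection server over HTTP, and the returned product is added to or removed from the virtual cart according to the sign of the weight change. The endpoint, JSON schema, and sensor interfaces are assumptions, not the paper's API.

```python
# Cart-side handler: detect the product via the server, update the virtual cart.
import requests
import cv2

SERVER = "http://cart-server.local:8080/detect"   # hypothetical endpoint

def on_trigger(camera, weight_before, weight_after, cart):
    ok, frame = camera.read()                     # e.g., a cv2.VideoCapture
    if not ok:
        return
    _, jpg = cv2.imencode(".jpg", frame)
    resp = requests.post(SERVER, files={"image": jpg.tobytes()}, timeout=2.0)
    product = resp.json().get("label")            # e.g., "milk_1L" (assumed schema)
    if product is None:
        return
    if weight_after > weight_before:              # item put into the cart
        cart[product] = cart.get(product, 0) + 1
    elif cart.get(product, 0) > 0:                # item taken out of the cart
        cart[product] -= 1
```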

Augmented Reality Authoring Tool with Marker & Gesture Interactive Features (마커 및 제스처 상호작용이 가능한 증강현실 저작도구)

  • Shim, Jinwook;Kong, Minje;Kim, Hayoung;Chae, Seungho;Jeong, Kyungho;Seo, Jonghoon;Han, Tack-Don
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.6
    • /
    • pp.720-734
    • /
    • 2013
  • In this paper, we propose an augmented reality authoring tool with which users can easily create augmented reality content using hand-gesture and marker-based interaction methods. Previous augmented reality authoring tools focused on augmenting a virtual object, and interaction with such content relied on markers or sensors. We address this limited interaction by combining a marker-based interaction method with a gesture interaction method using a depth-sensing camera, the Kinect. With the proposed system, users can easily develop simple marker-based augmented reality content through its interface. Moreover, rather than providing only static content, the system offers ways for users to actively interact with the augmented reality content. Two marker-based interaction methods are provided: one uses two markers and the other exploits marker occlusion. In addition, by recognizing and tracking the user's bare hand, the system provides gesture interactions that can zoom in on, zoom out of, move, and rotate an object. A heuristic evaluation of the authoring tool and a usability comparison of the marker and gesture interactions confirmed positive results.
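
The marker-occlusion interaction can be sketched as follows: with two markers in view, covering one with the hand while the other stays visible is treated as a button press. ArUco detection (opencv-contrib-python, OpenCV 4.7+ API) stands in for whatever marker tracker the paper actually uses, and the marker IDs are assumptions.

```python
# Detect a marker-occlusion "button press" between consecutive frames.
import cv2

DICT = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(DICT, cv2.aruco.DetectorParameters())

def occlusion_event(frame, prev_visible):
    corners, ids, _ = detector.detectMarkers(frame)
    visible = set(ids.flatten()) if ids is not None else set()
    # Marker 0 anchors the content; marker 1 acts as an occlusion "button":
    # it counts as pressed when it disappears while marker 0 remains in view.
    pressed = (0 in visible) and (1 in prev_visible) and (1 not in visible)
    return pressed, visible
```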

Human-likeness of an Agent's Movement-Data Loci based on Realistically Limited Perception Data (제한적 인지 데이터에 기초한 에이전트 움직임-데이터 궤적의 인간다움)

  • Han, Chang-Hee;Kim, Won-Il
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.47 no.4
    • /
    • pp.1-10
    • /
    • 2010
  • The goal of this paper is to show that a virtual human agent's movement-data loci based on realistically limited perception data are human-like. To determine the human-likeness of the movement-data loci, we consider the interaction between two parameters: Realistically Limited Perception (RLP) data and Incremental Movement-Path data Generation (IMPG). That is, we consider how the former (RLP), a simulated parameter of human thought or its elements, dictates the latter (IMPG), a simulated parameter of human movement behavior. A mapping DB is a prerequisite for navigation in an agent system because it functions as an interface between perception and movement behavior. Although Hill et al. studied mapping DB methodology based on RLP, their research dealt only with a rendering camera's viewpoint data. The agent system in this paper integrates Hill's mapping DB module, and the interaction of the two parameters is examined in a military reconnaissance mission with unexpected enemy appearances. Movement loci generated by the agent were compared with those of human subjects. The results verify that the agent system can serve as a functional test bed for producing human-like movement-data loci, although the human-likeness finding is the result of a pilot test determined by two parameters (RLP and IMPG) and only 30 subjects.

Development of Rotation Invariant Real-Time Multiple Face-Detection Engine (회전변화에 무관한 실시간 다중 얼굴 검출 엔진 개발)

  • Han, Dong-Il;Choi, Jong-Ho;Yoo, Seong-Joon;Oh, Se-Chang;Cho, Jae-Il
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.4
    • /
    • pp.116-128
    • /
    • 2011
  • In this paper, we propose the structure of a high-performance face-detection engine that responds well to facial rotation by means of rotation transformation while minimizing memory usage compared with previous face-detection engines. The validity of the proposed structure has been verified through FPGA implementation. For high-performance face detection, the Modified Census Transform (MCT), which is robust against lighting changes, was used. The AdaBoost learning algorithm was used to create optimized training data, and the rotation transformation method was added to maintain effectiveness under facial rotation. The proposed hardware structure comprises a color space converter, noise filter, memory controller interface, image rotator, image scaler, MCT module, candidate detector / confidence mapper, position resizer, data grouper, and overlay processor / color overlay processor. The face-detection engine was tested using a Virtex-5 LX330 FPGA board, a QVGA-grade CMOS camera, and an LCD display, and it demonstrated excellent performance in diverse real-life environments and on a standard face-detection database. The result is a high-performance real-time face-detection engine that processes at least 60 frames per second, is robust to lighting changes and facial rotation, and can detect 32 faces of diverse sizes simultaneously.
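
The Modified Census Transform itself is simple to state: each pixel becomes a 9-bit code marking which pixels of its 3x3 neighborhood exceed the neighborhood mean, which is what makes the feature robust to monotonic lighting changes. The engine computes this in hardware; the sketch below is a plain NumPy rendering of the same transform.

```python
# Modified Census Transform (MCT): 9-bit local structure codes per pixel.
import numpy as np

def mct(gray):
    g = gray.astype(np.float32)
    h, w = g.shape
    out = np.zeros((h - 2, w - 2), np.uint16)
    # Nine shifted views of the image, one per position in the 3x3 window.
    shifts = [g[dy:h - 2 + dy, dx:w - 2 + dx]
              for dy in range(3) for dx in range(3)]
    mean = sum(shifts) / 9.0
    for bit, s in enumerate(shifts):
        # Set bit `bit` wherever that neighbor exceeds the local mean.
        out |= (s > mean).astype(np.uint16) << bit
    return out   # codes in 0..511, fed to the AdaBoost-trained classifier
```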

A Study on Manipulating Method of 3D Game in HMD Environment by using Eye Tracking (HMD(Head Mounted Display)에서 시선 추적을 통한 3차원 게임 조작 방법 연구)

  • Park, Kang-Ryoung;Lee, Eui-Chul
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.2
    • /
    • pp.49-64
    • /
    • 2008
  • Recently, much research in human-computer interfaces has been devoted to more comfortable input devices based on gaze-detection technology. However, such systems are costly due to complicated hardware and are difficult to use because of complicated user calibration procedures. In this paper, we propose a new gaze-detection method based on 2D analysis and a simple user calibration. Our method uses a small Universal Serial Bus (USB) camera attached to a Head-Mounted Display (HMD), a hot mirror, and an Infra-Red (IR) light illuminator. Because the HMD moves with the user's head, the gaze-detection performance is not affected by facial movement. In addition, we apply our gaze-detection system to a 3D first-person shooting game: the gaze direction of the game character is controlled by our method and can target and shoot enemy characters, which increases the immersion and interest of the game. Experimental results showed that the game and the gaze-detection system run at real-time speed on a single desktop computer, with a gaze-detection accuracy of 0.88 degrees. They also showed that our gaze-detection technology can replace the conventional mouse in the 3D first-person shooting game.
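
A simplified sketch of 2D-analysis gaze mapping with a short calibration: the user fixates a few known screen points, the pupil center is recorded at each, and a homography then maps subsequent pupil positions to screen coordinates. Pupil detection itself (dark-pupil thresholding under IR illumination) is omitted, and the choice of a homography for the 2D mapping is an assumption about the paper's method.

```python
# Map pupil-center coordinates to screen coordinates after a short calibration.
import cv2
import numpy as np

def calibrate(pupil_pts, screen_pts):
    """pupil_pts, screen_pts: (N, 2) arrays from N >= 4 fixation targets."""
    H, _ = cv2.findHomography(np.float32(pupil_pts), np.float32(screen_pts))
    return H

def gaze_point(H, pupil_xy):
    p = np.array([pupil_xy[0], pupil_xy[1], 1.0])
    q = H @ p
    return q[0] / q[2], q[1] / q[2]      # estimated gaze position on the display
```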