• Title/Summary/Keyword: fusion of sensor information

Emotion Recognition Method Based on Multimodal Sensor Fusion Algorithm

  • Moon, Byung-Hyun;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.8 no.2 / pp.105-110 / 2008
  • Humans recognize emotion by fusing information from speech, facial expressions, gestures, and bio-signals. For a computer to recognize emotion as humans do, it needs technologies that combine such information. In this paper, we recognize five emotions (normal, happiness, anger, surprise, sadness) from speech signals and facial images, and we propose a multimodal method that fuses the two recognition results. Emotion recognition from both the speech signal and the facial image uses Principal Component Analysis (PCA), and the multimodal stage fuses the two results with a fuzzy membership function. In our experiments, the average emotion recognition rate was 63% using speech signals and 53.4% using facial images; that is, the speech signal offers a better recognition rate than the facial image. To raise the recognition rate further, we propose a decision fusion method using an S-type membership function. With the proposed method, the average recognition rate is 70.4%, showing that decision fusion outperforms either the speech signal or the facial image alone.
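
As an illustration of this kind of decision-level fusion, here is a minimal sketch that passes per-class confidence scores from a speech classifier and a face classifier through an S-type (sigmoid-shaped) membership function and picks the class with the highest weighted fused score. The membership parameters, modality weights, and score vectors are hypothetical; the paper does not specify them.

```python
import numpy as np

EMOTIONS = ["normal", "happiness", "anger", "surprise", "sadness"]

def s_membership(x, a=0.2, b=0.8):
    """S-type membership function: 0 below a, 1 above b, smooth in between."""
    x = np.clip(x, a, b)
    m = (x - a) / (b - a)
    return np.where(x <= (a + b) / 2, 2 * m**2, 1 - 2 * (1 - m)**2)

def fuse(speech_scores, face_scores, w_speech=0.6, w_face=0.4):
    """Decision-level fusion: map each modality's scores through the
    S-type membership function, then take a weighted sum per class."""
    mu_s = s_membership(np.asarray(speech_scores))
    mu_f = s_membership(np.asarray(face_scores))
    fused = w_speech * mu_s + w_face * mu_f
    return EMOTIONS[int(np.argmax(fused))], fused

# Hypothetical per-class confidences from the two PCA-based recognizers.
speech = [0.10, 0.70, 0.05, 0.10, 0.05]
face   = [0.20, 0.40, 0.15, 0.15, 0.10]
label, scores = fuse(speech, face)
print(label, scores)
```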

New Filtering Method for Reducing Registration Error of Distributed Sensors (분산된 센서들의 Registration 오차를 줄이기 위한 새로운 필터링 방법)

  • Kim, Yong-Shik;Lee, Jae-Hoon;Do, Hyun-Min;Kim, Bong-Keun;Tanikawa, Tamio;Ohba, Kohtaro;Lee, Ghang;Yun, Seok-Heon
    • The Journal of Korea Robotics Society / v.3 no.3 / pp.176-185 / 2008
  • In this paper, a new filtering method for sensor registration is presented that estimates and corrects errors in the registration parameters in multiple-sensor environments. Sensor registration relies on filtering to estimate these parameters, and its accuracy directly affects the performance of the selected data fusion method. Because of various error sources, registration errors cause one object tracked by multiple sensors to be recognized as multiple objects. To estimate the error parameters, a new nonlinear information filter is developed using minimum mean square error estimation. Instead of linearizing the nonlinear function, as an extended Kalman filter does, it propagates the information estimate through the unscented transformation. The proposed method reduces estimation error without computing a Jacobian matrix, even when the measurement dimension is large. A computer simulation is carried out to compare the proposed filtering method with an extended Kalman filter.
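
A minimal sketch of the unscented transformation the abstract alludes to: sigma points drawn from the state distribution are pushed through a nonlinear measurement function, and the predicted mean and covariance are recovered from weighted sums, with no Jacobian required. The measurement model and tuning constants below are illustrative, not taken from the paper.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1.0, beta=2.0, kappa=1.0):
    """Propagate (mean, cov) through nonlinear f via sigma points,
    avoiding the Jacobian linearization used by an EKF."""
    n = mean.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)
    sigmas = np.vstack([mean, mean + S.T, mean - S.T])   # 2n+1 sigma points
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1 - alpha**2 + beta)
    ys = np.array([f(s) for s in sigmas])
    y_mean = wm @ ys
    diff = ys - y_mean
    y_cov = (wc[:, None] * diff).T @ diff
    return y_mean, y_cov

# Illustrative range-bearing measurement of a 2D position.
h = lambda x: np.array([np.hypot(x[0], x[1]), np.arctan2(x[1], x[0])])
m, P = np.array([10.0, 5.0]), np.diag([0.5, 0.5])
print(unscented_transform(m, P, h))
```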

Map-Building and Position Estimation based on Multi-Sensor Fusion for Mobile Robot Navigation in an Unknown Environment (이동로봇의 자율주행을 위한 다중센서융합기반의 지도작성 및 위치추정)

  • Jin, Tae-Seok;Lee, Min-Jung;Lee, Jang-Myung
    • Journal of Institute of Control, Robotics and Systems / v.13 no.5 / pp.434-443 / 2007
  • Exploration of unknown environments is an important task for the new generation of mobile service robots, which navigate by a number of methods using systems such as sonar or visual sensing. To fully utilize the strengths of both the sonar and the visual sensing systems, this paper presents a technique for localizing a mobile robot using fused data from multiple ultrasonic sensors and a vision system. The robot is designed to operate in a well-structured environment that can be represented by planes, edges, corners, and cylinders as structural features. For the ultrasonic sensors, these features yield range information in the form of circular arcs, generally called RCDs (Regions of Constant Depth). Localization is the continual estimation of the robot's position, deduced from its a priori position estimate. The environment is modeled as a two-dimensional grid map. We define a vision-based environment recognition scheme and a physically-based sonar sensor model, and we employ an extended Kalman filter to estimate the robot's position. The performance and simplicity of the approach are demonstrated by the results of a set of experiments with a mobile robot.
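
For context, a minimal sketch of the kind of EKF measurement update used in such localization schemes, assuming a single sonar range measurement to a mapped feature (for example, the center of a matched RCD arc); the pose, noise values, and landmark below are hypothetical.

```python
import numpy as np

def ekf_update(x, P, z, landmark, R):
    """One EKF measurement update for a robot pose x = [px, py, heading],
    given a range measurement z to a known landmark."""
    dx, dy = landmark[0] - x[0], landmark[1] - x[1]
    r_pred = np.hypot(dx, dy)                          # predicted range
    H = np.array([[-dx / r_pred, -dy / r_pred, 0.0]])  # measurement Jacobian
    S = H @ P @ H.T + R                                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                     # Kalman gain
    x_new = x + (K * (z - r_pred)).ravel()
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new

# Illustrative values: pose prior and one sonar range to a mapped corner.
x = np.array([1.0, 2.0, 0.1]); P = np.diag([0.2, 0.2, 0.05])
x, P = ekf_update(x, P, z=3.05, landmark=(4.0, 2.0), R=np.array([[0.01]]))
print(x)
```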

Object Detection Method on Vision Robot using Sensor Fusion (센서 융합을 이용한 이동 로봇의 물체 검출 방법)

  • Kim, Sang-Hoon
    • The KIPS Transactions: Part B / v.14B no.4 / pp.249-254 / 2007
  • A mobile robot equipped with various sensors and a wireless camera is introduced. We show that this robot can detect objects well by combining the results of its active sensors with an image processing algorithm. First, active sensors such as infrared and ultrasonic sensors are employed together to calculate, in real time, the distance between an object and the robot from the sensors' outputs; the difference between the measured and calculated values is less than 5%. We then focus on detecting object regions with an image processing algorithm, since this gives robots the ability to work for humans. The paper suggests an effective visual detection system for moving objects with specified color and motion information. The proposed method includes an object extraction and definition process that uses a color transformation and AWUPC computation to decide whether a moving object exists. Shape information and a signature algorithm are used to segment objects from the background regardless of shape changes. Weights are assigned to the results from each sensor and the camera, and the weighted results are combined into a single value representing the probability that an object is present within the limited distance. The sensor fusion technique improves the detection rate by at least 7% over any individual sensor.
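
A minimal sketch of the final fusion step as described: each sensor's detection confidence is weighted and combined into one probability. The confidences and weights are hypothetical; the paper does not publish its weighting values.

```python
import numpy as np

def fuse_detections(confidences, weights):
    """Combine per-sensor detection confidences (each in [0, 1]) into a
    single probability using a normalized weighted sum."""
    c = np.asarray(confidences, dtype=float)
    w = np.asarray(weights, dtype=float)
    return float((w @ c) / w.sum())

# Hypothetical confidences: infrared, ultrasonic, camera (color + motion).
p = fuse_detections([0.8, 0.6, 0.9], weights=[0.25, 0.25, 0.5])
print("object probability:", p)   # 0.25*0.8 + 0.25*0.6 + 0.5*0.9 = 0.80
```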

A Study on the Indoor Navigation of Guiding Robot for the Visually Impaired Using Sensor Fusion (센서 퓨전을 이용한 시각 장애인 유도 로봇의 실내주행 연구)

  • Jang, Chul-Woong;Jung, Ki-Ho;Yeom, Moon-Jin;Shim, Hyun-Min;Hong, Yeong-Ki;Shim, Jae-Hong;Lee, Eung-Hyuk
    • Proceedings of the IEEK Conference / 2006.06a / pp.923-924 / 2006
  • In this paper, we propose a sensor fusion method for obstacle avoidance by a guide robot for the visually impaired. In our system, obstacle distances are acquired with ultrasonic sensors, and obstacle width is acquired with an image sensor. An avoidance angle is then computed from the distance and width information gained by the sensors. After the robot avoids the obstacle by the computed angle, it returns to its original path using odometry. The robot consists of an SA1110-based controller, a sensory part using a sonar array and an image sensor, and a motion part using a differential drive capable of climbing stairs. The system runs embedded Linux as its OS and was developed with Qt/Embedded for the GUI.
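
A minimal sketch of the avoidance-angle computation under a simple geometric assumption (steer past the obstacle's edge plus a safety margin); the margin and sensor readings are hypothetical, as the paper does not give its exact formula.

```python
import math

def avoidance_angle(distance, width, margin=0.3):
    """Steering angle (radians) needed to clear an obstacle of the given
    width at the given distance, with a lateral safety margin.
    Simple geometry: aim past the obstacle's edge plus the margin."""
    lateral = width / 2.0 + margin
    return math.atan2(lateral, distance)

# Hypothetical reading: obstacle 2 m ahead (sonar), 0.6 m wide (camera).
theta = avoidance_angle(distance=2.0, width=0.6)
print(f"turn {math.degrees(theta):.1f} degrees")  # about 16.7 degrees
```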

Positional Tracking System Using Smartphone Sensor Information

  • Kim, Jung Yee
    • Journal of Multimedia Information System / v.6 no.4 / pp.265-270 / 2019
  • Technology for locating an individual has enabled various services, and its use has increased. Most studies of location technology have focused on the accuracy of position estimates, but they carried constraints such as separate, expensive equipment or devices installed throughout a facility. Such approaches can achieve accuracy within tens of centimeters, but they cannot be applied to a user's location in real time in daily life. This paper therefore aims to track a smartphone's location using only the smartphone's basic components, targeting a localization accuracy, based on smartphone sensor data, that is sufficient to verify users' locations. Accelerometer, WiFi radio map, and GPS sensor information are used. In building the radio map, signal maps were constructed at each vertex of a graph data structure; this approach reduces the traditional map-building effort of the offline phase. Accelerometer data determine the user's movement status, and the collected sensor data are fused using particle filters. Experiments show an average location error of about 3.7 meters, which is reasonable for providing location-based services in everyday life.
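
A minimal sketch of one particle-filter iteration of the kind described: particles are propagated by an accelerometer-detected step and reweighted by a radio-signal likelihood before resampling. The motion noise, likelihood model, and positions are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, step_len, heading, rssi_likelihood):
    """One particle-filter step: propagate particles by a detected step
    (from the accelerometer), reweight by a WiFi/GPS likelihood, resample."""
    n = len(particles)
    # Motion: move each particle one step with heading noise.
    noisy = heading + rng.normal(0.0, 0.2, n)
    particles = particles + step_len * np.c_[np.cos(noisy), np.sin(noisy)]
    # Measurement: weight particles by how well they explain the signal.
    weights = weights * rssi_likelihood(particles)
    weights /= weights.sum()
    # Resampling keeps the particle set healthy.
    idx = rng.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)

# Hypothetical likelihood: Gaussian around a WiFi-fingerprinted position.
lik = lambda p: np.exp(-np.sum((p - np.array([5.0, 3.0]))**2, axis=1) / 8.0)
parts = rng.uniform(0, 10, size=(500, 2)); ws = np.full(500, 1 / 500)
parts, ws = pf_step(parts, ws, step_len=0.7, heading=0.3, rssi_likelihood=lik)
print(parts.mean(axis=0))   # fused position estimate
```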

CALOS : Camera And Laser for Odometry Sensing (CALOS : 주행계 추정을 위한 카메라와 레이저 융합)

  • Bok, Yun-Su;Hwang, Young-Bae;Kweon, In-So
    • The Journal of Korea Robotics Society / v.1 no.2 / pp.180-187 / 2006
  • This paper presents a new sensor system, CALOS, for motion estimation and 3D reconstruction. A 2D laser sensor provides accurate depth information along a plane, but not the whole 3D structure; CCD cameras, on the contrary, provide a projected image of the whole 3D scene, but not its depth. To overcome these limitations, we combine the two types of sensors and develop a motion estimation scheme appropriate for the combined system. In the proposed scheme, the motion between two frames is estimated from three points among the laser scan data and their corresponding image points, and then refined by nonlinear optimization. We validate the accuracy of the proposed method by 3D reconstruction using real images. The results show that the proposed system can be a practical solution for motion estimation as well as for 3D reconstruction.
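
The paper estimates inter-frame motion from three laser points and their image correspondences; a closely related building block is recovering a rigid transform from matched 3D point sets, sketched below with the standard Kabsch method on illustrative data. This is not the paper's exact algorithm, which also uses the image projections.

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rotation R and translation t with B ≈ R @ A + t,
    from matched 3D point sets A, B of shape (3, n) (Kabsch method)."""
    ca, cb = A.mean(axis=1, keepdims=True), B.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((B - cb) @ (A - ca).T)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])  # avoid reflections
    R = U @ D @ Vt
    t = cb - R @ ca
    return R, t

# Three illustrative laser points (columns), and the same points moved.
A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [2.0, 2.0, 2.0]])
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
B = R_true @ A + np.array([[0.5], [0.0], [0.1]])
R, t = rigid_transform(A, B)
print(np.allclose(R, R_true), t.ravel())
```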

Environment Adaptive Emergency Evacuation Route GUIDE through Digital Signage Systems

  • Lee, Dongwoo;Kim, Daehyun;Lee, Junghoon;Lee, Seungyoun;Hwang, Hyunsuk;Mariappan, Vinayagam;Lee, Minwoo;Cha, Jaesang
    • International Journal of Advanced Culture Technology / v.5 no.1 / pp.90-97 / 2017
  • Nowadays, most commercial buildings are built with complex architecture and decorated with ever more complicated interiors, so establishing intelligible escape routes in the limited time available during a fire or other emergency has become important. Commercial buildings are already equipped with multiple exit signs, but those signs can create confusion and lead people in different directions in an emergency, turning the situation chaotic, especially in buildings with complex layouts. Much research has focused on approaches that improve exit-sign systems with better visual navigation effects, such as laser beams or combined audio and video cues; digital-signage-based management of emergency exit signs, however, is one of the best solutions for guiding people to escape in an emergency. This paper proposes an intelligent evacuation route GUIDE that combines a centralized Wireless Sensor Network (WSN) with digital signage for people's safety in emergency conditions. The proposed system uses the WSN to sense environmental conditions in the building, applies an evacuation algorithm that estimates a safe escape route from the sensor information, and then activates the signage to display safe evacuation instructions appropriate to the location where each sign is installed. The paper presents a prototype of the proposed signage system, the execution time required to find a route, and future research directions. The proposed system provides a natural, intelligent evacuation-route interface for local or remote facility-management operation, efficiently guiding people to a safe exit in emergency conditions.
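
A minimal sketch of one plausible form of such an evacuation algorithm: Dijkstra's shortest path over a building graph whose edge costs are inflated by WSN-reported hazard levels. The graph, hazard scaling, and node names are hypothetical; the paper does not publish its algorithm in this form.

```python
import heapq

def safe_route(graph, hazards, start, exits):
    """Dijkstra over a building graph whose edge costs grow with sensed
    hazard levels, returning the cheapest path from start to any exit."""
    dist, prev, heap = {start: 0.0}, {}, [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in exits:
            path = [u]
            while u in prev:
                u = prev[u]; path.append(u)
            return path[::-1]
        if d > dist.get(u, float("inf")):
            continue
        for v, length in graph[u]:
            # Penalize edges near smoke/heat reported by the WSN.
            cost = d + length * (1.0 + 10.0 * hazards.get(v, 0.0))
            if cost < dist.get(v, float("inf")):
                dist[v], prev[v] = cost, u
                heapq.heappush(heap, (cost, v))
    return None

# Hypothetical floor graph: nodes are corridors, values are (neighbor, m).
graph = {"A": [("B", 5), ("C", 8)], "B": [("exit1", 4)],
         "C": [("exit2", 3)], "exit1": [], "exit2": []}
print(safe_route(graph, hazards={"B": 0.9}, start="A", exits={"exit1", "exit2"}))
```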

Analysis of 3D Reconstruction Accuracy by ToF-Stereo Fusion (ToF와 스테레오 융합을 이용한 3차원 복원 데이터 정밀도 분석 기법)

  • Jung, Sukwoo;Lee, Youn-Sung;Lee, KyungTaek
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.466-468 / 2022
  • 3D reconstruction is an important issue in many applications such as Augmented Reality (AR), eXtended Reality (XR), and the Metaverse. For 3D reconstruction, a depth map can be acquired with a stereo camera or a time-of-flight (ToF) sensor. We used both sensors complementarily to improve the accuracy of the 3D data. First, we applied a general multi-camera calibration technique that uses both color and depth information. Next, the depth maps of the two sensors were fused by a 3D registration and reprojection approach. The fused data were compared with ground-truth data reconstructed with an RTC360 sensor, and Geomagic Wrap was used to analyze the average RMSE between the two data sets. The proposed procedure was implemented and tested with real-world data.
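
A minimal sketch of the reprojection step in such a fusion pipeline: each pixel of one sensor's depth map is back-projected to 3D, transformed by the calibrated extrinsics, and projected into the other camera. The intrinsics, extrinsics, and depth map below are toy assumptions, not the paper's calibration.

```python
import numpy as np

def reproject_depth(depth, K_src, K_dst, R, t):
    """Reproject a depth map from a source sensor (e.g. ToF) into a
    destination camera (e.g. the stereo reference view): back-project
    each pixel to 3D, transform by (R, t), project with K_dst."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]
    pts = np.linalg.inv(K_src) @ np.vstack(
        [u.ravel() * depth.ravel(), v.ravel() * depth.ravel(), depth.ravel()])
    pts = R @ pts + t.reshape(3, 1)          # into destination frame
    uvw = K_dst @ pts
    return uvw[:2] / uvw[2], pts[2]          # pixel coords, new depths

# Illustrative intrinsics/extrinsics; real values come from calibration.
K = np.array([[500.0, 0, 160], [0, 500.0, 120], [0, 0, 1]])
depth = np.full((240, 320), 2.0)             # flat 2 m scene (toy input)
uv, z = reproject_depth(depth, K, K, np.eye(3), np.array([0.05, 0, 0]))
print(uv[:, 0], z[0])
```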

Posture control of buoyancy sculptures using drone technology (드론 기술을 이용한 부력 조형물의 자세 제어)

  • Kang, Jingu
    • Journal of Korea Society of Digital Industry and Information Management / v.14 no.4 / pp.1-7 / 2018
  • Floating sculptures in the form of ad-balloons have commonly been held in place with ropes. Indoors, air flow is much weaker than outdoors, and users of buoyant sculptures want them to maintain a desired posture without being tied down. This study applies drone technology to buoyant sculptures: a drone can move vertically and horizontally and hold its attitude, so the technology transfers naturally. We therefore studied a control system, based on drone technology, that keeps a buoyant sculpture in the desired posture at a constant height. The body is made of a light fiber material filled with helium gas for neutral buoyancy to support the sculpture. The controller is an ARM STM32F103CB, with gyro, acceleration, and geomagnetic sensors and small-to-medium BLDC motors. The scheduling of the control system was considered carefully, because the role of every component is important. Communication is divided into sensor fusion communication and interface communication with the main controller, and each communication path is designed to be extensible. The system was implemented to respond actively from the viewpoint of posture control using drone technology.
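
As a sketch of the kind of gyro/accelerometer fusion such an attitude controller needs, here is a one-axis complementary filter: the gyro is integrated for short-term accuracy and pulled toward the accelerometer's gravity-derived angle to cancel drift. The blend factor and samples are hypothetical; the paper does not state which fusion scheme it uses.

```python
import math

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Fuse gyro and accelerometer for one attitude axis: integrate the
    gyro for short-term accuracy, pull toward the accelerometer's
    gravity-derived angle to cancel long-term drift."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Hypothetical 100 Hz loop: keep the sculpture's pitch estimate updated.
angle, dt = 0.0, 0.01
samples = [(0.02, 0.001), (0.018, 0.0015), (0.019, 0.0012)]  # (gyro rad/s, accel rad)
for gyro, acc in samples:
    angle = complementary_filter(angle, gyro, acc, dt)
print(f"pitch estimate: {math.degrees(angle):.3f} deg")
```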