• Title/Summary/Keyword: fusion of sensor information

Multi-sensor Fusion based Autonomous Return of SUGV (다중센서 융합기반 소형로봇 자율복귀에 대한 연구)

  • Choi, Ji-Hoon;Kang, Sin-Cheon;Kim, Jun;Shim, Sung-Dae;Jee, Tae-Yong;Song, Jae-Bok
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.15 no.3
    • /
    • pp.250-256
    • /
    • 2012
  • Unmanned ground vehicles (UGVs) may be operated by a remote control unit over a wireless link, or autonomously. However, autonomous operation is still challenging and not perfectly developed, and for one reason or another the wireless link is not always available. If wireless communication is abruptly disconnected, the UGV becomes little more than a lump of junk; worse, it can be captured by the enemy. This paper suggests autonomous return technology, with which the UGV can autonomously go back to a safer position along the reverse of its path. The suggested technology is based on the creation and matching of a database (DB) of multi-correlated information. While the SUGV moves under remote control, the DB is built from multi-sensor information: the absolute position along the trajectory is stored if GPS is available, and a hybrid map based on the fusion of vision and LADAR is stored with the corresponding relative position if GPS is unavailable. During autonomous return, the SUGV follows the trajectory using GPS-based absolute positions when GPS is available; otherwise, its current position is first estimated from the relative position using multi-sensor fusion, followed by matching between the query and the DB. The return path is then created on the map, and the SUGV returns automatically along it. Experimental results on a pre-built trajectory show the possibility of successful autonomous return.
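The GPS/no-GPS fallback the abstract describes can be sketched as follows — a hypothetical sketch, assuming a trajectory DB of waypoints that store an absolute GPS fix when one was available and a dead-reckoned relative position otherwise (`Waypoint` and `return_path` are illustrative names, not the paper's API):

```python
# Illustrative sketch of DB-based autonomous return with GPS fallback.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Waypoint:
    abs_pos: Optional[Tuple[float, float]]  # GPS fix recorded during teleoperation, if any
    rel_pos: Tuple[float, float]            # map-relative (dead-reckoned) position otherwise

def return_path(db: List[Waypoint], gps_ok: bool) -> List[Tuple[float, float]]:
    """Replay the outbound trajectory in reverse, preferring GPS fixes."""
    path = []
    for wp in reversed(db):
        if gps_ok and wp.abs_pos is not None:
            path.append(wp.abs_pos)   # GPS-based absolute trajectory following
        else:
            path.append(wp.rel_pos)   # fall back to the map-relative position
    return path
```

The paper's map matching between the query and the DB is not modeled here; only the fallback policy is shown.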

Local Minimum Free Motion Planning for Mobile Robots within Dynamic Environments

  • Choi, Jong-Suk;Kim, Mun-Sang;Lee, Chong-Won
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2003.10a
    • /
    • pp.1921-1926
    • /
    • 2003
  • We present a local-minimum-free motion planner for mobile robots in dynamic environments using simple sensor fusion, assuming unknown obstacles that can be detected only partially at a time by proximity sensors and that can be removed or moved slowly (dynamic environments). A potential field is used as the basic platform for motion planning. To avoid the local-minimum problem, the partial information on the obstacles must be memorized and integrated effectively; sets of linked line segments (SLLS) are proposed as the integration method. The robot's target point is then replaced by a virtual target that accounts for the integrated sensing information. As the main proximity sensor we use laser slit emission with a simple web camera, since this system gives more continuous data; ultrasonic sensors serve as auxiliary sensors in the simple sensor fusion, exploiting their advantage of giving exact information about the presence of any obstacle within a certain range. With this sensor fusion, dynamic environments can be handled easily. The performance of our algorithm is validated via simulations and experiments.
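As a minimal illustration of the potential-field platform the abstract builds on (the SLLS integration and virtual-target replacement are not shown), one gradient step on an attractive-plus-repulsive potential might look like this; all gains and the step size are illustrative:

```python
import math

def potential_step(robot, target, obstacles, k_att=1.0, k_rep=0.5, d0=1.0):
    """One gradient-descent step on an attractive + repulsive potential field."""
    # Attractive force pulls the robot toward the (possibly virtual) target.
    fx = k_att * (target[0] - robot[0])
    fy = k_att * (target[1] - robot[1])
    # Repulsive force pushes away from each sensed obstacle within range d0.
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < d0:
            mag = k_rep * (1.0 / d - 1.0 / d0) / d**2
            fx += mag * dx
            fy += mag * dy
    n = math.hypot(fx, fy) or 1.0
    return robot[0] + 0.1 * fx / n, robot[1] + 0.1 * fy / n  # unit step of 0.1
```

Note that an obstacle directly between the robot and the target can cancel the net force — exactly the local-minimum problem the paper's virtual target is designed to clear.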

Pose Control of Mobile Inverted Pendulum using Gyro-Accelerometer (자이로-가속도센서를 이용한 모바일 역진자의 자세 제어)

  • Kang, Jin-Gu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.15 no.10
    • /
    • pp.129-136
    • /
    • 2010
  • This paper proposes a sensor fusion algorithm between a gyroscope and an accelerometer to maintain the inverted posture of a two-wheeled robot while it moves to the desired destination. A mobile inverted robot falls forward or backward as it converges to its stable point, so precise tilt-angle information and quick posture control based on that information are necessary to maintain the inverted posture. This paper therefore fuses a gyroscope, which provides the angular velocity, with an accelerometer, which compensates for the gyroscope's drift. A Kalman filter is normally used for this purpose and much research on it is in progress, but it requires a high-performance DSP and supporting hardware. This paper realizes a control method that is much simpler yet achieves the desired performance by combining the sensor fusion algorithm with PID control.
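The abstract does not name its simpler fusion algorithm, but a common "much simpler than Kalman" choice for gyro-accelerometer tilt estimation is a complementary filter, sketched here under that assumption (gain `alpha` illustrative):

```python
import math

def accel_tilt(ax, az):
    """Tilt angle (rad) from accelerometer axes: noisy but drift-free."""
    return math.atan2(ax, az)

def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Trust the integrated gyro rate at high frequency and the
    accelerometer tilt estimate at low frequency."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle
```

The fused angle would then feed the paper's PID posture controller each control cycle.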

Efficient Aggregation and Routing Algorithm using Local ID in Multi-hop Cluster Sensor Network (다중 홉 클러스터 센서 네트워크에서 속성 기반 ID를 이용한 효율적인 융합과 라우팅 알고리즘)

  • 이보형;이태진
    • Proceedings of the IEEK Conference
    • /
    • 2003.11c
    • /
    • pp.135-139
    • /
    • 2003
  • Sensor networks consist of small, low-cost, low-power, multi-function sensor nodes that sense, process, and communicate. Minimizing the power consumption of sensors is an important issue because of their limited energy. Clustering is an efficient way to reduce data flow in sensor networks and to maintain less routing information. In this paper, we propose a multi-hop clustering mechanism that uses global and local IDs to reduce transmission power consumption, together with an efficient routing method for improved data fusion and transmission.

Bio-inspired neuro-symbolic approach to diagnostics of structures

  • Shoureshi, Rahmat A.;Schantz, Tracy;Lim, Sun W.
    • Smart Structures and Systems
    • /
    • v.7 no.3
    • /
    • pp.229-240
    • /
    • 2011
  • Recent developments in smart structures with very-large-scale embedded sensors and actuators have introduced new challenges in data processing and sensor fusion. These smart structures are classified as large-scale systems whose thousands of sensors and actuators form the musculoskeletal system of the structure, analogous to the human body. To develop structural health monitoring and diagnostics from data provided by thousands of sensors, new sensor informatics has to be developed. The focus of our ongoing research is to develop techniques and algorithms that utilize this musculoskeletal system effectively, thus creating the intelligence for such a large-scale autonomous structure. To achieve this level of intelligence, three major research tasks are being conducted: development of bio-inspired data analysis and information extraction from thousands of sensors; development of an analytical technique for an optimal sensory system using structural observability; and creation of a bio-inspired decision-making and control system. This paper focuses on the results of the first task, namely the development of a neuro-morphic engineering approach that uses neuro-symbolic data manipulation, inspired by the understanding of human information-processing architecture, for sensor fusion and structural diagnostics.

Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View (다른 화각을 가진 라이다와 칼라 영상 정보의 정합 및 깊이맵 생성)

  • Choi, Jaehoon;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society
    • /
    • v.21 no.6
    • /
    • pp.28-34
    • /
    • 2020
  • This paper proposes an approach to the fusion of two heterogeneous sensors with different fields of view (FOV): a LIDAR and an RGB camera. Registration between the data captured by the two sensors provides the fusion result, and is complete once a depthmap corresponding to the 2-dimensional RGB image has been generated. An RPLIDAR-A3 (manufactured by Slamtec) and a general digital camera were used to acquire the depth and image data, respectively. The LIDAR sensor provides the distance between the sensor and objects in the nearby scene, and the RGB camera provides a 2-dimensional image with color information. Fusing the 2D image with the depth information enables better performance in applications such as object detection and tracking; driver assistance systems, robotics, and other systems that require visual information processing may find this work useful. Since the LIDAR provides only depth values, processing is needed to generate a depthmap that corresponds to the RGB image. Experimental results are provided to validate the proposed approach.
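A minimal sketch of this kind of registration, assuming known camera intrinsics `K` and LIDAR-to-camera extrinsics `[R|t]` (the paper's actual calibration procedure is not given in the abstract): each LIDAR point is projected into the image plane and the nearest depth per pixel is kept.

```python
import numpy as np

def lidar_to_depthmap(points, K, R, t, h, w):
    """Project LIDAR points (N, 3) into an h x w image with intrinsics K
    and extrinsics [R|t]; keep the nearest depth per pixel."""
    depth = np.zeros((h, w))
    cam = (R @ points.T + t.reshape(3, 1)).T   # LIDAR frame -> camera frame
    cam = cam[cam[:, 2] > 0]                   # keep points in front of the camera
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                # perspective divide
    for (u, v), z in zip(uv, cam[:, 2]):
        ui, vi = int(round(u)), int(round(v))
        if 0 <= ui < w and 0 <= vi < h:
            if depth[vi, ui] == 0 or z < depth[vi, ui]:
                depth[vi, ui] = z
    return depth
```

The resulting sparse depthmap would still need interpolation to densely cover the RGB image, which is part of what the paper's depthmap generation addresses.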

Selection and Allocation of Point Data with Wavelet Transform in Reverse Engineering (역공학에서 웨이브렛 변환을 이용한 점 데이터의 선택과 할당)

  • Ko, Tae-Jo;Kim, Hee-Sool
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.17 no.9
    • /
    • pp.158-165
    • /
    • 2000
  • Reverse engineering reproduces products by directly extracting geometric information from physical objects such as clay models and wooden mock-ups. The fundamental task in reverse engineering is to acquire the geometric data for modeling the objects. This research proposes a novel data-acquisition method aimed at unmanned, fast, and precise measurement, realized by fusing a CCD camera using a structured light beam with a touch trigger sensor. The vision system provides global information about the object, but since the number of vision data points is very large, the number of points and their allocation for the touch sensor are critical for productivity. We therefore apply the wavelet transform to reduce the number of data points and to allocate the positions of the touch probe. Simulated and experimental results show that this method is good enough for data reduction.
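As an illustration of the idea — assuming a 1-D Haar transform of a measured height profile; the paper's exact wavelet and selection rule are not given in the abstract — large detail coefficients flag regions of rapid surface change where touch-probe points should be allocated:

```python
def haar_1d(x):
    """One level of the 1-D Haar wavelet transform (len(x) must be even)."""
    s = [(a + b) / 2 for a, b in zip(x[::2], x[1::2])]  # approximation coefficients
    d = [(a - b) / 2 for a, b in zip(x[::2], x[1::2])]  # detail coefficients
    return s, d

def select_points(profile, thresh):
    """Keep measurement sites whose Haar detail coefficient is large,
    i.e. where the surface changes quickly and the touch probe should dwell."""
    _, d = haar_1d(profile)
    return [2 * i for i, c in enumerate(d) if abs(c) > thresh]
```

Flat regions produce near-zero detail coefficients and are skipped, which is the data-reduction effect the paper reports.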

Design and Implementation of a Fusion Oil Lubricator System using WLAN based on a Flexible Link System (유연링크시스템 기반에서 WLAN 방식을 적용한 퓨전 주유시스템의 설계와 구현)

  • 김휘영
    • Proceedings of the IEEK Conference
    • /
    • 2002.06a
    • /
    • pp.283-286
    • /
    • 2002
  • For satisfactory performance of an oil lubricator, the design of an oil controller that meets the required specifications, together with supporting hardware that keeps it functioning, is important. Among the hardware of a control system, the oil system is the most vulnerable to malfunction, so accurate and reliable oil readings must be maintained for good fusion oil lubricator performance. In the oil lubricator, faults such as data loss and SSR trigger errors are detected by examining the data system's output values and the major values of the system, and are then recognized by analyzing their symptoms. If necessary, electronic sensor values are synthesized according to the type of fault and used by the controller instead of the raw data. In this paper, a fast 32-bit microprocessor is applied to the control of a flexible link system with sensor faults in the error module for exact positioning, to show the applicability of the approach. It is shown that the fusion oil lubricator can provide satisfactory loop performance even when sensor faults occur.

A Study on Odometry Error Compensation using Multisensor fusion for Mobile Robot Navigation (멀티센서 융합을 이용한 자율이동로봇의 주행기록계 에러 보상에 관한 연구)

  • Song, Sin-Woo;Park, Mun-Soo;Hong, Suk-Kyo
    • Proceedings of the KIEE Conference
    • /
    • 2001.11c
    • /
    • pp.288-291
    • /
    • 2001
  • This paper presents effective odometry error compensation using multisensor fusion for accurate positioning of a mobile robot during navigation. During obstacle avoidance and wall following, position estimates obtained by odometry alone become unrealistic and useless because of accumulated errors. To measure the position and heading of the mobile robot accurately, an odometry sensor, a gyroscope, and an azimuth sensor are mounted on the robot, and a complementary filter is designed and implemented to compensate for the complementary drawbacks of each sensor and to fuse their information. Experimental results show that the multisensor fusion system estimates the position and heading of the mobile robot more accurately than odometry alone.
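A minimal sketch of a complementary-filter heading fusion feeding dead reckoning, with an illustrative gain `k` (the paper's actual filter design and gains are not given in the abstract): the gyro is smooth but drifts, while the azimuth sensor is absolute but noisy, so each is used where it is strong.

```python
import math

def dead_reckon(pose, v, gyro_rate, azimuth, dt, k=0.9):
    """Update (x, y, theta): fuse the gyro (high-pass, smooth but drifting)
    with the azimuth sensor (low-pass, absolute but noisy), then integrate
    the wheel-odometry velocity with the fused heading."""
    x, y, th = pose
    th = k * (th + gyro_rate * dt) + (1 - k) * azimuth  # complementary fusion
    return x + v * dt * math.cos(th), y + v * dt * math.sin(th), th
```

Using the fused heading instead of the raw odometry heading is what bounds the accumulated position error during wall following and obstacle avoidance.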

Real Time Motion Processing for Autonomous Navigation

  • Kolodko, J.;Vlacic, L.
    • International Journal of Control, Automation, and Systems
    • /
    • v.1 no.1
    • /
    • pp.156-161
    • /
    • 2003
  • An overview of our approach to autonomous navigation is presented, showing how motion information can be integrated into existing navigation schemes. Particular attention is given to our short-range motion estimation scheme, which makes a number of unique assumptions about the nature of the visual environment that allow a direct fusion of visual and range information. Graduated non-convexity is used to solve the resulting non-convex minimisation problem. Experimental results show the advantages of our fusion technique.
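The graduated non-convexity (GNC) strategy the abstract names can be illustrated on a toy robust estimation problem — a sketch of the idea only, not the paper's motion-estimation formulation: start from the convex least-squares cost and anneal toward a non-convex robust cost, re-solving at each stage so the solution tracks the gradually sharpening objective.

```python
def gnc_location(data, iters=60, anneal=0.8, sigma_min=0.1):
    """Toy graduated non-convexity: robust 1-D location estimate.

    Begin at the convex least-squares solution (the plain mean), then
    repeatedly re-solve a weighted mean while shrinking sigma, which
    sharpens a Cauchy-type robust cost from near-quadratic (convex)
    toward its non-convex limit that down-weights outliers."""
    est = sum(data) / len(data)                        # convex starting point
    sigma = max(abs(x - est) for x in data) or 1.0     # broad scale: cost ~ quadratic
    for _ in range(iters):
        w = [sigma**2 / (sigma**2 + (x - est)**2) for x in data]
        est = sum(wi * xi for wi, xi in zip(w, data)) / sum(w)
        sigma = max(sigma * anneal, sigma_min)         # graduate the non-convexity
    return est
```

Solving the convex relaxation first gives a basin of attraction that plain gradient descent on the final non-convex cost would not have, which is the same reason GNC suits the paper's non-convex motion-estimation objective.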