• Title/Summary/Keyword: fusion of sensor information

Monitoring of the GMAW Process Using Infra-red Sensor (적외선 센서를 이용한 금속아크 용접 공정 모니터링)

  • 정영재;김일수;박창언;김수광
    • Proceedings of the KWS Conference
    • /
    • 1996.10a
    • /
    • pp.142-144
    • /
    • 1996
  • This paper discusses the application of infra-red thermography to monitoring the robotic arc welding process and its potential for weld bead dimension and seam tracking control. Thermal images illustrating weld pool formation dynamics and heat distribution phenomena are digitized and their characteristics are measured. At each sampling point the maximum depth of penetration is recorded together with additional information regarding weld bead placement relative to the seam location. Deficiencies such as incomplete penetration and lack of side-wall fusion are readily identified and can be remedied during the process. The technique can help increase productivity and weld quality by minimizing the amount of post-process rework and inspection otherwise needed.
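
The abstract does not give the image-processing details, so the following is only a minimal sketch of the kind of per-sample feature extraction it describes: taking the hottest pixel of a digitized thermal frame as a penetration proxy and its offset from a nominal seam position as a bead-placement cue. The function name, thresholds, and NumPy-array input are illustrative assumptions, not the authors' method.

```python
import numpy as np

def extract_weld_features(thermal_frame: np.ndarray, seam_column: int):
    """Extract simple monitoring features from one digitized infra-red frame.

    thermal_frame : 2D array of pixel intensities (proxy for temperature).
    seam_column   : expected seam position in pixel columns (hypothetical input).
    """
    # The hottest pixel is taken as a rough proxy for the deepest penetration point.
    peak_row, peak_col = np.unravel_index(np.argmax(thermal_frame), thermal_frame.shape)
    peak_intensity = thermal_frame[peak_row, peak_col]

    # Lateral offset of the hot spot from the nominal seam position hints at
    # how the weld bead is placed relative to the seam.
    seam_offset = peak_col - seam_column

    # A weak hot spot or a large offset can flag incomplete penetration or
    # lack of side-wall fusion (thresholds are illustrative only).
    defect_suspected = peak_intensity < 180 or abs(seam_offset) > 15
    return peak_intensity, seam_offset, defect_suspected
```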

Fitting to Panchromatic Image for Pansharpening Combining Point-Jacobian MAP Estimation

  • Lee, Sang-Hoon
    • Korean Journal of Remote Sensing
    • /
    • v.24 no.5
    • /
    • pp.525-533
    • /
    • 2008
  • This study presents a pansharpening method, called FitPAN, to synthesize multispectral images at a higher resolution by exploiting a high-resolution image acquired in the panchromatic modality. FitPAN is a modified version of the quadratic programming approach proposed in (Lee, 2008), which is designed to generate synthesized multispectral images similar to the multispectral images that would have been observed by the corresponding sensor at the same high resolution. The proposed scheme aims at reconstructing the multispectral images at the higher resolution with as little spectral distortion as possible. This study also proposes a sharpening process to remove distortions that appear in the fused high-resolution image; it employs Point-Jacobian MAP iteration utilizing the contextual information of the original panchromatic image. In this study, the new method was applied to IKONOS 1 m panchromatic and 4 m multispectral data, and the results were compared with those of several current approaches. Experimental results demonstrate that the proposed scheme achieves significant improvement with respect to both spectral and block distortion.
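
The exact Point-Jacobian MAP update is not spelled out in the abstract; the sketch below only illustrates the general idea of a point-wise, Jacobi-style iteration that refines a fused band while respecting edges of the panchromatic image. The edge weight, the data-term weight lam, and the function name are illustrative assumptions, not the FitPAN formulation.

```python
import numpy as np

def point_jacobi_refine(fused_band, pan, lam=0.5, n_iter=30):
    """Jacobi-style iterative refinement of one fused multispectral band.

    Each pixel is replaced by a compromise between its observed (fused) value
    and its four neighbours, with neighbour contributions weighted down across
    strong panchromatic edges so detail is preserved while block artefacts
    are reduced.
    """
    x = fused_band.astype(float).copy()
    y = fused_band.astype(float)               # observation (data) term
    p = pan.astype(float)
    h, w = x.shape

    for _ in range(n_iter):
        xp = np.pad(x, 1, mode="edge")
        pp = np.pad(p, 1, mode="edge")
        num = lam * y
        den = lam * np.ones_like(y)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nb  = xp[1 + dr:1 + dr + h, 1 + dc:1 + dc + w]   # neighbour band value
            pnb = pp[1 + dr:1 + dr + h, 1 + dc:1 + dc + w]   # neighbour pan value
            wgt = np.exp(-np.abs(p - pnb) / 10.0)            # illustrative edge weight
            num += wgt * nb
            den += wgt
        x = num / den
    return x
```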

Robot controller with 32-bit DSP chip (32 비트 DSP를 사용한 로보트 제어기의 개발)

  • 김성권;황찬영;전병환;이규철;홍용준
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 1991.10a
    • /
    • pp.292-298
    • /
    • 1991
  • A new 6-axis robot controller based on the high-speed 32-bit floating-point DSP TMS320C30 has been developed at Samsung Electronics. The controller is composed of an Intel 80386 microprocessor as the main controller and the TMS320C30 DSP chip as the joint position controller. The controller is characterized by a fast 200 us sampling period and fast response. The main controller supports MS-DOS, kinematics, trajectory planning, and sensor fusion functions covering vision, PLC, and MAP. A single high-speed DSP chip controls all six robot axes simultaneously within the 200 us period. The control law applied in the joint position controller is a PID controller with velocity feedforward. Performance tests, such as command following and continuous-path (CP) motion, were conducted on the controller integrated with a 6-axis robot developed at Samsung Electronics, and the results showed good performance. This controller can also coordinate system control with other controllers, communicate with higher-level controllers, and process visual information.
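
As a concrete illustration of the control law named above, here is a minimal sketch of a discrete PID joint position loop with a velocity feedforward term running at the 200 us sampling period. The class name and gains are assumptions for illustration; the paper's actual gain values and implementation are not given in the abstract.

```python
class PidWithVelocityFeedforward:
    """Discrete PID position loop plus velocity feedforward (one joint)."""

    def __init__(self, kp: float, ki: float, kd: float, kff: float, dt: float = 200e-6):
        self.kp, self.ki, self.kd, self.kff, self.dt = kp, ki, kd, kff, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, position_ref: float, velocity_ref: float, position_meas: float) -> float:
        # Position error drives the PID feedback terms.
        error = position_ref - position_meas
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # Feedforward on the commanded joint velocity reduces tracking lag.
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative + self.kff * velocity_ref)
```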

Particle Filter Based Robust Multi-Human 3D Pose Estimation for Vehicle Safety Control (차량 안전 제어를 위한 파티클 필터 기반의 강건한 다중 인체 3차원 자세 추정)

  • Park, Joonsang;Park, Hyungwook
    • Journal of Auto-vehicle Safety Association
    • /
    • v.14 no.3
    • /
    • pp.71-76
    • /
    • 2022
  • In autonomous driving cars, 3D pose estimation can be one of the effective methods to enhance safety control for OOP (Out of Position) passengers. There have been many studies on human pose estimation using a camera. Previous methods, however, have limitations in automotive applications: CNN methods are unreliable due to unexplainable failures, and other methods perform poorly. This paper proposes a robust real-time multi-human 3D pose estimation architecture for use in a vehicle with a monocular RGB camera. Using a particle filter, our approach integrates CNN 2D/3D pose measurements with information available in the vehicle. Computer simulations were performed to confirm the accuracy and robustness of the proposed algorithm.
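
The abstract names a particle filter that fuses CNN pose measurements with other in-vehicle information but gives no equations; the following is a minimal sketch of one predict/update/resample cycle for a single 3D joint, assuming a random-walk motion model and a Gaussian measurement likelihood (both are placeholders, not the paper's models).

```python
import numpy as np

def particle_filter_step(particles, weights, cnn_pose_meas, meas_std=0.05, motion_std=0.02):
    """One cycle of a particle filter over a joint's 3D position.

    particles     : (N, 3) candidate positions.
    weights       : (N,) normalized particle weights.
    cnn_pose_meas : (3,) 3D position reported by the CNN pose estimator.
    """
    n = len(particles)
    # Predict: random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)
    # Update: Gaussian likelihood of the CNN measurement.
    d2 = np.sum((particles - cnn_pose_meas) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std ** 2)
    weights = weights / np.sum(weights)
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = np.random.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    estimate = np.average(particles, axis=0, weights=weights)
    return particles, weights, estimate
```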

Fault Injection Attacks on MEMS Sensors and Countermeasures (MEMS 센서대상 오류주입 공격 및 대응방법)

  • Cho, Hyunsu;Lee, Sunwoo;Choi, Wonsuk
    • Review of KIISC
    • /
    • v.31 no.1
    • /
    • pp.15-23
    • /
    • 2021
  • Unmanned vehicles equipped with autonomous driving systems can be classified as aerial, maritime, or ground vehicles depending on their operating environment, and related technology is being actively developed in all of these fields. An unmanned vehicle carries an autonomous driving system that perceives the external environment and judges the situation on its own, so it must recognize its surroundings using data collected from sensors. For this reason, the security field has recently published studies that induce malfunctions of unmanned vehicles by injecting signal faults into their on-board sensors. Because signal fault injection attacks are carried out at the physical level (PHY-level), detecting them at the software level is very difficult. The detection methods known to date rely on sensor fusion using multiple sensors, but in practice it is difficult to mount several redundant sensors with the same function on a single unmanned vehicle, and detecting signal fault injection attacks with only a single sensor has not yet been studied. In this paper, we reproduce signal fault injection attacks against MEMS sensors, the sensors most widely used in unmanned vehicles, and propose a method for detecting such attacks in a single-sensor environment.

Research on soil composition measurement sensor configuration and UI implementation (토양 성분 측정 센서 구성 및 UI 구현에 관한 연구)

  • Ye Eun Park;Jin Hyoung Jeong;Jae Hyun Jo;Young Yoon Chang;Sang Sik Lee
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.17 no.1
    • /
    • pp.76-81
    • /
    • 2024
  • Recently, agricultural methods have been shifting from experience-based to data-based agriculture. Changes in agricultural production due to the 4th Industrial Revolution are largely occurring in three areas: smart sensing and monitoring, smart analysis and planning, and smart control. To realize open-field smart agriculture, information on the physical and chemical properties of soil is essential. Conventional physicochemical measurements are conducted in a laboratory after collecting samples, which consumes considerable cost, labor, and time, so measurement technology that can quickly measure soil properties in the field is urgently needed. In addition, a portable soil analysis system that the operator can carry and use in Korea's rice paddies, open fields, and greenhouse facilities is required. To address this, our goal is to develop and commercialize software that can collect soil samples and analyze the resulting information. In this study, basic soil composition measurement was conducted using sensors consisting of a hardness sensor and electrode sensors. In future research, we plan to develop a system that performs soil sampling using a CCD camera, an ultrasonic sensor, and a sampler. Accordingly, we implemented a sensor and soil-analysis UI that can measure and analyze the soil condition in real time, displaying hardness measured with a load cell and moisture, pH, and EC measured with conductivity electrodes.

Three-dimensional Machine Vision System based on moire Interferometry for the Ball Shape Inspection of Micro BGA Packages (마이크로 BGA 패키지의 볼 형상 시각검사를 위한 모아레 간섭계 기반 3차원 머신 비젼 시스템)

  • Kim, Min-Young
    • Journal of the Microelectronics and Packaging Society
    • /
    • v.19 no.1
    • /
    • pp.81-87
    • /
    • 2012
  • This paper focuses on an in-line three-dimensional measurement system for micro balls on micro Ball-Grid-Array (BGA) packages. Most visual inspection systems still suffer from the complicated reflection characteristics of micro balls. For accurate shape measurement, a specially designed visual sensor system is proposed based on the sensing principle of phase-shifting moire interferometry. The system consists of a pattern projection system with four projection subsystems and an imaging system. The four subsystems project patterns from spatially different directions so that the target objects are illuminated from different incident directions. For phase shifting, the grating pattern of each subsystem is shifted in regular steps by a PZT actuator. To efficiently remove specular noise and shadowed areas on the BGA balls, a compact multiple-pattern projection and imaging system is implemented and tested. In particular, a sensor fusion algorithm based on Bayesian sensor fusion theory is proposed to integrate the four information sets acquired from the multiple projections into one. To verify how the proposed system works, a series of experiments is performed and the results are analyzed in detail.
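
The Bayesian fusion step is not detailed in the abstract; under the common assumption of independent Gaussian measurement errors, fusing the four per-projection height estimates reduces to an inverse-variance weighted mean, sketched below. The function name and the idea of encoding specular/shadow pixels as high-variance entries are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def fuse_height_maps(heights, variances):
    """Fuse per-projection height maps into one estimate.

    heights   : list of 2D height maps, one per projection direction.
    variances : matching 2D maps of measurement variance; pixels corrupted by
                specular noise or shadow get a large variance so they
                contribute almost nothing to the fused result.
    """
    h = np.stack(heights)
    inv_var = 1.0 / np.stack(variances)
    fused = np.sum(h * inv_var, axis=0) / np.sum(inv_var, axis=0)
    fused_var = 1.0 / np.sum(inv_var, axis=0)   # variance of the fused estimate
    return fused, fused_var
```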

The Study on the Development of the HD(High Definition) Level Triple Streaming Hybrid Security Camera (HD급 트리플 스트리밍 하이브리드 보안 카메라 개발에 관한 연구)

  • Lee, JaeHee;Cho, TaeKyung;Seo, ChangJin
    • The Transactions of the Korean Institute of Electrical Engineers P
    • /
    • v.66 no.4
    • /
    • pp.252-257
    • /
    • 2017
  • This paper describes the development and implementation of an HD-level triple-streaming hybrid security camera that provides three types of video output (HD-SDI, EX-SDI, Analog). We design the hardware and program the firmware supporting the main and sub functions. The MN34229PL is used as the image sensor, the EN778 and EN331 as image processors, the KA909A for the reset, iris, and day&night functions, and the A3901SEJTR-T for zoom/focus control. The performance of the developed security camera was tested by the broadcasting and communication fusion testing department of TTA (Telecommunication Technology Association). The developed camera delivers the three outputs (HD-SDI, EX-SDI, Analog), achieves world-class jitter and eye-pattern amplitude values, and exceeds world-class levels in signal-to-noise ratio, minimum illumination, and power consumption. Owing to this improved performance and functionality, the HD-level triple-streaming hybrid security camera presented in this paper is expected to be widely used in security applications.

Asynchronous Guidance Filter Design Based on Strapdown Seeker and INS Information (스트랩다운 탐색기 및 INS 정보를 이용한 비동기 유도필터 설계)

  • Park, Jang-Seong;Kim, Yun-young;Park, Sanghyuk;Kim, Yoon-Hwan
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • v.48 no.11
    • /
    • pp.873-880
    • /
    • 2020
  • In this paper, we propose a guidance filter that estimates the line-of-sight rate from strapdown seeker measurements and INS (Inertial Navigation System) information. The measurements of the proposed guidance filter consist of the LOS (Line of Sight) angle and the relative position, which can be calculated from the seeker's measurements, the INS information, and the known target position. The filter is formulated as an asynchronous filter so that it can use the outputs of the two sensors, which are neither synchronized nor updated at the same period. The proposed filter reduces the parasitic-loop effect caused by the seeker's large time delay and improves estimation performance.
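
The abstract does not give the filter equations; the sketch below only shows the asynchronous pattern it describes, in which each sensor's measurement is fused at its own arrival time after propagating the state to that time. The constant-velocity model, noise values, and class name are illustrative assumptions, not the paper's design.

```python
import numpy as np

class AsynchronousFilter:
    """Single-axis [position, velocity] filter with asynchronous updates."""

    def __init__(self, x0, p0):
        self.x, self.P, self.t = np.asarray(x0, float), np.asarray(p0, float), 0.0

    def predict(self, t_new):
        # Propagate the state to the requested time with a constant-velocity model.
        dt = t_new - self.t
        F = np.array([[1.0, dt], [0.0, 1.0]])
        Q = np.diag([1e-4, 1e-3]) * max(dt, 0.0)
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q
        self.t = t_new

    def update_position(self, z, r, t_meas):
        # Fuse a position-type measurement (seeker-derived or INS-derived)
        # whenever it arrives, regardless of the other sensor's timing.
        self.predict(t_meas)
        H = np.array([[1.0, 0.0]])
        S = (H @ self.P @ H.T).item() + r
        K = (self.P @ H.T) / S
        self.x = self.x + K.flatten() * (z - (H @ self.x).item())
        self.P = (np.eye(2) - K @ H) @ self.P
```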

Foreground Segmentation and High-Resolution Depth Map Generation Using a Time-of-Flight Depth Camera (깊이 카메라를 이용한 객체 분리 및 고해상도 깊이 맵 생성 방법)

  • Kang, Yun-Suk;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37C no.9
    • /
    • pp.751-756
    • /
    • 2012
  • In this paper, we propose a foreground extraction and depth map generation method using a time-of-flight (TOF) depth camera. Although the TOF depth camera captures the scene's depth information in real time, its output contains inherent noise and distortion. Therefore, we perform several preprocessing steps such as image enhancement, segmentation, and 3D warping, and then use the TOF depth data to identify depth-discontinuity regions. We then extract the foreground object and generate the depth map corresponding to the color image. The experimental results show that the proposed method efficiently generates the depth map even around object boundaries and in textureless regions.
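
Of the preprocessing steps listed above, the 3D-warping step lends itself to a short sketch: back-project each TOF depth pixel to a 3D point and re-project it into the colour camera. The calibration inputs, the assumption that the colour image has the same resolution as the depth image, and the function name are illustrative; this is not the paper's full pipeline.

```python
import numpy as np

def warp_depth_to_color(depth, K_d, K_c, R, t):
    """Warp a TOF depth image into the colour camera's viewpoint.

    depth    : (H, W) depth values from the TOF camera (after noise filtering).
    K_d, K_c : 3x3 intrinsic matrices of the depth and colour cameras.
    R, t     : rotation (3x3) and translation (3,) from depth to colour frame.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)]).reshape(3, -1).astype(float)
    # Back-project depth pixels to 3D points in the depth-camera frame.
    pts = np.linalg.inv(K_d) @ pix * depth.reshape(1, -1)
    # Transform into the colour-camera frame and project with its intrinsics.
    pts_c = R @ pts + t.reshape(3, 1)
    proj = K_c @ pts_c
    uc = proj[0] / proj[2]
    vc = proj[1] / proj[2]
    warped = np.full(depth.shape, np.inf)
    valid = (uc >= 0) & (uc < w) & (vc >= 0) & (vc < h) & (pts_c[2] > 0)
    # Keep the nearest depth when several points land on the same colour pixel.
    np.minimum.at(warped, (vc[valid].astype(int), uc[valid].astype(int)), pts_c[2][valid])
    return warped
```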