• Title/Summary/Keyword: camera image


A Moving Object Tracking System from a Moving Camera by Integration of Motion Estimation and Double Difference (BBME와 DD를 통합한 움직이는 카메라로부터의 이동물체 추적 시스템)

  • 설성욱;송진기;장지혜;이철헌;남기곤
    • Journal of KIISE:Software and Applications
    • /
    • v.31 no.2
    • /
    • pp.173-181
    • /
    • 2004
  • In this paper, we propose a system for automatic moving object detection and tracking in image sequences acquired from a moving camera. The proposed algorithm consists of moving object detection and tracking. A moving object is detected by integrating the BBME and DD methods. We segment the detected object using histogram back-projection, match it using histogram intersection, and extract and track it using XY-projection. Computer simulation results show that the proposed algorithm is reliable and can successfully detect and track a moving object in image sequences obtained from a moving camera.
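A minimal sketch of the segmentation, matching, and localization steps named above (histogram back-projection, histogram intersection, XY-projection), assuming OpenCV/NumPy and illustrative thresholds; the BBME/DD detection stage is omitted and this is not the authors' implementation:

```python
import cv2
import numpy as np

def backproject_segment(frame_hsv, target_hist):
    """Segment the candidate object region by back-projecting a hue histogram."""
    backproj = cv2.calcBackProject([frame_hsv], [0], target_hist, [0, 180], 1)
    _, mask = cv2.threshold(backproj, 50, 255, cv2.THRESH_BINARY)
    return mask

def histogram_intersection(hist_a, hist_b):
    """Histogram-intersection similarity between two hue histograms (higher = more similar)."""
    return cv2.compareHist(hist_a, hist_b, cv2.HISTCMP_INTERSECT)

def xy_projection_box(mask):
    """Locate the object by projecting the binary mask onto the X and Y axes."""
    cols = np.where(mask.sum(axis=0) > 0)[0]
    rows = np.where(mask.sum(axis=1) > 0)[0]
    if cols.size == 0 or rows.size == 0:
        return None
    return int(cols[0]), int(rows[0]), int(cols[-1]), int(rows[-1])  # x_min, y_min, x_max, y_max
```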

A NEW APPROACH OF CAMERA MODELING FOR LINEAR PUSHBROOM IMAGES

  • Jung, Hyung-Sup;Kang, Myung-Ho;Lee, Yong-Woong;Won, Joong-Sun
    • Proceedings of the KSRS Conference
    • /
    • 2003.11a
    • /
    • pp.1162-1164
    • /
    • 2003
  • The methods of geometric reconstruction and sensor calibration of satellite linear pushbroom images are investigated. The sensor model used is based on the SPOT model developed by Kraiky. The satellite trajectory is approximated as a Keplerian trajectory. Four orbit parameters, the longitude of the ascending node (${\omega}$), the inclination of the orbit plane (I), the argument of latitude of the satellite (W), and the distance between the Earth's center and the satellite, are used for the camera modeling. The time-dependent orbit parameters are expressed as quadratic polynomials. SPOT-5 images have been used for validation tests. The RMSE over 20 GCPs is 1.763 m and the RMSE over 5 check points is 2.470 m. Because the ground resolution of SPOT-5 is 2.5 m, the result obtained in this study has good accuracy. It demonstrates that the sensor model developed in this study can be used to reconstruct the geometry of satellite images acquired with a pushbroom camera.
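The abstract states that the time-dependent orbit parameters are expressed as quadratic polynomials in time. A minimal illustration of that idea with NumPy, using hypothetical ephemeris samples for the argument of latitude; it is not the authors' SPOT-5 adjustment:

```python
import numpy as np

# Hypothetical ephemeris samples: time in seconds and the argument of latitude W in degrees.
t = np.array([0.0, 3.0, 6.0, 9.0, 12.0])
W = np.array([10.02, 10.21, 10.40, 10.60, 10.79])

# Each time-dependent orbit parameter is modeled as a quadratic polynomial in time:
#   W(t) = a0 + a1*t + a2*t**2
a2, a1, a0 = np.polyfit(t, W, deg=2)          # coefficients, highest degree first

def W_of_t(time_s):
    """Evaluate the quadratic model of the argument of latitude at an arbitrary time."""
    return a0 + a1 * time_s + a2 * time_s ** 2

print(W_of_t(7.5))                             # interpolated value between samples
```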

A Monocular Vision Based Technique for Estimating Direction of 3D Parallel Lines and Its Application to Measurement of Pallets (모노 비전 기반 3차원 평행직선의 방향 추정 기법 및 파렛트 측정 응용)

  • Kim, Minhwan;Byun, Sungmin;Kim, Jin
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.11
    • /
    • pp.1254-1262
    • /
    • 2018
  • Many parallel lines appear in real-life scenes, and they are useful for analyzing the structure of objects or buildings. In this paper, a vision-based technique for estimating the three-dimensional direction of parallel lines is suggested; it uses a calibrated camera and is applicable to an image captured by that camera. The correctness of the technique is described and discussed theoretically in this paper. The technique is well suited to measuring the orientation of a pallet in a warehouse, because a pair of parallel lines is easily detected in the front plane of the pallet. It thereby enables a forklift with a well-calibrated camera to engage the pallet automatically. Such a forklift can engage a pallet on a storage rack as well as one on the ground. The usefulness of the suggested technique for other applications is also discussed. We conducted an experiment measuring a real commercial pallet at various orientations and distances and found that the technique works correctly and accurately.
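With a calibrated camera, the 3D direction of a pair of parallel lines can be recovered from their vanishing point as d ∝ K⁻¹v, which is the standard construction behind techniques of this kind. A minimal NumPy sketch of that construction, not necessarily the exact estimation procedure of the paper:

```python
import numpy as np

def direction_from_parallel_lines(p1, p2, p3, p4, K):
    """Estimate the unit 3D direction (in the camera frame) of a pair of parallel lines.

    p1, p2 are pixel points on the first image line, p3, p4 on the second;
    K is the 3x3 intrinsic matrix of the calibrated camera.
    """
    h = lambda p: np.array([p[0], p[1], 1.0])        # homogeneous pixel coordinates
    l1 = np.cross(h(p1), h(p2))                      # image line through p1 and p2
    l2 = np.cross(h(p3), h(p4))                      # image line through p3 and p4
    v = np.cross(l1, l2)                             # vanishing point of the parallel pair
    d = np.linalg.inv(K) @ v                         # ray direction through the camera center
    return d / np.linalg.norm(d)
```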

Development of Real-time Flatness Measurement System of COF Film using Pneumatic Pressure (공압을 이용한 COF 필름의 실시간 위치 평탄도 측정 시스템 개발)

  • Kim, Yong-Kwan;Kim, JaeHyun;Lee, InHwan
    • Journal of the Korean Society of Manufacturing Process Engineers
    • /
    • v.20 no.2
    • /
    • pp.101-106
    • /
    • 2021
  • In this paper, an inspection system is developed in which pneumatic instruments stretch the film with compressed air so that the curl problem can be overcome. When the pneumatic system is applied, a line-scan camera should be used instead of an area camera, because the air pressure bends the COF surface into an arc. The distance between the COF and the inspection camera must be kept constant to obtain a clear image, so the position of the COF has to be monitored in real time. Operating software has also been developed that switches the pneumatic system on and off, determines the COF position using camera vision, displays the contour of the COF side view, and sends self-diagnosis results. The developed system was examined using an actual COF roll, which confirms that it can be an effective device for inspecting COF rolls in process.
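A minimal sketch of the real-time position monitoring loop described above; `read_apex_distance_mm` and `set_valve` are hypothetical placeholder interfaces and the distances are assumed values, not the authors' system:

```python
TARGET_DISTANCE_MM = 25.0   # assumed working distance between the line-scan camera and the COF
TOLERANCE_MM = 0.5          # assumed allowable deviation for a sharp image

def read_apex_distance_mm():
    """Placeholder: return the measured distance from the camera to the apex of the COF arc."""
    raise NotImplementedError("replace with the actual vision-based position measurement")

def set_valve(delta_pressure):
    """Placeholder: adjust the pneumatic regulator by delta_pressure (arbitrary units)."""
    raise NotImplementedError("replace with the actual pneumatic interface")

def monitor_once():
    """One real-time monitoring step: check the COF position and correct the air pressure."""
    distance = read_apex_distance_mm()
    error = distance - TARGET_DISTANCE_MM
    if abs(error) > TOLERANCE_MM:
        # The sign and size of the correction depend on the actual geometry and regulator.
        set_valve(delta_pressure=0.1 if error > 0 else -0.1)
    return distance, error
```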

Implementation of Enhanced Vision for an Autonomous Map-based Robot Navigation

  • Roland, Cubahiro;Choi, Donggyu;Kim, Minyoung;Jang, Jongwook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.41-43
    • /
    • 2021
  • Robot Operating System (ROS) has been a prominent and successful framework in the robotics industry and academia. However, the framework has long been focused on, and limited to, robot navigation and manipulation of objects in the environment. This focus leaves out other important fields such as speech recognition and vision. Our goal is to take advantage of ROS's capacity to integrate additional libraries of programming functions aimed at real-time computer vision with a depth-image camera. In this paper, we focus on the implementation of an upgraded vision system with the help of a depth camera, which provides high-quality data for a much enhanced and more accurate understanding of the environment. The varied data from the cameras are then incorporated into the ROS communication structure for any potential use. For this particular case, the system uses OpenCV libraries to manipulate the data from the camera and provide face-detection capabilities to the robot while it navigates an indoor environment. The whole system has been implemented and tested on a TurtleBot3 and a Raspberry Pi 4.
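A minimal sketch of the kind of ROS 1 node described above: subscribing to a color image topic, converting it with cv_bridge, and running an OpenCV Haar-cascade face detector. The topic name and cascade file are assumptions, the depth stream is omitted, and this is not the authors' exact implementation:

```python
#!/usr/bin/env python3
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def image_callback(msg):
    """Convert the incoming ROS color image to OpenCV and count detected faces."""
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    rospy.loginfo("faces detected: %d", len(faces))

if __name__ == "__main__":
    rospy.init_node("face_detector")
    # The topic name depends on the camera driver in use; adjust as needed.
    rospy.Subscriber("/camera/color/image_raw", Image, image_callback, queue_size=1)
    rospy.spin()
```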

Egocentric Vision for Human Activity Recognition Using Deep Learning

  • Malika Douache;Badra Nawal Benmoussat
    • Journal of Information Processing Systems
    • /
    • v.19 no.6
    • /
    • pp.730-744
    • /
    • 2023
  • The topic of this paper is the recognition of human activities using egocentric vision, in particular video captured by body-worn cameras, which can be helpful for video surveillance, automatic search, and video indexing. It can also help assist elderly and frail persons and improve their lives. Human activity recognition remains problematic because of the large variations in how actions are executed, especially when recognition is realized through an external device, such as a robot, acting as a personal assistant. The inferred information is used both online to assist the person and offline to support the personal assistant. As the proposed method is robust against the various factors of variability in action execution, the main purpose of this paper is to perform efficient and simple recognition from egocentric camera data only, using a convolutional neural network and deep learning. In terms of accuracy improvement, simulation results outperform the current state of the art by a significant margin: 61% when using egocentric camera data only, more than 44% when using egocentric camera and several stationary camera data, and more than 12% when using both inertial measurement unit (IMU) and egocentric camera data.
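A minimal PyTorch sketch of frame-level activity classification with a convolutional neural network; the architecture, input size, and number of classes are assumptions, not the network used in the paper:

```python
import torch
import torch.nn as nn

class EgoActivityCNN(nn.Module):
    """Small CNN that classifies a single egocentric RGB frame into one of N activities."""
    def __init__(self, num_activities=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_activities)

    def forward(self, x):                      # x: (batch, 3, H, W)
        h = self.features(x).flatten(1)        # global-average-pooled feature vector
        return self.classifier(h)              # raw class scores (logits)

# Example: score two random 224x224 frames.
model = EgoActivityCNN(num_activities=10)
logits = model(torch.randn(2, 3, 224, 224))   # shape: (2, 10)
```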

Distortion Removal and False Positive Filtering for Camera-based Object Position Estimation (카메라 기반 객체의 위치인식을 위한 왜곡제거 및 오검출 필터링 기법)

  • Sil Jin;Jimin Song;Jiho Choi;Yongsik Jin;Jae Jin Jeong;Sang Jun Lee
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.19 no.1
    • /
    • pp.1-8
    • /
    • 2024
  • Robotic arms have been widely utilized in various labor-intensive industries such as manufacturing, agriculture, and food services, contributing to increased productivity. In the development of industrial robotic arms, camera sensors have many advantages due to their cost-effectiveness and small size. However, estimating object positions is a challenging problem, and it critically affects the robustness of object manipulation functions. This paper proposes a method for estimating the 3D positions of objects and applies it to a pick-and-place task. A deep learning model is utilized to detect 2D bounding boxes in the image plane, and the pinhole camera model is employed to compute the object positions. To improve the robustness of measuring the 3D positions of objects, we analyze the effect of lens distortion and introduce a false-positive filtering process. Experiments were conducted on a real-world scenario of moving medicine bottles with a camera-based manipulator. Experimental results demonstrate that the distortion removal and false-positive filtering are effective in improving the position estimation precision and the manipulation success rate.
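A minimal OpenCV sketch of the two steps highlighted above: undistorting a detected pixel and back-projecting it with the pinhole model, plus one simple false-positive filter based on a plausible workspace; the depth source and workspace bounds are assumptions, not the authors' exact pipeline:

```python
import cv2
import numpy as np

def pixel_to_3d(u, v, depth_m, K, dist_coeffs):
    """Undistort a detected pixel and back-project it to a 3D point in the camera frame.

    (u, v) is e.g. the center of a detected bounding box, depth_m the measured distance
    along the optical axis, K the 3x3 intrinsic matrix, dist_coeffs the lens distortion
    coefficients (k1, k2, p1, p2, k3).
    """
    pts = np.array([[[u, v]]], dtype=np.float32)
    x, y = cv2.undistortPoints(pts, K, dist_coeffs)[0, 0]   # normalized image coordinates
    return np.array([x * depth_m, y * depth_m, depth_m])

def filter_false_positives(points_3d, workspace_min, workspace_max):
    """One simple filter: reject detections whose 3D position lies outside a plausible workspace."""
    return [p for p in points_3d
            if np.all(p >= workspace_min) and np.all(p <= workspace_max)]
```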

Effective Compression Technique of Multi-view Image expressed by Layered Depth Image (계층적 깊이 영상으로 표현된 다시점 영상의 효과적인 압축 기술)

  • Jee, Inn-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.14 no.4
    • /
    • pp.29-37
    • /
    • 2014
  • Since multi-view video consists of color and depth images from a number of cameras, it involves a huge amount of data, so a new compression technique is indispensable for reducing it. Recently, an effective compression technique for multi-view video based on the layered depth image (LDI) concept has drawn attention. This method uses depth information from several viewpoints and a warping function to synthesize the multi-view color and depth images into a single data structure. In this paper, we use the actual distance to resolve overlaps in the layered depth image, which reduces the data required for reconstruction in the color-based transform. In the experimental results, we confirmed high compression performance and good quality of the reconstructed images.
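A minimal sketch of the layered depth image concept referred to above: each pixel of the reference view stores every warped color sample together with its actual depth, and overlapping samples within a depth tolerance are merged; this is a conceptual data structure, not the compression scheme proposed in the paper:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LDIPixel:
    color: Tuple[int, int, int]   # (R, G, B) sample warped into the reference view
    depth: float                  # actual distance along the reference ray

class LayeredDepthImage:
    """Each (x, y) location keeps every surface seen from any viewpoint, ordered by depth,
    so several warped color/depth views collapse into a single data structure."""

    def __init__(self, width: int, height: int):
        self.layers: List[List[List[LDIPixel]]] = [
            [[] for _ in range(width)] for _ in range(height)]

    def insert(self, x: int, y: int, color, depth: float, depth_tol: float = 1e-3):
        """Insert a warped sample; samples within depth_tol are treated as the same surface."""
        cell = self.layers[y][x]
        for p in cell:
            if abs(p.depth - depth) < depth_tol:
                return                          # overlapping sample of an existing surface
        cell.append(LDIPixel(color, depth))
        cell.sort(key=lambda p: p.depth)        # keep layers ordered front to back
```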

The Faulty Detection of COG Using Image Subtraction (이미지 정합을 이용한 COG 불량 검출)

  • Joo, Ki-See
    • Proceedings of KOSOMES biannual meeting
    • /
    • 2005.11a
    • /
    • pp.203-208
    • /
    • 2005
  • The COG (Chip on Glass), which must be measured at a scale of a few micrometers, is captured by a line-scan camera for accurate chip inspection, but the capture is very sensitive to scan speed and lighting conditions. In this paper, we propose methods to increase the accuracy of fault detection by image subtraction. Image subtraction detects faults by subtracting the image of a 'perfect' COG from that of the sample under test. For image subtraction to be successful, the two images must be precisely registered. The two images are registered by area-segmentation pattern matching, and the result image is obtained by applying the gradient mask image to the image before performing the subtraction. A series of experiments showed that the proposed algorithm gives a substantial improvement over other image subtraction methods.
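A minimal OpenCV sketch of the image subtraction step: subtracting a registered 'perfect' reference from the test image and suppressing differences along strong edges with a gradient mask; the registration itself and all thresholds are assumptions, not the authors' exact method:

```python
import cv2

def detect_defects(reference_path, sample_path, diff_threshold=30):
    """Mark candidate COG defects by subtracting a registered 'perfect' reference image
    from the image of the sample under test."""
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    sample = cv2.imread(sample_path, cv2.IMREAD_GRAYSCALE)

    # The two images are assumed to be precisely registered beforehand (e.g. by
    # pattern matching); a gradient (edge) mask suppresses residual differences
    # caused by slight misalignment along strong edges.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    edge_mask = cv2.morphologyEx(ref, cv2.MORPH_GRADIENT, kernel)

    diff = cv2.absdiff(ref, sample)
    diff[edge_mask > 0] = 0                     # ignore pixels on strong edges

    _, defects = cv2.threshold(diff, diff_threshold, 255, cv2.THRESH_BINARY)
    return defects                              # nonzero pixels mark candidate defects
```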

Development of Hybrid Image Stabilization System for a Mobile Robot (이동 로봇을 위한 하이브리드 이미지 안정화 시스템의 개발)

  • Choi, Yun-Won;Kang, Tae-Hun;Saitov, Dilshat;Lee, Dong-Chun;Lee, Suk-Gyu
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.17 no.2
    • /
    • pp.157-163
    • /
    • 2011
  • This paper proposes a hybrid image stabilizing system which uses both optical image stabilizing system based on EKF (Extended Kalman Filter) and digital image stabilization based on SURF (Speeded Up Robust Feature). Though image information is one of the most efficient data for object recognition, it is susceptible to noise which results from internal vibration as well as external factors. The blurred image obtained by the camera mounted on a robot makes it difficult for the robot to recognize its environment. The proposed system estimates shaking angle through EKF based on the information from inclinometer and gyro sensor to stabilize the image. In addition, extracting the feature points around rotation axis using SURF which is robust to change in scale or rotation enhances processing speed by removing unnecessary operations using Hessian matrix. The experimental results using the proposed hybrid system shows its effectiveness in extended frequency range.