• Title/Summary/Keyword: camera image

Search Results: 4,918

Estimation of Wind Velocity Using Motion Tracking of a Balloon (풍선의 움직임 추적을 이용한 바람 속도 벡터 추정)

  • Oh, Hyeju;Jo, Sungbeom;Choi, Keeyoung
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.42 no.10 / pp.833-841 / 2014
  • This paper proposes an algorithm to estimate the wind velocity by tracking free-flying balloons. The balloons used in this method are expendable but inexpensive, which increases its practicality. Accurate 3D position information is obtained using multiple cameras, allowing the wind velocity of the local field to be estimated. The proposed system consists of an aerodynamic model of the balloon, a tracking algorithm based on image processing, and a velocity estimation algorithm. Unit tests of each algorithm were performed for verification, and the method is validated through a system simulation, with the main sources of error identified.
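
The velocity-estimation step in the abstract above can be illustrated with a small, hypothetical sketch: once the balloon's 3D positions have been reconstructed from the multiple cameras, the local wind velocity follows from differentiating the track over time (assuming the balloon drifts with the wind). This is not the authors' code; the function and example track are illustrative only.

```python
# Illustrative sketch only (not the paper's implementation): estimate the local
# wind velocity from a sequence of reconstructed 3D balloon positions by
# finite differences, assuming the balloon is in quasi-equilibrium with the wind.
import numpy as np

def wind_velocity_from_track(positions, timestamps):
    """positions: (N, 3) balloon positions [m]; timestamps: (N,) times [s]."""
    positions = np.asarray(positions, dtype=float)
    timestamps = np.asarray(timestamps, dtype=float)
    # Central differences give the balloon velocity at each sample.
    velocities = np.gradient(positions, timestamps, axis=0)
    # Average over the track to smooth out tracking noise.
    return velocities.mean(axis=0)

# Example: a balloon drifting east at ~2 m/s while slowly rising.
t = np.linspace(0.0, 10.0, 21)
track = np.stack([2.0 * t, 0.1 * t, 1.0 * t], axis=1)
print(wind_velocity_from_track(track, t))   # ~ [2.0, 0.1, 1.0]
```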

Intelligent Hexapod Mobile Robot using Image Processing and Sensor Fusion (영상처리와 센서융합을 활용한 지능형 6족 이동 로봇)

  • Lee, Sang-Mu;Kim, Sang-Hoon
    • Journal of Institute of Control, Robotics and Systems / v.15 no.4 / pp.365-371 / 2009
  • An intelligent hexapod mobile robot equipped with various sensors and a wireless camera is introduced. We show that this robot can detect objects reliably by combining the outputs of active sensors with an image processing algorithm. First, to detect objects, active sensors such as infrared and ultrasonic sensors are used together, and the distance between the object and the robot is calculated in real time from their outputs; the difference between the measured and calculated values is less than 5%. The paper also presents an effective visual detection system for moving objects based on color and motion information. The proposed method includes an object extraction and definition process that uses color transformation and AWUPC computation to decide whether a moving object is present. Weights are assigned to the results from the sensors and the camera, and the weighted results are combined into a single value representing the probability that an object lies within the limited distance. This sensor fusion technique improves the detection rate by at least 7% compared with using any individual sensor.
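
The final fusion step described in the abstract above (weighting each sensor's result and combining them into a single object-presence value) can be sketched as a normalized weighted sum. The weights and scores below are made-up examples, not the paper's tuned values.

```python
# Illustrative sketch (hypothetical weights): combine object-presence scores
# from infrared, ultrasonic and camera-based detection into one probability.
def fuse_detections(scores, weights):
    """scores/weights: dicts keyed by sensor name; scores lie in [0, 1]."""
    total_w = sum(weights[name] for name in scores)
    return sum(weights[name] * scores[name] for name in scores) / total_w

scores  = {"infrared": 0.8, "ultrasonic": 0.7, "camera": 0.9}   # example outputs
weights = {"infrared": 0.3, "ultrasonic": 0.3, "camera": 0.4}   # assumed weights
print(fuse_detections(scores, weights))   # 0.81: likelihood of an object nearby
```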

On the Measurement of the Depth and Distance from the Defocused Images using the Regularization Method (비초점화 영상에서 정칙화법을 이용한 깊이 및 거리 계측)

  • 차국찬;김종수
    • Journal of the Korean Institute of Telematics and Electronics B / v.32B no.6 / pp.886-898 / 1995
  • One way to measure distance in computer vision is to use focus and defocus. There are two approaches. The first calculates the distance from the images focused at a point (MMDFP: measuring the distance to the focal plane). The second measures the distance from the difference in camera parameters, namely the apertures or focal planes, of two images captured with different parameters (MMDCI: measuring the distance by comparing two images). The problem with existing MMDFP methods is deciding the threshold value for detecting the most sharply focused object in a defocused image; this can be solved by comparing only the error energy in a 3x3 window between two images. In MMDCI, the difficulty is the influence of the deflection effect. Therefore, to minimize its influence, we use two differently focused images instead of images with different apertures. First, the amount of defocus between the two images is measured by introducing regularization, and then the distance from the camera to the objects is calculated with a new distance-measurement equation. The simulation results show that the distance can be measured from two differently defocused images and that our approach is more robust than the different-aperture method on noisy images.
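
For context, the sketch below shows the classical thin-lens geometry that turns an estimated blur-circle radius into an object distance; the paper's contribution, estimating the relative defocus between two differently focused images via regularization, is not reproduced here, and all parameter values are made-up examples.

```python
# Classical depth-from-defocus geometry under a thin-lens model (cf. Pentland);
# not the regularization-based estimator of the paper. All values are examples.
def distance_from_blur(blur_radius, focal_len, sensor_dist, aperture, far_side=True):
    """All lengths in metres. blur_radius: radius of the blur circle on the sensor."""
    # Thin lens: blur_radius = (aperture / 2) * sensor_dist * |1/f - 1/u - 1/s|
    sign = -1.0 if far_side else 1.0
    inv_u = (1.0 / focal_len - 1.0 / sensor_dist
             + sign * 2.0 * blur_radius / (aperture * sensor_dist))
    return 1.0 / inv_u

# Example: 50 mm lens, sensor 52 mm behind the lens, 25 mm aperture (f/2).
print(distance_from_blur(blur_radius=5e-5, focal_len=0.05,
                         sensor_dist=0.052, aperture=0.025))   # ~1.44 m
```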


Moving Object Trajectory based on Kohonen Network for Efficient Navigation of Mobile Robot

  • Jin, Tae-Seok
    • Journal of information and communication convergence engineering / v.7 no.2 / pp.119-124 / 2009
  • This paper proposes a novel approach to estimating the real-time trajectory of a moving object. The object's position is obtained from the image data of a CCD camera, while a state estimator predicts its linear and angular velocities. To overcome the uncertainties and noise in the input data, an Extended Kalman Filter (EKF) and neural networks are used cooperatively. Since the EKF must approximate the nonlinear system by a linear model to estimate the states, errors and uncertainties remain. To resolve this problem, Kohonen networks, which adapt well to memorized input-output relationships, are used for the nonlinear region. As a type of neural network, the Kohonen network can also adapt effectively to dynamic variations and is robust against noise. This approach is motivated by the observation that the Kohonen network is a self-organizing, spatially oriented map, which makes it suitable for determining the trajectories of moving objects. The superiority of the proposed algorithm over the EKF alone is demonstrated through real experiments.
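
As a rough illustration of the state-estimation stage described above, the sketch below runs a plain constant-velocity Kalman filter on camera-derived positions; it is a simplified linear stand-in for the EKF, and the Kohonen-network correction is omitted. The time step and noise covariances are assumed values.

```python
# Minimal constant-velocity Kalman filter on camera-derived positions; a
# simplified, linear stand-in for the EKF stage described in the abstract.
import numpy as np

dt = 0.1                                    # assumed frame interval [s]
F = np.array([[1, 0, dt, 0],                # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],                 # the camera measures position only
              [0, 1, 0, 0]], dtype=float)
Q = np.eye(4) * 1e-3                        # assumed process noise
R = np.eye(2) * 1e-2                        # assumed measurement noise

def kf_step(x, P, z):
    """One predict/update cycle; z is the pixel-derived position [x, y]."""
    x, P = F @ x, F @ P @ F.T + Q                    # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    x = x + K @ (z - H @ x)                          # update with measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
x, P = kf_step(x, P, z=np.array([0.5, 0.2]))         # one camera observation
```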

Development of Automated Surface Inspection System using Computer Vision (컴퓨터 비젼을 이용한 표면결함검사장치 개발)

  • Lee, Jong-Hak;Jung, Jin-Yang
    • Proceedings of the KIEE Conference / 1999.07b / pp.668-670 / 1999
  • We have been developing automatic surface inspection systems for cold-rolled strips in the steel-making process for several years. We have experience with various kinds of surface inspection systems, including a linear CCD camera type and a laser type installed on cold-rolled strip production lines, but we were not satisfied with them owing to insufficient detection and classification rates, limited real-time processing performance, and the restricted line speed of real production lines. To increase detection and computing power, we used dark-field illumination with infrared LEDs, bright-field illumination with a xenon lamp, a parallel computing processor with an area-type CCD camera, and a fully software-based image processing technique for easier upgrading and maintenance. In this paper, we introduce the automatic inspection system and its real-time image processing pipeline, comprising object detection, defect detection, and classification algorithms. In experiments on a high-speed line (up to 1000 meters per minute), defect detection was above 90% for all defects occurring on the real line, defect-name classification was about 80% for the eight most frequent defects, and defect-grade classification was 84% for the name-classified defects.
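
A minimal sketch of the defect-detection stage on a dark-field image follows: most of the strip appears dark, so bright blobs above a threshold are flagged as candidate defects. The file name, threshold, and minimum area are hypothetical, and the classification stages are not shown.

```python
# Illustrative defect-detection sketch only (not the deployed system): on a
# dark-field image, defects show up as bright blobs, so thresholding plus
# connected components flags candidate regions for later classification.
import cv2

img = cv2.imread("strip_frame.png", cv2.IMREAD_GRAYSCALE)    # hypothetical frame
blur = cv2.GaussianBlur(img, (5, 5), 0)
_, mask = cv2.threshold(blur, 60, 255, cv2.THRESH_BINARY)     # assumed threshold
num, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
for i in range(1, num):                                       # label 0 is background
    x, y, w, h, area = stats[i]
    if area > 20:                                             # ignore tiny specks
        print(f"candidate defect at ({x}, {y}), size {w}x{h}, area {area}")
```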


A Face Tracking Algorithm for Multi-view Display System

  • Han, Chung-Shin;Go, Min Soo;Seo, Young-Ho;Kim, Dong-Wook;Yoo, Ji-Sang
    • IEIE Transactions on Smart Processing and Computing / v.2 no.1 / pp.27-35 / 2013
  • This paper proposes a face tracking algorithm for a viewpoint-adaptive multi-view synthesis system. The original scene captured by a depth camera contains a texture image and an 8-bit gray-scale depth map. From this original image, multi-view images corresponding to the viewer's position can be synthesized using geometrical transformations such as rotation and translation. The proposed face tracking technique provides a motion parallax cue across different viewpoints and view angles. In the proposed algorithm, the viewer's dominant face, established initially from the camera, is tracked using the statistical characteristics of face colors and deformable templates. As a result, a motion parallax cue can be provided by detecting and tracking the viewer's dominant face area, even against a heterogeneous background, and the synthesized sequences can be displayed successfully.
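
The motion-parallax idea in the abstract above can be illustrated with a small sketch that maps the horizontal position of the tracked face to one of the synthesized views. The linear mapping, field of view, and number of views are assumptions for illustration, not the paper's parameters.

```python
# Illustrative mapping from the tracked face position to a synthesized view:
# the horizontal offset of the face centre selects a view index and angle.
# num_views, fov_deg and the linear mapping are assumptions, not the paper's.
def face_to_view(face_cx, image_width, num_views=9, fov_deg=60.0):
    """face_cx: x-coordinate of the tracked face centre in pixels."""
    offset = face_cx / float(image_width) - 0.5       # in [-0.5, 0.5]
    view_angle = offset * fov_deg                     # viewer angle w.r.t. screen
    view_index = int(round((offset + 0.5) * (num_views - 1)))
    return view_index, view_angle

print(face_to_view(face_cx=480, image_width=640))     # viewer right of centre
```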


Color-Based Real-Time Hand Region Detection with Robust Performance in Various Environments (다양한 환경에 강인한 컬러기반 실시간 손 영역 검출)

  • Hong, Dong-Gyun;Lee, Donghwa
    • IEMEK Journal of Embedded Systems and Applications / v.14 no.6 / pp.295-311 / 2019
  • The smart-product market is growing year by year, and smart products are used in many areas. There are various ways for users to interact with them, including voice recognition, touch, and finger movements. Detecting an accurate hand region is the most important first step in recognizing hand movement. In this paper, we propose a method to detect an accurate hand region in real time in various environments. Conventional approaches include using the depth information of a multi-sensor camera, detecting the hand through machine learning, and detecting the hand region with a color model. Among these, the multi-sensor-camera and machine-learning approaches require a large amount of computation, so a high-performance PC is essential; such heavy computation is unsuitable for embedded systems, and a high-end PC raises the price of the smart product. The algorithm proposed in this paper detects the hand region using a color model, corrects the problems of existing hand detection algorithms, and detects an accurate hand region across various experimental environments.
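
A minimal sketch of a color-model hand detector in the spirit of the abstract is shown below: skin tones are thresholded in the YCrCb space and small noise is removed morphologically. The threshold values are common rule-of-thumb bounds, not the paper's corrected ranges.

```python
# Color-model hand segmentation sketch (not the paper's tuned algorithm):
# threshold skin tones in YCrCb, then clean the mask with morphology.
import cv2
import numpy as np

def detect_hand_mask(frame_bgr):
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)     # assumed skin-tone bounds
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    # Remove speckle noise and close holes so the hand is one connected region.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    return mask
```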

A vision based mobile robot travelling among obstructions

  • Ishigawa, Seiji;Gouhara, Kouichi;Ide, Kouichi;Kato, Kiyoshi
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1988.10b / pp.810-815 / 1988
  • This paper presents a mobile robot that travels using visual information. The robot is equipped solely with a TV camera as a sensor, and views from the camera are transferred to a separately installed microcomputer through an image acquisition device. An acquired image is processed there, and the information necessary for travel is extracted. Instructions based on this information are then sent from the microcomputer to the mobile robot, triggering its next action. Among the several application programs already developed for the robot besides the overall control program, this paper focuses on travel control in a model environment with obstructions, together with an overview of the whole system. The behaviour of the robot when travelling among obstructions was investigated experimentally, and satisfactory results were obtained.


Performing Missions of a Minicar Using a Single Camera (단안 카메라를 이용한 소형 자동차의 임무 수행)

  • Kim, Jin-Woo;Ha, Jong-Eun
    • The Journal of the Korea institute of electronic communication sciences / v.12 no.1 / pp.123-128 / 2017
  • This paper deals with performing missions through autonomous navigation using a camera and other sensors. Extracting the pose of the car is necessary to navigate safely within the given road, and a homography is used to find it. The color image is converted to a grey image, and thresholding and edge detection are used to find control points. Two control points are converted into world coordinates using the homography to obtain the angle and position of the car, and color is used to detect the traffic signal. Experiments confirmed that the given tasks were performed well.
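
The homography step described above can be sketched as follows: a calibration homography maps pixel coordinates on the road plane to world coordinates, and two transformed control points give the car's lateral offset and heading. The point correspondences and pixel values below are made-up examples, not the paper's calibration.

```python
# Illustrative homography sketch (not the paper's calibration or thresholds):
# map two lane control points from pixels to road-plane coordinates and derive
# the car's lateral offset and heading angle from them.
import cv2
import numpy as np

# Four pixel points and their known road-plane coordinates (metres), e.g. from
# a one-off calibration pattern laid on the track (example values).
img_pts   = np.array([[220, 470], [420, 470], [390, 300], [250, 300]], np.float32)
world_pts = np.array([[-0.2, 0.0], [0.2, 0.0], [0.2, 1.0], [-0.2, 1.0]], np.float32)
H, _ = cv2.findHomography(img_pts, world_pts)

def car_pose_from_control_points(p_near, p_far):
    """p_near, p_far: (u, v) pixel coordinates of two lane control points."""
    pts = np.array([[p_near, p_far]], dtype=np.float32)        # shape (1, 2, 2)
    (xn, yn), (xf, yf) = cv2.perspectiveTransform(pts, H)[0]
    heading = np.arctan2(xf - xn, yf - yn)                     # radians, 0 = straight
    return xn, heading                                         # lateral offset, angle

print(car_pose_from_control_points((320, 460), (330, 320)))
```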

A Study of Effective Method to Update the Database for Road Traffic Facilities Using Digital Image Processing and Pattern Recognition (수치영상처리 및 패턴 인식에 의한 도로교통시설물 DB의 효율적 갱신방안 연구)

  • Choi, Joon-Seog;Kang, Joon-Mook
    • Journal of Korean Society for Geospatial Information Science / v.20 no.2 / pp.31-37 / 2012
  • Because of road construction and expansion, the number of updates to the road traffic facilities database increases steadily each year, and with the growing number of drivers and cars, safety signs require continuous management and additional installation. To update the safety-sign database promptly, we developed an automatic recognition function for safety signs and analyzed the coordinate accuracy. The purpose of this study is to propose an efficient method for updating road traffic facility data. To this end, an omni-directional camera was calibrated for the acquisition of 3-dimensional coordinates, integrated with a GPS/IMU/DMI system, and image processing was applied. Based on the experiment, we propose an effective method for updating the road traffic facilities database for the digital map.