• Title/Abstract/Keywords: Visual Sensing


Visual Tracking of Objects for a Mobile Robot using Point Snake Algorithm

  • Kim, Won;Lee, Choon-Young;Lee, Ju-Jang
    • Institute of Control, Robotics and Systems (ICROS): Conference Proceedings
    • /
    • 1998.10a
    • /
    • pp.30-34
    • /
    • 1998
  • Path planning is one of the important fields in robot technology. Local path planning may be done on-line while the robot recognizes its environment by itself. In dynamic environments, a vision system is one of the most necessary sensing devices for obtaining rich environmental information and thus for the safe and effective guidance of robots. If a predictor can tell what future sensing outputs will be, the robot can respond to anticipated environmental changes in advance; the tracking of obstacles is therefore closely related to prediction for safe navigation. We investigated active contours, that is, snakes, to find out whether objects can be tracked stably in the image plane. Snakes are defined by energy functions and are deformed, by the forces produced from energy differences, toward a contour that converges to a minimum-energy state. By using the point algorithm we obtained faster convergence, because Brent's method finds local minima quickly. Owing to this speedy convergence and robust edge detection, the snake algorithm can be applied to sequential image frames to track objects.
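
The energy-minimization idea behind snakes can be illustrated with a minimal greedy discrete snake. This is a simplified stand-in for the paper's point-snake/Brent's-method scheme: the window search, the energy weights, and the zero image-energy term in the usage below are illustrative assumptions, not the authors' implementation.

```python
def internal_energy(pts, alpha=1.0, beta=1.0):
    """Sum of elastic (spacing) and bending (curvature) terms for a closed snake."""
    n = len(pts)
    e = 0.0
    for i in range(n):
        (x0, y0), (x1, y1), (x2, y2) = pts[i - 1], pts[i], pts[(i + 1) % n]
        e += alpha * ((x1 - x0) ** 2 + (y1 - y0) ** 2)                    # elasticity
        e += beta * ((x0 - 2 * x1 + x2) ** 2 + (y0 - 2 * y1 + y2) ** 2)  # curvature
    return e

def greedy_snake_step(pts, image_energy, win=1):
    """Move each control point to the lowest-energy position in a small window.

    Staying put (dx = dy = 0) is always a candidate, so the total energy
    (internal + image term of the moved point) never increases.
    """
    new_pts = list(pts)
    for i, (x, y) in enumerate(pts):
        best, best_e = (x, y), None
        for dx in range(-win, win + 1):
            for dy in range(-win, win + 1):
                cand = list(new_pts)
                cand[i] = (x + dx, y + dy)
                e = internal_energy(cand) + image_energy(x + dx, y + dy)
                if best_e is None or e < best_e:
                    best, best_e = (x + dx, y + dy), e
        new_pts[i] = best
    return new_pts
```

With zero image energy, one step of the greedy update already contracts a square contour and lowers its internal energy; in practice the image energy would be derived from the gradient magnitude of the frame.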


Control of Mobile Robot Navigation Using Vision Sensor Data Fusion by Nonlinear Transformation (비선형 변환의 비젼센서 데이터융합을 이용한 이동로봇 주행제어)

  • Jin Tae-Seok;Lee Jang-Myung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.11 no.4
    • /
    • pp.304-313
    • /
    • 2005
  • The robots that will be needed in the near future are human-friendly robots able to coexist with humans and support them effectively. To realize this, a robot needs to recognize its position and heading for intelligent performance in an unknown environment, and mobile robots may navigate by means of a number of monitoring systems such as sonar or visual sensing. Note that in conventional fusion schemes the measurement depends on the current data sets only; therefore, more sensors are required to measure a certain physical parameter or to improve the accuracy of the measurement. In this research, instead of adding more sensors to the system, the temporal sequence of the data sets is stored and utilized for accurate measurement. As a general approach to sensor fusion, a UT-based sensor fusion (UTSF) scheme using the Unscented Transformation (UT) is proposed for either joint or disjoint data structures and applied to landmark identification for mobile robot navigation. The theoretical basis is illustrated by examples. The proposed UTSF scheme is applied to the navigation of a mobile robot in both structured and unstructured environments, and its performance is verified by computer simulation and experiment.
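
The core of the Unscented Transformation that the UTSF scheme builds on can be sketched in one dimension. This is a generic UT illustration, not the paper's fusion scheme; the scaling parameters follow the common (alpha, beta, kappa) convention, with illustrative default values.

```python
import math

def unscented_transform_1d(mean, var, f, alpha=1.0, beta=2.0, kappa=0.0):
    """Propagate a 1-D Gaussian (mean, var) through a nonlinearity f via sigma points."""
    n = 1
    lam = alpha ** 2 * (n + kappa) - n          # scaling parameter lambda
    spread = math.sqrt((n + lam) * var)
    sigmas = [mean, mean + spread, mean - spread]           # 2n+1 sigma points
    w_mean = [lam / (n + lam)] + [1.0 / (2 * (n + lam))] * 2
    w_cov = [w_mean[0] + (1 - alpha ** 2 + beta)] + w_mean[1:]
    ys = [f(s) for s in sigmas]                 # push each sigma point through f
    y_mean = sum(w * y for w, y in zip(w_mean, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(w_cov, ys))
    return y_mean, y_var
```

For a linear map the UT is exact: propagating mean 1, variance 4 through f(x) = 2x + 1 returns mean 3 and variance 16, which is why the transform is attractive for the nonlinear landmark measurements described above.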

Comparison of Image Merging Methods for Producing High-Spatial Resolution Multispectral Images (고해상도 다중분광영상 제작을 위한 합성방법의 비교)

  • 김윤형;이규성
    • Korean Journal of Remote Sensing
    • /
    • v.16 no.1
    • /
    • pp.87-98
    • /
    • 2000
  • Image merging techniques have been developed to integrate the advantages of different data types. The objective of this study is to present the optimal method for merging a high-spatial-resolution panchromatic image, such as the latest commercial satellite data, with low-spatial-resolution multispectral images. For this study, a set of 2 m resolution panchromatic and 8 m resolution multispectral data was simulated from airborne multispectral data. Five merging methods, MWD, IHS, PCA, HPF, and CN, were applied to produce four bands of high-spatial-resolution multispectral data. Merging results were evaluated by visual interpretation, image statistics, semivariograms, and spectral characteristics. In terms of both spatial resolution and spectral information, the wavelet-based MWD merging method showed results most similar to the original data used for the merging.
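
Of the five methods compared, IHS substitution has the simplest per-pixel form; a minimal sketch of the additive IHS variant follows. The values and the simple arithmetic-mean intensity are illustrative assumptions, not the paper's implementation.

```python
def ihs_merge_pixel(r, g, b, pan):
    """Fuse one multispectral pixel with a co-registered pan pixel (additive IHS).

    The intensity component (mean of R, G, B) is replaced by the pan value;
    the difference is added back to every band so hue/saturation are preserved.
    """
    intensity = (r + g + b) / 3.0
    delta = pan - intensity
    return r + delta, g + delta, b + delta
```

After fusion the mean of the three output bands equals the pan value, which is exactly the intensity substitution the method is named for; spectral distortion appears when pan and intensity differ strongly, which is why the study also evaluates spectral characteristics.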

Change Analysis of Forest Area and Canopy Conditions in Kaesung, North Korea Using Landsat, SPOT and KOMPSAT Data

  • Lee, Kyu-Sung;Kim, Jeong-Hyun
    • Korean Journal of Remote Sensing
    • /
    • v.16 no.4
    • /
    • pp.327-338
    • /
    • 2000
  • The forest conditions of North Korea have been a great concern since they are known to be closely related to many environmental problems such as disastrous flooding, soil erosion, and food shortage. To assess the long-term changes of forest area as well as canopy conditions, several sources of multitemporal satellite data were applied to the study area near Kaesung. KOMPSAT-1 EOC data were overlaid with a 1981 topographic map showing forest boundaries to assess the deforested area. Delineation of the cleared forest was performed by both visual interpretation and unsupervised classification. For analyzing the change of forest canopy condition, multiple scenes of Landsat and SPOT data were selected. After preprocessing of the multitemporal satellite data, such as image registration and normalization, the normalized difference vegetation index (NDVI) was derived as a representation of forest canopy conditions. Although the panchromatic EOC data had radiometric limitations for classifying diverse cover types, they could be effectively used to detect and delineate the deforested area. The results showed that a large portion of forest land has been cleared for urban and agricultural uses during the last twenty years. It was also found that the canopy condition of the remaining forests has not improved over the same period. Possible causes of the deforestation and the temporal pattern of canopy conditions are discussed.
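
The NDVI used as the canopy-condition measure is the standard red/near-infrared band ratio; a minimal per-pixel sketch (the zero-denominator guard is an implementation detail added here):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Healthy canopy reflects strongly in NIR and absorbs red, so values
    approach +1; bare soil and water fall near or below 0.
    """
    denom = nir + red
    return (nir - red) / denom if denom else 0.0
```

For example, reflectances of 0.5 (NIR) and 0.1 (red) give an NDVI of about 0.67, typical of dense canopy, while equal reflectances give 0.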

A Study on Aerial Triangulation from Multi-Sensor Imagery

  • Lee, Young-Ran;Habib, Ayman;Kim, Kyung-Ok
    • Korean Journal of Remote Sensing
    • /
    • v.19 no.3
    • /
    • pp.255-261
    • /
    • 2003
  • Recently, an enormous volume of remotely sensed data has been acquired by an ever-growing number of earth observation satellites. Combining diversely sourced imagery is an important requirement in many applications such as data fusion, city modeling, and object recognition. Aerial triangulation is a procedure to reconstruct object space from imagery. However, since different kinds of imagery have their own sensor models, characteristics, and resolutions, the previous approach to aerial triangulation (or georeferencing) is performed on each sensor model separately. This study evaluated the advantages of triangulating a large number of images from multiple sensors simultaneously. The incorporated sensors are frame, pushbroom, and whiskbroom cameras. The limits and problems of pushbroom or whiskbroom sensor models can be compensated by combined triangulation with other sensors. Experiments conducted in this study show that the object space reconstructed from multi-sensor triangulation is more accurate than that from a single sensor model.
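
Aerial triangulation is built on the collinearity condition relating ground points, the perspective center, and image coordinates. A heavily simplified sketch for an ideal vertical frame camera (zero rotation angles, illustrative focal length and positions; real triangulation solves for rotations and perspective centers across all sensors in a bundle adjustment):

```python
def collinearity_project(X, Y, Z, X0, Y0, Z0, f):
    """Project ground point (X, Y, Z) into an ideal vertical frame image.

    Special case of the collinearity equations with an identity rotation
    matrix; (X0, Y0, Z0) is the perspective center, f the focal length.
    """
    x = -f * (X - X0) / (Z - Z0)
    y = -f * (Y - Y0) / (Z - Z0)
    return x, y
```

With a camera at 1000 m above a ground point 100 m east and 200 m north, a 100 mm focal length maps the point to image coordinates (10, 20) mm; multi-sensor triangulation inverts many such projections, with per-sensor models, simultaneously.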

Automatic Identification of Fiducial Marks Based on Weak Constraints

  • Cho, Seong-Ik;Kim, Kyoung-Ok
    • Korean Journal of Remote Sensing
    • /
    • v.19 no.1
    • /
    • pp.61-70
    • /
    • 2003
  • This paper proposes an autonomous approach to localizing the center of fiducial marks included in aerial photographs without precise geometric information or human interaction. For this localization, we present a conceptual model based on two assumptions representing the symmetric characteristics of the fiducial area and the fiducial mark. The model makes it possible to locate the exact center of a fiducial mark by checking the symmetry of the pixel-value distribution around the mark. The proposed approach is composed of three steps: (a) determining the symmetric center of the fiducial area, (b) finding the center of a fiducial mark with unit-pixel accuracy, and finally (c) localizing the exact center to sub-pixel accuracy. The symmetric center of the mark is calculated by successively applying three geometric filters: a simplified ∇²G (Laplacian of Gaussian) filter, a symmetry enhancement filter, and a high-pass filter. By introducing a self-diagnosis function based on self-similarity measurement, a way of rejecting unreliable center calculations is proposed as well. The experiments were done on 284 samples of RMK- and RC-style fiducial marks extracted from 51 scanned aerial photographs. Visual inspection showed that the proposed approach misidentified only one mark. Although the proposed approach is based on weak constraints, being free from an exact geometric model of the fiducial marks, the experimental results showed that it is sufficiently robust and reliable.
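
The first of the three filters, the ∇²G (Laplacian of Gaussian), can be sketched as a discrete kernel. This is the generic LoG, not the paper's simplified version; the kernel size and sigma in the usage note are illustrative.

```python
import math

def log_kernel(size, sigma):
    """Discrete Laplacian-of-Gaussian kernel, normalized to zero mean.

    Each entry is proportional to (r^2 - 2*sigma^2) * exp(-r^2 / (2*sigma^2))
    (the positive scale factor of the analytic LoG is dropped); the mean is
    subtracted so the filter has zero response on constant regions.
    """
    half = size // 2
    k = [[0.0] * size for _ in range(size)]
    for y in range(size):
        for x in range(size):
            dx, dy = x - half, y - half
            r2 = dx * dx + dy * dy
            k[y][x] = ((r2 - 2 * sigma ** 2) / sigma ** 4) * math.exp(-r2 / (2 * sigma ** 2))
    mean = sum(map(sum, k)) / size ** 2
    return [[v - mean for v in row] for row in k]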

The Utilization of Google Earth Images as Reference Data for The Multitemporal Land Cover Classification with MODIS Data of North Korea

  • Cha, Su-Young;Park, Chong-Hwa
    • Korean Journal of Remote Sensing
    • /
    • v.23 no.5
    • /
    • pp.483-491
    • /
    • 2007
  • One of the major obstacles to classifying and validating land cover maps is the high cost of acquiring reference data. For inaccessible areas such as North Korea, high-resolution satellite imagery may be used as reference data. The objective of this paper is to investigate the possibility of using QuickBird high-resolution imagery of North Korea, obtained from Google Earth via the internet, as reference data for land cover classification. Monthly MODIS NDVI data for nine months from the summer of 2004 were classified into 54 clusters using the ISODATA algorithm, and these clusters were assigned to 7 classes - coniferous forest, deciduous forest, mixed forest, paddy field, dry field, water, and built-up areas - by careful use of reference data obtained through visual interpretation of the high-resolution imagery. The overall accuracy and Kappa index were 85.98% and 0.82, respectively, an increase in classification accuracy of about 10 percentage points over our previous study based on GCP point data around North Korea. Thus we conclude that Google Earth may be used to substitute for traditional on-site reference data collection where accessibility is severely limited.
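
The monthly NDVI vectors are grouped by iterating a nearest-center assignment step and a center-update step; a minimal sketch of those two core steps follows. Full ISODATA also splits, merges, and discards clusters, which is omitted here; the sample values are illustrative.

```python
def assign_clusters(samples, centers):
    """Assign each feature vector to its nearest cluster center (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(centers)), key=lambda c: dist2(s, centers[c])) for s in samples]

def update_centers(samples, labels, centers):
    """Recompute each center as the mean of its members; keep empty clusters as-is
    (full ISODATA would instead split or discard them)."""
    new = []
    for c, old in enumerate(centers):
        members = [s for s, lab in zip(samples, labels) if lab == c]
        if not members:
            new.append(old)
            continue
        dim = len(old)
        new.append(tuple(sum(m[d] for m in members) / len(members) for d in range(dim)))
    return new
```

On 1-D NDVI samples [0.1, 0.2, 0.8, 0.9] with initial centers at 0.0 and 1.0, one iteration assigns the low pair to the first cluster and moves the centers to roughly 0.15 and 0.85.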

Improving Urban Vegetation Classification by Including Height Information Derived from High-Spatial Resolution Stereo Imagery

  • Myeong, Soo-Jeong
    • Korean Journal of Remote Sensing
    • /
    • v.21 no.5
    • /
    • pp.383-392
    • /
    • 2005
  • Vegetation classes, especially grass and tree classes, are often confused when conventional spectral pattern recognition techniques are used to classify urban areas. This paper reports on a study to improve classification results through an automated process that uses height information to separate urban vegetation classes, specifically tree and grass, in three-band, high-spatial-resolution digital aerial imagery. Height information was derived photogrammetrically from stereo pair imagery, using cross-correlation image matching to estimate the differential parallax of vegetation pixels. A threshold value of differential parallax was used to assess whether the original class was correct. The average increase in overall accuracy for three test stereo pairs was 7.8%, and detailed examination showed that pixels reclassified as grass improved the overall accuracy more than pixels reclassified as tree. Visual examination and statistical accuracy assessment of four test areas showed improvement in vegetation classification, with increases in accuracy ranging from 3.7% to 18.1%. Vegetation classification can, in fact, be improved by adding height information to the classification procedure.
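
The differential-parallax step can be sketched with integer cross-correlation matching plus the standard parallax height equation. This is a toy 1-D version: the paper's matching operates on 2-D image patches, and the flying height and base parallax in the usage note are illustrative values.

```python
def cross_correlation_shift(ref, tgt, max_shift):
    """Integer shift of `tgt` that best matches `ref` by mean cross-correlation."""
    best_s, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(ref[i], tgt[i + s]) for i in range(len(ref)) if 0 <= i + s < len(tgt)]
        score = sum(a * b for a, b in pairs) / len(pairs)
        if score > best_score:
            best_s, best_score = s, score
    return best_s

def height_from_parallax(dp, base_parallax, flying_height):
    """Relative object height from differential parallax: h = H * dp / (P + dp)."""
    return flying_height * dp / (base_parallax + dp)
```

A 2-pixel differential parallax with a 98-pixel base parallax at 1000 m flying height corresponds to a 20 m object; thresholding such heights is what separates trees from grass in the procedure above.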

Cloud Cover Analysis from the GMS/S-VISSR Imagery Using Bispectral Thresholds Technique (GMS/S-VISSR 자료로부터 Bispectral Thresholds 기법을 이용한 운량 분석에 관하여)

  • 서명석;박경윤
    • Korean Journal of Remote Sensing
    • /
    • v.9 no.1
    • /
    • pp.1-19
    • /
    • 1993
  • A simple bispectral threshold technique that reflects the temporal and spatial characteristics of the analysis area has been developed to classify cloud type and estimate cloud cover from GMS/S-VISSR (Stretched Visible and Infrared Spin Scan Radiometer) imagery. In this research, we divided the analysis area into land and sea to account for their different optical properties, and used same-time observation data to exclude the solar zenith angle effects included in the raw data. A statistical clear-sky radiance (CSRs) field was constructed using the maximum brightness temperature and minimum albedo from two consecutive weeks of S-VISSR imagery. The CSR used in the cloud analysis was updated daily, using the CSRs, their standard deviation, and the current raw data, to reflect the daily variation of temperature. Thresholds were then applied to classify cloud type and estimate cloud cover from the GMS/S-VISSR imagery. Different thresholds were used according to the earth surface type, and they were sufficient to resolve the spatial variation of brightness temperature and the noise in the raw data. To classify ambiguous pixels, we used the time series of a 2-D histogram and the local standard deviation, which yielded a slight improvement. Visual comparisons among the present results, KMA's manual analysis, and observed sea-level charts showed good qualitative agreement.
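
The bispectral decision pairs a visible (albedo) threshold with an infrared brightness-temperature depression relative to the clear-sky radiance; a minimal per-pixel sketch follows. The class names and threshold values here are illustrative assumptions, not the paper's land/sea-specific thresholds.

```python
def classify_pixel(albedo, tbb, csr_tbb, albedo_thr=0.25, dtbb_thr=6.0):
    """Bispectral threshold rule for one pixel.

    tbb      : observed IR brightness temperature (K)
    csr_tbb  : clear-sky brightness temperature for this pixel (K)
    A pixel much colder than clear sky has a cloud top; a pixel much
    brighter than the clear surface is cloud in the visible channel.
    """
    cold = (csr_tbb - tbb) > dtbb_thr
    bright = albedo > albedo_thr
    if cold and bright:
        return "thick cloud"
    if cold:
        return "thin/cirrus cloud"
    if bright:
        return "low cloud/fog"
    return "clear"
```

A pixel 40 K colder and much brighter than its clear-sky reference lands in the thick-cloud class; pixels near both thresholds are the "ambiguous" cases the paper resolves with histogram time series and local standard deviation.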

Aerial Dataset Integration For Vehicle Detection Based on YOLOv4

  • Omar, Wael;Oh, Youngon;Chung, Jinwoo;Lee, Impyeong
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.4
    • /
    • pp.747-761
    • /
    • 2021
  • With the increasing application of UAVs in intelligent transportation systems, vehicle detection in aerial images has become an essential engineering technology with academic research significance. In this paper, a vehicle detection method for aerial images based on the YOLOv4 deep learning algorithm is presented. At present, the best-known datasets are VOC (The PASCAL Visual Object Classes Challenge), ImageNet, and COCO (Microsoft Common Objects in Context), which are applicable to vehicle detection from UAVs. The value of an integrated dataset lies not only in its quantity and photo quality but also in its diversity, which affects detection accuracy. The method integrates three public aerial image datasets, VAID, UAVD, and DOTA, in a form suitable for YOLOv4. The trained model shows good test results, especially for small, rotated, and compact, densely packed objects, and meets real-time detection requirements. In future work, we will integrate one more aerial image dataset acquired by our lab to increase the number and diversity of training samples while still meeting the real-time requirements.
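
Integrating datasets with different annotation conventions requires converting every box to YOLO's normalized label format; a minimal sketch of that conversion (the pixel coordinates in the usage note are illustrative, and the actual VAID/UAVD/DOTA parsers differ per dataset):

```python
def to_yolo_bbox(x_min, y_min, x_max, y_max, img_w, img_h):
    """Convert a pixel-coordinate corner box to normalized YOLO (cx, cy, w, h).

    YOLO label files store, per object, the class index followed by the box
    center and size, each divided by the image dimensions (all in [0, 1]).
    """
    cx = (x_min + x_max) / 2.0 / img_w
    cy = (y_min + y_max) / 2.0 / img_h
    return cx, cy, (x_max - x_min) / img_w, (y_max - y_min) / img_h
```

For a 100 x 200 image, the corner box (10, 20, 30, 60) becomes (0.2, 0.2, 0.2, 0.2); running every source dataset through one such converter is what makes the merged training set consistent for YOLOv4.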