• Title/Summary/Keyword: Sensor 3D data model


A Proposal of Sensor-based Time Series Classification Model using Explainable Convolutional Neural Network

  • Jang, Youngjun;Kim, Jiho;Lee, Hongchul
    • Journal of the Korea Society of Computer and Information / v.27 no.5 / pp.55-67 / 2022
  • Sensor data can support fault diagnosis for equipment; however, the cause behind a diagnosed fault is rarely explained. In this study, we propose an explainable convolutional neural network framework for sensor-based time series classification. We used a sensor-based time series dataset acquired from vehicles equipped with sensors, the Wafer dataset acquired from a manufacturing process, and the Cycle Signal dataset acquired from real-world mechanical equipment. For data augmentation, scaling and jittering were used to train our deep learning models. The proposed classifiers are convolutional neural network based models, FCN, 1D-CNN, and ResNet, which we evaluate against one another. Our experimental results show that ResNet provides promising results for time series classification, with accuracy and F1 score reaching 95%, a 3% improvement over the previous study. Furthermore, we apply the XAI methods Class Activation Map and Layer Visualization to interpret the results; these methods visualize the time series intervals that matter most for sensor data classification.
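
The Class Activation Map mentioned above has a simple closed form when a global-average-pooling layer feeds a linear classifier: the map is the class-weighted sum of the final convolutional feature maps. A minimal NumPy sketch (array shapes and names are illustrative, not the authors' code):

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Compute a 1D Class Activation Map.

    feature_maps : (C, T) array, final conv-layer activations for one
                   time series (C channels, T time steps).
    class_weights: (C,) array, output-layer weights of the predicted
                   class (assumes global average pooling feeds a
                   linear classifier, as in the standard CAM setup).
    Returns a (T,) importance score per time step, scaled to [0, 1].
    """
    cam = class_weights @ feature_maps          # weighted channel sum
    cam = cam - cam.min()                       # shift so minimum is 0
    return cam / (cam.max() + 1e-12)            # normalize to [0, 1]

# Toy example: 4 channels, 8 time steps
rng = np.random.default_rng(0)
fmaps = rng.standard_normal((4, 8))
weights = rng.standard_normal(4)
cam = class_activation_map(fmaps, weights)
```

Peaks in the returned map mark the time-series intervals the network weighted most heavily for its prediction.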

3D PROCESSING OF HIGH-RESOLUTION SATELLITE IMAGES

  • Gruen, Armin;Li, Zhang
    • Proceedings of the KSRS Conference / 2003.11a / pp.24-27 / 2003
  • High-resolution satellite images at sub-5 m footprint are becoming increasingly available to the earth observation community and their respective clients. The related cameras all use linear array CCD technology for image sensing. The possibility of, and need for, accurate 3D object reconstruction requires a sophisticated camera model able to deal with such sensor geometry. We have recently developed a full suite of new methods and software for the precision processing of this kind of data. The software can accommodate images from IKONOS, QuickBird, ALOS PRISM, SPOT5 HRS, and sensors of similar type to be expected in the future. We report on the status of the software, its functionality, and some new algorithmic approaches in support of the processing concept. The functionality is verified by results from various pilot projects. We put particular emphasis on the automatic generation of DSMs, which can be done at sub-pixel accuracy, and on the semi-automated generation of city models.


A Study on the Photo-realistic 3D City Modeling Using the Omnidirectional Image and Digital Maps (전 방향 이미지와 디지털 맵을 활용한 3차원 실사 도시모델 생성 기법 연구)

  • Kim, Hyungki;Kang, Yuna;Han, Soonhung
    • Korean Journal of Computational Design and Engineering / v.19 no.3 / pp.253-262 / 2014
  • A 3D city model, consisting of 3D building models together with their geospatial positions and orientations, is becoming a valuable resource in virtual reality, navigation systems, civil engineering, etc. The purpose of this research is to propose a new framework for generating 3D city models that satisfy the visual and physical requirements of ground-oriented simulation systems. At the same time, the framework should support automatic creation and cost-effectiveness, which facilitates the usability of the proposed approach. To that end, we suggest a framework that leverages a mobile mapping system, which automatically gathers high-resolution images together with supplementary sensor information such as the position and direction of each image. To resolve the problems caused by sensor noise and the large number of occlusions, fusion with digital map data is used. This paper describes the overall framework, its major processing steps, and the recommended or required techniques for each step.

A Study on the Application Technique of 3-D Spatial Information by integration of Aerial photos and Laser data (항공사진과 레이져 데이터의 통합에 의한 3 차원 공간정보 활용기술연구)

  • Yeon, Sang-Ho
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.28 no.3 / pp.385-392 / 2010
  • LiDAR has the merit that survey engineers can quickly obtain a large number of high-precision measurements. Aerial photos and satellite sensor images are used to generate 3D spatial images that are matched with map coordinates and elevation data from digital topographic files. These images are also matched with 3D spatial image content through perspective views composed along designated roads up to the target location. Recently, 3D aviation imagery can be generated from various digital data, and advanced geographical methods for guidance to a destination are tested in a GIS environment. Further information and access guidance are provided through multimedia content on the internet or at public tour information desks using the simulation images. The LiDAR height data are transformed into a DEM, and vector data (via digital image mapping) and raster data (via extraction and evaluation) are unified in real time to trace a 3D model of downtown buildings along an extended route for 3D tract model generation.
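
The LiDAR-to-DEM step mentioned above amounts to gridding height samples into a raster. A minimal sketch (the function name, cell layout, and mean-per-cell rule are illustrative assumptions; the paper's pipeline is not published):

```python
def lidar_to_dem(points, cell, x0, y0, nx, ny):
    """Grid LiDAR (x, y, z) points into a simple DEM raster by taking
    the mean height per cell. Cells with no points stay None.

    points : iterable of (x, y, z) tuples in map units
    cell   : cell size; (x0, y0) is the grid origin; nx, ny its size
    """
    sums = [[0.0] * nx for _ in range(ny)]
    counts = [[0] * nx for _ in range(ny)]
    for x, y, z in points:
        i, j = int((y - y0) // cell), int((x - x0) // cell)
        if 0 <= i < ny and 0 <= j < nx:       # drop points outside grid
            sums[i][j] += z
            counts[i][j] += 1
    return [[sums[i][j] / counts[i][j] if counts[i][j] else None
             for j in range(nx)] for i in range(ny)]

# Three points falling into a 2x1 grid of 1-unit cells
pts = [(0.2, 0.3, 10.0), (0.8, 0.1, 12.0), (1.5, 0.5, 20.0)]
dem = lidar_to_dem(pts, cell=1.0, x0=0.0, y0=0.0, nx=2, ny=1)
```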

3D Model Generation and Accuracy Evaluation using Unmanned Aerial Oblique Image (무인항공 경사사진을 이용한 3차원 모델 생성 및 정확도 평가)

  • Park, Joon-Kyu;Jung, Kap-Yong
    • Journal of the Korea Academia-Industrial cooperation Society / v.20 no.3 / pp.587-593 / 2019
  • The field of geospatial information is changing rapidly due to the development of sensor and data processing technologies that can acquire location information, and demand is increasing across related industries and social activities. The construction and utilization of three-dimensional geospatial information that is easy to understand can be an essential element in improving the quality and reliability of related services. In recent years, 3D laser scanners have been widely used for constructing 3D geospatial information. However, 3D laser scanners can leave shadow areas, where data acquisition is impossible, when objects are large or complex in shape. In this study, a 3D model of an object was created by acquiring oblique images with an unmanned aerial vehicle and processing the data. A study area was selected, oblique images were acquired using an unmanned aerial vehicle, and a point cloud type 3D model with 0.02 m point spacing was created through data processing. The accuracy of the 3D model was 0.19 m, and the average was 0.11 m. In the future, if accuracy is evaluated according to shooting and data processing methods, and 3D model construction and accuracy analysis are performed for different camera types, the accuracy of the 3D model can be improved further. With a point cloud type 3D model, operations such as cross-section generation and drawing of objects become possible, improving the work efficiency of spatial information services and related work.
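
The accuracy figures above are deviation statistics between model points and surveyed check points. A minimal sketch of such an evaluation (the index-wise pairing and helper name are assumptions, not the authors' code):

```python
import math

def accuracy_stats(model_coords, check_coords):
    """Deviation statistics between 3D model points and surveyed
    check points. Each argument is a list of (x, y, z) tuples in
    metres, paired by index. Returns (max_dev, mean_dev) in metres.
    """
    devs = [math.dist(m, c) for m, c in zip(model_coords, check_coords)]
    return max(devs), sum(devs) / len(devs)

# Toy data: three model points vs. their surveyed positions
model = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0), (2.0, 0.0, 1.0)]
check = [(0.1, 0.0, 0.0), (1.0, 1.2, 1.0), (2.0, 0.0, 1.05)]
worst, mean = accuracy_stats(model, check)
```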

Camera Calibration when the Accuracies of Camera Model and Data Are Uncertain (카메라 모델과 데이터의 정확도가 불확실한 상황에서의 카메라 보정)

  • Do, Yong-Tae
    • Journal of Sensor Science and Technology / v.13 no.1 / pp.27-34 / 2004
  • Camera calibration is an important and fundamental procedure for applying a vision sensor to 3D problems. Recently, many camera calibration methods have been proposed, particularly in the area of robot vision. However, the reliability of the data used in calibration has seldom been considered, in spite of its importance. In addition, a single camera model cannot guarantee good results consistently across various conditions. This paper proposes methods to overcome such uncertainty in data and camera models, which is often encountered in practical camera calibration. By use of the RANSAC (Random Sample Consensus) algorithm, the few data points with excessive error magnitudes are excluded. Artificial neural networks combined in a two-step structure are trained to compensate the result of a calibration method of a particular model in a given condition. The proposed methods are useful because they can be employed in addition to most existing camera calibration techniques when needed. We applied them to a linear camera calibration method and obtained improved results.
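
The RANSAC step described above can be sketched generically: fit a model to a minimal random sample many times, keep the largest consensus set, and discard points whose residual exceeds a threshold. The toy through-origin line model below is illustrative only, not the paper's camera model:

```python
import random

def ransac_inliers(points, fit, residual, n_sample, threshold,
                   iterations=200, seed=0):
    """Generic RANSAC: repeatedly fit a model to a minimal random
    sample and keep the largest consensus set. Points whose residual
    exceeds `threshold` under the best model are treated as gross
    errors and excluded, as the paper does for calibration data.
    """
    rng = random.Random(seed)
    best = []
    for _ in range(iterations):
        sample = rng.sample(points, n_sample)
        model = fit(sample)
        inliers = [p for p in points if residual(model, p) < threshold]
        if len(inliers) > len(best):
            best = inliers
    return best

# Toy model: a line through the origin, parameterized by its slope
pts = [(x, 2 * x) for x in range(10)] + [(3, 40.0), (7, -25.0)]
fit = lambda s: (s[1][1] - s[0][1]) / (s[1][0] - s[0][0] + 1e-12)
residual = lambda m, p: abs(p[1] - m * p[0])
inliers = ransac_inliers(pts, fit, residual, n_sample=2, threshold=0.5)
```

With the two gross outliers planted above, the consensus set recovers exactly the ten clean points on y = 2x.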

A Study on the Determination of 3-D Object's Position Based on Computer Vision Method (컴퓨터 비젼 방법을 이용한 3차원 물체 위치 결정에 관한 연구)

  • 김경석
    • Journal of the Korean Society of Manufacturing Technology Engineers / v.8 no.6 / pp.26-34 / 1999
  • This study shows an alternative method for determining an object's position based on a computer vision method. The approach develops a vision system model that defines the reciprocal relationship between 3-D real space and the 2-D image plane. The developed model involves the bilinear six-view parameters, which are estimated using the relationship between camera space locations and the real coordinates of known positions. Based on the parameters estimated for each independent camera, the position of an unknown object is determined using a sequential estimation scheme that processes data for unknown points in the 2-D image plane of each camera. This vision control method is robust and reliable, overcoming the difficulties of conventional approaches such as precise calibration of the vision sensor, exact kinematic modeling of the robot, and correct knowledge of the relative positions and orientations of the robot and CCD camera. Finally, the developed vision control method is tested experimentally by determining object positions in space using a computer vision system. The results show that the presented method is precise and compatible.


Building DSMs Generation Integrating Three Line Scanner (TLS) and LiDAR

  • Suh, Yong-Cheol;Nakagawa, Masafumi
    • Korean Journal of Remote Sensing / v.21 no.3 / pp.229-242 / 2005
  • Photogrammetry is a current method of GIS data acquisition. As a matter of fact, however, considerable manpower and expenditure are required to produce detailed 3D spatial information, especially in urban areas where various buildings exist, and no photogrammetric system can completely automate the process of spatial information acquisition. On the other hand, LiDAR has high potential for automating 3D spatial data acquisition because it directly measures the 3D coordinates of objects, but it is rather difficult to recognize objects with LiDAR data alone, owing to its currently low resolution. With this background, we believe it is very advantageous to integrate LiDAR data and stereo CCD images for more efficient and automated acquisition of higher-resolution 3D spatial data. In this research, an automatic urban object recognition methodology is proposed that integrates ultra high-resolution stereo images and LiDAR data. Moreover, a more reliable and detailed stereo matching method for CCD images is examined that uses the LiDAR data as initial 3D data to determine the search range and to detect possible occlusions. Finally, intelligent DSMs, in which urban features are identified at high resolution, were generated with high-speed processing.
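
The LiDAR-guided stereo matching idea above can be sketched with the pinhole stereo relation d = f·B/Z: the LiDAR depth supplies an initial disparity, and matching searches only a small window around it. The parameter names and margin below are assumptions for illustration:

```python
def disparity_search_range(lidar_depth, baseline, focal_px, margin=5.0):
    """Bound the stereo disparity search using a LiDAR depth as the
    initial value, following the paper's idea of LiDAR-constrained
    matching. d = focal_px * baseline / depth (pinhole stereo);
    the search window is +/- margin pixels around that disparity.

    lidar_depth : object depth from LiDAR, metres
    baseline    : stereo baseline, metres
    focal_px    : focal length in pixels
    """
    d0 = focal_px * baseline / lidar_depth
    return max(0.0, d0 - margin), d0 + margin

# A 20 m LiDAR depth with a 0.5 m baseline and f = 2000 px
lo, hi = disparity_search_range(lidar_depth=20.0, baseline=0.5,
                                focal_px=2000.0)
```

Narrowing the search this way both speeds up matching and reduces false matches near occlusions.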

Dynamic Temperature Compensation System Development for the Accelerometer with Modified Spline Interpolation (Curve Fitting) (변형 스플라인 보간법(곡선맞춤)을 통한 가속도 센서의 동적 온도 보상 시스템 개발)

  • Lee, Hoochang;Go, Jaedoo;Yoo, Kwangho;Kim, Wanil
    • Transactions of the Korean Society of Automotive Engineers / v.22 no.3 / pp.114-122 / 2014
  • Sensor fusion is one of the main research topics in this area: it offers highly reliable estimation of vehicle movement by processing and mixing the outputs of several sensors. Unfortunately, every sensor has drift, which degrades its performance, and a single degraded sensor output may affect the whole sensor fusion system. In most research, drift is ideally assumed to be zero, because it is usually nonlinear and varies from sample to sample. In addition, it is very difficult to separate drift from the accelerometer output signal, since the signal contains many contributors such as vehicle acceleration, slope angle, pitch angle, surface condition, and so on. In this paper, modified spline interpolation is introduced as a dynamic temperature compensation method that accommodates sample variation. Using the last known output and the first initial output is suggested to build and update the compensation factor. As the system accumulates more compensation data, the compensated output improves thanks to the regression compensation model. The performance of the dynamic temperature compensation system is evaluated by measuring the offset drift with and without compensation.
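
The compensation scheme above boils down to a temperature-indexed offset curve built from calibration data. The sketch below substitutes piecewise-linear interpolation for the paper's modified spline to keep it short; the calibration values are invented for illustration:

```python
from bisect import bisect_right

def make_compensator(cal_temps, cal_offsets):
    """Build a temperature-drift compensator from calibration pairs
    (temperature in degC -> accelerometer offset). The paper fits a
    modified spline; this sketch interpolates the offset table
    piecewise-linearly, the same idea of a temperature-indexed
    correction curve. cal_temps must be sorted ascending.
    """
    def offset_at(t):
        if t <= cal_temps[0]:
            return cal_offsets[0]          # clamp below the table
        if t >= cal_temps[-1]:
            return cal_offsets[-1]         # clamp above the table
        i = bisect_right(cal_temps, t)
        t0, t1 = cal_temps[i - 1], cal_temps[i]
        y0, y1 = cal_offsets[i - 1], cal_offsets[i]
        return y0 + (y1 - y0) * (t - t0) / (t1 - t0)

    def compensate(raw, temp):
        return raw - offset_at(temp)       # remove temperature drift
    return compensate

# Hypothetical offsets measured at -20, 0, 25, 60 degC
comp = make_compensator([-20.0, 0.0, 25.0, 60.0],
                        [0.08, 0.05, 0.00, -0.06])
corrected = comp(1.02, 12.5)   # raw reading 1.02 g at 12.5 degC
```

As more calibration pairs are added to the table, the interpolated curve tracks the sensor's real drift behaviour more closely, mirroring the paper's point about accumulating compensation data.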

Construction of Static 3D Ultrasonography Image by Radiation Beam Tracking Method from 1D Array Probe (1차원 배열 탐촉자의 방사빔추적기법을 이용한 정적 3차원 초음파진단영상 구성)

  • Kim, Yong Tae;Doh, Il;Ahn, Bongyoung;Kim, Kwang-Youn
    • Journal of the Korean Society for Nondestructive Testing / v.35 no.2 / pp.128-133 / 2015
  • This paper describes the construction of a static 3D ultrasonography image by tracking the radiation beam position during hand-held operation of a 1D array probe, to enable point-of-care use. A theoretical model is given for the transformation from the translational and rotational information of the sensor mounted on the probe to the reference Cartesian coordinate system. A signal amplification and serial communication interface module was built using a commercially available sensor, and a donut-shaped test phantom was made from silicone putty. While the hand-held probe was moved, a B-mode movie and the sensor signals were recorded. B-mode images were periodically selected from the movie, and the gray levels of the pixels in each image were converted to the gray levels of 3D voxels. 3D images, and 2D B-mode-type images of arbitrary cross-sections, were then constructed from the voxel data and agreed well with the shape of the test phantom.
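
The probe-to-reference transformation described above maps each B-mode pixel through the sensor's rotation and translation. A minimal sketch (the yaw-pitch-roll convention, pixel pitch, and image-plane orientation are assumptions, not the paper's exact model):

```python
import math

def pixel_to_world(u, v, pos, yaw_pitch_roll, px_size=0.0005):
    """Map a B-mode image pixel (u, v) into the reference Cartesian
    frame using the probe sensor's translation and orientation.
    Angles are yaw/pitch/roll in radians; px_size is a hypothetical
    pixel pitch in metres; the image plane is taken as the probe's
    local x-z plane (illustrative sketch of the transform only).
    """
    yaw, pitch, roll = yaw_pitch_roll
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    # Rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    R = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]
    local = (u * px_size, 0.0, v * px_size)   # pixel in probe frame
    return tuple(pos[i] + sum(R[i][j] * local[j] for j in range(3))
                 for i in range(3))

# Identity orientation: the pixel maps straight into the world x-z plane
p = pixel_to_world(100, 200, pos=(0.1, 0.0, 0.0),
                   yaw_pitch_roll=(0.0, 0.0, 0.0))
```

Applying this to every selected B-mode frame, with the gray level carried along, fills the 3D voxel volume the paper reconstructs.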