• Title/Summary/Keyword: Automatic Data Extraction


GIS based Development of Module and Algorithm for Automatic Catchment Delineation Using Korean Reach File (GIS 기반의 하천망분석도 집수구역 자동 분할을 위한 알고리듬 및 모듈 개발)

  • PARK, Yong-Gil;KIM, Kye-Hyun;YOO, Jae-Hyun
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.20 no.4
    • /
    • pp.126-138
    • /
    • 2017
  • Recently, national interest in the environment has been increasing, and to deal with water environment-related issues swiftly and accurately, demand to facilitate the analysis of water environment data using GIS is growing. To meet such demands, a spatial network data-based stream network analysis map (Korean Reach File; KRF) supporting spatial analysis of water environment data was developed and is being provided. However, there remains a difficulty in delineating catchment areas, which are the basis for supplying spatial data and related information frequently required by users, for instance when establishing remediation measures against water pollution accidents. Therefore, in this study, a computer program was developed. The development process included designing a delineation method and developing an algorithm and modules. A DEM (Digital Elevation Model) and an FDR (Flow Direction Raster) were used as the major data to automatically delineate catchment areas. The algorithm was developed in three stages: catchment area grid extraction, boundary point extraction, and boundary line division. An add-in catchment delineation module based on ESRI's ArcGIS was also developed in consideration of the productivity and utility of the program. Using the developed program, catchment areas were delineated and compared with those currently used by the government. The results showed that catchment areas were delineated efficiently using the digital elevation data. In particular, in regions with clear topographic slopes, they were delineated accurately and swiftly. Although the catchment areas were not segmented accurately in some regions with flat paddy fields, downtown areas, or well-organized drainage facilities, the program definitely reduced the processing time needed to delineate catchment areas.
In the future, more effort should be made to enhance the algorithm to exploit higher-precision digital elevation data and to reduce the calculation time for processing large data volumes.
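The first stage described above, catchment-area grid extraction from a D8 flow-direction raster, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the ESRI-style D8 codes and the function name are assumptions.

```python
# ESRI-style D8 codes mapped to (row, col) offsets of the downstream cell.
D8 = {1: (0, 1), 2: (1, 1), 4: (1, 0), 8: (1, -1),
      16: (0, -1), 32: (-1, -1), 64: (-1, 0), 128: (-1, 1)}

def delineate_catchment(fdr, outlet):
    """Return the set of cells (r, c) whose flow reaches `outlet`."""
    rows, cols = len(fdr), len(fdr[0])
    # Invert the flow directions: which neighbours drain into each cell?
    upstream = {}
    for r in range(rows):
        for c in range(cols):
            step = D8.get(fdr[r][c])
            if step is None:          # sink / undefined direction
                continue
            down = (r + step[0], c + step[1])
            if 0 <= down[0] < rows and 0 <= down[1] < cols:
                upstream.setdefault(down, []).append((r, c))
    # Walk upstream from the outlet, collecting every contributing cell.
    catchment, frontier = {outlet}, [outlet]
    while frontier:
        cell = frontier.pop()
        for up in upstream.get(cell, []):
            if up not in catchment:
                catchment.add(up)
                frontier.append(up)
    return catchment
```

The boundary-point extraction and boundary-line division stages would then trace the edge of the returned cell set.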

A semi-automated method for integrating textural and material data into as-built BIM using TIS

  • Zabin, Asem;Khalil, Baha;Ali, Tarig;Abdalla, Jamal A.;Elaksher, Ahmed
    • Advances in Computational Design
    • /
    • v.5 no.2
    • /
    • pp.127-146
    • /
    • 2020
  • Building Information Modeling (BIM) is increasingly used throughout a facility's life cycle for applications such as design, construction, facility management, and maintenance. For existing buildings, the geometry of as-built BIM is often constructed from dense, three-dimensional (3D) point cloud data obtained with laser scanners. Traditionally, as-built BIM systems do not contain the material and textural information of the buildings' elements. This paper presents a semi-automatic method for the generation of material- and texture-rich as-built BIM. The method captures and integrates material and textural information of building elements into as-built BIM using thermal infrared sensing (TIS). The proposed method uses TIS to capture thermal images of the interior walls of an existing building. These images are then processed to extract the interior walls using a segmentation algorithm. The digital numbers in the resulting images are transformed into radiance values that represent the emitted thermal infrared radiation. Machine learning techniques are then applied to build a correlation between the radiance values and the material type in each image, and the radiance values are used to extract textural information from the images. The extracted textural and material information is then robustly integrated into the as-built BIM, providing the data needed for assessing building conditions in general, including energy efficiency, among others.
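The two processing steps named above, converting digital numbers (DNs) to radiance and correlating radiance with material type, can be illustrated with a toy sketch. The linear gain/offset model and the nearest-centroid classifier are stand-ins chosen for brevity; the paper's actual sensor calibration and machine-learning model are not specified here.

```python
def dn_to_radiance(dn, gain=0.05, offset=1.2):
    """Linear radiometric calibration: radiance = gain * DN + offset.
    Gain and offset are hypothetical sensor constants."""
    return gain * dn + offset

def train_centroids(samples):
    """samples: {material: [radiance, ...]} -> {material: mean radiance}."""
    return {m: sum(v) / len(v) for m, v in samples.items()}

def classify(radiance, centroids):
    """Assign the material whose mean radiance is closest."""
    return min(centroids, key=lambda m: abs(centroids[m] - radiance))
```

A real model would learn from many pixels per wall segment rather than a single mean per material.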

Spatial Image Information Generation of Rock Wall by Automatic Focal Length Extraction System (초점거리 자동추출 시스템에 의한 암벽의 공간영상정보 생성)

  • Lee, Jae-Kee;Lee, Kye-Dong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.25 no.5
    • /
    • pp.427-436
    • /
    • 2007
  • Because slopes created during the construction of facilities carry a high risk of collapse, information about them must be collected; however, existing inspection methods require long inspection periods and high costs, and access for measuring instruments is often limited, which restricts data collection. To obtain images with a zoom lens from arbitrary positions, this study built a database of camera calibration values classified by focal length and developed an Image Loader system that automatically loads not only the camera information but also the calibration values for the focal length at which a photograph was taken, so that measurement can be performed with a variety of cameras and lenses. In addition, by constructing three-dimensional spatial image information from images of the target objects, this study provides effective basic data for slope surveying and inspection and demonstrates an accurate surveying method for dangerous slopes that cannot be accessed.

Multi-camera System Calibration with Built-in Relative Orientation Constraints (Part 2) Automation, Implementation, and Experimental Results

  • Lari, Zahra;Habib, Ayman;Mazaheri, Mehdi;Al-Durgham, Kaleel
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.32 no.3
    • /
    • pp.205-216
    • /
    • 2014
  • Multi-camera systems have been widely used as cost-effective tools for the collection of geospatial data for various applications. In order to fully achieve the potential accuracy of these systems for object space reconstruction, careful system calibration should be carried out prior to data collection. Since the structural integrity of the involved cameras' components and the system mounting parameters cannot be guaranteed over time, multi-camera systems should be frequently calibrated to confirm the stability of the estimated parameters. Therefore, automated techniques are needed to facilitate and speed up the system calibration procedure. The automation of the multi-camera system calibration approach proposed in the first part of this paper is contingent on the automated detection, localization, and identification of the signalized object space targets in the images. In this paper, the automation of the proposed camera calibration procedure through automatic target extraction and labelling approaches is presented. The introduced automated system calibration procedure is then implemented for a newly-developed multi-camera system while considering the optimum configuration for data collection. Experimental results from the implemented system calibration procedure are finally presented to verify the feasibility of the proposed automated procedure. Qualitative and quantitative evaluation of the estimated system calibration parameters from two calibration sessions is also presented to confirm the stability of the cameras' interior orientation and system mounting parameters.

Deep Learning-based Approach for Classification of Tribological Time Series Data for Hand Creams (딥러닝을 이용한 핸드크림의 마찰 시계열 데이터 분류)

  • Kim, Ji Won;Lee, You Min;Han, Shawn;Kim, Kyeongtaek
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.44 no.3
    • /
    • pp.98-105
    • /
    • 2021
  • Until a decade ago, the sensory stimulation of a cosmetic product was deemed an ancillary aspect; that point of view has drastically changed on several levels in just a decade. Nowadays cosmetic formulators must meet the needs of consumers who want sensory satisfaction, even though they do not have much time for new product development. The selection of new products from candidates largely depends on a panel of human sensory experts. As the new product development cycle time decreases, formulators want systematic tools to filter candidate products into a short list. Traditional statistical analysis of most physical property tests for these products, including tribology and rheology tests, does not give a sound foundation for filtering candidates. In this paper, we suggest a deep learning-based analysis method that identifies hand cream products from the raw electrical signals of a tribological sliding test. We compare the results of the deep learning-based method, which takes raw data as input, with those of several machine learning-based methods that take manually extracted features as input. Among them, ResNet, a deep learning model, proved to be the best at identifying the hand cream used in the test. According to our search of the published literature, this is the first attempt to identify a cosmetic product from raw time-series friction data alone, without any manual feature extraction. Automatic product identification without manually extracted features can be used to narrow down the list of newly developed candidate products.
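The residual ("skip") connection that distinguishes ResNet from plain networks can be illustrated on a 1D signal such as a friction time series. This is a self-contained toy: a real 1D ResNet classifier stacks many such blocks with learned convolution kernels, whereas the fixed smoothing kernel here is an assumption for illustration.

```python
def conv1d(x, kernel):
    """'Same'-padded 1D convolution with zero padding."""
    k, pad = len(kernel), len(kernel) // 2
    padded = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(kernel[j] * padded[i + j] for j in range(k))
            for i in range(len(x))]

def residual_block(x, kernel=(0.25, 0.5, 0.25)):
    """y = x + F(x): the skip connection lets the block learn a residual
    and lets gradients bypass F during training."""
    fx = conv1d(x, kernel)
    return [a + b for a, b in zip(x, fx)]
```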

Prerequisite Research for the Development of an End-to-End System for Automatic Tooth Segmentation: A Deep Learning-Based Reference Point Setting Algorithm (자동 치아 분할용 종단 간 시스템 개발을 위한 선결 연구: 딥러닝 기반 기준점 설정 알고리즘)

  • Kyungdeok Seo;Sena Lee;Yongkyu Jin;Sejung Yang
    • Journal of Biomedical Engineering Research
    • /
    • v.44 no.5
    • /
    • pp.346-353
    • /
    • 2023
  • In this paper, we propose an approach that leverages deep learning to find optimal reference points for precise tooth segmentation in three-dimensional tooth point cloud data. A dataset consisting of 350 aligned maxillary and mandibular point clouds was used as input, with the two end coordinates of each individual tooth as ground truth. A two-dimensional image was created by projecting the rendered point cloud data along the Z-axis, and images of individual teeth were created from it using an object detection algorithm. The proposed algorithm is designed by adding modules to the U-Net model that allow effective learning of a narrow region, and it detects the two end points of each tooth from the generated tooth image. In an evaluation using DSC, Euclidean distance, and MAE as indicators, it achieved superior performance compared to other U-Net-based models. In future research, we will develop an algorithm that finds the reference points of the point cloud by back-projecting the reference points detected in the image into three dimensions, and, based on this, an algorithm to segment individual teeth in the point cloud through image processing techniques.
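The Z-axis projection step described above, rendering a point cloud into a 2D image, can be sketched as simple X/Y binning. The grid size and the binary occupancy rule are assumptions for illustration; the paper's rendering pipeline is more elaborate.

```python
def project_z(points, width=8, height=8):
    """points: [(x, y, z), ...] with x, y normalized to [0, 1).
    Returns a 2D grid where a cell is 1 if any point falls into it;
    the Z coordinate is discarded by the projection."""
    grid = [[0] * width for _ in range(height)]
    for x, y, _z in points:
        col = min(int(x * width), width - 1)
        row = min(int(y * height), height - 1)
        grid[row][col] = 1
    return grid
```

Back-projection, mentioned as future work, would invert this mapping from detected image coordinates to the corresponding point-cloud cells.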

Application Possibility of Control Points Extracted from Ortho Images and DTED Level 2 for High Resolution Satellite Sensor Modeling (정사영상과 DTED Level 2 자료에서 자동 추출한 지상기준점의 IKONOS 위성영상 모델링 적용 가능성 연구)

  • Lee, Tae-Yoon;Kim, Tae-Jung;Park, Wan-Yong
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.15 no.4
    • /
    • pp.103-109
    • /
    • 2007
  • Ortho images and Digital Elevation Models (DEMs) have been applied in various fields. Ground Control Points (GCPs) are necessary for processing high resolution satellite images, but surveying GCPs requires much time and expense. This study investigated whether GCPs automatically extracted from ortho images and DTED Level 2 data can be applied to sensor modeling for high resolution satellite images. We analyzed the performance of the sensor model established with automatically extracted GCPs. We acquired GCPs by matching the satellite image against ortho images and took their heights from DTED Level 2 data. The spatial resolution of the DTED Level 2 data is about 30 m, and its absolute vertical accuracy is below 18 m (above MSL); the spatial resolution of the ortho images is 1 m. We established a sensor model for IKONOS images using the automatically extracted GCPs and generated DEMs from the images; the accuracy of the sensor modeling is about 4~5 pixels. We also established sensor models using GCPs acquired by GPS surveying and generated DEMs. The two DEMs were similar, and the RMSE between heights from the DEM based on automatic GCPs and DTED Level 2 is about 9 m. We therefore conclude that GCPs from DTED Level 2 and ortho images can be used for IKONOS sensor modeling.
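The height comparison above reduces to a root-mean-square error between two sets of elevations; a minimal sketch (the sample values below are hypothetical, not the study's data):

```python
import math

def rmse(a, b):
    """Root-mean-square error between two equal-length height sequences."""
    assert len(a) == len(b) and a, "sequences must match and be non-empty"
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))
```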


Analysis of Shadow Effect on High Resolution Satellite Image Matching in Urban Area (도심지역의 고해상도 위성영상 정합에 대한 그림자 영향 분석)

  • Yeom, Jun Ho;Han, You Kyung;Kim, Yong Il
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.21 no.2
    • /
    • pp.93-98
    • /
    • 2013
  • Multi-temporal high resolution satellite images are essential data for efficient city analysis and monitoring. Yet even when acquired over the same location, whether by identical or different sensors, multi-temporal images exhibit geometric inconsistency; matching points between images must therefore be extracted to register them. With images of an urban area, however, it is difficult to extract matching points accurately because buildings, trees, bridges, and other artificial objects cast shadows over a wide area, and these shadows have different intensities and directions in multi-temporal images. In this study, we analyze the shadow effect on image matching of high resolution satellite images in urban areas using the Scale-Invariant Feature Transform (SIFT), a representative matching-point extraction method, together with an automatic shadow extraction method. Shadow segments are extracted using spatial and spectral attributes derived from image segmentation, and shadow adjacency information is considered using a building-edge buffer. SIFT matching points extracted from shadow segments are eliminated from the matching-point pairs, and image matching is then performed. Finally, we evaluate the quality of the matching points and the image matching results, visually and quantitatively, to analyze the shadow effect on image matching of high resolution satellite images.
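The elimination step described above, dropping SIFT matching points that fall inside extracted shadow segments, can be sketched with a binary shadow mask. The keypoint and mask representations here are simplified stand-ins for the paper's pipeline.

```python
def filter_shadow_points(keypoints, shadow_mask):
    """keypoints: [(row, col), ...]; shadow_mask: 2D grid with 1 for shadow.
    Keep only the points that do not land on a shadow pixel."""
    return [(r, c) for (r, c) in keypoints if not shadow_mask[r][c]]
```

In a real pipeline the keypoints would come from a SIFT detector (e.g. OpenCV's) and the mask from the segmentation stage; only the set-filtering logic is shown here.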

Extraction of Agricultural Land Use and Crop Growth Information using KOMPSAT-3 Resolution Satellite Image (KOMPSAT-3급 위성영상을 이용한 농업 토지이용 및 작물 생육정보 추출)

  • Lee, Mi-Seon;Kim, Seong-Joon;Shin, Hyoung-Sub;Park, Jin-Ki;Park, Jong-Hwa
    • Korean Journal of Remote Sensing
    • /
    • v.25 no.5
    • /
    • pp.411-421
    • /
    • 2009
  • This study develops a semi-automatic method for extracting agricultural land use and vegetation information from high resolution satellite images. IKONOS-2 satellite images (May 25, 2001; December 25, 2001; and October 23, 2003), QuickBird-2 satellite images (May 1, 2006 and November 17, 2004), and a KOMPSAT-2 satellite image (September 17, 2007), which resemble the spatial resolution and spectral characteristics of KOMPSAT-3, were used. Precise agricultural land use classification was attempted using the ISODATA unsupervised classification technique, and the result was compared with on-screen digitized land use accompanied by field investigation. For the extraction of crop growth information, three crops (paddy rice, corn, and red pepper) were selected, and their spectral characteristics were collected during each growing period using a ground spectroradiometer. The vegetation indices RVI, NDVI, ARVI, and SAVI were evaluated for the crops. The evaluation process was implemented using the ERDAS IMAGINE Spatial Modeler Tool.
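The four vegetation indices evaluated above have standard band-ratio definitions, sketched below from red, near-infrared, and (for ARVI) blue reflectances. The soil-adjustment factor L = 0.5 and gamma = 1.0 are the commonly used defaults, not values taken from the study.

```python
def rvi(nir, red):
    """Ratio Vegetation Index."""
    return nir / red

def ndvi(nir, red):
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index; L dampens soil background effects."""
    return (1 + L) * (nir - red) / (nir + red + L)

def arvi(nir, red, blue, gamma=1.0):
    """Atmospherically Resistant Vegetation Index: the blue band corrects
    the red band for atmospheric scattering."""
    rb = red - gamma * (blue - red)
    return (nir - rb) / (nir + rb)
```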

A Dynamic Management Method for FOAF Using RSS and OLAP cube (RSS와 OLAP 큐브를 이용한 FOAF의 동적 관리 기법)

  • Sohn, Jong-Soo;Chung, In-Jeong
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.2
    • /
    • pp.39-60
    • /
    • 2011
  • Since the introduction of web 2.0 technology, social network services have been recognized as a foundation of important future information technology. The advent of web 2.0 has changed who creates content: in the existing web, content creators were service providers, whereas in the recent web they are service users. Users share experiences with other users and improve content quality, thereby increasing the importance of social networks. As a result, diverse forms of social network service have emerged from the relations and experiences of users. A social network is a network for constructing and expressing social relations among people who share interests and activities. Today's social network services are not merely confined to showing user interactions; they have developed to a level at which content generation and evaluation interact with each other. As the volume of content generated by social network services and the number of connections between users have drastically increased, social network extraction has become more complicated, and the following problems arise. The first is the insufficient representational power of objects in the social network. The second is the inability to express the diverse connections among users. The third is the difficulty of reflecting dynamic changes in the social network caused by changes in user interests. The last is the lack of a method capable of integrating and processing data efficiently in a heterogeneous distributed computing environment. The first and last problems can be solved by using FOAF, a tool for describing ontology-based user profiles for the construction of social networks. Solving the second and third problems, however, requires a novel technique to reflect dynamic changes of user interests and relations.
In this paper, we propose a novel method to overcome the above problems of existing social network extraction methods by applying FOAF (a vocabulary for describing user profiles) and RSS (a web content syndication format) to an OLAP system in order to dynamically update and manage FOAF. We employ data interoperability, an important characteristic of FOAF. We then use RSS to reflect changes such as the flow of time and user interests; RSS provides a standard vocabulary for syndicating web site content in RDF/XML form. We collect personal information and relations of users using FOAF, and user contents using RSS. The collected data is inserted into the database using a star schema. The proposed system generates an OLAP cube from the data in the database, which is then processed by the 'Dynamic FOAF Management Algorithm'. The algorithm consists of two functions: find_id_interest() and find_relation(). Find_id_interest() extracts user interests during the input period, and find_relation() extracts users matching those interests. Finally, the proposed system reconstructs FOAF by reflecting the extracted relationships and interests of users. To justify the suggested idea, we present the implemented result together with its analysis. We used the C# language and an MS-SQL database, with FOAF and RSS data collected from livejournal.com as input. The implemented result shows that users' foaf:interest increased by an average of 19 percent over four weeks. In proportion to this change, the number of users' foaf:knows relations grew by an average of 9 percent over four weeks.
Because we use FOAF and RSS, which have wide support in web 2.0 and social network services, as basic data, we have a definite advantage in utilizing user data distributed across diverse web sites and services regardless of language or computer type. Using the method suggested in this paper, we can provide better services that cope with rapid changes in user interests through the automatic application of FOAF.
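The two functions named above can be illustrated over a plain list of (user, timestamp, tag) posts instead of an OLAP cube; the data layout is an assumption for illustration only.

```python
def find_id_interest(posts, user, period):
    """Collect the interests (tags) a user posted about during `period`,
    an inclusive (start, end) tuple of integer timestamps."""
    start, end = period
    return {tag for (u, t, tag) in posts if u == user and start <= t <= end}

def find_relation(posts, user, period):
    """Find other users who share at least one interest with `user`
    during the same period."""
    mine = find_id_interest(posts, user, period)
    start, end = period
    return {u for (u, t, tag) in posts
            if u != user and start <= t <= end and tag in mine}
```

Reconstructing FOAF would then amount to writing the returned interests and relations back as foaf:interest and foaf:knows properties.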