• Title/Summary/Keyword: Automatic Data Extraction

Search Results: 309

Automatic Detection of Dead Trees Based on Lightweight YOLOv4 and UAV Imagery

  • Yuanhang Jin;Maolin Xu;Jiayuan Zheng
    • Journal of Information Processing Systems / v.19 no.5 / pp.614-630 / 2023
  • Dead trees significantly impact forest production and the ecological environment and constrain the sustainable development of forests. A lightweight YOLOv4 dead tree detection algorithm based on unmanned aerial vehicle images is proposed to address the limitations of current dead tree detection, which relies mainly on inefficient, unsafe, and error-prone manual inspections. An improved logarithmic transformation method was developed in data pre-processing to reveal tree features in shadowed areas. For the model structure, the original CSPDarkNet-53 backbone feature extraction network was replaced by MobileNetV3, and some of the standard convolutional blocks in the original extraction network were replaced by depthwise separable convolution blocks. The ReLU6 activation function replaced the original LeakyReLU activation function to make the network more robust under low-precision computation. The K-means++ clustering method was also integrated to generate anchor boxes better suited to the dataset. The experimental results show that the improved algorithm achieved an accuracy of 97.33%, higher than other methods, and a detection speed higher than that of YOLOv4, improving both the efficiency and accuracy of the detection process.
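The anchor-generation step mentioned above can be sketched as follows. This is an illustrative example, not the authors' code: it runs plain Euclidean k-means with k-means++ seeding on (width, height) pairs, whereas YOLO-style anchor clustering often uses an IoU-based distance instead; the box dimensions are synthetic.

```python
import numpy as np

def kmeans_pp_anchors(whs, k=9, iters=50, seed=0):
    """Cluster (width, height) pairs of ground-truth boxes with
    k-means++ seeding to derive dataset-specific anchor boxes."""
    rng = np.random.default_rng(seed)
    # --- k-means++ seeding: first centre uniform, rest ~ squared distance ---
    centres = [whs[rng.integers(len(whs))]]
    for _ in range(k - 1):
        d2 = np.min([np.square(whs - c).sum(axis=1) for c in centres], axis=0)
        centres.append(whs[rng.choice(len(whs), p=d2 / d2.sum())])
    centres = np.array(centres, dtype=float)
    # --- Lloyd iterations ---
    for _ in range(iters):
        labels = ((whs[:, None, :] - centres[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = whs[labels == j].mean(axis=0)
    # Sort by area so smaller anchors map to finer detection heads.
    return centres[np.argsort(centres[:, 0] * centres[:, 1])]

rng = np.random.default_rng(1)
boxes = rng.uniform(8, 128, size=(300, 2))   # synthetic (w, h) pairs
anchors = kmeans_pp_anchors(boxes, k=9)
```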

SuperDepthTransfer: Depth Extraction from Image Using Instance-Based Learning with Superpixels

  • Zhu, Yuesheng;Jiang, Yifeng;Huang, Zhuandi;Luo, Guibo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.10 / pp.4968-4986 / 2017
  • In this paper, we primarily address the difficulty of automatically generating a plausible depth map from a single image in an unstructured environment. The aim is to extrapolate a depth map with a more correct, rich, and distinct depth order that is both quantitatively accurate and visually pleasing. Our technique, fundamentally based on the preexisting DepthTransfer algorithm, transfers depth information at the level of superpixels, within a framework that replaces the pixel basis with instance-based learning. A key feature of the superpixel representation, which enhances matching precision, is the posterior incorporation of predicted semantic labels into the depth extraction procedure. Finally, a modified Cross Bilateral Filter is leveraged to refine the final depth field. For training and evaluation, experiments were conducted on the Make3D Range Image Dataset and demonstrate that this depth estimation method outperforms state-of-the-art methods on the correlation coefficient, mean log10 error, and root mean squared error metrics, and achieves comparable performance on the average relative error metric, in both efficacy and computational efficiency. The approach can be used to automatically convert 2D images into stereo for 3D visualization, producing anaglyph images that are more realistic and immersive.
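The final filtering stage, which smooths the depth field while respecting edges in the colour image, can be illustrated with a plain (unmodified) cross bilateral filter. This is a generic sketch of the technique, not the paper's modified variant; window radius and sigma values are invented.

```python
import numpy as np

def cross_bilateral(depth, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Smooth a depth map while preserving edges of a guide image:
    each output pixel is a weighted average of nearby depth values,
    weighted by spatial distance AND by similarity in the guide."""
    h, w = depth.shape
    out = np.zeros_like(depth)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    pad_d = np.pad(depth, radius, mode="edge")
    pad_g = np.pad(guide, radius, mode="edge")
    for i in range(h):
        for j in range(w):
            dwin = pad_d[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            gwin = pad_g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # range weight comes from the guide image, not the depth itself
            rng_w = np.exp(-((gwin - guide[i, j]) ** 2) / (2 * sigma_r**2))
            wgt = spatial * rng_w
            out[i, j] = (wgt * dwin).sum() / wgt.sum()
    return out

depth = np.ones((8, 8))                          # toy constant depth field
guide = np.random.default_rng(0).random((8, 8))  # toy guide image
smoothed = cross_bilateral(depth, guide)
```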

A new Clustering Algorithm for GPS Trajectories with Maximum Overlap Interval (최대 중첩구간을 이용한 새로운 GPS 궤적 클러스터링)

  • Kim, Taeyong;Park, Bokuk;Park, Jinkwan;Cho, Hwan-Gue
    • KIISE Transactions on Computing Practices / v.22 no.9 / pp.419-425 / 2016
  • In navigation systems, keeping map data up to date is an important task. Manual updates involve substantial cost and cannot reflect road changes immediately. In this paper, we present a method for trajectory-center extraction, which is essential for automatic road map generation from GPS data. Although clustered trajectories are necessary to extract the center road, real trajectories are not clustered. To address this problem, this paper proposes a new method using the maximum overlapping interval together with trajectory clustering. Finally, we apply the Virtual Running method to extract the center road from the clustered trajectories. We conducted experiments on massive real taxi GPS data sets collected throughout Gangnam-gu, Seongnam, and all parts of Seoul. Experimental results showed that our method is stable and efficient for extracting the center trajectory of real roads.
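The maximum-overlap idea can be sketched with a sweep line: representing each trajectory by an interval along a common road axis (an assumed simplification for illustration, not the paper's exact representation), the sub-interval covered by the most trajectories is found in O(n log n).

```python
def max_overlap_interval(intervals):
    """Sweep-line over interval endpoints: returns the maximum number of
    overlapping intervals and one sub-interval where that maximum holds."""
    events = []
    for lo, hi in intervals:
        events.append((lo, +1))   # interval opens
        events.append((hi, -1))   # interval closes
    events.sort()
    best, cur, start, span = 0, 0, None, None
    for x, delta in events:
        cur += delta
        if cur > best:                 # new deepest overlap begins here
            best, start, span = cur, x, None
        elif span is None and start is not None and cur < best:
            span = (start, x)          # deepest overlap just ended
    return best, span

count, span = max_overlap_interval([(0, 5), (2, 8), (3, 6), (7, 9)])
```

Here three trajectories overlap on [3, 5), so `count` is 3 and `span` is `(3, 5)`.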

Automatic Extraction of Training Data Based on Semi-supervised Learning for Time-series Land-cover Mapping (시계열 토지피복도 제작을 위한 준감독학습 기반의 훈련자료 자동 추출)

  • Kwak, Geun-Ho;Park, No-Wook
    • Korean Journal of Remote Sensing / v.38 no.5_1 / pp.461-469 / 2022
  • This paper presents a novel training data extraction approach using semi-supervised learning (SSL)-based classification, without analyst intervention, for time-series land-cover mapping. The SSL-based approach first performs initial classification using initial training data obtained from past images whose land-cover characteristics are similar to the image to be classified. Reliable training data are then extracted from the initial classification result through SSL-based iterative classification, using classification uncertainty information and the class labels of neighboring pixels as constraints. The potential of the SSL-based training data extraction approach was evaluated in a classification experiment using unmanned aerial vehicle images over croplands. The new training data automatically extracted by the proposed SSL approach significantly alleviated the misclassification in the initial classification result. In particular, isolated pixels were substantially reduced by considering spatial contextual information from adjacent pixels. Consequently, the classification accuracy of the proposed approach was similar to that of classification using manually extracted training data. These results indicate that the SSL-based iterative classification presented in this study can be effectively applied to automatically extract reliable training data for time-series land-cover mapping.
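The reliable-pixel selection step can be sketched as follows. This is an illustrative reading of the two constraints (confidence threshold plus neighbour agreement), not the paper's exact rule; the threshold value and majority criterion are assumptions.

```python
import numpy as np

def extract_reliable_labels(prob, tau=0.9):
    """Select reliable pseudo-labels from per-pixel class probabilities:
    a pixel qualifies when its confidence exceeds tau AND its label agrees
    with the majority of its 4-neighbours (suppressing isolated pixels)."""
    labels = prob.argmax(axis=-1)
    conf = prob.max(axis=-1)
    h, w = labels.shape
    reliable = np.zeros((h, w), dtype=bool)
    for i in range(h):
        for j in range(w):
            if conf[i, j] < tau:          # uncertainty constraint
                continue
            nbrs = [labels[y, x] for y, x in
                    ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if 0 <= y < h and 0 <= x < w]
            # spatial-context constraint: own label must be the majority
            if nbrs.count(labels[i, j]) * 2 >= len(nbrs):
                reliable[i, j] = True
    return labels, reliable

prob = np.full((4, 4, 2), [0.95, 0.05])  # toy 2-class probability map
prob[2, 2] = [0.05, 0.95]                # one isolated, confidently deviant pixel
labels, reliable = extract_reliable_labels(prob, tau=0.9)
```

The isolated pixel at (2, 2) is confident but disagrees with all its neighbours, so it is excluded from the new training data.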

A Study on the Data Extraction and Formalization for the Generation of Structural Analysis Model from Ship Design Data (선체 구조설계로부터 구조해석 모델 생성에 필요한 데이타의 추출과 정형화에 관한 연구)

  • Jae-Hwan Lee;Yong-Dae Kim
    • Journal of the Society of Naval Architects of Korea / v.30 no.3 / pp.90-99 / 1993
  • As the finite element method has become an effective design tool in ship structural analysis, modeling of three-dimensional finite element meshes is more necessary than before. However, the unique style and complexity of a ship usually make this modeling hard and costly. Although most FEM pre-processors and geometric modelers provide modeling functions, their capability is quite limited for complicated structures. In order to perform FEM modeling quickly, it is necessary to extract, rearrange, and formalize data from the ship design database for partially automatic mesh generation. In this paper, the process of designing relational data tables from design data is shown as a part of analysis automation, applying engineering database concepts.


Segmentation of Airborne LIDAR Data: From Points to Patches (항공 라이다 데이터의 분할: 점에서 패치로)

  • Lee Im-Pyeong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.24 no.1 / pp.111-121 / 2006
  • Recently, many studies have been performed to apply airborne LIDAR data to extracting urban models. To efficiently model the man-made objects that are the main components of these urban models, it is important to automatically extract planar patches from the set of measured three-dimensional points. Although some research has been carried out on their automatic extraction, no published method is yet fully satisfactory in terms of the accuracy and completeness of the segmentation results and their computational efficiency. This study thus aimed to develop an efficient approach to the automatic segmentation of planar patches from the three-dimensional points acquired by an airborne LIDAR system. The proposed method consists of establishing adjacency between three-dimensional points, grouping small numbers of points into seed patches, and growing the seed patches into surface patches. The core features of this method are improving the segmentation results by employing a variable threshold value repeatedly updated through statistical analysis during the patch-growing process, and achieving high computational efficiency using priority heaps and sequential least squares adjustment. The proposed method was applied to real LIDAR data to evaluate its performance. Using the proposed method, LIDAR data composed of a huge number of three-dimensional points can be converted into a set of surface patches, which are more explicit and robust descriptions. This intermediate conversion can be effectively used to solve object recognition problems such as building extraction.
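The combination of a priority heap and a statistically updated threshold can be sketched on a simple height grid. This is a heavily simplified stand-in for the paper's method (which fits planes with sequential least squares on true 3-D points); the threshold rule `k * std` and its floor value are invented for illustration.

```python
import heapq
import numpy as np

def grow_patch(z, seed, k=3.0):
    """Grow a patch on a height grid from a seed cell. Candidates come off
    a priority heap ordered by height difference from the patch mean; the
    acceptance threshold is k * std of heights accepted so far, i.e. a
    variable threshold updated statistically as the patch grows."""
    h, w = z.shape
    in_patch = np.zeros((h, w), dtype=bool)
    in_patch[seed] = True
    accepted = [z[seed]]
    heap = []

    def push_nbrs(i, j):
        for y, x in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= y < h and 0 <= x < w and not in_patch[y, x]:
                heapq.heappush(heap, (abs(z[y, x] - np.mean(accepted)), y, x))

    push_nbrs(*seed)
    while heap:
        _, i, j = heapq.heappop(heap)
        if in_patch[i, j]:
            continue
        thresh = max(k * np.std(accepted), 0.1)   # floor keeps early growth alive
        if abs(z[i, j] - np.mean(accepted)) > thresh:
            continue                              # candidate rejected
        in_patch[i, j] = True
        accepted.append(z[i, j])
        push_nbrs(i, j)
    return in_patch

z = np.zeros((6, 6))
z[:, 3:] = 5.0                 # a 5 m height jump splits the grid in two
patch = grow_patch(z, (0, 0))  # grows over the flat left half only
```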

Automatic Quality Evaluation with Completeness and Succinctness for Text Summarization (완전성과 간결성을 고려한 텍스트 요약 품질의 자동 평가 기법)

  • Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems / v.24 no.2 / pp.125-148 / 2018
  • Recently, as the demand for big data analysis increases, cases of analyzing unstructured data and using the results are also increasing. Among the various types of unstructured data, text is used as a means of communicating information in almost all fields, and it attracts many analysts because it is very large in volume and relatively easy to collect compared to other unstructured and structured data. Among the various text analysis applications, document classification, which classifies documents into predetermined categories; topic modeling, which extracts major topics from a large number of documents; sentiment analysis or opinion mining, which identifies emotions or opinions contained in texts; and text summarization, which summarizes the main contents of one or several documents, have been actively studied. In particular, text summarization is actively applied in business through news summary services, privacy policy summary services, etc. In academia, much research has been done on both the extractive approach, which selectively presents the main elements of a document, and the abstractive approach, which extracts elements of the document and composes new sentences by combining them. However, techniques for evaluating the quality of automatically summarized documents have not made as much progress as automatic text summarization itself. Most existing studies on summarization quality evaluation manually summarized documents, used them as reference documents, and measured the similarity between the automatic summary and the reference document. Specifically, automatic summarization is performed on the full text through various techniques, and the quality of the automatic summary is measured by comparison with the reference document, which is regarded as an ideal summary.
Reference documents are provided in two major ways. The most common is manual summarization, in which a person creates an ideal summary by hand. Since this method requires human intervention, it takes a lot of time and cost to write the summary, and the evaluation result may differ depending on the summarizer. To overcome these limitations, attempts have been made to measure the quality of summary documents without human intervention. As a representative attempt, a method has recently been devised that reduces the size of the full text and measures the similarity between the reduced full text and the automatic summary. In this method, the more frequently the terms of the full text appear in the summary, the better the quality of the summary is judged to be. However, since summarization essentially means condensing a large amount of content while minimizing omissions, a summary judged "good" based only on term frequency is not necessarily a good summary in this essential sense. To overcome the limitations of these previous studies, this study proposes an automatic quality evaluation method for text summarization based on the essential meaning of summarization. Specifically, succinctness is defined as an element indicating how little duplicated content appears among the sentences of the summary, and completeness is defined as an element indicating how little of the original content is omitted from the summary. In this paper, we propose a method for the automatic quality evaluation of text summarization based on these two concepts.
To evaluate the practical applicability of the proposed methodology, 29,671 sentences were extracted from TripAdvisor's hotel reviews, the reviews of each hotel were summarized, and the quality of the summaries was evaluated according to the proposed methodology. The paper also provides a way to integrate completeness and succinctness, which are in a trade-off relationship, into an F-score, and proposes a method to perform optimal summarization by changing the sentence-similarity threshold.
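The two concepts and their F-score combination can be sketched as follows. The scoring formulas and the toy Jaccard word-overlap similarity here are an illustrative reading of the concepts, not the paper's exact definitions.

```python
def evaluate_summary(full_sents, summary_sents, sim, theta=0.5):
    """Completeness: fraction of source sentences covered by some summary
    sentence (how little is omitted). Succinctness: 1 minus the fraction of
    summary sentence pairs that duplicate each other (how little is
    repeated). The two are combined as a harmonic-mean F-score."""
    covered = sum(1 for s in full_sents
                  if any(sim(s, t) >= theta for t in summary_sents))
    completeness = covered / len(full_sents)
    pairs = [(a, b) for i, a in enumerate(summary_sents)
             for b in summary_sents[i + 1:]]
    dup = sum(1 for a, b in pairs if sim(a, b) >= theta)
    succinctness = 1.0 - (dup / len(pairs) if pairs else 0.0)
    f = (2 * completeness * succinctness / (completeness + succinctness)
         if completeness + succinctness else 0.0)
    return completeness, succinctness, f

def jaccard(a, b):
    """Toy sentence similarity: word-set overlap."""
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

full = ["the room was clean", "staff were friendly", "breakfast was cold"]
summary = ["the room was clean", "staff were friendly"]
comp, succ, f = evaluate_summary(full, summary, jaccard)
```

The summary omits one of three source sentences (completeness 2/3) and repeats nothing (succinctness 1.0), giving an F-score of 0.8.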

Knowledge-based Video Retrieval System Using Korean Closed-caption (한국어 폐쇄자막을 이용한 지식기반 비디오 검색 시스템)

  • 조정원;정승도;최병욱
    • Journal of the Institute of Electronics Engineers of Korea CI / v.41 no.3 / pp.115-124 / 2004
  • Content-based retrieval using low-level features can hardly provide retrieval results that correspond with the conceptual demands of users for intelligent retrieval. Video includes not only moving-picture data but also audio and closed-caption data. Knowledge-based video retrieval can provide retrieval results that correspond with users' conceptual demands because it performs automatic indexing with such varied data. In this paper, we present a knowledge-based video retrieval system using Korean closed captions. The closed captions are indexed by a Korean keyword extraction system that includes a morphological analysis process. As a result, videos can be retrieved by keyword from the indexing database. In the experiments, we applied the proposed method to news video with closed captions generated by a Korean stenographic system, and empirically confirmed that the proposed method provides retrieval results that correspond with more meaningful conceptual demands of users.
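The caption-indexing step can be sketched as a simple inverted index from keywords to caption time codes. This is a stand-in for illustration only: the actual system extracts keywords through Korean morphological analysis, whereas this sketch uses plain whitespace tokenisation, and the caption data is invented.

```python
def build_caption_index(captions):
    """Map each caption keyword to the list of time codes (seconds) of
    the caption segments containing it, enabling retrieval by keyword."""
    index = {}
    for start, text in captions:
        for word in text.lower().split():
            index.setdefault(word, []).append(start)
    return index

captions = [(0.0, "President announces new policy"),
            (12.5, "Policy draws mixed reactions"),
            (30.0, "Weather forecast for the weekend")]
idx = build_caption_index(captions)
```

A query for "policy" now returns the two segments where the word occurs.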

Development of a Robot Performance Evaluation System Using Leica LTD 500 Laser Tracker (레이저 트랙커(Leica LTD 500)를 이용한 로봇 성능 평가 시스템 개발)

  • Kim Mi-Kyung;Yoon Cheon-Seok;Kang Hee-Jun;Seo Yeong-Su;Ro Young-Shick;Son Hong-Rae
    • Proceedings of the Korean Society of Precision Engineering Conference / 2005.06a / pp.1001-1006 / 2005
  • A Robot Performance Evaluation System (RPES) using the Leica LTD 500 laser tracker was developed according to the ISO 9283 robot performance criteria. The developed system sets up a test robot to continuously move along prescribed cyclic trajectories without human intervention while the laser tracker simultaneously measures the robot's movement. The system then automatically extracts the required data from the enormous amount of measured data and computes the various performance criteria that represent the present state of the test robot's performance. This paper explains how the ISO 9283 robot performance criteria were used in the developed system, and suggests an automatic data extraction algorithm for the mass of measured data. A user-friendly RPES software package was also developed in Visual Basic, satisfying the needs of Hyundai Motor Company. The developed system was implemented on a NACHI 8608 AM 11 robot, and the resulting output shows the effectiveness of the developed system.


Simulation Based Performance Assessment of a LIDAR Data Segmentation Algorithm (라이다데이터 분할 알고리즘의 시뮬레이션 기반 성능평가)

  • Kim, Seong-Joon;Lee, Im-Pyeong
    • Journal of Korean Society for Geospatial Information Science / v.18 no.2 / pp.119-129 / 2010
  • Many algorithms for processing LIDAR data have been developed for diverse applications, including patch segmentation, bare-earth filtering, and building extraction. However, since we cannot know the true locations of individual LIDAR points exactly, it is difficult to assess the performance of a LIDAR data processing algorithm. In this paper, we therefore attempted a performance assessment of the segmentation algorithm developed by Lee (2006) using LIDAR data generated through simulation based on sensor modeling. Based on simulation, the performance of a LIDAR processing algorithm can thus be assessed more objectively and quantitatively with an automatic procedure.