Title/Summary/Keyword: extracting methods


Automatic Extraction of Roof Components from LiDAR Data Based on Octree Segmentation (LiDAR 데이터를 이용한 옥트리 분할 기반의 지붕요소 자동추출)

  • Song, Nak-Hyeon; Cho, Hong-Beom; Cho, Woo-Sug; Shin, Sung-Woong
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.25 no.4 / pp.327-336 / 2007
  • The 3D building model is one of the crucial components of 3D geospatial information. Existing methods for 3D building modeling depend mainly on manual photogrammetric processes by a stereoplotter compiler, which take a great amount of time and effort. In addition, the automatic methods proposed in research papers and experimental trials are limited in describing the details of buildings and lack geometric accuracy. For an automated approach, it is essential that the boundary and shape of buildings be extracted reliably by a sophisticated algorithm. In recent years, airborne LiDAR data representing the earth's surface in 3D have been utilized in many different fields. However, clean and correct boundary extraction without human intervention remains technically difficult. Airborne LiDAR data are far more practical for reconstructing the roof tops of buildings whose boundary lines can be taken from existing digital maps. This paper proposes a method to reconstruct the roof tops of buildings using airborne LiDAR data together with building boundary lines from a digital map. The primary process performs octree-based segmentation of the airborne LiDAR data recursively in 3D space until no more points remain to be segmented. Once the octree-based segmentation is complete, the segmented patches are merged based on their geometric spatial characteristics. The experimental results showed that the proposed method was capable of extracting various building roof components such as plane, gable, polyhedral, and curved surfaces.
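
As a rough illustration of the recursion this abstract describes, here is a minimal Python sketch of octree splitting on a point cloud. The stopping criteria (min_points, max_depth) and the synthetic gabled-roof data are hypothetical stand-ins; the paper's actual criteria and the subsequent geometric merging of patches are not reproduced.

```python
import numpy as np

def octree_segment(points, min_points=30, max_depth=8, depth=0, leaves=None):
    """Recursively split a LiDAR point set into octants until each
    leaf is small enough to act as a candidate roof patch."""
    if leaves is None:
        leaves = []
    if len(points) <= min_points or depth >= max_depth:
        if len(points) > 0:
            leaves.append(points)
        return leaves
    center = points.mean(axis=0)
    # Assign each point to one of 8 octants by sign of (p - center).
    codes = ((points > center) * np.array([1, 2, 4])).sum(axis=1)
    for code in range(8):
        octant = points[codes == code]
        if len(octant) > 0:
            octree_segment(octant, min_points, max_depth, depth + 1, leaves)
    return leaves

# Example: 1,000 synthetic points on a gabled roof (two sloped planes).
rng = np.random.default_rng(0)
xy = rng.uniform(0, 10, size=(1000, 2))
z = 5.0 - np.abs(xy[:, 0] - 5.0) * 0.4
patches = octree_segment(np.column_stack([xy, z]))
print(f"{len(patches)} leaf patches to merge by geometric characteristics")
```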

Deep Learning-based SISR (Single Image Super Resolution) Method using RDB (Residual Dense Block) and Wavelet Prediction Network (RDB 및 웨이블릿 예측 네트워크 기반 단일 영상을 위한 심층 학습기반 초해상도 기법)

  • Nguyen, Huu Dung; Kim, Eung-Tae
    • Journal of Broadcast Engineering / v.24 no.5 / pp.703-712 / 2019
  • Single image super-resolution (SISR) aims to generate a visually pleasing high-resolution image from its degraded low-resolution measurement. In recent years, deep learning-based super-resolution methods have been actively researched and have shown reliable, high performance. A typical method is WaveletSRNet, which restores high-resolution images through wavelet coefficient learning based on feature maps of images. However, WaveletSRNet has two disadvantages. One is long processing time due to the complexity of the algorithm; the other is inefficient use of feature maps when extracting the input image's features. To address these problems, we propose an efficient single image super-resolution method named RDB-WaveletSRNet. The proposed method uses residual dense blocks to effectively extract low-resolution feature maps and improve super-resolution performance, and adjusts the growth rate appropriately to keep the computational cost manageable. In addition, wavelet packet decomposition is used to obtain the wavelet coefficients needed for large scaling ratios. In experiments on various images, the proposed method achieved faster processing time and better image quality than conventional methods, improving PSNR by 0.1813 dB and running 1.17 times faster than the conventional method.
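
For readers unfamiliar with the residual dense block this method builds on, here is a minimal PyTorch sketch. The channel count, growth rate, and layer depth are illustrative values, not the paper's configuration, and the wavelet prediction branch is omitted.

```python
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    """Dense connections: each conv sees all earlier feature maps;
    a 1x1 conv fuses them and a residual connection adds the input."""
    def __init__(self, channels=64, growth_rate=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels + i * growth_rate, growth_rate, 3, padding=1),
                nn.ReLU(inplace=True)))
        self.fusion = nn.Conv2d(channels + num_layers * growth_rate, channels, 1)

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return x + self.fusion(torch.cat(features, dim=1))  # local residual

x = torch.randn(1, 64, 48, 48)
print(ResidualDenseBlock()(x).shape)  # torch.Size([1, 64, 48, 48])
```

The growth rate controls how many new feature maps each layer contributes, which is the knob the abstract mentions for trading quality against computational cost.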

A Case Study on The Data Processing and Interpretation of Aeromagnetic Survey Conducted in The Low Latitude Area: Stung Treng, Cambodia (저위도 캄보디아 스퉁트렝 지역의 항공자력탐사 자료처리 및 해석)

  • Shin, Eun-Ju; Ko, Kwang-Beom; You, Young-June; Jung, Yeon-Ho
    • Geophysics and Geophysical Exploration / v.15 no.3 / pp.136-143 / 2012
  • In this case study, we present various and consistent processing techniques for the reasonable interpretation of aeromagnetic data. In the processing stage, we focused on three major aspects. First, in a low-latitude area, severe artifacts occur as a result of the reduction-to-the-pole technique. To overcome this problem, several alternative methods were investigated; from a comparison of the techniques, we concluded that the energy balancing method gives the most satisfactory result. Second, because of limited a priori information and the wide, thick soil cover in the survey area, a detailed geological survey was nearly impossible, so we investigated newer techniques such as extracting slope, curvature, and aspect information, mainly used in the GIS field, as well as conventional methods. Finally, using Euler deconvolution, we extracted depth information on the magnetic anomalous body. From a combined analysis of the depth information and the previously discussed results, a detailed future survey area was proposed. We believe the series of processing techniques discussed in this study can serve as a useful guideline for domestic and overseas resource development projects.
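
The slope and aspect attributes borrowed from the GIS field are straightforward grid derivatives. A minimal NumPy sketch follows, applied to a synthetic anomaly grid standing in for the gridded aeromagnetic data (one common convention for aspect is used); the energy balancing and Euler deconvolution steps are not shown.

```python
import numpy as np

def slope_aspect(grid, spacing=1.0):
    """Slope (degrees) and aspect (downslope direction, degrees)
    of a 2-D grid, the GIS-style derivatives applied to anomaly data."""
    dz_dy, dz_dx = np.gradient(grid, spacing)
    slope = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))
    aspect = np.degrees(np.arctan2(-dz_dx, dz_dy)) % 360.0
    return slope, aspect

# Synthetic Gaussian anomaly standing in for gridded aeromagnetic data.
y, x = np.mgrid[0:100, 0:100]
anomaly = 50.0 * np.exp(-((x - 50)**2 + (y - 40)**2) / 400.0)
slope, aspect = slope_aspect(anomaly, spacing=25.0)  # 25 m grid spacing
print(slope.max(), aspect[40, 60])
```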

Sampling and Extraction Method for Environmental DNA (eDNA) in Freshwater Ecosystems (수생태계의 환경유전자(environmental DNA: eDNA) 채집 및 추출기술)

  • Kim, Keonhee; Ryu, Jeha; Hwang, Soon-jin
    • Korean Journal of Ecology and Environment / v.54 no.3 / pp.170-189 / 2021
  • Environmental DNA (eDNA) is genetic material derived from organisms in various environments (water, soil, and air). eDNA has many advantages, such as high sensitivity, short investigation time, safe investigation, and accurate species identification. For this reason, it is used in various fields, such as biological monitoring and searching for harmful and endangered organisms. To collect eDNA from a freshwater ecosystem, it is necessary to consider the target organism and gene as well as a wide variety of items, such as on-site filtration and eDNA preservation methods. In particular, the collection method directly affects the measured eDNA concentration, and an appropriate collection method yields accurate, good-quality analysis results. Likewise, when eDNA collected from a freshwater ecosystem is preserved and extracted with an appropriate method, the concentration of eDNA distributed in the field can be analyzed accurately. For researchers at the initial stage of eDNA research, these choices pose a difficult barrier, so basic knowledge of eDNA surveys is necessary. In this study, we introduce the sampling and transport of eDNA in aquatic ecosystems and methods for extracting eDNA in the laboratory, along with simpler and more efficient eDNA collection tools. On this basis, we hope that the eDNA technique will be used more widely to study aquatic ecosystems and will help researchers who are starting to use it.

Recognition of Flat Type Signboard using Deep Learning (딥러닝을 이용한 판류형 간판의 인식)

  • Kwon, Sang Il; Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.37 no.4 / pp.219-231 / 2019
  • Specifications are set for each type of signboard, but the shape and size of signboards actually installed are not uniform. In addition, because signboard colors are not regulated, signboards come in various colors. Recognizing signboards might seem similar to recognizing road signs and license plates, but the nature of signboards means they cannot be recognized in the same way. In this study, we propose a methodology for recognizing flat-type signboards, the main target of illegal and aging signage, and automatically extracting signboard areas using the deep learning-based Faster R-CNN algorithm. The process of recognizing flat-type signboards from images captured with smartphone cameras consists of two stages. First, deep learning was used to recognize flat-type signboards among various types of signboard images, with an accuracy of about 71%. Next, a boundary recognition algorithm was applied to the recognized signboards, and the boundary of the flat-type signboard was recognized with an accuracy of 85%.
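
As a sketch of the detection stage, the snippet below sets up a two-class (background vs. flat-type signboard) Faster R-CNN with torchvision and keeps high-confidence boxes as candidate signboard regions. The class setup, score threshold, and random input are assumptions for illustration; the paper's training data and its separate boundary recognition algorithm are not reproduced.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Hypothetical two-class setup: background + flat-type signboard.
model = fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
model.eval()

# A smartphone photo would be loaded and scaled to a [0, 1] tensor here.
image = torch.rand(3, 600, 800)
with torch.no_grad():
    prediction = model([image])[0]

# Keep detections above a confidence threshold as candidate regions.
keep = prediction["scores"] > 0.7
boxes = prediction["boxes"][keep]  # (x1, y1, x2, y2) signboard regions
```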

Construction of Logic Trees and Hazard Curves for Probabilistic Tsunami Hazard Analysis (확률론적 지진해일 재해도평가를 위한 로직트리 작성 및 재해곡선 산출 방법)

  • Jho, Myeong Hwan; Kim, Gun Hyeong; Yoon, Sung Bum
    • Journal of Korean Society of Coastal and Ocean Engineers / v.31 no.2 / pp.62-72 / 2019
  • Because the intensity and source location of a tsunami are difficult to forecast, countermeasures prepared by a deterministic approach may fail to work properly, and there is increasing demand for hazard analyses that treat the uncertainties of tsunami behavior probabilistically. In this paper, a fundamental study is conducted toward a probabilistic tsunami hazard analysis (PTHA) for the tsunamis that have caused disasters on the east coast of Korea. A logic tree approach is employed to consider the uncertainties of the initial free-surface displacement and the tsunami height distribution along the coast, and the branches of the logic tree are constructed by reflecting the characteristics of tsunamis that have attacked the east coast of Korea. In the process of extracting the fractile curves, computational time increases nonlinearly with the number of branches. Thus, an improved method that remains valid for a very large number of branches is proposed to save computational time. The performance of the discrete weight distribution method first proposed in this study is compared with those of the conventional sorting method and the Monte Carlo method. The present method is comparable to the conventional methods in accuracy and more efficient than the conventional sorting method in computational time. The Monte Carlo method, however, becomes the most efficient of the three when the numbers of branches and fault segments increase significantly.
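
To make the Monte Carlo comparison concrete, here is a minimal sketch that samples logic-tree branches by weight and reads fractile hazard curves from the empirical distribution. The branch weights and hazard curves are synthetic placeholders, and the paper's discrete weight distribution method is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical logic tree: each branch carries a weight and a hazard
# curve (annual exceedance probability over a grid of tsunami heights).
heights = np.linspace(0.5, 10.0, 40)
n_branches = 1000
weights = rng.dirichlet(np.ones(n_branches))
curves = np.exp(-heights[None, :] * rng.uniform(0.3, 1.5, (n_branches, 1)))

# Monte Carlo alternative to exhaustive sorting: sample branches by
# weight, then read fractiles off the empirical distribution.
n_samples = 10_000
idx = rng.choice(n_branches, size=n_samples, p=weights)
fractiles = np.percentile(curves[idx], [16, 50, 84], axis=0)
mean_curve = weights @ curves
print(fractiles.shape, mean_curve.shape)  # (3, 40) (40,)
```

Sampling cost grows with the number of draws rather than with the number of branch combinations, which is why the abstract finds it attractive when branches and fault segments multiply.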

Efficacy of glycine powder air-polishing in supportive periodontal therapy: a systematic review and meta-analysis

  • Zhu, Mengyuan; Zhao, Meilin; Hu, Bo; Wang, Yunji; Li, Yao; Song, Jinlin
    • Journal of Periodontal and Implant Science / v.51 no.3 / pp.147-162 / 2021
  • Purpose: This systematic review and meta-analysis was conducted to assess the effects of glycine powder air-polishing (GPAP) in patients during supportive periodontal therapy (SPT) compared to hand instrumentation and ultrasonic scaling. Methods: The authors searched 8 electronic databases for relevant randomized clinical trials published through November 15, 2019. The eligibility criteria were as follows: population, patients with chronic periodontitis undergoing SPT; intervention and comparison, patients treated by GPAP with a standard/nozzle-type jet versus mechanical instrumentation; and outcomes, bleeding on probing (BOP), patient discomfort/pain (assessed by a visual analogue scale [VAS]), probing depth (PD), gingival recession (Rec), plaque index (PI), clinical attachment level (CAL), gingival epithelium score, and subgingival bacteria count. After extracting the data and assessing the risk of bias, the authors performed the meta-analysis. Results: In total, 17 studies were included. The difference of means for BOP was lower in patients who received GPAP (difference of means: -8.02%; 95% confidence interval [CI], -12.10% to -3.95%; P<0.00001; I2=10%) than in patients treated with hand instrumentation. The results for patient discomfort/pain measured by a VAS (difference of means: -1.48; 95% CI, -1.90 to -1.06; P<0.001; I2=83%) indicated that treatment with GPAP might be less painful than ultrasonic scaling. The results for PD, Rec, PI, and CAL showed that GPAP had no advantage over hand instrumentation or ultrasonic scaling. Conclusions: The findings of this study suggest that GPAP may alleviate gingival inflammation more effectively and be less painful than traditional methods, which makes it a promising alternative for dental clinical use. With regard to PD, Rec, PI, and CAL, there was insufficient evidence to support a difference among GPAP, hand instrumentation, and ultrasonic scaling. Higher-quality studies are still needed to assess the effects of GPAP.
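
The pooled quantities reported above (difference of means, 95% CI, I²) are the standard outputs of inverse-variance pooling. A minimal fixed-effect sketch follows with hypothetical per-study values; the review itself may well have used random-effects models and dedicated meta-analysis software.

```python
import numpy as np

def pooled_mean_difference(md, se):
    """Fixed-effect inverse-variance pooling with Cochran's Q and an
    I^2 heterogeneity estimate for per-study mean differences."""
    md, se = np.asarray(md), np.asarray(se)
    w = 1.0 / se**2                              # inverse-variance weights
    pooled = np.sum(w * md) / np.sum(w)
    pooled_se = np.sqrt(1.0 / np.sum(w))
    q = np.sum(w * (md - pooled)**2)             # Cochran's Q
    i2 = max(0.0, (q - (len(md) - 1)) / q) * 100 if q > 0 else 0.0
    ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, ci, i2

# Hypothetical per-study BOP mean differences (%) and standard errors.
print(pooled_mean_difference([-9.1, -6.5, -8.4], [2.0, 2.5, 1.8]))
```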

Makeup transfer by applying a loss function based on facial segmentation combining edge with color information (에지와 컬러 정보를 결합한 안면 분할 기반의 손실 함수를 적용한 메이크업 변환)

  • Lim, So-hyun; Chun, Jun-chul
    • Journal of Internet Computing and Services / v.23 no.4 / pp.35-43 / 2022
  • Makeup is the most common way to improve a person's appearance. However, since makeup styles are very diverse, applying makeup oneself costs considerable time and money, so the need for makeup automation is increasing. Makeup transfer, which is studied for makeup automation, applies a makeup style to a face image without makeup. Makeup transfer methods can be divided into traditional image processing-based methods and deep learning-based methods; among the latter, many studies based on generative adversarial networks have been performed. However, both kinds of methods share disadvantages: the resulting image is unnatural, the transferred makeup is not clear, and it is smeared or heavily influenced by the makeup-style face image. To express a clear makeup boundary and to alleviate the influence of the makeup-style face image, this study segments the makeup area and computes a loss function using HoG (Histogram of Oriented Gradients). HoG extracts image features from the magnitude and orientation of the edges present in an image, and through it we propose a makeup transfer network that learns robustly on edges. Comparing images generated by the proposed model with images generated by BeautyGAN, the base model, confirmed that the proposed model performs better; using additional facial information is suggested as future work.
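
As an illustration of the HoG comparison underlying such a loss, the sketch below computes the squared distance between HoG descriptors of two grayscale crops using scikit-image. The descriptor parameters are common defaults rather than the paper's settings, and the surrounding facial segmentation and GAN training loop are omitted.

```python
import numpy as np
from skimage.feature import hog

def hog_edge_loss(generated, reference):
    """L2 distance between HoG descriptors of two grayscale face crops,
    penalizing edge-structure differences in a makeup-transfer result."""
    kwargs = dict(orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), feature_vector=True)
    return np.sum((hog(generated, **kwargs) - hog(reference, **kwargs)) ** 2)

# Two synthetic grayscale crops standing in for segmented makeup regions.
rng = np.random.default_rng(0)
a = rng.random((128, 128))
b = np.clip(a + 0.01 * rng.random((128, 128)), 0.0, 1.0)
print(hog_edge_loss(a, b))
```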

Mathematical Algorithms for the Automatic Generation of Production Data of Free-Form Concrete Panels (비정형 콘크리트 패널의 생산데이터 자동생성을 위한 수학적 알고리즘)

  • Kim, Doyeong; Kim, Sunkuk; Son, Seunghyun
    • Journal of the Korea Institute of Building Construction / v.22 no.6 / pp.565-575 / 2022
  • Thanks to the latest developments in digital architectural technology, free-form designs that maximize the creativity of architects have rapidly increased. However, forming the various free-form curved surfaces involves many difficulties. To panelize free forms, methods such as meshing, developable surfaces, tessellation, and subdivision are applied, but applying these panelizing methods in production is complex and time-consuming, and extracting production data requires a vast amount of manpower. Therefore, algorithms are needed that quickly and systematically extract the production data required for panel production once a free-form building has been designed. The purpose of this study is to propose mathematical algorithms for the automatic generation of production data for free-form panels, in consideration of the building model, the performance of the production equipment, and pattern information. To accomplish this, mathematical algorithms for panelizing were suggested, and production data for a CNC machine were extracted by mapping them onto the free-form curved surfaces. The findings may contribute to improved productivity and reduced cost by automating the generation of production data for free-form concrete panels.
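
The abstract does not give the paper's specific algorithms, but the kind of surface sampling involved can be sketched with a bicubic Bezier patch (a common free-form surface representation, used here as an assumed stand-in): evaluate the patch on a parameter grid and emit the 3D points as raw production data. The control net and grid resolution below are hypothetical.

```python
import numpy as np

def bezier_patch(control, u, v):
    """Evaluate a bicubic Bezier patch at parameters (u, v) in [0, 1]."""
    def bernstein(t):
        return np.array([(1-t)**3, 3*t*(1-t)**2, 3*t**2*(1-t), t**3])
    return np.einsum('i,ijk,j->k', bernstein(u), control, bernstein(v))

# Hypothetical 4x4 control net describing one free-form facade region.
rng = np.random.default_rng(1)
grid = np.stack(np.meshgrid(np.linspace(0, 3, 4), np.linspace(0, 3, 4),
                            indexing='ij'), axis=-1)
control = np.dstack([grid, rng.uniform(0, 0.6, (4, 4))])  # x, y, z

# Sample a 5x5 grid of surface points as production data for one panel.
panel_points = np.array([[bezier_patch(control, u, v)
                          for v in np.linspace(0, 1, 5)]
                         for u in np.linspace(0, 1, 5)])
print(panel_points.shape)  # (5, 5, 3) -> feed to CNC post-processing
```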

Intrusion Detection Method Using Unsupervised Learning-Based Embedding and Autoencoder (비지도 학습 기반의 임베딩과 오토인코더를 사용한 침입 탐지 방법)

  • Junwoo Lee; Kangseok Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.8 / pp.355-364 / 2023
  • As advanced cyber threats continue to increase, it is difficult to detect new types of cyber attacks with existing pattern- or signature-based intrusion detection methods, so research on anomaly detection using data-driven artificial intelligence is increasing. Supervised anomaly detection methods are hard to use in real environments because they require sufficient labeled data for learning, so unsupervised methods that learn from normal data and detect anomalies from patterns in the data itself have been actively studied. This study aims to extract a latent vector that preserves useful sequence information from sequence log data and to develop an anomaly detection model using the extracted latent vector. Word2Vec was used to create a dense vector representation corresponding to the characteristics of each sequence, and unsupervised autoencoders were developed to extract latent vectors from the sequence data expressed as dense vectors. Three autoencoder models were built: a GRU (Gated Recurrent Unit)-based denoising autoencoder suited to sequence data, a one-dimensional convolutional autoencoder to address the limited memory problem a GRU can have, and an autoencoder combining the GRU and one-dimensional convolution. The data used in the experiments is the time-series-based NGIDS (Next Generation IDS Dataset). In the experiments, the autoencoder combining GRU and one-dimensional convolution outperformed the GRU-only and convolution-only models: it was more efficient in training time for extracting useful latent patterns from the training data and showed more stable anomaly detection performance with smaller fluctuations.
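
A minimal PyTorch sketch of the combined architecture follows: a Conv1d layer for local patterns feeding a GRU encoder-decoder, with reconstruction error as the anomaly score. The dimensions are illustrative, and the denoising noise injection and NGIDS preprocessing are omitted; this is not the paper's exact model.

```python
import torch
import torch.nn as nn

class ConvGRUAutoencoder(nn.Module):
    """Conv1d captures local sequence patterns, a GRU encoder compresses
    them into a latent vector, and a GRU decoder reconstructs the input."""
    def __init__(self, embed_dim=64, hidden=32):
        super().__init__()
        self.conv = nn.Conv1d(embed_dim, hidden, kernel_size=3, padding=1)
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, embed_dim)

    def forward(self, x):                  # x: (batch, seq_len, embed_dim)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)
        _, latent = self.encoder(h)        # latent: (1, batch, hidden)
        repeated = latent.transpose(0, 1).repeat(1, x.size(1), 1)
        decoded, _ = self.decoder(repeated)
        return self.out(decoded)           # reconstruction of the input

x = torch.randn(8, 50, 64)                 # Word2Vec-embedded sequences
recon = ConvGRUAutoencoder()(x)
loss = nn.functional.mse_loss(recon, x)    # high loss -> anomaly candidate
```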