• Title/Summary/Keyword: deep machine learning

Search Results: 1,085

A Review of Deep Learning-based Trace Interpolation and Extrapolation Techniques for Reconstructing Missing Near Offset Data (가까운 벌림 빠짐 해결을 위한 딥러닝 기반의 트레이스 내삽 및 외삽 기술에 대한 고찰)

  • Jiho Park;Soon Jee Seol;Joongmoo Byun
    • Geophysics and Geophysical Exploration
    • /
    • v.26 no.4
    • /
    • pp.185-198
    • /
    • 2023
  • In marine seismic surveys, trace gaps inevitably occur in the near offset because of geometrical differences between sources and receivers, which adversely affects subsequent seismic data processing and imaging. The absence of data in the near-offset region hinders accurate seismic imaging. Reconstructing the missing near-offset information is therefore crucial for mitigating the influence of seismic multiples, particularly in offshore surveys, where the impact of multiple reflections is relatively more pronounced. Conventionally, various interpolation methods based on the Radon transform have been proposed to address the near-offset data gap. However, these methods have several limitations, leading to the recent emergence of deep-learning (DL)-based approaches as alternatives. In this study, we conducted an in-depth analysis of two representative DL-based studies to scrutinize the challenges that future studies on near-offset interpolation must address. Furthermore, through field-data experiments, we precisely analyzed the limitations encountered when applying previous DL-based trace interpolation techniques to near-offset situations. Consequently, we suggest that near-offset data gaps must be approached by extrapolation rather than interpolation.

Deep Learning Approach for Automatic Discontinuity Mapping on 3D Model of Tunnel Face (터널 막장 3차원 지형모델 상에서의 불연속면 자동 매핑을 위한 딥러닝 기법 적용 방안)

  • Chuyen Pham;Hyu-Soung Shin
    • Tunnel and Underground Space
    • /
    • v.33 no.6
    • /
    • pp.508-518
    • /
    • 2023
  • This paper presents a new approach for the automatic mapping of discontinuities on a tunnel face based on its 3D digital model reconstructed by LiDAR scanning or photogrammetry. The main idea is to identify discontinuity areas in the 3D digital model of a tunnel face by segmenting its 2D projected images with a deep-learning semantic segmentation model called U-Net. The proposed deep learning model integrates various features, including the projected RGB image, the depth-map image, and images based on local surface properties, i.e., normal-vector and curvature images, to effectively segment areas of discontinuity in the images. Subsequently, the segmentation results are projected back onto the 3D model using depth maps and projection matrices to obtain an accurate representation of the location and extent of discontinuities within the 3D space. The performance of the segmentation model is evaluated by comparing the segmented results with their corresponding ground truths, which demonstrates the high accuracy of the segmentation results, with an intersection-over-union metric of approximately 0.8. Despite the still-limited training data, this method exhibits promising potential to address the limitations of conventional approaches, which rely only on normal vectors and unsupervised machine learning algorithms to group points in the 3D model into distinct sets of discontinuities.
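
The intersection-over-union metric quoted above follows its standard definition and can be computed directly from binary masks; the sketch below uses tiny hypothetical masks (the real evaluation compares U-Net output against annotated tunnel-face images):

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection-over-union between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return intersection / union if union > 0 else 1.0

# Toy 4x4 masks: predicted vs. ground-truth discontinuity area
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
print(iou(pred, truth))  # 3 overlapping pixels / 4 in the union -> 0.75
```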

Utilizing deep learning algorithm and high-resolution precipitation product to predict water level variability (고해상도 강우자료와 딥러닝 알고리즘을 활용한 수위 변동성 예측)

  • Han, Heechan;Kang, Narae;Yoon, Jungsoo;Hwang, Seokhwan
    • Journal of Korea Water Resources Association
    • /
    • v.57 no.7
    • /
    • pp.471-479
    • /
    • 2024
  • Flood damage is becoming more serious due to the heavy rainfall caused by climate change. Physically based hydrological models have been utilized to predict stream water level variability and provide flood forecasting. Recently, hydrological simulations using machine learning and deep learning algorithms based on nonlinear relationships between hydrological data have been gaining attention. In this study, the Long Short-Term Memory (LSTM) algorithm is used to predict the water level of the Seomjin River watershed. In addition, Climate Prediction Center morphing method (CMORPH)-based gridded precipitation data is applied as input data for the algorithm to overcome the limitations of ground data. The water level predictions of the LSTM algorithm coupled with the CMORPH data showed a mean CC of 0.98, an RMSE of 0.07 m, and an NSE of 0.97. It is expected that deep learning and remotely sensed data can be used together to overcome the shortcomings of ground observation data and to obtain reliable prediction results.
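
The skill scores reported above (CC, RMSE, NSE) follow standard definitions; a minimal sketch with hypothetical observed and predicted stage values (not data from the study):

```python
import numpy as np

def evaluate(obs: np.ndarray, sim: np.ndarray):
    """Standard hydrological skill scores: CC, RMSE, and NSE."""
    cc = np.corrcoef(obs, sim)[0, 1]                    # Pearson correlation
    rmse = np.sqrt(np.mean((obs - sim) ** 2))           # root-mean-square error
    nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return cc, rmse, nse

# Hypothetical observed vs. predicted water levels (m)
obs = np.array([1.2, 1.5, 2.1, 2.8, 2.4, 1.9])
sim = np.array([1.1, 1.6, 2.0, 2.7, 2.5, 1.8])
cc, rmse, nse = evaluate(obs, sim)
print(f"CC={cc:.3f}, RMSE={rmse:.3f} m, NSE={nse:.3f}")
```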

Collision Avoidance Sensor System for Mobile Crane (전지형 크레인의 인양물 충돌방지를 위한 환경탐지 센서 시스템 개발)

  • Kim, Ji-Chul;Kim, Young Jea;Kim, Mingeuk;Lee, Hanmin
    • Journal of Drive and Control
    • /
    • v.19 no.4
    • /
    • pp.62-69
    • /
    • 2022
  • Construction machinery is exposed to accidents such as collisions, entrapment, and overturning during operation. In particular, a mobile crane is operated using only the driver's vision and the limited information of an assistant worker, so the risk of an accident is high. Recently, collision avoidance devices using sensors such as cameras and LiDAR have been applied, but they are still insufficient to prevent collisions in the omnidirectional 3D space. In this study, a rotating LiDAR device was developed and applied to a 250-ton crane to obtain a full-space point cloud, and an algorithm that provides distance information and safety status to the driver was developed. A deep-learning segmentation algorithm was also used to classify human workers. The developed device could recognize obstacles within 100 m over a 360-degree range. In the experiment, the safety distance was calculated with an error of 10.3 cm at 30 m, giving the operator an accurate distance and collision alarm.
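
The distance-and-alarm logic can be illustrated with a minimal sketch: find the nearest LiDAR point to the lifted load and compare it against a safety threshold. The threshold and coordinates below are illustrative assumptions, not values from the paper:

```python
import numpy as np

SAFETY_DISTANCE = 5.0  # assumed alarm threshold in metres

def nearest_obstacle(load_pos: np.ndarray, cloud: np.ndarray):
    """Distance from the lifted load to the closest point in the scanned
    point cloud, and whether it breaches the safety threshold."""
    dists = np.linalg.norm(cloud - load_pos, axis=1)
    d_min = float(dists.min())
    return d_min, d_min < SAFETY_DISTANCE

# Hypothetical load position and a few scanned points (x, y, z in metres)
load = np.array([0.0, 0.0, 10.0])
cloud = np.array([[3.0, 4.0, 10.0],    # 5 m away
                  [10.0, 0.0, 10.0],   # 10 m away
                  [0.0, 2.0, 10.0]])   # 2 m away -> triggers alarm
dist, alarm = nearest_obstacle(load, cloud)
print(dist, alarm)  # 2.0 True
```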

Development of PCB Classification System Using Robot Arm and Machine Vision (로봇암과 머신비전을 이용한 기판분류 시스템 개발)

  • Yun, Tae-Jin;Yeo, Jeong-Hun;Kim, Hyun-Su;Park, Seung-Ryeol;Hwang, Seung-Hyeok
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2020.01a
    • /
    • pp.145-146
    • /
    • 2020
  • In the current era of the Fourth Industrial Revolution, the most important topics are big data and artificial intelligence. In production and manufacturing, AI-based image recognition is being applied to automatically classify products and, further, to perform quality inspection. In addition, operating robots on factory production lines compensates for reduced labor and raises productivity through more efficient manufacturing and shorter production times. To this end, this paper develops a system that recognizes and classifies PCB boards using YOLO-v3, a real-time object detection algorithm.
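
A minimal sketch of how YOLO-v3 detections might be post-processed for board sorting; the class names, bin numbers, and confidence threshold are assumptions for illustration, not details from the paper:

```python
# Keep detections above a confidence threshold and route each recognized
# board type to a sorting bin (all labels and values are hypothetical).
CONF_THRESHOLD = 0.5
BIN_BY_CLASS = {"pcb_type_a": 1, "pcb_type_b": 2, "defective": 3}

def route_detections(detections):
    """detections: list of (class_name, confidence, bbox) tuples,
    as produced by a YOLO-style detector after non-max suppression."""
    routed = []
    for cls, conf, bbox in detections:
        if conf >= CONF_THRESHOLD and cls in BIN_BY_CLASS:
            routed.append((cls, BIN_BY_CLASS[cls], bbox))
    return routed

dets = [("pcb_type_a", 0.92, (10, 20, 110, 200)),
        ("pcb_type_b", 0.35, (5, 5, 50, 60)),      # below threshold, dropped
        ("defective", 0.81, (120, 30, 220, 210))]
print(route_detections(dets))
```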

Energy-Aware Data-Preprocessing Scheme for Efficient Audio Deep Learning in Solar-Powered IoT Edge Computing Environments (태양 에너지 수집형 IoT 엣지 컴퓨팅 환경에서 효율적인 오디오 딥러닝을 위한 에너지 적응형 데이터 전처리 기법)

  • Yeontae Yoo;Dong Kun Noh
    • IEMEK Journal of Embedded Systems and Applications
    • /
    • v.18 no.4
    • /
    • pp.159-164
    • /
    • 2023
  • Solar energy harvesting IoT devices prioritize maximizing the utilization of collected energy, owing to the periodically recharging nature of solar energy, rather than minimizing energy consumption. Meanwhile, research on edge AI, which performs machine learning near the data source instead of in the cloud, is actively conducted for reasons such as data confidentiality and privacy, response time, and cost. One such research area involves performing various audio AI applications using audio data collected from multiple IoT devices in an IoT edge computing environment. However, in most studies, IoT devices only transmit sensing data to the edge server, and all processes, including data preprocessing, are performed on the edge server. This not only leads to overload on the edge server but also causes network congestion by transmitting data unnecessary for learning. On the other hand, if data preprocessing is delegated to each IoT device to address this issue, another problem arises: blackout time increases due to energy shortages in the devices. In this paper, we aim to alleviate the increased blackout time of devices while mitigating the issues of server-centric edge AI environments by determining where the data is preprocessed based on the energy state of each IoT device. In the proposed method, an IoT device performs the preprocessing steps, which include sound discrimination and noise removal, and transmits the result to the server only when more energy is available than the threshold required for the device's basic operation.
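
The core energy-adaptive decision can be sketched as a simple threshold check; the threshold value and labels below are illustrative assumptions, not parameters from the paper:

```python
# A device preprocesses audio locally only when its harvested-energy
# budget exceeds the amount reserved for basic operation; otherwise the
# raw audio goes to the edge server for preprocessing there.
BASIC_OPERATION_THRESHOLD = 20.0  # assumed energy units for sensing/comm

def decide_preprocessing(energy_level: float) -> str:
    """Return where preprocessing (sound discrimination, noise removal)
    should run for the current duty cycle."""
    if energy_level > BASIC_OPERATION_THRESHOLD:
        return "device"   # preprocess locally, transmit reduced data
    return "server"       # transmit raw audio, avoid device blackout

print(decide_preprocessing(35.0))  # device
print(decide_preprocessing(12.0))  # server
```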

Implementation and Analysis of Power Analysis Attack Using Multi-Layer Perceptron Method (Multi-Layer Perceptron 기법을 이용한 전력 분석 공격 구현 및 분석)

  • Kwon, Hongpil;Bae, DaeHyeon;Ha, Jaecheol
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.29 no.5
    • /
    • pp.997-1006
    • /
    • 2019
  • To overcome the difficulties and inefficiencies of existing power analysis attacks, we try to extract the secret key embedded in a cryptographic device using an attack model based on the MLP (Multi-Layer Perceptron) method. The target of our proposed power analysis attack is the AES-128 encryption module implemented on an 8-bit XMEGA128 processor. We use a byte-wise divide-and-conquer method to recover the whole 16-byte secret key. As a result, the MLP-based power analysis attack can extract the secret key with an accuracy of 89.51%. Additionally, the MLP model reaches 94.51% accuracy when a pre-processing method is applied to the power traces. Compared to the machine learning-based SVM (Support Vector Machine) model, we show that the MLP can be an outstanding method for power analysis attacks owing to its excellent feature extraction ability.
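
The byte-wise divide-and-conquer structure can be sketched as follows; the trained per-byte MLP is replaced by a stub lookup so the skeleton runs standalone, and the key and trace are purely illustrative (a real attack infers each byte from labelled power traces):

```python
# Divide-and-conquer key recovery: one classifier per key byte predicts
# that byte's value from a power trace, and the 16 predictions are
# concatenated into the full AES-128 key.
SECRET_KEY = bytes(range(16))  # illustrative 16-byte key

def predict_byte(trace, byte_index):
    """Stand-in for a trained per-byte MLP classifier."""
    return SECRET_KEY[byte_index]  # a real model infers this from the trace

def recover_key(trace):
    return bytes(predict_byte(trace, i) for i in range(16))

recovered = recover_key(trace=[0.1] * 1000)  # dummy power trace
print(recovered == SECRET_KEY)  # True
```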

Grasping a Target Object in Clutter with an Anthropomorphic Robot Hand via RGB-D Vision Intelligence, Target Path Planning and Deep Reinforcement Learning (RGB-D 환경인식 시각 지능, 목표 사물 경로 탐색 및 심층 강화학습에 기반한 사람형 로봇손의 목표 사물 파지)

  • Ryu, Ga Hyeon;Oh, Ji-Heon;Jeong, Jin Gyun;Jung, Hwanseok;Lee, Jin Hyuk;Lopez, Patricio Rivera;Kim, Tae-Seong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.9
    • /
    • pp.363-370
    • /
    • 2022
  • Grasping a target object among clutter objects without collision requires machine intelligence: environment recognition, target and obstacle recognition, collision-free path planning, and object grasping intelligence for robot hands. In this work, we implement such a system in simulation and hardware to grasp a target object without collision. We use an RGB-D image sensor to recognize the environment and objects. Various path-finding algorithms have been implemented and tested to find collision-free paths. Finally, for an anthropomorphic robot hand, object grasping intelligence is learned through deep reinforcement learning. In our simulation environment, grasping a target out of five clutter objects showed an average success rate of 78.8% and a collision rate of 34% without path planning, whereas our system combined with path planning showed an average success rate of 94% and an average collision rate of 20%. In our hardware environment, grasping a target out of three clutter objects showed an average success rate of 30% and a collision rate of 97% without path planning, whereas our system combined with path planning showed an average success rate of 90% and an average collision rate of 23%. Our results show that grasping a target object in clutter is feasible with vision intelligence, path planning, and deep RL.

Predicting Success of Crowdfunding Campaigns using Multimedia and Linguistic Features (멀티미디어 및 언어적 특성을 활용한 크라우드펀딩 캠페인의 성공 여부 예측)

  • Lee, Kang-hee;Lee, Seung-hun;Kim, Hyun-chul
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.2
    • /
    • pp.281-288
    • /
    • 2018
  • Crowdfunding has seen an enormous rise, becoming a new alternative funding source for emerging startup companies in recent years. Despite the huge success of crowdfunding, it has been reported that only around 40% of crowdfunding campaigns successfully raise the desired goal amount. The purpose of this study is to investigate key factors influencing successful fundraising on crowdfunding platforms. To this end, we mainly focus on contents of project campaigns, particularly their linguistic cues as well as multiple features extracted from project information and multimedia contents. We reveal which of these features are useful for predicting success of crowdfunding campaigns, and then build a predictive model based on those selected features. Our experimental results demonstrate that the built model predicts the success or failure of a crowdfunding campaign with 86.15% accuracy.

Machine Learning in the Internet of Things Environment (사물인터넷 환경에서의 기계학습)

  • Im, Jae-Hyeon;Park, Yun-Gi;Gwon, Jin-Man;Seo, Jeong-Uk
    • Information and Communications Magazine
    • /
    • v.33 no.5
    • /
    • pp.48-54
    • /
    • 2016
  • Every day we constantly generate data, both in the physical real world and in the digital virtual world. Leading companies such as Google, Amazon, Microsoft, and IBM already collect and analyze data to provide various services to specific users or the general public, creating new forms of profit. If the Internet of Things becomes fully active in the near future, a veritable data big bang is expected, in which not only people but all things generate and exchange data over the Internet. In this era of transformation, we need to seriously consider and study how to utilize the enormous amounts of data collected through the IoT. This article explains the basic concepts, types, and evaluation methods of machine learning, one of the core technologies needed to make effective use of data collected through the IoT; reviews technology trends in deep learning among machine learning algorithms; and briefly introduces machine learning frameworks for the IoT.