• Title/Summary/Keyword: Search Speed


Development and Validation of the GPU-based 3D Dynamic Analysis Code for Simulating Rock Fracturing Subjected to Impact Loading (충격 하중 시 암석의 파괴거동해석을 위한 GPGPU 기반 3차원 동적해석기법의 개발과 검증 연구)

  • Min, Gyeong-Jo;Fukuda, Daisuke;Oh, Se-Wook;Cho, Sang-Ho
    • Explosives and Blasting / v.39 no.2 / pp.1-14 / 2021
  • Recently, with the development of high-performance processing devices such as GPGPUs, three-dimensional dynamic analysis techniques that can replace expensive rock-material impact tests have been actively developed in the defense and aerospace fields. Experimentally observing or measuring the fracture processes occurring in rocks subjected to high impact loads, such as blasting and earth penetration by small-diameter missiles, is difficult due to the inhomogeneity and opacity of rock materials. In this study, a three-dimensional dynamic fracture process analysis technique (3D-DFPA) was developed to simulate the fracture behavior of rocks under impact. To improve computation speed, a GPGPU-capable algorithm was developed for the explicit analysis and the contact element search. To verify the proposed technique, dynamic fracture toughness tests on Straight Notched Disk Bending (SNDB) limestone samples were simulated, and the reflection and transmission of stress waves at the rock-impact bar interfaces as well as the fracture process of the rock samples were compared with experiments. The dynamic load tests on the SNDB samples used a pulse-shape-controlled split Hopkinson pressure bar (PS-SHPB) that can control the waveform of the incident stress wave; the resulting stress states and fracture processes of the rock models were analyzed against the experimental results.
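The explicit analysis mentioned above is what makes GPGPU acceleration attractive: in an explicit time-integration scheme, each node's update depends only on that node's own data, so every node can map onto one GPU thread. The NumPy sketch below shows a central-difference explicit step for a lumped-mass system; the array names and the placeholder internal-force model are illustrative assumptions, not the authors' 3D-DFPA implementation.

```python
import numpy as np

def explicit_step(u, v, mass, f_int, f_ext, dt):
    """One explicit central-difference step for a lumped-mass system.

    Each nodal update depends only on that node's own data, so in a
    GPGPU version every node maps onto its own GPU thread.
    """
    a = (f_ext - f_int) / mass   # nodal acceleration from force balance
    v_new = v + a * dt           # update nodal velocity
    u_new = u + v_new * dt       # update nodal displacement
    return u_new, v_new

# Toy usage: 1000 nodes, unit masses, small constant external load.
n = 1000
u, v = np.zeros(n), np.zeros(n)
mass, f_ext = np.ones(n), np.full(n, 1e-3)
for _ in range(100):
    f_int = 0.1 * u              # placeholder linear internal force
    u, v = explicit_step(u, v, mass, f_int, f_ext, dt=1e-3)
```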

A Study on the Factors Affecting the Air Environment in Chungnam Province - Focusing on Cheonan, Dangjin, and Seosan (충남 대기환경 영향요인에 관한 연구 - 천안, 당진, 서산 등을 중심으로)

  • Hwang, Kyu-Won;Kim, Jinyoung;Kwon, Young-Ju
    • Journal of the Korea Academia-Industrial cooperation Society / v.22 no.5 / pp.118-127 / 2021
  • Recently, the public's interest in the air environment has increased, and public health is threatened by fine particulate matter. The government continues its efforts to improve air quality by expanding the monitoring of air pollutants and reinforcing environmental standards. Since air quality differs by region across the Korean Peninsula, it is necessary to identify the causes and search for influencing factors. In this study, the atmospheric environment and regional differences of cities located in Chungnam Province were examined. As the research method, regression analysis was performed with weather conditions such as temperature, wind speed, precipitation, and season as explanatory factors, targeting air pollutants such as SO2, NO2, CO, O3, PM10, and PM2.5, as well as heavy metals contained in particulate matter, such as Pb, Cd, Cr, Cu, Ni, As, Mn, Fe, Al, Ca, and Mg. For PM10, Mn (0.4884) in Cheonan, CO (0.3329) in Dangjin, and Mg (0.5691) in Seosan showed the strongest effects. For PM2.5, NO2 (0.4759) in Cheonan, CO (0.4128) in Dangjin, and NO2 (0.3715) in Seosan had significant effects. In summary, the factors influencing air quality vary by region within Chungnam Province, and their degrees of contribution differ; therefore, region-specific air quality management by the Korean government is considered necessary.
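As a rough illustration of the regression setup described above, the sketch below fits an ordinary least squares model of PM10 on weather conditions with statsmodels; the DataFrame and its column names are hypothetical stand-ins, not the study's actual data.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical daily observations for one city; columns are assumptions.
df = pd.DataFrame({
    "PM10":   [45.0, 60.2, 38.1, 52.4, 70.3],
    "temp":   [12.1, 15.3, 9.8, 14.0, 17.2],
    "wind":   [2.3, 1.1, 3.4, 2.0, 0.9],
    "precip": [0.0, 1.2, 0.0, 0.4, 0.0],
})

# Ordinary least squares: PM10 regressed on weather conditions.
X = sm.add_constant(df[["temp", "wind", "precip"]])
model = sm.OLS(df["PM10"], X).fit()
print(model.params)   # fitted coefficient per explanatory variable
```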

A Study on Updated Object Detection and Extraction of Underground Information (지하정보 변화객체 탐지 및 추출 연구)

  • Kim, Kwangsoo;Lee, Heyung-Sub;Kim, Juwan
    • Journal of Software Assessment and Valuation / v.16 no.2 / pp.99-107 / 2020
  • An underground integrated map is being built for underground safety management and is updated periodically. The update currently proceeds by deleting all previously stored objects and saving the newly entered objects, so even unchanged objects are repeatedly stored and deleted, which delays the update. In this study, to shorten the update time of the integrated map, changed and unchanged objects are separated and only the changed objects are reflected in the underground integrated map; a system implementing this technique is described. Changed objects are detected by an object comparison method using each object's center point, and a quadtree is used to improve the search speed. Updated objects are classified as additions or deletions using the shape of the object, and as changes using its attributes. The proposed system consists of update-object detection, extraction, conversion, storage, and history management modules. Based on the data used in the experiment, the system updates the integrated map about four times faster than the existing method, and it can be applied to both ground and underground facilities.
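As a sketch of the center-point comparison with a quadtree lookup described above, the following minimal point quadtree supports insertion and center-point matching; the structure, capacity, and tolerance are illustrative assumptions rather than the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class QuadTree:
    x0: float
    y0: float
    x1: float
    y1: float
    capacity: int = 4
    points: list = field(default_factory=list)    # [(x, y, obj_id), ...]
    children: list = field(default_factory=list)  # 4 sub-quadrants if split

    def insert(self, x, y, obj_id):
        if self.children:
            self._child(x, y).insert(x, y, obj_id)
        else:
            self.points.append((x, y, obj_id))
            if len(self.points) > self.capacity:
                self._split()

    def _split(self):
        mx, my = (self.x0 + self.x1) / 2, (self.y0 + self.y1) / 2
        self.children = [
            QuadTree(self.x0, self.y0, mx, my, self.capacity),
            QuadTree(mx, self.y0, self.x1, my, self.capacity),
            QuadTree(self.x0, my, mx, self.y1, self.capacity),
            QuadTree(mx, my, self.x1, self.y1, self.capacity),
        ]
        for px, py, pid in self.points:
            self._child(px, py).insert(px, py, pid)
        self.points = []

    def _child(self, x, y):
        mx, my = (self.x0 + self.x1) / 2, (self.y0 + self.y1) / 2
        return self.children[(1 if x >= mx else 0) + (2 if y >= my else 0)]

    def find(self, x, y, tol=1e-6):
        """Return the id of a stored object whose center matches (x, y)."""
        if self.children:
            return self._child(x, y).find(x, y, tol)
        for px, py, pid in self.points:
            if abs(px - x) <= tol and abs(py - y) <= tol:
                return pid
        return None

# Usage: index existing map objects, then test whether an incoming
# object's center already exists (unchanged) or not (added/changed).
tree = QuadTree(0, 0, 100, 100)
tree.insert(10.5, 20.25, "pipe-001")
print(tree.find(10.5, 20.25))  # -> "pipe-001"
print(tree.find(55.0, 60.0))   # -> None (new object)
```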

Trends in the Use of Artificial Intelligence in Medical Image Analysis (의료영상 분석에서 인공지능 이용 동향)

  • Lee, Gil-Jae;Lee, Tae-Soo
    • Journal of the Korean Society of Radiology / v.16 no.4 / pp.453-462 / 2022
  • In this paper, artificial intelligence (AI) technology used in the field of medical image analysis was examined through a literature review. Literature searches were conducted on PubMed, ResearchGate, Google, and Cochrane Review using keywords. The search yielded 114 abstracts, of which 98 were reviewed after excluding 16 duplicates. In the reviewed literature, AI is applied to classification, localization, disease detection, disease segmentation, and assessing the fit of registered images. In machine learning (ML), the earlier practice of extracting features beforehand and feeding the extracted feature values into a neural network is disappearing; instead, approaches are shifting to deep learning (DL) with multiple hidden layers, in which feature extraction is handled within the DL process itself. This shift is attributed to increases in computer memory, improvements in computation speed, and the availability of big data. To apply AI-based medical image analysis in clinical care, the role of physicians is important: they must be able to interpret and analyze the predictions of AI algorithms. Additional medical education and professional development are needed for practicing physicians to understand AI, and a revised curriculum for medical students also appears necessary.
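To make the ML-to-DL shift described above concrete, the following PyTorch sketch defines a network with multiple hidden layers that consumes raw pixel input directly, with no hand-engineered feature extraction step; the layer sizes and two-class output are arbitrary assumptions for illustration.

```python
import torch
from torch import nn

# Raw pixels in, learned features inside: the hidden layers replace the
# manual feature-extraction stage of classical ML pipelines.
model = nn.Sequential(
    nn.Flatten(),                  # raw 64x64 grayscale image -> 4096 vector
    nn.Linear(64 * 64, 256),       # hidden layer 1: learned features
    nn.ReLU(),
    nn.Linear(256, 64),            # hidden layer 2: higher-level features
    nn.ReLU(),
    nn.Linear(64, 2),              # output: e.g., disease vs. normal
)

x = torch.randn(1, 1, 64, 64)      # one dummy image
logits = model(x)
print(logits.shape)                # torch.Size([1, 2])
```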

Lane Change Methodology for Autonomous Vehicles Based on Deep Reinforcement Learning (심층강화학습 기반 자율주행차량의 차로변경 방법론)

  • DaYoon Park;SangHoon Bae;Trinh Tuan Hung;Boogi Park;Bokyung Jung
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.1 / pp.276-290 / 2023
  • Several efforts are currently underway in Korea with the goal of commercializing autonomous vehicles, and various studies are emerging on autonomous vehicles that drive safely and efficiently according to operating guidelines. This study examines the path search of an autonomous vehicle from a microscopic viewpoint and evaluates the efficiency gained by learning the vehicle's lane-change behavior through Deep Q-Learning. The SUMO traffic simulator was used for this purpose. The scenario starts in a random lane at the origin and requires a lane change to the third lane in order to turn right at the destination. The analysis compared simulated lane changes with and without Deep Q-Learning. With Deep Q-Learning applied, the average traffic speed improved by about 40% compared to the case without it, the average waiting time decreased by about 2 seconds, and the average queue length decreased by about 2.3 vehicles.
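For readers unfamiliar with Deep Q-Learning, the sketch below shows the core temporal-difference update that such a lane-change agent would use; the state/action sizes, network shape, and hyperparameters are illustrative assumptions, not the paper's SUMO-coupled configuration.

```python
import torch
from torch import nn

n_states, n_actions, gamma = 8, 3, 0.99   # actions: keep, change left/right

q_net = nn.Sequential(nn.Linear(n_states, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(n_states, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def dqn_update(s, a, r, s_next, done):
    """One gradient step on the temporal-difference error."""
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)    # Q(s, a)
    with torch.no_grad():
        q_next = target_net(s_next).max(dim=1).values    # max_a' Q'(s', a')
        target = r + gamma * q_next * (1 - done)         # TD target
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch of transitions (state, action, reward, next state, done flag).
batch = 32
s, s_next = torch.randn(batch, n_states), torch.randn(batch, n_states)
a = torch.randint(0, n_actions, (batch,))
r, done = torch.randn(batch), torch.zeros(batch)
print(dqn_update(s, a, r, s_next, done))
```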

Federated learning-based client training acceleration method for personalized digital twins (개인화 디지털 트윈을 위한 연합학습 기반 클라이언트 훈련 가속 방식)

  • YoungHwan Jeong;Won-gi Choi;Hyoseon Kye;JeeHyeong Kim;Min-hwan Song;Sang-shin Lee
    • Journal of Internet Computing and Services / v.25 no.4 / pp.23-37 / 2024
  • A digital twin is a modeling and simulation (M&S) technology designed to solve or optimize real-world problems by replicating physical objects as virtual objects in the digital world and predicting future phenomena through simulation. Digital twins have been elaborately designed and utilized based on data collected for specific purposes in large-scale environments such as cities and industrial facilities. To apply digital twin technology to everyday life and expand it into user-customized services, practical but sensitive issues such as personal information protection and personalization of simulations must be resolved. To this end, this paper proposes a federated learning-based accelerated client training method (FACTS) for personalized digital twins. The basic approach is a cluster-driven federated learning training procedure that protects personal information while selecting a training model similar to the user and training it adaptively. Experiments under various statistically heterogeneous conditions showed that FACTS is superior to existing FL methods in terms of training speed and resource efficiency.
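The sketch below illustrates the kind of cluster-driven federated training FACTS builds on: each cluster keeps its own model, a client is assigned to the most similar cluster model, trains locally on private data, and the cluster model is updated by weight averaging. This is a generic FedAvg-style sketch with assumed names and a toy model, not the FACTS algorithm itself.

```python
import copy
import torch
from torch import nn

def local_train(model, data, target, epochs=1, lr=0.01):
    """Client-side training on private data (data never leaves the client)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(data), target)
        loss.backward()
        opt.step()
    return model.state_dict()

def average_weights(states):
    """Server-side FedAvg: element-wise mean of client weight tensors."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

def pick_cluster(cluster_models, data, target):
    """Assign the client to the cluster whose model fits its data best."""
    losses = [nn.functional.mse_loss(m(data), target).item()
              for m in cluster_models]
    return losses.index(min(losses))

# Toy run: two cluster models, one client round.
clusters = [nn.Linear(4, 1) for _ in range(2)]
data, target = torch.randn(16, 4), torch.randn(16, 1)
cid = pick_cluster(clusters, data, target)
new_state = local_train(copy.deepcopy(clusters[cid]), data, target)
clusters[cid].load_state_dict(
    average_weights([clusters[cid].state_dict(), new_state]))
```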

Comparison of Deep Learning Frameworks: About Theano, Tensorflow, and Cognitive Toolkit (딥러닝 프레임워크의 비교: 티아노, 텐서플로, CNTK를 중심으로)

  • Chung, Yeojin;Ahn, SungMahn;Yang, Jiheon;Lee, Jaejoon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.1-17 / 2017
  • A deep learning framework is software designed to help develop deep learning models. Two of its most important functions are automatic differentiation and GPU utilization. The list of popular deep learning frameworks includes Caffe (BVLC) and Theano (University of Montreal), and recently Microsoft's deep learning framework, Microsoft Cognitive Toolkit (CNTK), was released under an open-source license, following Google's Tensorflow a year earlier. The early deep learning frameworks were developed mainly for research at universities; beginning with the release of Tensorflow, however, companies such as Microsoft and Facebook have joined the competition in framework development. Given this trend, Google and other companies are expected to keep investing in deep learning frameworks to take the initiative in the artificial intelligence business. From this point of view, we think it is a good time to compare deep learning frameworks, so we compare three that can be used as Python libraries: Google's Tensorflow, Microsoft's CNTK, and Theano, which is in a sense a predecessor of the other two. The most common and important function of a deep learning framework is the ability to perform automatic differentiation. Basically, all the mathematical expressions of deep learning models can be represented as computational graphs, which consist of nodes and edges. The partial derivative on each edge of a computational graph can then be obtained, and with these partial derivatives the software can compute the derivative of any node with respect to any variable by applying the chain rule of calculus. First, coding convenience is, in order, CNTK, Tensorflow, and Theano. This criterion is based simply on code length; the learning curve and ease of coding were not the main concern. By this criterion, Theano was the most difficult to implement with, and CNTK and Tensorflow were somewhat easier. With Tensorflow, weight variables and biases must be defined explicitly. The reason CNTK and Tensorflow are easier to implement with is that they provide more abstraction than Theano. We should mention, however, that low-level coding is not always bad: it gives flexibility, and with low-level coding such as in Theano, one can implement and test any new deep learning model or search method one can think of. As for execution speed, our assessment is that there is no meaningful difference. According to the experiment, the execution speeds of Theano and Tensorflow are very similar, although the experiment was limited to a CNN model. In the case of CNTK, the experimental environment could not be kept identical: the CNTK code had to be run on a PC without a GPU, where code executes as much as 50 times slower than with a GPU. We concluded, however, that the difference in execution speed was within the range of variation caused by the different hardware setups. In this study, we compared three deep learning frameworks: Theano, Tensorflow, and CNTK. According to Wikipedia, there are 12 available deep learning frameworks, and 15 different attributes differentiate them. Important attributes include the interface language (Python, C++, Java, etc.) and the availability of libraries for various deep learning models such as CNNs, RNNs, and DBNs. For users implementing large-scale deep learning models, support for multiple GPUs or multiple servers is also important, and for those learning deep learning, the availability of sufficient examples and references matters as well.
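The computational-graph view of automatic differentiation described above can be shown in a few lines of pure Python: each node records its parents and the local partial derivative on each incoming edge, and a reverse pass applies the chain rule. This is a toy sketch of the idea, not any framework's actual implementation.

```python
class Node:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # sequence of (parent_node, local_partial)
        self.grad = 0.0

    def __add__(self, other):
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

def backward(output):
    """Accumulate d(output)/d(node) into every node's .grad field."""
    order, seen = [], set()
    def topo(node):               # depth-first topological sort
        if id(node) not in seen:
            seen.add(id(node))
            for parent, _ in node.parents:
                topo(parent)
            order.append(node)
    topo(output)
    output.grad = 1.0
    for node in reversed(order):  # chain rule, outputs before inputs
        for parent, local in node.parents:
            parent.grad += node.grad * local

# f(x, y) = x * y + x  =>  df/dx = y + 1 = 5, df/dy = x = 3
x, y = Node(3.0), Node(4.0)
f = x * y + x
backward(f)
print(f.value, x.grad, y.grad)    # 15.0 5.0 3.0
```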

Analysis and Evaluation of Frequent Pattern Mining Technique based on Landmark Window (랜드마크 윈도우 기반의 빈발 패턴 마이닝 기법의 분석 및 성능평가)

  • Pyun, Gwangbum;Yun, Unil
    • Journal of Internet Computing and Services / v.15 no.3 / pp.101-107 / 2014
  • With the development of online services, recent databases have changed from static structures to dynamic stream structures. Previous data mining techniques have been used as decision-making tools for tasks such as establishing marketing strategies and DNA analysis. However, the capability to analyze real-time data more quickly is necessary in areas of recent interest such as sensor networks, robotics, and artificial intelligence. Landmark window-based frequent pattern mining, one of the stream mining approaches, performs mining on parts of the database or on individual transactions instead of on all the data. In this paper, we analyze and evaluate two well-known landmark window-based frequent pattern mining algorithms, Lossy counting and hMiner. When Lossy counting mines frequent patterns from a set of new transactions, it performs union operations between the previous and current mining results. hMiner, a state-of-the-art algorithm based on the landmark window model, conducts a mining operation whenever a new transaction occurs; since it extracts frequent patterns as soon as a new transaction is entered, the latest mining results reflect real-time information, and such algorithms are therefore also called online mining approaches. We evaluate and compare the performance of the primitive algorithm, Lossy counting, and the latest one, hMiner. As criteria for the performance analysis, we first consider each algorithm's total runtime and average processing time per transaction. In addition, to compare the efficiency of their storage structures, their maximum memory usage is evaluated. Lastly, we show how stably the two algorithms mine databases featuring gradually increasing numbers of items. With respect to mining time and transaction processing, hMiner is faster than Lossy counting: hMiner stores candidate frequent patterns in a hash structure and can access them directly, whereas Lossy counting stores them in a lattice and has to search multiple nodes to reach a candidate pattern. On the other hand, hMiner performs worse than Lossy counting in terms of maximum memory usage: hMiner must store the full information of each candidate frequent pattern in its hash buckets, while the lattice used by Lossy counting reduces this information by sharing items concurrently included in multiple patterns, making its memory usage more efficient. However, hMiner shows better scalability for the following reasons: as the number of items increases, the number of shared items decreases, weakening Lossy counting's memory efficiency, and as the number of transactions grows, its pruning effect worsens. From the experimental results, we conclude that landmark window-based frequent pattern mining algorithms are suitable for real-time systems although they require a significant amount of memory; hence, their data structures need to be made more efficient so they can also be used in resource-constrained environments such as WSNs (wireless sensor networks).
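For reference, the sketch below implements the core of Lossy counting, simplified to single items rather than full patterns: counts are kept with a per-entry error bound and pruned at every bucket boundary, which guarantees that a reported count undershoots the true count by at most epsilon times the stream length. The epsilon value and toy stream are illustrative.

```python
import math

def lossy_counting(stream, epsilon=0.1):
    width = math.ceil(1 / epsilon)          # bucket width
    counts, bucket = {}, 1                  # item -> (count, max_error)
    for n, item in enumerate(stream, start=1):
        if item in counts:
            c, err = counts[item]
            counts[item] = (c + 1, err)
        else:
            counts[item] = (1, bucket - 1)  # may have been pruned before
        if n % width == 0:                  # bucket boundary: prune
            counts = {i: (c, e) for i, (c, e) in counts.items()
                      if c + e > bucket}
            bucket += 1
    return counts

stream = list("abacabadabacaba")            # toy transaction stream
print(lossy_counting(stream, epsilon=0.2))  # 'a' survives; rare items pruned
```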

The Variation of Scan Time According to Patient's Breast Size and Body Mass Index in Breast Sentinel lymphangiography (유방암의 감시림프절 검사에서 유방크기와 체질량지수에 따른 검사시간 변화)

  • Lee, Da-Young;Nam-Koong, Hyuk;Cho, Seok-Won;Oh, Shin-Hyun;Im, Han-Sang;Kim, Jae-Sam;Lee, Chang-Ho;Park, Hoon-Hee
    • The Korean Journal of Nuclear Medicine Technology / v.16 no.2 / pp.62-67 / 2012
  • Purpose: Currently, sentinel lymph node mapping using a radioisotope and blue dye precedes sentinel lymph node biopsy in breast cancer patients, but all patients are given the same protocol without consideration of physical characteristics such as breast size and body mass index (BMI). The purpose of this study was to find the optimal scan time in breast sentinel lymphangiography by observing how much BMI and breast size influence the speed of lymphatic flow. Materials and Methods: The subjects were 100 female breast cancer patients (average age 50.34 ± 10.26 years) at Severance Hospital from October 2011 to December 2011, who underwent breast sentinel lymphangiography before surgery. The study was performed on a Forte dual-head gamma camera (Philips Medical Systems, Nederland B.V.). All patients received an intradermal injection of 18.5 MBq of 99mTc-phytate in 0.5 ml. For 80 patients, we scanned without a scan-time limit and measured with a stopwatch how long the lymphatic flow took from the injection site to the lymph node. We then calculated each patient's BMI and classified the patients into four groups, and measured breast size and classified them into three groups. A modified breast lymphangiography protocol that adjusts the scan time according to these results was applied to the remaining 20 patients and evaluated. Results: The mean scan time by breast size was 2.48 minutes for group A, 7.69 minutes for group B, and 10.43 minutes for group C. The mean scan time by BMI was 1.35 minutes for underweight, 2.56 minutes for normal-weight, 5.62 minutes for slightly overweight, and 5.62 minutes for overweight patients. The success rate of the modified breast lymphangiography was 85%. Conclusion: The higher the BMI and the larger the breast size, the longer the total scan time. Based on this, we designed a modified breast lymphangiography protocol; in the cases where it was applied, most sentinel lymph nodes were visualized as a lymphatic pool. In conclusion, a modified protocol that considers physical individuality yields a higher success rate than carrying out the examination with a single uniform protocol.


Noise-robust electrocardiogram R-peak detection with adaptive filter and variable threshold (적응형 필터와 가변 임계값을 적용하여 잡음에 강인한 심전도 R-피크 검출)

  • Rahman, MD Saifur;Choi, Chul-Hyung;Kim, Si-Kyung;Park, In-Deok;Kim, Young-Pil
    • Journal of the Korea Academia-Industrial cooperation Society / v.18 no.12 / pp.126-134 / 2017
  • There have been numerous studies on extracting the R-peak from electrocardiogram (ECG) signals. However, most detection methods are complicated to implement in a real-time portable electrocardiograph and require a large amount of computation. R-peak detection requires pre-processing and post-processing to handle baseline drift and to remove commercial power-supply noise from the ECG data. An adaptive filter technique is widely used for R-peak detection, but the R-peak cannot be detected when the input is lower than a threshold value, and an erroneous threshold derived under noise causes problems with the P-peak and T-peak values. We propose a robust R-peak detection algorithm with low complexity and simple computation to solve these problems. The proposed scheme removes baseline drift from the ECG signal using an adaptive filter, which resolves the problems in threshold extraction. We also propose a technique that automatically extracts an appropriate threshold value from the minimum and maximum values of the filtered ECG signal, and a threshold neighborhood search technique for detecting the R-peak. Through experiments, we confirmed the improved R-peak detection accuracy of the proposed method and, by reducing the amount of computation, achieved a detection speed suitable for mobile systems. The experimental results show that the heart rate detection accuracy and sensitivity were very high (about 100%).
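The sketch below shows a generic version of this pipeline: baseline removal, a variable threshold taken from the filtered signal's minimum and maximum, and peak search. A moving average stands in for the paper's adaptive filter, and the 0.6 threshold fraction and 0.3-second refractory period are assumptions, not the authors' parameters.

```python
import numpy as np
from scipy.signal import find_peaks

def detect_r_peaks(ecg, fs):
    # Baseline removal: subtract a moving-average trend (a stand-in for
    # the paper's adaptive filter).
    win = int(0.8 * fs)
    baseline = np.convolve(ecg, np.ones(win) / win, mode="same")
    filtered = ecg - baseline
    # Variable threshold from the filtered signal's min/max range.
    thr = filtered.min() + 0.6 * (filtered.max() - filtered.min())
    # Keep peaks above the threshold, at least 0.3 s apart.
    peaks, _ = find_peaks(filtered, height=thr, distance=int(0.3 * fs))
    return peaks

# Synthetic test: spikes once per second riding on a slow baseline wander.
fs = 250
t = np.arange(0, 10, 1 / fs)
ecg = 0.2 * np.sin(2 * np.pi * 0.1 * t)       # baseline wander
ecg[np.arange(fs // 2, len(ecg), fs)] += 1.0  # one "R-peak" per second
print(detect_r_peaks(ecg, fs))                # ten peak indices, 250 apart
```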