• Title/Summary/Keyword: Multi Deep Learning Model


Breast Tumor Cell Nuclei Segmentation in Histopathology Images using EfficientUnet++ and Multi-organ Transfer Learning

  • Dinh, Tuan Le;Kwon, Seong-Geun;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.24 no.8 / pp.1000-1011 / 2021
  • In recent years, the use of deep learning methods for medical and biomedical image analysis has seen many advances. In clinical practice, deep learning-based cancer image analysis is one of the key applications for cancer detection and treatment. However, the scarcity of labeled images makes it difficult for cancer detection and analysis to reach high accuracy. In 2015, the Unet model was introduced and gained much attention from researchers in the field; its success lies in its ability to produce high accuracy with very few input images. Since then, many variants and modifications of the Unet architecture have been developed. This paper proposes a new approach that uses Unet++ with a pretrained EfficientNet backbone for breast tumor cell nuclei segmentation, combined with a multi-organ transfer learning approach. We evaluate the performance of the network on the MoNuSeg training dataset and the Triple Negative Breast Cancer (TNBC) testing dataset, both of which consist of Hematoxylin and Eosin (H&E)-stained images. The results show that the EfficientUnet++ architecture and the multi-organ transfer learning approach outperformed other techniques and produced notable accuracy for breast tumor cell nuclei segmentation.
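
As an illustration of the architecture this abstract describes, the sketch below builds a Unet++ decoder on a pretrained EfficientNet encoder using the third-party segmentation_models_pytorch package; the specific backbone (efficientnet-b4), loss, patch size, and learning rate are assumptions, not the authors' settings.

```python
# Minimal sketch (not the authors' code) of EfficientUnet++-style segmentation,
# assuming the segmentation_models_pytorch package is installed.
import torch
import segmentation_models_pytorch as smp

# Unet++ decoder on top of an ImageNet-pretrained EfficientNet-B4 encoder.
model = smp.UnetPlusPlus(
    encoder_name="efficientnet-b4",   # backbone assumed for illustration
    encoder_weights="imagenet",       # transfer learning from ImageNet
    in_channels=3,                    # H&E-stained RGB patches
    classes=1,                        # binary mask: nucleus vs. background
)

# Dice loss is a common choice for nuclei segmentation (an assumption here).
loss_fn = smp.losses.DiceLoss(mode="binary")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

x = torch.randn(2, 3, 256, 256)                  # dummy batch of image patches
y = (torch.rand(2, 1, 256, 256) > 0.5).float()   # dummy binary nucleus masks

pred = model(x)                                  # logits, shape (2, 1, 256, 256)
loss = loss_fn(pred, y)
loss.backward()
optimizer.step()
```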

What are the benefits and challenges of multi-purpose dam operation modeling via deep learning : A case study of Seomjin River

  • Eun Mi Lee;Jong Hun Kam
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.246-246 / 2023
  • Multi-purpose dams are operated in view of both physical and socioeconomic factors. This study aims to evaluate the utility of a deep learning-based model for the operation of three multi-purpose dams (Seomjin River dam, Juam dam, and Juam Control dam) in the Seomjin River basin. The Gated Recurrent Unit (GRU) algorithm is applied to predict the hourly water level of the dam reservoirs over 2002-2021, with hyper-parameters optimized by a Bayesian optimization algorithm to enhance the prediction skill of the GRU model. The GRU models are set up for the following cases: single-dam input - single-dam output (S-S), multi-dam input - single-dam output (M-S), and multi-dam input - multi-dam output (M-M). Results show that the S-S cases using local dam information achieve the highest accuracy, with NSE above 0.8. Results from the M-S and M-M cases confirm that upstream dam information provides important information for predicting downstream dam operation. The S-S models are also run with altered outflows (-40% to +40%) to simulate reservoir water levels under alternative dam operation scenarios. These alternative S-S simulations show physically inconsistent results, indicating that the deep learning-based model is not explainable for multi-purpose dam operation patterns. To better understand this limitation, we further analyze the relationship between the observed water level and outflow of each dam and find that the complexity of the outflow-water level relationship limits the predictability of the GRU-based model. This study highlights the importance of socioeconomic factors hidden in multi-purpose dam operation processes, not only for physical process-based modeling but also for artificial intelligence modeling.
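
A minimal sketch of the kind of GRU regressor this abstract describes, for the single-dam input / single-dam output (S-S) case; the feature set, window length, and hidden size are assumptions, and the Bayesian hyper-parameter optimization step is omitted.

```python
# Illustrative sketch (not the authors' code) of a GRU that maps a window of
# past dam inputs (e.g. inflow, outflow, water level) to the next hourly level.
import torch
import torch.nn as nn

class DamGRU(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # next-hour water level

    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.gru(x)
        return self.head(out[:, -1])       # regress from the last hidden state

# S-S case: only the local dam's features are used (3 assumed features).
model = DamGRU(n_features=3)
x = torch.randn(8, 24, 3)                  # 24-hour input windows (dummy data)
y_hat = model(x)                           # predicted water levels, shape (8, 1)
```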


Privacy Preserving Techniques for Deep Learning in Multi-Party System (멀티 파티 시스템에서 딥러닝을 위한 프라이버시 보존 기술)

  • Hye-Kyeong Ko
    • The Journal of the Convergence on Culture Technology / v.9 no.3 / pp.647-654 / 2023
  • Deep learning is a useful method for classifying and recognizing complex data such as images and text, and its accuracy is the basis for making artificial intelligence-based services on the Internet useful. However, the vast amount of user data used for training in deep learning has led to privacy violation problems, and there is concern that companies that have collected personal and sensitive data of users, such as photographs and voices, own the data indefinitely: users cannot delete their data and cannot limit its purpose of use. For example, data owners such as medical institutions that want to apply deep learning technology to patients' medical records cannot share patient data because of privacy and confidentiality issues, making it difficult to benefit from deep learning technology. In this paper, we design a privacy-preserving deep learning technique that allows multiple workers to train a neural network model jointly in a multi-party system without sharing their input datasets. We propose a method that can selectively share small subsets via an optimization algorithm based on modified stochastic gradient descent, and we confirm that it facilitates training with increased learning accuracy while protecting private information.
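
The sketch below illustrates the general idea of selective sharing in multi-party training, where each worker uploads only a small fraction of its largest-magnitude gradients rather than its raw data; this is a generic illustration of selective gradient sharing, not the paper's exact protocol, and the model, fraction, and data are placeholders.

```python
# Generic sketch of selective gradient sharing in multi-party training.
import torch
import torch.nn as nn

def select_top_fraction(grad: torch.Tensor, frac: float = 0.1) -> torch.Tensor:
    """Keep only the top `frac` gradients by magnitude, zero out the rest."""
    flat = grad.flatten()
    k = max(1, int(frac * flat.numel()))
    idx = flat.abs().topk(k).indices
    mask = torch.zeros_like(flat)
    mask[idx] = 1.0
    return (flat * mask).view_as(grad)

model = nn.Linear(10, 2)                     # toy local model on one worker
loss_fn = nn.CrossEntropyLoss()
x, y = torch.randn(16, 10), torch.randint(0, 2, (16,))   # local private data

loss = loss_fn(model(x), y)
loss.backward()

# Each worker would upload only these sparse gradients, never the raw data.
shared = {name: select_top_fraction(p.grad)
          for name, p in model.named_parameters()}
```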

Exercise Recommendation System Using Deep Neural Collaborative Filtering (신경망 협업 필터링을 이용한 운동 추천시스템)

  • Jung, Wooyong;Kyeong, Chanuk;Lee, Seongwoo;Kim, Soo-Hyun;Sun, Young-Ghyu;Kim, Jin-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.22 no.6 / pp.173-178 / 2022
  • Recently, recommendation systems using deep learning in social network services have been actively studied. However, deep learning-based recommendation systems suffer from the cold start problem and from increased learning time due to complex computation. In this paper, a user-tailored exercise routine recommendation algorithm is proposed using the user's metadata. Metadata (the user's height, weight, sex, etc.) is applied as the input of the designed model in the proposed algorithm. The exercise recommendation system model proposed in this paper is designed based on the neural collaborative filtering (NCF) algorithm, combining a multi-layer perceptron with a matrix factorization algorithm. The proposed model is trained on user metadata and exercise information, and the trained model returns a recommendation score to the user when a specific exercise is given as input. Experimental results show that the proposed exercise recommendation system model achieves a 10% improvement in recommendation performance and a 50% reduction in learning time compared with the existing NCF model.
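
The sketch below shows one common way to combine a matrix-factorization (GMF) branch with an MLP branch in an NCF-style model, here extended with a user metadata vector as the abstract describes; the embedding sizes, layer widths, and metadata fields are assumptions.

```python
# Minimal NCF-style sketch with a user metadata input (assumptions throughout).
import torch
import torch.nn as nn

class NCF(nn.Module):
    def __init__(self, n_users, n_items, n_meta, dim=16):
        super().__init__()
        self.user_mf = nn.Embedding(n_users, dim)
        self.item_mf = nn.Embedding(n_items, dim)
        self.user_mlp = nn.Embedding(n_users, dim)
        self.item_mlp = nn.Embedding(n_items, dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * dim + n_meta, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
        )
        self.out = nn.Linear(dim + 32, 1)     # fuse GMF and MLP branches

    def forward(self, user, item, meta):
        gmf = self.user_mf(user) * self.item_mf(item)        # element-wise MF
        mlp = self.mlp(torch.cat([self.user_mlp(user),
                                  self.item_mlp(item), meta], dim=-1))
        return torch.sigmoid(self.out(torch.cat([gmf, mlp], dim=-1)))

model = NCF(n_users=100, n_items=50, n_meta=3)   # metadata: height, weight, sex
score = model(torch.tensor([1]), torch.tensor([7]),
              torch.tensor([[175.0, 70.0, 1.0]]))  # recommendation score in (0, 1)
```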

Web access prediction based on parallel deep learning

  • Togtokh, Gantur;Kim, Kyung-Chang
    • Journal of the Korea Society of Computer and Information / v.24 no.11 / pp.51-59 / 2019
  • Due to the exponential growth of access information on the web, the need for predicting web users' next access has increased. Various models, such as Markov models, deep neural networks, support vector machines, and fuzzy inference models, have been proposed to handle web access prediction. For deep learning based on neural network models, training time on large-scale web usage data is very long. To address this problem, deep neural network models are trained in parallel on a cluster of computers. In this paper, we investigated the impact of several important Spark parameters related to data partitions, shuffling, compression, and locality (basic Spark parameters) on training a Multi-Layer Perceptron model on a Spark standalone cluster. Based on this investigation, we tuned the basic Spark parameters and used the tuned configuration to train the Multi-Layer Perceptron model for web access prediction. Through experiments, we show the accuracy of web access prediction based on the proposed model, as well as the improvement in training time achieved by our basic Spark parameter tuning compared with the default Spark configuration.
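
The sketch below illustrates training a Multi-Layer Perceptron with Spark MLlib while setting a few of the "basic" Spark parameters the paper investigates (partitions, shuffling, compression, locality); the parameter values, toy data, and layer sizes are placeholders, not the paper's tuned configuration.

```python
# Illustrative Spark MLP training sketch; values are placeholders only.
from pyspark.sql import SparkSession
from pyspark.ml.classification import MultilayerPerceptronClassifier
from pyspark.ml.feature import VectorAssembler

spark = (SparkSession.builder
         .appName("web-access-mlp")
         .config("spark.default.parallelism", "16")        # data partitions
         .config("spark.sql.shuffle.partitions", "16")     # shuffling
         .config("spark.rdd.compress", "true")             # compression
         .config("spark.locality.wait", "1s")              # locality
         .getOrCreate())

# Hypothetical preprocessed web-usage data: feature columns f0..f3 and a
# "label" column encoding the next page accessed.
df = spark.createDataFrame(
    [(0.1, 0.2, 0.3, 0.4, 0.0), (0.5, 0.1, 0.9, 0.2, 1.0)],
    ["f0", "f1", "f2", "f3", "label"])
df = VectorAssembler(inputCols=["f0", "f1", "f2", "f3"],
                     outputCol="features").transform(df)

# 4 input features, one hidden layer of 16 units, 2 output classes (assumed).
mlp = MultilayerPerceptronClassifier(layers=[4, 16, 2], maxIter=50)
model = mlp.fit(df)
spark.stop()
```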

Open set Object Detection combining Multi-branch Tree and ASSL (다중 분기 트리와 ASSL을 결합한 오픈 셋 물체 검출)

  • Shin, Dong-Kyun;Ahmed, Minhaz Uddin;Kim, JinWoo;Rhee, Phill-Kyu
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.5 / pp.171-177 / 2018
  • Recently, many image datasets contain a wide variety of object classes and aim to extract general features. Because of this variety, however, a deep learning model trained on such a dataset does not perform well on local regions with heterogeneous data features. In this paper, we propose a structure, named the multi-branch tree using ASSL, that combines sub-category and open-set object detection methods to train a more robust model. Using this structure, we can obtain a more robust object detection deep learning model in environments with heterogeneous data features.
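
The sketch below is only one reading of the multi-branch idea: a root classifier predicts a coarse super-category and a per-branch head refines it into sub-categories; the ASSL (active semi-supervised learning) sample-selection step and the detection pipeline are omitted, and all sizes are assumptions.

```python
# Rough sketch of a multi-branch tree classifier head (not the paper's code).
import torch
import torch.nn as nn

class MultiBranchTree(nn.Module):
    def __init__(self, in_dim: int, n_branches: int, subclasses_per_branch: int):
        super().__init__()
        self.root = nn.Linear(in_dim, n_branches)           # coarse super-category
        self.branches = nn.ModuleList(
            nn.Linear(in_dim, subclasses_per_branch) for _ in range(n_branches))

    def forward(self, feats):                                # feats: (batch, in_dim)
        branch = self.root(feats).argmax(dim=1)              # pick a branch per sample
        logits = torch.stack([self.branches[int(b)](f)       # refine into sub-category
                              for f, b in zip(feats, branch)])
        return branch, logits

model = MultiBranchTree(in_dim=256, n_branches=3, subclasses_per_branch=4)
features = torch.randn(5, 256)                               # dummy backbone features
super_cat, sub_logits = model(features)
```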

De-noising in Power Line Communication Using Noise Modeling Based on Deep Learning (딥 러닝 기반의 잡음 모델링을 이용한 전력선 통신에서의 잡음 제거)

  • Sun, Young-Ghyu;Hwang, Yu-Min;Sim, Issac;Kim, Jin-Young
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.4 / pp.55-60 / 2018
  • This paper presents the initial results of a study applying deep learning technology to power line communication. We propose a system that effectively removes noise, a cause of reduced power line communication performance, by adding a deep learning model at the receiver. Training the deep learning model requires stored data, so the proposed system is simulated under the assumption that such data are available. We compare the bit error rate of the proposed system with the theoretical result for the additive white Gaussian noise channel and confirm that the proposed system model improves communication performance by removing the noise.
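
A generic sketch of a receiver-side denoising network of the kind this abstract describes: a small model trained on stored noisy/clean signal pairs to estimate the transmitted samples; the architecture, window length, modulation, and noise level are assumptions, not the paper's design.

```python
# Generic receiver-side denoiser sketch trained on stored noisy/clean pairs.
import torch
import torch.nn as nn

denoiser = nn.Sequential(              # window of 64 samples in, 64 out (assumed)
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 64),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

clean = torch.sign(torch.randn(256, 64))       # dummy BPSK-like symbol blocks
noisy = clean + 0.5 * torch.randn_like(clean)  # stored noisy received blocks

for _ in range(100):                           # toy training loop
    optimizer.zero_grad()
    loss = loss_fn(denoiser(noisy), clean)
    loss.backward()
    optimizer.step()

# Hard decisions on the denoised output can then be compared against the
# theoretical AWGN-channel bit error rate.
ber = (torch.sign(denoiser(noisy)) != clean).float().mean()
```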

LSTM-based Early Fire Detection System using Small Amount Data

  • Seonhwa Kim;Kwangjae Lee
    • Journal of the Semiconductor & Display Technology / v.23 no.1 / pp.110-116 / 2024
  • Despite the continuous advancement of science and technology, fire accidents continue to occur without decreasing over time, so there is a constant need for a system that can accurately detect fires at an early stage. However, because most existing fire detection systems detect fire at the stage of combustion when smoke is generated, rapid fire prevention actions may be delayed. We therefore propose an early fire detection system that can detect fires at a reasonable cost using an LSTM deep learning model fed by multi-gas sensors with high selectivity in the early decomposition stage, rather than the smoke generation stage. The system combines multiple gas sensors to achieve faster detection than traditional sensors. In addition, through window sliding techniques and model light-weighting, it keeps the false alarm rate low while maintaining the same high accuracy as existing deep learning models. These results show that the proposed early fire detection system is a meaningful contribution to the disaster and engineering fields.
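
The sketch below illustrates the window-sliding LSTM idea on multi-gas sensor readings; the number of sensor channels, window width, and layer sizes are assumptions rather than the paper's configuration.

```python
# Sketch of a sliding-window LSTM fire classifier over multi-gas sensor data.
import torch
import torch.nn as nn

def sliding_windows(series: torch.Tensor, width: int, step: int = 1):
    """series: (time, channels) -> (n_windows, width, channels)."""
    return series.unfold(0, width, step).transpose(1, 2)

class FireLSTM(nn.Module):
    def __init__(self, n_sensors: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                      # x: (batch, time, sensors)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1]))   # fire probability

readings = torch.randn(600, 4)                 # 600 time steps, 4 gas sensors (dummy)
windows = sliding_windows(readings, width=30)  # shape (571, 30, 4)
model = FireLSTM(n_sensors=4)
probs = model(windows)                         # fire probability per window
```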


Tensile Properties Estimation Method Using Convolutional LSTM Model

  • Choi, Hyeon-Joon;Kang, Dong-Joong
    • Journal of the Korea Society of Computer and Information / v.23 no.11 / pp.43-49 / 2018
  • In this paper, we propose a displacement measurement method based on deep learning, using image data obtained from tensile tests of a material specimen. We focus on the fact that sequential images are generated during tension and that the displacement of the specimen is represented in the image data. We therefore designed a sample generation model that produces sequential images of the specimen whose behavior is similar to real specimen images under tensile force, and we trained and validated our model on these generated images. In the deep neural network, the sequential images are assigned to a multi-channel input to train the network; the multi-channel input is composed of sequential images obtained along the time domain. As a result, the neural network learns the temporal information, since the images are correlated with each other along the time domain. To verify the proposed method, we conducted experiments comparing the deformation-measuring performance of the neural network while changing the displacement range of the images.
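
The sketch below illustrates the multi-channel input idea: consecutive frames from the tensile test are stacked along the channel axis so a convolutional regressor can exploit their temporal correlation; the frame count, image size, and network layout are assumptions (the paper's full convolutional LSTM model is not reproduced here).

```python
# Sketch of stacking sequential frames as a multi-channel CNN input.
import torch
import torch.nn as nn

n_frames = 8                                   # frames stacked as channels (assumed)

cnn = nn.Sequential(
    nn.Conv2d(n_frames, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 1),                          # regressed displacement value
)

frames = torch.randn(4, n_frames, 128, 128)    # batch of stacked image sequences
displacement = cnn(frames)                     # shape (4, 1)
```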

IoT Device Classification According to Context-aware Using Multi-classification Model

  • Zhang, Xu;Ryu, Shinhye;Kim, Sangwook
    • Journal of Korea Multimedia Society / v.23 no.3 / pp.447-459 / 2020
  • The Internet of Things (IoT) paradigm has flourished over the last two decades, and researchers around the globe envision turning every real-world object into a virtual object. Consequently, the number of IoT devices is escalating exponentially, and this abrupt evolution has created a major challenge: device classification. The classification features of contextual data from different contexts are difficult to extract, and deep learning algorithms have the capability to solve this problem. In order to classify devices comprehensively and accurately, this paper therefore proposes a context-aware, deep learning-based multi-classification model that classifies smart devices according to people's contexts.
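
The sketch below shows a minimal multi-class classifier over contextual feature vectors of the kind this abstract describes; the number of context features and device classes are assumed purely for illustration.

```python
# Minimal context-aware multi-classification sketch (assumed sizes).
import torch
import torch.nn as nn

N_CONTEXT_FEATURES = 12     # e.g. encoded time, location, activity (assumed)
N_DEVICE_CLASSES = 5        # device categories to predict (assumed)

classifier = nn.Sequential(
    nn.Linear(N_CONTEXT_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, N_DEVICE_CLASSES),           # logits over device classes
)

context = torch.randn(10, N_CONTEXT_FEATURES)  # dummy contextual feature vectors
predicted_class = classifier(context).argmax(dim=1)
```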