Title/Summary/Keyword: train model

Region of Interest Localization for Bone Age Estimation Using Whole-Body Bone Scintigraphy

  • Do, Thanh-Cong;Yang, Hyung Jeong;Kim, Soo Hyung;Lee, Guee Sang;Kang, Sae Ryung;Min, Jung Joon
    • Smart Media Journal / v.10 no.2 / pp.22-29 / 2021
  • In the past decade, deep learning has been applied to various medical image analysis tasks. Skeletal bone age estimation is clinically important, as it can help prevent age-related illness and pave the way for new anti-aging therapies. Recent research has applied deep learning techniques to bone age assessment and achieved positive results. In this paper, we propose a bone age prediction method using a deep convolutional neural network. Specifically, we first train a classification model that automatically localizes the most discriminative region of an image and crops it from the original image. The regions of interest are then used as input for a regression model that estimates the age of the patient. The experiments are conducted on a whole-body scintigraphy dataset collected by Chonnam National University Hwasun Hospital. The results illustrate the potential of the proposed method, which achieves a mean absolute error of 3.35 years. Our framework can serve as a robust supporting tool for clinicians in preventing age-related diseases.
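
The two-stage design described in this abstract (a classifier that localizes the region of interest, followed by a regressor on the crop) can be sketched as below. This is a minimal illustration assuming a class-activation-map (CAM) style localizer and ResNet-18 backbones; the layer choices and crop logic are illustrative, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class ROILocalizer(nn.Module):
    """Stage 1: classifier whose last conv activations score image regions."""
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(512, num_classes)

    def forward(self, x):
        fmap = self.features(x)                  # (B, 512, H', W')
        logits = self.fc(self.pool(fmap).flatten(1))
        # Class activation map: weight the feature maps by the fc weights
        cam = torch.einsum("oc,bchw->bohw", self.fc.weight, fmap)
        return logits, cam

def crop_most_discriminative(x, cam, size=224):
    """Crop a fixed-size window around the CAM peak (assumes H, W >= size)."""
    B, _, H, W = x.shape
    heat = cam.max(dim=1).values                 # (B, H', W')
    crops = []
    for b in range(B):
        idx = heat[b].flatten().argmax()
        cy = int(idx // heat.shape[2] * H / heat.shape[1])
        cx = int(idx %  heat.shape[2] * W / heat.shape[2])
        top  = min(max(cy - size // 2, 0), H - size)
        left = min(max(cx - size // 2, 0), W - size)
        crops.append(x[b:b+1, :, top:top+size, left:left+size])
    return torch.cat(crops)

class AgeRegressor(nn.Module):
    """Stage 2: regression model mapping the cropped ROI to an age."""
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, 1)
        self.net = backbone

    def forward(self, roi):
        return self.net(roi).squeeze(1)          # predicted age in years
```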

The Analysis of Semi-supervised Learning Technique of Deep Learning-based Classification Model (딥러닝 기반 분류 모델의 준 지도 학습 기법 분석)

  • Park, Jae Hyeon;Cho, Sung In
    • Journal of Broadcast Engineering / v.26 no.1 / pp.79-87 / 2021
  • In this paper, we analyze semi-supervised learning (SSL), which is adopted to train a deep learning-based classification model with a small number of labeled data. Conventional SSL techniques can be categorized into consistency regularization, entropy-based methods, and pseudo-labeling. First, we describe the algorithm of each SSL technique. In the experimental results, we evaluate the classification accuracy of each technique while varying the number of labeled samples. Finally, based on the experimental results, we describe the limitations of SSL techniques and suggest research directions for improving the classification performance of SSL.
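
Of the three SSL categories listed, pseudo-labeling is the simplest to illustrate: confident model predictions on unlabeled data are reused as training targets. The sketch below shows one hedged training step; the confidence threshold and loss weight are illustrative choices, not values from the paper.

```python
import torch
import torch.nn.functional as F

def ssl_step(model, optimizer, labeled, unlabeled, threshold=0.95, lam=1.0):
    x_l, y_l = labeled            # small labeled batch
    x_u = unlabeled               # unlabeled batch
    optimizer.zero_grad()

    # Supervised loss on the few labeled samples
    loss_sup = F.cross_entropy(model(x_l), y_l)

    # Pseudo-labels: keep only predictions above the confidence threshold
    with torch.no_grad():
        probs = F.softmax(model(x_u), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf >= threshold

    loss_unsup = torch.tensor(0.0, device=x_l.device)
    if mask.any():
        loss_unsup = F.cross_entropy(model(x_u[mask]), pseudo[mask])

    loss = loss_sup + lam * loss_unsup
    loss.backward()
    optimizer.step()
    return loss.item()
```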

State of Health Estimation for Lithium-Ion Batteries Using Long-term Recurrent Convolutional Network (LRCN을 이용한 리튬 이온 배터리의 건강 상태 추정)

  • Hong, Seon-Ri;Kang, Moses;Jeong, Hak-Geun;Baek, Jong-Bok;Kim, Jong-Hoon
    • The Transactions of the Korean Institute of Power Electronics / v.26 no.3 / pp.183-191 / 2021
  • A battery management system (BMS) provides functions for ensuring safety and reliability, including algorithms that estimate battery states. Given the changes caused by various operating conditions, the state of health (SOH), a figure of merit for the battery's ability to store and deliver energy, is challenging to estimate. Machine learning methods can be applied to perform accurate SOH estimation. In this study, we propose a Long-Term Recurrent Convolutional Network (LRCN) that combines a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) to extract aging characteristics and learn temporal patterns. The models are trained on the battery-aging dataset from NASA PCoE, using part of the charging profile as input. The accuracy of the proposed model is compared with that of CNN and LSTM models using k-fold cross-validation. The proposed model achieves a low RMSE of 2.21%, showing higher accuracy than the others in SOH estimation.
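
A minimal sketch of the CNN + LSTM combination named here (an LRCN) follows, assuming the input is a window of partial charging-profile samples shaped (batch, time, channels); all layer sizes are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class LRCN(nn.Module):
    def __init__(self, in_ch=2, hidden=64):
        super().__init__()
        # CNN extracts local aging features from the charging profile
        self.cnn = nn.Sequential(
            nn.Conv1d(in_ch, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # LSTM learns temporal degradation across the sequence
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)    # SOH estimate

    def forward(self, x):                   # x: (B, T, C)
        z = self.cnn(x.transpose(1, 2))     # (B, 32, T/4)
        out, _ = self.lstm(z.transpose(1, 2))
        return self.head(out[:, -1])        # last time step -> SOH

model = LRCN()
soh = model(torch.randn(8, 128, 2))         # e.g. voltage and current
```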

Knowledge-guided artificial intelligence technologies for decoding complex multiomics interactions in cells

  • Lee, Dohoon;Kim, Sun
    • Clinical and Experimental Pediatrics / v.65 no.5 / pp.239-249 / 2022
  • Cells survive and proliferate through complex interactions among diverse molecules across multiomics layers. Conventional experimental approaches for identifying these interactions have built a firm foundation for molecular biology, but their scalability is becoming inadequate relative to the rapid accumulation of multiomics data measured by high-throughput technologies. Therefore, the need for data-driven computational modeling of interactions within cells has been highlighted in recent years. The complexity of multiomics interactions stems primarily from their nonlinearity: accurate modeling must capture intricate conditional dependencies, synergies, or antagonisms between the genes or proteins under consideration, which slows experimental validation. Artificial intelligence (AI) technologies, including deep learning models, are well suited for handling complex nonlinear relationships between features, as they scale to large amounts of data. Thus, they have great potential for modeling multiomics interactions. Although many AI-driven models exist for computational biology applications, relatively few explicitly incorporate prior knowledge into model architectures or training procedures. Such guidance by domain knowledge greatly reduces the amount of data needed to train models and constrains their vast expressive power to the biologically relevant space. It can therefore enhance a model's interpretability, reduce spurious interactions, and help establish its validity and utility. To facilitate further development of knowledge-guided AI technologies for modeling multiomics interactions, we review representative bioinformatics applications of deep learning models for multiomics interactions developed to date, categorized by guidance mode.
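
One guidance mode of the kind this review surveys, encoding prior knowledge directly in the architecture, can be illustrated by masking a layer's connections with a gene-to-pathway membership matrix so that each hidden unit only sees the genes of one known pathway. The mask below is a toy, hypothetical table, not taken from any cited work.

```python
import torch
import torch.nn as nn

class PathwayMaskedLinear(nn.Module):
    def __init__(self, mask):               # mask: (n_pathways, n_genes) 0/1
        super().__init__()
        self.register_buffer("mask", mask.float())
        self.weight = nn.Parameter(torch.randn_like(self.mask) * 0.01)
        self.bias = nn.Parameter(torch.zeros(mask.shape[0]))

    def forward(self, x):                    # x: (batch, n_genes)
        # Zeroed entries remove biologically implausible connections
        return x @ (self.weight * self.mask).t() + self.bias

# Toy usage: 6 genes grouped into 2 hypothetical pathways
mask = torch.tensor([[1, 1, 1, 0, 0, 0],
                     [0, 0, 0, 1, 1, 1]])
layer = PathwayMaskedLinear(mask)
out = layer(torch.randn(4, 6))               # (4, 2) pathway activations
```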

ResNet-Based Simulations for a Heat-Transfer Model Involving an Imperfect Contact

  • Guangxing, Wang;Gwanghyun, Jo;Seong-Yoon, Shin
    • Journal of information and communication convergence engineering / v.20 no.4 / pp.303-308 / 2022
  • Simulating heat transfer in a composite material is an important topic in materials science. Difficulties arise from the fact that adjacent materials cannot match perfectly, resulting in discontinuities in the temperature variable. Although several numerical methods exist for solving the heat-transfer problem under imperfect contact conditions, the methods known so far are complicated to implement and their computational times are non-negligible. In this study, we developed a ResNet-type deep neural network for simulating a heat-transfer model in a composite material. To train the neural network, we generated datasets by numerically solving the heat-transfer equations with Kapitza thermal-resistance conditions. Because the datasets cover various configurations of composite materials, our neural networks are robust to the shapes of material-material interfaces. Once the networks are trained, our algorithm can predict the thermal behavior in real time. The performance of the proposed neural networks is documented: the root mean square error (RMSE) and mean absolute error (MAE) are below 2.47E-6 and 7.00E-4, respectively.
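
A hedged sketch of a ResNet-type surrogate of this kind follows, assuming the network maps a grid encoding of the composite configuration (material map plus heat source) to the temperature field on the same grid; the channel counts and depth are illustrative, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))    # skip connection

class HeatSurrogate(nn.Module):
    def __init__(self, blocks=4, ch=32):
        super().__init__()
        self.stem = nn.Conv2d(2, ch, 3, padding=1)   # material map + source
        self.res = nn.Sequential(*[ResBlock(ch) for _ in range(blocks)])
        self.out = nn.Conv2d(ch, 1, 3, padding=1)    # temperature field

    def forward(self, x):
        return self.out(self.res(self.stem(x)))

pred = HeatSurrogate()(torch.randn(1, 2, 64, 64))    # real-time inference
```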

RDNN: Rumor Detection Neural Network for Veracity Analysis in Social Media Text

  • SuthanthiraDevi, P;Karthika, S
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.12 / pp.3868-3888 / 2022
  • A widely used social networking service such as Twitter can disseminate information to large groups of people, even during a pandemic. At the same time, it is a convenient medium for sharing irrelevant and unverified information online, which poses a potential threat to society. In this research, conventional machine learning algorithms are analyzed to classify data as either non-rumor or rumor. Because these techniques have limited tuning capability and make decisions based only on what they have learned, the authors propose a deep learning-based Rumor Detection Neural Network (RDNN) model to predict rumor tweets in real-world events. The model comprises three layers: an AttCNN layer that extracts local and position-invariant features from the data, an AttBi-LSTM layer that extracts important semantic and contextual information, and an HPOOL layer that combines the down-sampled patches of the input feature maps from the average and maximum pooling layers. A dataset from Kaggle and the ground dataset #gaja are used to train the proposed network to determine the veracity of rumors. The experimental results of the RDNN classifier demonstrate accuracies of 93.24% and 95.41% in identifying rumor tweets in real-time events.
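
The three-part structure described here can be sketched roughly as below: a CNN branch for local features, a Bi-LSTM branch for context, and an HPOOL-style fusion that concatenates average- and max-pooled maps. The paper's attention mechanisms are simplified away, so this is an outline under stated assumptions, not the authors' RDNN.

```python
import torch
import torch.nn as nn

class RDNNSketch(nn.Module):
    def __init__(self, vocab=20000, emb=128, hidden=64, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.cnn = nn.Conv1d(emb, hidden, kernel_size=3, padding=1)
        self.bilstm = nn.LSTM(emb, hidden, batch_first=True,
                              bidirectional=True)
        self.fc = nn.Linear(hidden * 2 + hidden * 4, classes)

    def forward(self, tokens):                     # tokens: (B, T)
        e = self.embed(tokens)
        c = torch.relu(self.cnn(e.transpose(1, 2)))   # (B, H, T)
        r, _ = self.bilstm(e)                         # (B, T, 2H)
        # HPOOL-style fusion: average + max pooling over time, concatenated
        pooled = torch.cat([c.mean(2), c.amax(2),
                            r.mean(1), r.amax(1)], dim=1)
        return self.fc(pooled)

logits = RDNNSketch()(torch.randint(0, 20000, (4, 50)))
```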

Ship Number Recognition Method Based on an Improved CRNN Model

  • Wenqi Xu;Yuesheng Liu;Ziyang Zhong;Yang Chen;Jinfeng Xia;Yunjie Chen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.3 / pp.740-753 / 2023
  • Text recognition in natural scene images is a challenging problem in computer vision. Accurate identification of ship number characters can effectively improve ship traffic management. However, due to motion blur and text occlusion, the accuracy of ship number recognition struggles to meet practical requirements. To solve these problems, this paper proposes a dual-branch network based on the CRNN recognition network that couples image restoration with character recognition: a CycleGAN module handles the blur-restoration branch, and a Pix2pix module handles the character-occlusion branch. The two branches are coupled to reduce the impact of image blur and occlusion, and the recovered image is fed into the text-recognition branch to improve recognition accuracy. Extensive experiments show that the model is robust and easy to train. Experiments on CTW datasets and real ship images illustrate that our method obtains more accurate results.
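
The recognition branch of a CRNN (convolutional features reshaped into a sequence, a Bi-LSTM, and per-step character logits for CTC decoding) can be sketched as below; the CycleGAN and Pix2pix restoration branches are omitted, and all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_chars=37, hidden=128):    # 36 chars + CTC blank
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.rnn = nn.LSTM(64 * 8, hidden, bidirectional=True,
                           batch_first=True)
        self.fc = nn.Linear(hidden * 2, n_chars)

    def forward(self, img):                 # img: (B, 1, 32, W)
        f = self.cnn(img)                   # (B, 64, 8, W/4)
        B, C, H, W = f.shape
        seq = f.permute(0, 3, 1, 2).reshape(B, W, C * H)
        out, _ = self.rnn(seq)
        return self.fc(out)                 # (B, W/4, n_chars) for CTC loss

logits = CRNN()(torch.randn(2, 1, 32, 128))
```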

Preprocessing performance of convolutional neural networks according to characteristic of underwater targets (수중 표적 분류를 위한 합성곱 신경망의 전처리 성능 비교)

  • Kyung-Min, Park;Dooyoung, Kim
    • The Journal of the Acoustical Society of Korea / v.41 no.6 / pp.629-636 / 2022
  • We present a preprocessing method for an underwater target detection model based on a convolutional neural network. The acoustic characteristics of a ship are ambiguously expressed because signal power is concentrated at low frequencies. To solve this problem, we combine feature preprocessing with various feature scaling and spectrogram methods. We define a simple convolutional neural network model and train it to measure preprocessing performance. Through experiments, we found that combining the log Mel-spectrogram with standardization and robust scaling gave the best classification performance.
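
The best-performing combination reported here (log Mel-spectrogram plus standardization and robust scaling) can be sketched as below; parameter values such as the sample rate and number of Mel bands are illustrative assumptions, not taken from the paper.

```python
import numpy as np
import librosa
from sklearn.preprocessing import StandardScaler, RobustScaler

def preprocess(signal, sr=16000, n_mels=64):
    # Log Mel-spectrogram de-emphasizes the dominant low-frequency power
    mel = librosa.feature.melspectrogram(y=signal, sr=sr, n_mels=n_mels)
    log_mel = librosa.power_to_db(mel)

    # Standardize, then robust-scale (median/IQR) to damp outliers
    flat = log_mel.reshape(-1, 1)
    flat = StandardScaler().fit_transform(flat)
    flat = RobustScaler().fit_transform(flat)
    return flat.reshape(log_mel.shape)      # input feature for the CNN

features = preprocess(np.random.randn(16000 * 2))   # 2 s of audio
```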

Machine Learning Based Hybrid Approach to Detect Intrusion in Cyber Communication

  • Neha Pathak;Bobby Sharma
    • International Journal of Computer Science & Network Security / v.23 no.11 / pp.190-194 / 2023
  • Given the importance of communication, data delivery, and data access across governmental, business, and individual sectors, it is essential to identify faults and flaws in cyber communication. Cyber security is needed to protect personal, governmental, and business data from misuse by numerous advanced attacks, and information security provides substantial protection for both the host machine and the network. Learning methods are used for analyzing as well as preventing various attacks; machine learning, a branch of artificial intelligence, offers powerful techniques for detecting cyber-attacks. In the proposed methodology, the Decision Tree (DT), a supervised learning model, is combined with different cross-validation methods to determine the accuracy and execution time of identifying cyber-attacks in the UNSW-NB15 dataset, a recent collection of network attack activities in network traffic. It is a hybrid method in which different split criteria of the DT model, including the Gini index and entropy, are implemented separately to identify the most accurate intrusion-detection procedure with respect to execution time. The DT methodologies implemented include DT using the Gini index, DT using the train-test split method, and DT using information entropy, each evaluated with K-fold and stratified K-fold validation.
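
The grid of decision-tree criteria and cross-validation schemes described here maps naturally onto scikit-learn, as in the hedged sketch below; the feature matrix X and label vector y are assumed to be a preprocessed UNSW-NB15 split, with random data standing in for the demo.

```python
import time
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import KFold, StratifiedKFold, cross_val_score

def evaluate(X, y):
    splitters = {"kfold": KFold(n_splits=5, shuffle=True, random_state=0),
                 "stratified": StratifiedKFold(n_splits=5, shuffle=True,
                                               random_state=0)}
    # Compare Gini vs. entropy criteria under each validation scheme
    for criterion in ("gini", "entropy"):
        for name, cv in splitters.items():
            clf = DecisionTreeClassifier(criterion=criterion, random_state=0)
            start = time.perf_counter()
            scores = cross_val_score(clf, X, y, cv=cv)
            elapsed = time.perf_counter() - start
            print(f"{criterion}/{name}: acc={scores.mean():.4f} "
                  f"time={elapsed:.2f}s")

# Toy demo with random data standing in for UNSW-NB15 features
evaluate(np.random.rand(500, 10), np.random.randint(0, 2, 500))
```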

Learning Model for Avoiding Drowsy Driving with MoveNet and Dense Neural Network

  • Jinmo Yang;Janghwan Kim;R. Young Chul Kim;Kidu Kim
    • International Journal of Internet, Broadcasting and Communication / v.15 no.4 / pp.142-148 / 2023
  • In modern times, driving is an absolute necessity for transportation and many other reasons, and since the outbreak of COVID-19, driving oneself has been preferred over other means of transportation to prevent infection. However, due to constant exposure to stressful situations and the chronic fatigue from work or the commute, modern drivers often drive while drowsy, which can lead to serious accidents and fatalities. To address this problem, we propose a drowsy-driving prevention learning model that detects a driver's state of drowsiness, together with a method for sounding a warning message after drowsiness is detected. The approach uses MoveNet to quickly and accurately extract the keypoints of the driver's body and a Dense Neural Network (DNN) trained on real-time driving behaviors, which immediately warns the driver if an abnormal drowsy posture is detected. With this method, we expect a reduction in traffic accidents and an enhancement in overall traffic safety.
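
The pipeline described here, keypoint extraction followed by a dense classifier, can be sketched as below. The TF-Hub URL is the public MoveNet Lightning model; the classifier layout and the normal/drowsy label scheme are illustrative assumptions, not the authors' exact setup.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Public MoveNet single-pose model (17 body keypoints per frame)
movenet = hub.load(
    "https://tfhub.dev/google/movenet/singlepose/lightning/4"
).signatures["serving_default"]

def keypoints(frame):                    # frame: (H, W, 3) uint8 tensor
    img = tf.image.resize_with_pad(frame[tf.newaxis], 192, 192)
    out = movenet(tf.cast(img, tf.int32))
    return tf.reshape(out["output_0"], [-1])   # 17 keypoints x (y, x, score)

# Dense classifier over the 51 keypoint values (hypothetical layout)
classifier = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(51,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # normal vs. drowsy
])
classifier.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
```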