• Title/Summary/Keyword: artificial neural net

157 search results

A Computer Aided Diagnosis Algorithm for Classification of Malignant Melanoma based on Deep Learning (딥 러닝 기반의 악성흑색종 분류를 위한 컴퓨터 보조진단 알고리즘)

  • Lim, Sangheon;Lee, Myungsuk
    • Journal of Korea Society of Digital Industry and Information Management, v.14 no.4, pp.69-77, 2018
  • Malignant melanoma accounts for about 1-3% of all malignant tumors in the West; in the US alone it causes more than 9,000 deaths each year. The features of skin lesions are generally difficult to detect from photographs. In this paper, we propose a deep learning-based computer-aided diagnosis algorithm for classifying malignant melanoma versus benign skin tumors in RGB skin images. The proposed model comprises a tumor lesion segmentation model and a malignant melanoma classification model. First, U-Net is used to segment the skin lesion area in the dermoscopic image. The segmented lesion images and the experts' labels are then used to train a ResNet classifier that distinguishes malignant melanoma from benign tumors. The U-Net model achieved a Dice similarity coefficient of 83.45% against the experts' labels, and the classification accuracy for malignant melanoma was 83.06%. These results suggest that the proposed artificial intelligence algorithm can serve as a computer-aided diagnosis tool and help detect malignant melanoma at an early stage.
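The Dice similarity coefficient used above to score the U-Net segmentation against the experts' labels can be computed as below. This is a minimal NumPy sketch, not the authors' code; the 4x4 toy masks are illustrative only.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

# Toy 4x4 masks: predicted lesion area vs. expert labeling
pred  = [[0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 1, 0],
         [0, 0, 0, 0]]
truth = [[0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
score = dice_coefficient(pred, truth)  # 2*5 / (5+6) = 10/11
```

A perfect segmentation gives a Dice score of 1.0; the paper's 83.45% corresponds to a score of 0.8345.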

Use of deep learning in nano image processing through the CNN model

  • Xing, Lumin;Liu, Wenjian;Liu, Xiaoliang;Li, Xin;Wang, Han
    • Advances in nano research, v.12 no.2, pp.185-195, 2022
  • Deep learning is a branch of artificial intelligence (AI) used for computer-aided diagnosis (CAD) and image processing in scientific research. Reading image slices involves many repetitive mechanical tasks, takes time, and is constrained geographically, and the strong subjectivity of interpreting image information raises the misdiagnosis rate. Because lung cancer has the highest mortality rate among cancers, biopsy is needed to determine its class for further treatment. Deep learning has recently provided powerful tools for diagnosing lung cancer and planning therapeutic regimens. However, identifying the pathological class of early-stage lung cancer from CT images is difficult owing to the absence of powerful AI models and public training data sets. A convolutional neural network (CNN) is proposed for its essential role in recognizing pathological CT images. 472 patients who underwent staging FDG-PET/CT within 2 months prior to surgery or biopsy were selected. The developed CNN achieved accuracies of 87%, 69%, and 69% on the training, validation, and test sets, respectively, for T1-T2 versus T3-T4 lung cancer classification. The results indicate that CNN-based classifiers can achieve better accuracy in distinguishing pathological CT images than several existing deep learning models, such as ResNet-34, AlexNet, and DenseNet, with or without softmax weights.
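The convolution → ReLU → pooling → linear pipeline at the core of such a CNN classifier can be sketched in a few lines of NumPy. This is purely a toy forward pass with random weights standing in for trained parameters, not the paper's model; the class mapping in the comment is illustrative.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core CNN operation."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def tiny_cnn_logits(image, kernels, weights):
    """Conv -> ReLU -> global average pool -> linear logits for 2 classes."""
    feats = [np.maximum(conv2d(image, k), 0).mean() for k in kernels]
    return np.asarray(feats) @ weights  # (n_kernels,) @ (n_kernels, 2)

rng = np.random.default_rng(0)
image = rng.random((8, 8))                       # stand-in for a CT slice
kernels = [rng.standard_normal((3, 3)) for _ in range(4)]
weights = rng.standard_normal((4, 2))
logits = tiny_cnn_logits(image, kernels, weights)
pred_class = int(np.argmax(logits))              # e.g. 0 = T1-T2, 1 = T3-T4
```

In a real model the kernels and weights are learned by backpropagation over many labeled CT slices.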

Research on the Main Memory Access Count According to the On-Chip Memory Size of an Artificial Neural Network (인공 신경망 가속기 온칩 메모리 크기에 따른 주메모리 접근 횟수 추정에 대한 연구)

  • Cho, Seok-Jae;Park, Sungkyung;Park, Chester Sungchung
    • Journal of IKEEE, v.25 no.1, pp.180-192, 2021
  • One widely used algorithm for image recognition and pattern detection is the convolutional neural network (CNN). To efficiently handle the convolution operations that account for the majority of computations in a CNN, hardware accelerators are used to improve the performance of CNN applications. When using these accelerators, the CNN fetches data from off-chip DRAM, since the massive volume of data makes it difficult to derive performance improvements from memory inside the accelerator alone. In other words, data communication between off-chip DRAM and memory inside the accelerator has a significant impact on the performance of CNN applications. In this paper, a CNN simulator is developed to analyze main memory (DRAM) accesses with respect to the size of the on-chip memory, or global buffer, inside the CNN accelerator. Simulating AlexNet, one of the standard CNN architectures, with increasing global buffer sizes, we found that a global buffer larger than 100 kB reduces the DRAM access count to about 0.8x that of a buffer smaller than 100 kB.
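The trade-off the simulator measures can be illustrated with a crude back-of-the-envelope model (my own simplification, not the paper's simulator): when a layer's working set does not fit in the global buffer, data must be re-streamed from DRAM, multiplying traffic.

```python
import math

def dram_traffic_bytes(working_set_bytes, buffer_bytes):
    """Toy model: data is re-fetched from DRAM roughly
    ceil(working_set / buffer) times when it does not fit on-chip."""
    refetch = math.ceil(working_set_bytes / buffer_bytes)
    return working_set_bytes * refetch

layer = 950_000  # bytes of ifmap + weights + ofmap for one toy conv layer
traffic_small = dram_traffic_bytes(layer, 50_000)     # 50 kB global buffer
traffic_large = dram_traffic_bytes(layer, 1_000_000)  # 1 MB global buffer
```

This reproduces the qualitative finding (a larger global buffer cuts DRAM accesses) but not the paper's measured 0.8x figure, which depends on the actual dataflow and tiling.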

PREDICTION OF THE REACTOR VESSEL WATER LEVEL USING FUZZY NEURAL NETWORKS IN SEVERE ACCIDENT CIRCUMSTANCES OF NPPS

  • Park, Soon Ho;Kim, Dae Seop;Kim, Jae Hwan;Na, Man Gyun
    • Nuclear Engineering and Technology, v.46 no.3, pp.373-380, 2014
  • Safety-related parameters are very important for confirming the status of a nuclear power plant. In particular, the reactor vessel water level has a direct impact on safety because it confirms reactor core cooling. In this study, the reactor vessel water level under severe accident conditions, where the water level cannot be measured, was predicted using a fuzzy neural network (FNN). The prediction model was developed using training data and validated using independent test data. The data were generated from simulations of the optimized power reactor 1000 (OPR1000) using the MAAP4 code. The informative data for training the FNN model were selected using the subtractive clustering method. The prediction performance for the reactor vessel water level was quite satisfactory, although a few large errors were occasionally observed. To check the effect of instrument errors, the prediction model was verified using data containing artificially added errors. The developed FNN model is sufficiently accurate to predict the reactor vessel water level in severe accident situations where the integrity of the water level sensor is compromised. Furthermore, if the FNN model is optimized using a wider variety of data, it should be possible to predict the reactor vessel water level precisely.
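The subtractive clustering step used to pick informative training points can be sketched as follows. This is the standard textbook formulation in NumPy; the neighborhood radius `ra` and the toy 2-D data are illustrative choices of mine, not the paper's settings.

```python
import numpy as np

def subtractive_clustering(data, ra=0.5, n_centers=2):
    """Pick representative points by subtractive (mountain) clustering:
    each point's potential is a sum of Gaussian kernels over all points;
    the highest-potential point becomes a center, and the potential in
    its neighborhood is then subtracted before picking the next one."""
    data = np.asarray(data, float)
    alpha = 4.0 / ra**2
    beta = 4.0 / (1.5 * ra)**2
    d2 = ((data[:, None, :] - data[None, :, :])**2).sum(-1)
    potential = np.exp(-alpha * d2).sum(axis=1)
    centers = []
    for _ in range(n_centers):
        idx = int(np.argmax(potential))
        centers.append(idx)
        potential = potential - potential[idx] * np.exp(-beta * d2[idx])
    return centers

# two well-separated groups; one center should come from each
data = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
        [2.0, 2.0], [2.1, 2.0], [2.0, 2.1]]
centers = subtractive_clustering(data, ra=0.5, n_centers=2)
```

In the paper this selection runs over simulated measurement vectors rather than 2-D points, keeping only the most informative samples for FNN training.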

Estimation of LOCA Break Size Using Cascaded Fuzzy Neural Networks

  • Choi, Geon Pil;Yoo, Kwae Hwan;Back, Ju Hyun;Na, Man Gyun
    • Nuclear Engineering and Technology, v.49 no.3, pp.495-503, 2017
  • Operators of nuclear power plants may not be equipped with sufficient information during a loss-of-coolant accident (LOCA), which can be fatal, or they may not have sufficient time to analyze the information they do have, even when that information is adequate. It is not easy to predict the progression of LOCAs in nuclear power plants. Therefore, accurate information on the LOCA break position and size should be provided to manage the accident efficiently. In this paper, the LOCA break size is predicted using a cascaded fuzzy neural network (CFNN) model. The inputs of the CFNN model are the time-integrated values of each measurement signal over an initial short time interval after reactor scram. The CFNN model is trained by a hybrid method combining a genetic algorithm with a least-squares method. As a result, the LOCA break size is estimated accurately by the proposed CFNN model.
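Forming the model inputs as time-integrated signals can be sketched as below. The signals are made up, and the trapezoidal rule with a 5 s window and 0.5 s sampling is my assumption, not the paper's specification.

```python
import numpy as np

def time_integrated_features(signals, dt):
    """Trapezoidal time integral of each measurement signal over the
    initial post-scram interval; the integrals are the model inputs."""
    y = np.asarray(signals, float)
    return dt * (y[:, :-1] + y[:, 1:]).sum(axis=1) / 2.0

# two toy signals sampled every 0.5 s for the first 5 s after scram
t = np.arange(0.0, 5.5, 0.5)
signals = np.vstack([np.exp(-t), 1.0 - np.exp(-t)])
features = time_integrated_features(signals, dt=0.5)
```

Integrating over a short fixed window compresses each transient signal into one number, which makes the early post-scram behavior usable as a fixed-size input vector.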

A comparative study of machine learning methods for automated identification of radioisotopes using NaI gamma-ray spectra

  • Galib, S.M.;Bhowmik, P.K.;Avachat, A.V.;Lee, H.K.
    • Nuclear Engineering and Technology, v.53 no.12, pp.4072-4079, 2021
  • This article presents a study of state-of-the-art methods for automated radioactive material detection and identification using gamma-ray spectra and modern machine learning methods. The work was inspired by recent developments in deep learning algorithms, and the proposed method performs better than the current state-of-the-art models. Machine learning models such as fully connected, recurrent, and convolutional networks and gradient-boosted decision trees are applied under a wide variety of testing conditions, and their advantages and disadvantages are discussed. Furthermore, a hybrid model is developed by combining fully connected and convolutional neural networks, which shows the best performance among the different machine learning models. These improvements are reflected in the model's test performance metric (F1 score) of 93.33%, an improvement of 2%-12% over the state-of-the-art model under various conditions. The experimental results show that fusing classical neural networks with modern deep learning architectures is a suitable choice for interpreting gamma spectra data where real-time and remote detection are necessary.
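The F1 metric reported above is the harmonic mean of precision and recall; a minimal sketch (the toy labels are illustrative, not the paper's data):

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 = harmonic mean of precision and recall for one class
    (e.g. one radioisotope)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1, 0, 1]
score = f1_score(y_true, y_pred)  # precision = recall = 4/5, so F1 = 0.8
```

Unlike plain accuracy, F1 stays informative when positive detections are rare, which matters for spectra where the target isotope appears infrequently.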

Prediction of critical heat flux for narrow rectangular channels in a steady state condition using machine learning

  • Kim, Huiyung;Moon, Jeongmin;Hong, Dongjin;Cha, Euiyoung;Yun, Byongjo
    • Nuclear Engineering and Technology, v.53 no.6, pp.1796-1809, 2021
  • The subchannel of a research reactor used to generate high power density is designed to be narrow and rectangular and comprises plate-type fuels operating under downward flow conditions. Critical heat flux (CHF) is a crucial parameter for estimating the safety of a nuclear fuel; hence, this parameter should be accurately predicted. Here, machine learning is applied to the prediction of CHF in a narrow rectangular channel. Although machine learning can effectively analyze large amounts of complex data, its application to CHF, particularly for narrow rectangular channels, remains challenging because of the limited flow conditions available in existing experimental databases. To resolve this problem, we used four CHF correlations to generate pseudo-data for training an artificial neural network. We also propose a network architecture that includes pre-training and prediction stages to predict and analyze the CHF. The trained neural network predicted the CHF with an average error of 3.65% and a root-mean-square error of 17.17% on the test pseudo-data, and with errors of 0.9% and 26.4%, respectively, on the experimental data, which were not used during training. Finally, machine learning was applied to quantitatively investigate the parametric effect on the CHF in narrow rectangular channels under downward flow conditions.
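Generating pseudo-data from a correlation can be sketched as below. The correlation form, coefficients, units, and parameter ranges here are placeholders of my own for illustration; they are not the four CHF correlations the authors used.

```python
import numpy as np

rng = np.random.default_rng(42)

def chf_correlation(mass_flux, subcooling):
    """Placeholder CHF correlation (illustrative form only):
    CHF grows with mass flux and inlet subcooling."""
    return 0.1 * np.sqrt(mass_flux) * (1.0 + 0.02 * subcooling)

# pseudo-data: sample flow conditions, label them with the correlation
mass_flux = rng.uniform(100.0, 2000.0, size=500)   # kg/m^2/s (toy range)
subcooling = rng.uniform(5.0, 60.0, size=500)      # K (toy range)
X = np.column_stack([mass_flux, subcooling])       # network inputs
y = chf_correlation(mass_flux, subcooling)         # pseudo-labels
```

The pseudo-labeled pairs (X, y) then pre-train the network, after which the sparse experimental data can fine-tune or validate it.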

Multi-objective optimization of printed circuit heat exchanger with airfoil fins based on the improved PSO-BP neural network and the NSGA-II algorithm

  • Jiabing Wang;Linlang Zeng;Kun Yang
    • Nuclear Engineering and Technology, v.55 no.6, pp.2125-2138, 2023
  • The printed circuit heat exchanger (PCHE) with airfoil fins has the benefits of high compactness, high efficiency, and superior heat transfer performance. A novel multi-objective optimization approach for designing the airfoil fin PCHE is presented in this paper. Three optimization design variables (the vertical number, the horizontal number, and the staggered number) are obtained from dimensionless airfoil fin arrangement parameters, and the optimization objectives are to maximize the Nusselt number (Nu) and minimize the Fanning friction factor (f). First, a parametric study via the design of experiments is conducted to investigate the impact of the design variables on the thermal-hydraulic performance. Subsequently, the relationships between the three design variables and the two objective functions (Nu and f) are characterized by an improved particle swarm optimization-backpropagation artificial neural network. Finally, a multi-objective optimization using the non-dominated sorting genetic algorithm II constructs the Pareto optimal front. The comprehensive performance is found to be best when the airfoil fins are arranged in a completely staggered pattern, and the best compromise solution, identified using the TOPSIS method, achieves the required combination of high heat transfer performance and low flow resistance.
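Picking the best compromise from a Pareto front with TOPSIS can be sketched as follows. This is the standard TOPSIS procedure with vector normalization; the three-point (Nu, f) front is a toy example, not the paper's results.

```python
import numpy as np

def topsis(pareto, benefit):
    """Rank Pareto-front solutions by relative closeness to the ideal
    point. benefit[j] is True if objective j is maximized (e.g. Nu)
    and False if minimized (e.g. f)."""
    m = np.asarray(pareto, float)
    norm = m / np.sqrt((m**2).sum(axis=0))          # vector normalization
    ideal = np.where(benefit, norm.max(axis=0), norm.min(axis=0))
    worst = np.where(benefit, norm.min(axis=0), norm.max(axis=0))
    d_best = np.sqrt(((norm - ideal)**2).sum(axis=1))
    d_worst = np.sqrt(((norm - worst)**2).sum(axis=1))
    closeness = d_worst / (d_best + d_worst)
    return int(np.argmax(closeness)), closeness

# toy Pareto front of (Nu, f) pairs; Nu is a benefit, f is a cost
front = [[50.0, 0.020], [60.0, 0.030], [70.0, 0.050]]
best_idx, closeness = topsis(front, benefit=[True, False])
```

Equal objective weights are assumed here; weighting the normalized matrix before computing distances generalizes this to preferences between Nu and f.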

An intelligent optimization method for the HCSB blanket based on an improved multi-objective NSGA-III algorithm and an adaptive BP neural network

  • Wen Zhou;Guomin Sun;Shuichiro Miwa;Zihui Yang;Zhuang Li;Di Zhang;Jianye Wang
    • Nuclear Engineering and Technology, v.55 no.9, pp.3150-3163, 2023
  • Improving blanket performance means maximizing the tritium breeding ratio (TBR) for tritium self-sufficiency while minimizing the backplate dose for radiation protection. Most previous studies relied on manual corrections to the blanket structure to achieve an optimized design, but because the problem involves multiphysics-field design, it is difficult to find an optimal structure this way: the process tends to become trapped in local optima and is inefficient and time-consuming. Artificial intelligence (AI) is a potential method for blanket design optimization. This paper therefore develops an intelligent optimization method based on an improved multi-objective NSGA-III algorithm and an adaptive BP neural network to solve these problems. The method is applied to optimizing the radial arrangement of a conceptual CFETR HCSB blanket design. A series of optimal radial arrangements is obtained under the constraints that the temperature of each blanket component does not exceed its limit and the radial length remains unchanged, and the efficiency of the blanket optimization design is significantly improved. This study provides clues and inspiration for applying artificial intelligence technology to the optimization design of blankets.
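The non-dominated sorting at the core of NSGA-III rests on Pareto dominance. For the two objectives here (maximize TBR, minimize backplate dose) the dominance check and front extraction can be sketched as below; the candidate values are toy numbers, not actual blanket results.

```python
def dominates(a, b):
    """a dominates b when a is no worse in both objectives
    (TBR to maximize, dose to minimize) and strictly better in one."""
    tbr_a, dose_a = a
    tbr_b, dose_b = b
    no_worse = tbr_a >= tbr_b and dose_a <= dose_b
    better = tbr_a > tbr_b or dose_a < dose_b
    return no_worse and better

def pareto_front(solutions):
    """Candidate arrangements that no other arrangement dominates."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# toy (TBR, dose) candidates for four radial arrangements
candidates = [(1.15, 3.0), (1.20, 4.0), (1.10, 2.0), (1.12, 5.0)]
front = pareto_front(candidates)  # (1.12, 5.0) is dominated by (1.15, 3.0)
```

NSGA-III repeats this sorting generation after generation, then uses reference directions to keep the surviving front well spread.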

Physics informed neural networks for surrogate modeling of accidental scenarios in nuclear power plants

  • Federico Antonello;Jacopo Buongiorno;Enrico Zio
    • Nuclear Engineering and Technology, v.55 no.9, pp.3409-3416, 2023
  • Licensing the next generation of nuclear reactor designs requires extensive use of Modeling and Simulation (M&S) to investigate system response to many operational conditions, identify possible accidental scenarios, and predict their evolution to undesirable consequences that are to be prevented or mitigated via the deployment of adequate safety barriers. Deep Learning (DL) and Artificial Intelligence (AI) can support M&S computationally by providing surrogates of the complex multi-physics high-fidelity models used for design. However, DL and AI are generally low-fidelity 'black-box' models that do not guarantee any structure based on physical laws and constraints, and may thus lack interpretability and accuracy. This poses limitations on their credibility and raises doubts about their adoption for the safety assessment and licensing of novel reactor designs. In this regard, Physics Informed Neural Networks (PINNs) are receiving growing attention for their ability to integrate fundamental physics laws and domain knowledge into the neural networks, thus assuring reliable generalization capabilities and credible predictions. This paper presents the use of PINNs as surrogate models for accidental scenario simulation in Nuclear Power Plants (NPPs). A case study of a Loss of Heat Sink (LOHS) accidental scenario in a Nuclear Battery (NB), a unique class of transportable, plug-and-play microreactors, is considered. A PINN is developed and compared with a Deep Neural Network (DNN). The results show the advantages of PINNs in providing accurate solutions, avoiding overfitting and underfitting, and intrinsically ensuring physics-consistent results.
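The defining ingredient of a PINN is a loss that adds a physics-residual term to the usual data misfit. A minimal sketch follows, with a toy lumped heat balance standing in for the NB thermal model; the equation, its coefficients, and the linear ansatz replacing the neural network are my illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def physics_residual(t, T_fn, dTdt_fn, power=1.0, h=0.2, T_env=0.0, C=1.0):
    """Residual of a toy lumped energy balance C*dT/dt = P - h*(T - T_env)."""
    return C * dTdt_fn(t) - (power - h * (T_fn(t) - T_env))

def pinn_loss(params, t_data, T_data, t_colloc):
    """Data misfit + physics residual: the two terms a PINN minimizes.
    The 'network' here is a linear ansatz T(t) = a + b*t for clarity."""
    a, b = params
    T_fn = lambda t: a + b * t
    dTdt_fn = lambda t: np.full_like(t, b)
    data_term = np.mean((T_fn(t_data) - T_data) ** 2)
    phys_term = np.mean(physics_residual(t_colloc, T_fn, dTdt_fn) ** 2)
    return data_term + phys_term

t_data = np.linspace(0.0, 1.0, 5)
T_data = 0.5 + 1.0 * t_data             # toy 'measured' temperatures
t_colloc = np.linspace(0.0, 2.0, 20)    # collocation points, physics term
loss_good = pinn_loss((0.5, 1.0), t_data, T_data, t_colloc)
loss_bad = pinn_loss((5.0, -2.0), t_data, T_data, t_colloc)
```

Because the physics term is evaluated at collocation points where no data exist, minimizing this composite loss is what gives a PINN its physics-consistent generalization beyond the training data.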