• Title/Summary/Keyword: neural network optimization

Search Result 816, Processing Time 0.029 seconds

Extraction of the OLED Device Parameter based on Randomly Generated Monte Carlo Simulation with Deep Learning (무작위 생성 심층신경망 기반 유기발광다이오드 흑점 성장가속 전산모사를 통한 소자 변수 추출)

  • You, Seung Yeol;Park, Il-Hoo;Kim, Gyu-Tae
    • Journal of the Semiconductor & Display Technology / v.20 no.3 / pp.131-135 / 2021
  • The number of studies on optimizing organic light-emitting diode (OLED) design through machine learning is increasing. We propose a method of generating images to assess device performance in combination with machine-learning techniques. The principal parameters governing the dark-spot growth mechanism of an OLED can be the key factors determining its long-term performance. Captured images from actual devices, together with images randomly generated for a specific time and initial pinhole state, are fed into a deep neural network. The simulation, reinforced by machine learning, can predict the device parameters accurately and quickly. Similarly, inverse design using a multilayer perceptron (MLP) can infer the initial degradation factors at manufacturing from given device parameters, providing feedback to the design of the manufacturing process.
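As a rough illustration of the forward-modeling idea, a network mapping degradation images to a device parameter, the sketch below trains a tiny two-layer MLP in NumPy on synthetic stand-in data. The data, shapes, and training setup are illustrative assumptions, not the paper's.

```python
import numpy as np

# Minimal sketch (hypothetical shapes and data): an MLP mapping flattened
# "dark-spot images" to a single growth parameter, trained by plain
# gradient descent. Not the network or dataset from the paper.
rng = np.random.default_rng(0)

def mlp_forward(x, W1, b1, W2, b2):
    h = np.maximum(0.0, x @ W1 + b1)  # ReLU hidden layer
    return h @ W2 + b2                # linear output: predicted parameter

# Synthetic stand-in data: 64-pixel "images" whose mean brightness
# encodes the underlying parameter.
X = rng.random((200, 64))
y = X.mean(axis=1, keepdims=True)

W1 = rng.normal(0, 0.1, (64, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.1, (16, 1));  b2 = np.zeros(1)

lr = 0.05
for _ in range(500):
    h = np.maximum(0.0, X @ W1 + b1)
    err = (h @ W2 + b2) - y
    # Backpropagation through both layers
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (h > 0)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((mlp_forward(X, W1, b1, W2, b2) - y) ** 2))
```

The inverse-design MLP the abstract mentions would simply swap inputs and outputs of such a model (parameters in, degradation factors out).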

Optimization of Action Recognition based on Slowfast Deep Learning Model using RGB Video Data (RGB 비디오 데이터를 이용한 Slowfast 모델 기반 이상 행동 인식 최적화)

  • Jeong, Jae-Hyeok;Kim, Min-Suk
    • Journal of Korea Multimedia Society / v.25 no.8 / pp.1049-1058 / 2022
  • Human action recognition (HAR), such as anomaly and object detection, has become a trend in research fields that apply artificial intelligence (AI) methods to analyze patterns of human action in crime-ridden areas, media services, and industrial facilities. Especially in real-time systems using video streaming data, HAR has become an important AI-based research field for application development, and many different research directions using HAR are currently being developed and improved. In this paper, we propose and analyze a deep-learning-based HAR scheme that is more efficient, uses an intelligent AI model, and can be applied to media services using RGB video streaming data without feature-extraction pre-processing. For the method, we adopt SlowFast, a deep neural network (DNN) model, on an open dataset (HMDB-51 or UCF101) to improve prediction accuracy.
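The defining trick of SlowFast is its dual-rate temporal sampling: a fast pathway sees every frame of the clip while a slow pathway sees only a strided subset. A minimal sketch of that sampling step (the speed ratio alpha=4 and the clip shape are assumptions, not values from the paper):

```python
import numpy as np

def slowfast_sample(video, alpha=4):
    """video: (T, H, W, C) RGB clip -> (slow, fast) pathway inputs."""
    fast = video           # full frame rate: captures fine motion
    slow = video[::alpha]  # every alpha-th frame: captures semantics
    return slow, fast

clip = np.zeros((32, 8, 8, 3), dtype=np.uint8)  # dummy 32-frame RGB clip
slow, fast = slowfast_sample(clip)
```

In the full model the two sampled streams feed two 3D-CNN pathways with lateral connections; only the sampling is shown here.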

Application of artificial intelligence for solving the engineering problems

  • Xiaofei Liu;Xiaoli Wang
    • Structural Engineering and Mechanics / v.85 no.1 / pp.15-27 / 2023
  • Using artificial intelligence and Internet-of-Things methods for engineering and industrial problems has become widespread in recent years. Low computational cost and high accuracy, without the need to engage human resources, are the main advantages of artificial intelligence for engineering demands. In the present paper, a deep neural network (DNN) with a specific optimization method is utilized to predict the fundamental natural frequency of a cylindrical structure. To provide data for training the DNN, a detailed numerical analysis is presented with the aid of functionally modified couple stress theory (FMCS) and first-order shear deformation theory (FSDT). The governing equations, obtained using Hamilton's principle, are solved using the generalized differential quadrature method. The results of the numerical solution are used to train and test the DNN model. The results are first validated, and comprehensive parametric results are presented thereafter. They show the high accuracy of the DNN and the effects of different geometrical, modeling, and material parameters on the natural frequencies of the structure.
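The workflow described (simulate to generate data, then train and test a surrogate) can be sketched as follows. A toy polynomial frequency law and a least-squares surrogate stand in for the FSDT/GDQ simulation and the DNN, so all names, ranges, and coefficients here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical geometry samples (ranges are made up)
L = rng.uniform(1.0, 2.0, 300)    # cylinder length
h = rng.uniform(0.01, 0.05, 300)  # wall thickness

# Toy polynomial "frequency law" standing in for the numerical simulation
freq = 10.0 + 50.0 * h - 3.0 * L + 2.0 * L * h

# Surrogate on polynomial features: fit on 200 samples, test on 100
X = np.column_stack([np.ones_like(L), L, h, L * h, L**2])
coef, *_ = np.linalg.lstsq(X[:200], freq[:200], rcond=None)
pred = X[200:] @ coef
rel_err = float(np.mean(np.abs(pred - freq[200:]) / freq[200:]))
```

The paper's point is exactly this substitution: once trained on simulation outputs, the cheap model replaces the expensive solver for parametric studies.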

Comparison of Activation Functions for a Shrinkage Prediction Model Using a DNN (DNN을 활용한 콘크리트 건조수축 예측 모델의 활성화 함수 비교분석)

  • Han, Jun-Hui;Kim, Su-Hoo;Han, Soo-Hwan;Beak, Sung-Jin;Kim, Jong;Han, Min-Cheol
    • Proceedings of the Korean Institute of Building Construction Conference / 2022.11a / pp.121-122 / 2022
  • In this study, we compared and analyzed various activation functions to present a methodology for developing an artificial-intelligence-based prediction system. As a result of the analysis, ELU performed best, with an RMSE of 62.87, an R2 of 0.96, and an error rate of 4%. However, it is considered desirable to construct the prediction system by combining the algorithm models for optimization.
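For reference, the activation functions compared in studies like this one have standard definitions. The sketch below shows ReLU and ELU alongside the RMSE metric the abstract reports (sample values are arbitrary):

```python
import numpy as np

def relu(x):
    # max(0, x): negative inputs are clamped to a hard zero
    return np.maximum(0.0, x)

def elu(x, alpha=1.0):
    # ELU keeps small negative outputs, alpha*(e^x - 1), which avoids
    # "dead" units and is presumably why it edged out ReLU here
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def rmse(y_true, y_pred):
    # The error metric reported in the abstract
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

x = np.array([-2.0, -0.5, 0.0, 1.5])
relu_out, elu_out = relu(x), elu(x)
```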


Effect of preparation of organic ferroelectric P(VDF-TrFE) nanostructure on the improvement of tennis performance

  • Qingyu Wang
    • Advances in nano research / v.14 no.4 / pp.329-334 / 2023
  • Organic ferroelectric materials have found vast application in a variety of engineering and health-technology fields. In the present study, we investigated the application of deformable organic ferroelectrics to motion measurement and performance improvement in tennis players. The flexible ferroelectric material P(VDF-TrFE) can be used in wearable motion sensors that transfer velocity and acceleration data to collecting devices, where the best poses and movements for hitting the ball and moving across the court are analyzed. To do so, ferroelectric-based wearable sensors were placed at four different locations on the player's body to analyze movement, with an additional sensor on the tennis ball to record its velocity and acceleration. In addition, the players' poses were analyzed to find those achieving the best acceleration and movement. The results indicated that organic ferroelectric-based sensors can effectively sense the motion of tennis players and can be used to optimize posing and ball hitting in real games.

Comparison of Loss-Function Minimization Techniques for a Strength Prediction Model Using a DNN (DNN을 활용한 강도예측모델의 손실함수 최소화 기법 비교분석)

  • Han, Jun-Hui;Kim, Su-Hoo;Beak, Sung-Jin;Han, Soo-Hwan;Kim, Jong;Han, Min-Cheol
    • Proceedings of the Korean Institute of Building Construction Conference / 2022.04a / pp.182-183 / 2022
  • In this study, we compared and analyzed various loss-function minimization techniques to present a methodology for developing an artificial-intelligence-based prediction system. As a result of the analysis, He initialization performed best, with an RMSE of 3.78, an R2 of 0.94, and an error rate of 6%. However, it is considered desirable to construct the prediction system by combining the techniques for optimization.
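He (Kaiming) initialization, the technique reported best here, has a standard form: weights drawn with variance 2/fan_in, so that activation variance is preserved through ReLU layers. A minimal sketch (the layer sizes are arbitrary):

```python
import numpy as np

def he_init(fan_in, fan_out, rng):
    # He normal initialization: std = sqrt(2 / fan_in), matched to ReLU
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

rng = np.random.default_rng(42)
W = he_init(256, 128, rng)
empirical_std = float(W.std())  # should be close to sqrt(2/256) ~ 0.088
```

Strictly speaking this is a weight-initialization choice rather than a loss-minimization technique, but it directly affects how well gradient descent minimizes the loss, which is how the abstract frames it.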


Optimization of Model based on Relu Activation Function in MLP Neural Network Model

  • Ye Rim Youn;Jinkeun Hong
    • International journal of advanced smart convergence / v.13 no.2 / pp.80-87 / 2024
  • This paper focuses on improving accuracy in constrained computing settings by employing the ReLU (rectified linear unit) activation function. The research involves modifying parameters of the ReLU function and comparing performance in terms of accuracy and computation time. Specifically, this paper optimizes ReLU in the context of a multilayer perceptron (MLP) by determining ideal values for features such as the dimensions of the linear layers and the learning rate (lr). To optimize performance, the paper experiments with adjusting the linear-layer dimensions and lr values to induce the best outcomes. The experimental results show that using ReLU alone yielded the highest accuracy of 96.7% when the dimension sizes were 30 - 10 and the lr value was 1. When combining ReLU with the Adam optimizer, the optimal model configuration had dimension sizes of 60 - 40 - 10 and an lr value of 0.001, which resulted in the highest accuracy of 97.07%.
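For reference, a single Adam update step (the standard Kingma-and-Ba rule) at the reported best learning rate of 0.001 looks like this; the weight and gradient values are dummies, not anything from the paper:

```python
import numpy as np

def adam_step(w, g, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    # First and second moment estimates with bias correction
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    m_hat = m / (1 - b1 ** t)
    v_hat = v / (1 - b2 ** t)
    # Per-parameter adaptive step
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w = np.array([1.0]); m = np.zeros(1); v = np.zeros(1)
w, m, v = adam_step(w, np.array([0.5]), m, v, t=1)
# On the very first step the bias-corrected update magnitude is ~lr
```

The per-parameter step scaling is why Adam tolerates a much smaller lr (0.001) than the plain-ReLU setup's lr of 1 reported above.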

A hybrid algorithm for the synthesis of computer-generated holograms

  • Nguyen The Anh;An Jun Won;Choe Jae Gwang;Kim Nam
    • Proceedings of the Optical Society of Korea Conference / 2003.07a / pp.60-61 / 2003
  • A new approach to reducing the computation time of the genetic algorithm (GA) for making binary phase holograms is described. Synthesized holograms with a diffraction efficiency of 75.8% and a uniformity of 5.8% are proven in computer simulation and demonstrated experimentally. Recently, computer-generated holograms (CGHs) with high diffraction efficiency and design flexibility have been widely developed for applications such as optical information processing, optical computing, and optical interconnection. Among the proposed optimization methods, the GA has become popular due to its capability of reaching a nearly global optimum. However, there exists a drawback to consider when using the genetic algorithm: the large amount of computation time needed to construct the desired holograms. One of the major reasons the GA's operation is time-intensive is the expense of computing the cost function, which must Fourier-transform the parameters encoded on the hologram into a fitness value. To remedy this drawback, artificial neural networks (ANNs) have been put forward, allowing CGHs to be created easily and quickly [1], but the quality of the reconstructed images is not high enough for applications requiring high precision. We therefore attempt a new approach that combines the good properties and performance of both the GA and the ANN to make CGHs of high diffraction efficiency in a short time. The optimization of a CGH using the genetic algorithm is a process of iteration, including selection, crossover, and mutation operators [2]. It is worth noting that the evaluation of the cost function, with the aim of selecting better holograms, plays an important role in the implementation of the GA. However, this evaluation process wastes much time Fourier-transforming the encoded parameters on the hologram into the value to be solved; depending on the speed of the computer, it can last up to ten minutes.
It would be more effective if, instead of merely generating random holograms in the initial step, a set of approximately desired holograms were employed. By doing so, the initial population contains fewer trial holograms, which is equivalent to reducing the GA's computation time. Accordingly, a hybrid algorithm that uses a trained neural network to initiate the GA's procedure is proposed: the initial population contains fewer random holograms and is complemented by approximately desired ones. Figure 1 is the flowchart of the hybrid algorithm in comparison with the classical GA. The procedure of synthesizing a hologram on the computer is divided into two steps. First, the simulation of holograms based on the ANN method [1] is carried out to acquire approximately desired holograms. With a teaching data set of 9 characters obtained from the classical GA, 3 layers, 100 hidden nodes, a learning rate of 0.3, and a momentum of 0.5, the trained artificial neural network attains approximately desired holograms in fairly good agreement with the theory. In the second step, the effect of several parameters on the operation of the hybrid algorithm is investigated. In principle, the operation of the hybrid algorithm and the GA are the same except for the modified initial step. Hence, the parameters verified in Ref. [2], such as the probabilities of crossover and mutation, the tournament size, and the crossover block size, remain unchanged, aside from the reduced population size. A reconstructed image of 76.4% diffraction efficiency and 5.4% uniformity is achieved when the population size is 30, the number of iterations is 2000, the probability of crossover is 0.75, and the probability of mutation is 0.001. A comparison between the hybrid algorithm and the GA in terms of diffraction efficiency and computation time is also evaluated, as shown in Fig. 2.
With a 66.7% reduction in computation time and a 2% increase in diffraction efficiency compared to the GA method, the hybrid algorithm demonstrates its efficiency. In the optical experiment, the phase holograms were displayed on a programmable phase modulator (model XGA). Figure 3 shows pictures of diffracted patterns of the letter "0" from holograms generated using the hybrid algorithm. A diffraction efficiency of 75.8% and a uniformity of 5.8% are measured, so the simulation and experimental results are in fairly good agreement. In this paper, the genetic algorithm and neural network have been successfully combined in designing CGHs. This method gives a significant reduction in computation time compared to the GA method while still achieving holograms of high diffraction efficiency and uniformity. This work was supported by No.mOl-2001-000-00324-0 (2002) from the Korea Science & Engineering Foundation.
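The hybrid idea, seeding part of the GA's initial population with approximately good individuals instead of purely random ones, can be sketched on a toy binary target. The `ann_guess` function below is a stand-in for the trained network, and all sizes, rates, and the 32-bit "hologram" are illustrative, not the paper's:

```python
import random

random.seed(0)
TARGET = [1, 0, 1, 1, 0, 0, 1, 0] * 4  # toy 32-bit binary "hologram"

def fitness(ind):
    # Stand-in for the Fourier-transform cost function: bit agreement
    return sum(a == b for a, b in zip(ind, TARGET))

def ann_guess():
    # Stand-in for the trained ANN: a near-target individual with 10% noise
    return [b if random.random() < 0.9 else 1 - b for b in TARGET]

def random_ind():
    return [random.randint(0, 1) for _ in TARGET]

# Hybrid initial population: half ANN-seeded, half random
pop = [ann_guess() for _ in range(15)] + [random_ind() for _ in range(15)]

for _ in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                    # truncation selection (elitist)
    children = []
    for _ in range(20):
        a, b = random.sample(parents, 2)
        cut = random.randrange(len(TARGET))
        child = a[:cut] + b[cut:]         # one-point crossover
        if random.random() < 0.05:        # occasional single-bit mutation
            i = random.randrange(len(TARGET))
            child[i] = 1 - child[i]
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
```

Because the seeded individuals start close to the target, far fewer generations (and cost-function evaluations) are needed than with an all-random start, which is the source of the reported time savings.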


The Optimal Extraction Method of Adder Sharing Component for Inner Product and its Application to DCT Design (내적연산을 위한 가산기 공유항의 최적 추출기법 제안 및 이를 이용한 DCT 설계)

  • Im, Guk-Chan;Jang, Yeong-Jin;Lee, Hyeon-Su
    • Journal of the Institute of Electronics Engineers of Korea SD / v.38 no.7 / pp.503-512 / 2001
  • General DSP algorithms, such as orthogonal transforms and filter processing, need an efficient hardware architecture to compute inner products. The typical MAC architecture has a high silicon cost, so multiplier-less distributed arithmetic is widely used to implement inner products. This paper presents an optimization that reduces the hardware required for distributed arithmetic by using an extraction method for adder sharing components. The optimization process uses a Boltzmann machine, a type of neural network. The proposed method addresses the complexity that grows with the depth of the inner product and composes an optimal summation network with the minimum number of full adders (FA) and flip-flops (FF) in a short time. A DCT designed using the proposed method is more efficient than a ROM-based distributed arithmetic design.
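Distributed arithmetic, the multiplier-less scheme the paper optimizes, replaces per-sample multiplications with table lookups and shift-accumulates: the inner product is processed one input-bit-slice at a time against a precomputed table of coefficient subset sums. A minimal sketch for a fixed 4-tap inner product with unsigned 4-bit inputs (the coefficients are arbitrary):

```python
# Multiplier-less distributed arithmetic for y = sum(c[i] * x[i])
C = [3, 5, 2, 7]  # fixed coefficients
B = 4             # input bit width (unsigned)

# Table indexed by the bit pattern (x3..x0) at one bit position:
# TABLE[k] = sum of C[i] for every i whose bit is set in k
TABLE = [sum(c for c, bit in zip(C, reversed(format(k, "04b"))) if bit == "1")
         for k in range(16)]

def da_inner_product(x):
    acc = 0
    for b in range(B):  # one bit-slice per clock cycle in hardware
        addr = sum(((xi >> b) & 1) << i for i, xi in enumerate(x))
        acc += TABLE[addr] << b  # lookup + shift-accumulate, no multiplier
    return acc

result = da_inner_product([1, 2, 3, 4])  # equals 3*1 + 5*2 + 2*3 + 7*4 = 47
```

The paper's contribution sits one level deeper, sharing adder terms inside the summation network that a plain ROM table like `TABLE` would otherwise duplicate.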


Parameter Analysis for Super-Resolution Network Model Optimization of LiDAR Intensity Image (LiDAR 반사 강도 영상의 초해상화 신경망 모델 최적화를 위한 파라미터 분석)

  • Seungbo Shim
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.5 / pp.137-147 / 2023
  • LiDAR is used in autonomous driving and various industrial fields to measure the size and distance of objects. The sensor also provides intensity images based on the amount of reflected light, which aids sensor-data processing by providing information on object shape. LiDAR delivers higher performance as its resolution increases, but at increased cost; the same holds for LiDAR intensity images, for which expensive equipment is essential to acquire high resolution. This study developed an artificial-intelligence model that enhances low-resolution LiDAR intensity images into high-resolution ones, and to that end performed parameter analysis for the optimal super-resolution neural network model. The super-resolution algorithm was trained and verified using 2,500 LiDAR intensity images. As a result, the resolution of the intensity images was improved. These results can be applied to the autonomous-driving field to help improve driving-environment recognition and obstacle-detection performance.
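Many super-resolution networks end with a sub-pixel (pixel-shuffle) layer that rearranges r*r learned feature channels into an r-times larger image; whether this paper's model uses that layer is not stated, so the sketch below is a generic illustration of the upscaling step only:

```python
import numpy as np

def pixel_shuffle(feat, r):
    """feat: (r*r, H, W) feature channels -> (H*r, W*r) upscaled map.

    Implements out[h*r + i, w*r + j] = feat[i*r + j, h, w], the standard
    sub-pixel rearrangement used at the output of SR networks.
    """
    c, H, W = feat.shape
    assert c == r * r
    return feat.reshape(r, r, H, W).transpose(2, 0, 3, 1).reshape(H * r, W * r)

feat = np.arange(4 * 2 * 2).reshape(4, 2, 2).astype(float)  # 4 channels, 2x2
hr = pixel_shuffle(feat, 2)                                 # -> 4x4 image
```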