• Title/Summary/Keyword: Gated Recurrent Unit (게이트 순환 유닛)

Data Processing and Analysis of Non-Intrusive Electrical Appliances Load Monitoring in Smart Farm (스마트팜 개별 전기기기의 비간섭적 부하 식별 데이터 처리 및 분석)

  • Kim, Hong-Su;Kim, Ho-Chan;Kang, Min-Jae;Jwa, Jeong-Woo
    • Journal of IKEEE / v.24 no.2 / pp.632-637 / 2020
  • Non-intrusive load monitoring (NILM) is a cost-effective way to monitor, in real time, the energy consumption and time of use of each appliance in a home or business from the aggregate energy measured by a single recording meter. In this paper, we collect power consumption data from the smart farm's data acquisition system on a server via an LTE modem, convert the total power consumption and the power of the individual electric devices into HDF5 format, and perform NILM analysis. The NILM analysis uses open-source denoising autoencoder (DAE), long short-term memory (LSTM), gated recurrent unit (GRU), and sequence-to-point (seq2point) learning methods.
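A minimal sketch of one of the approaches named in this abstract: a GRU-based sequence-to-point disaggregator that maps a window of aggregate power to one appliance's power at the window midpoint. The window length, layer sizes, and the dummy data standing in for readings loaded from HDF5 are assumptions for illustration, not the paper's settings.

```python
# Illustrative sketch only: GRU-based sequence-to-point NILM regression.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 99  # hypothetical window of aggregate mains samples

def build_seq2point_gru():
    model = models.Sequential([
        layers.Input(shape=(WINDOW, 1)),            # aggregate power window
        layers.Conv1D(16, 4, activation="relu", padding="same"),
        layers.GRU(64, return_sequences=True),      # gated recurrent unit layers
        layers.GRU(128),
        layers.Dense(64, activation="relu"),
        layers.Dense(1),                            # appliance power at the window midpoint
    ])
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model

if __name__ == "__main__":
    model = build_seq2point_gru()
    # dummy arrays standing in for mains/appliance readings from the HDF5 files
    x = np.random.rand(256, WINDOW, 1).astype("float32")
    y = np.random.rand(256, 1).astype("float32")
    model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```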

Development of Convolutional Network-based Denoising Technique using Deep Reinforcement Learning in Computed Tomography (심층강화학습을 이용한 Convolutional Network 기반 전산화단층영상 잡음 저감 기술 개발)

  • Cho, Jenonghyo;Yim, Dobin;Nam, Kibok;Lee, Dahye;Lee, Seungwan
    • Journal of the Korean Society of Radiology / v.14 no.7 / pp.991-1001 / 2020
  • Supervised deep learning techniques for improving the image quality of computed tomography (CT) require a large amount of training data, and when input images have different characteristics from the training images, they cause structural distortion in the output images. In this study, an imaging model based on deep reinforcement learning (DRL) was developed to overcome these drawbacks and reduce noise in CT images. The DRL model consisted of shared, value, and policy networks, and the networks included convolutional layers, rectified linear units (ReLU), dilation factors, and a gated recurrent unit (GRU) in order to extract noise features from CT images and improve the performance of the DRL model. The quality of the CT images obtained with the DRL model was compared to that obtained with a supervised deep learning model. The results showed that the image accuracy of the DRL model was higher, and the image noise lower, than those of the supervised deep learning model. The DRL model also reduced noise in CT images whose characteristics differed from those of the training images. Therefore, the DRL model is able to reduce image noise while maintaining the structural information of CT images.
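A minimal sketch of the kind of network this abstract describes: a shared dilated-convolution feature extractor with a GRU carrying state across iterative denoising steps, feeding separate policy and value heads. Channel counts, the number of actions, and the per-pixel recurrence are assumptions for illustration; the paper's exact architecture may differ.

```python
# Illustrative sketch only: actor-critic style denoiser with shared conv features,
# dilation, a GRU state, and policy/value heads (PyTorch).
import torch
import torch.nn as nn

class SharedValuePolicyNet(nn.Module):
    def __init__(self, channels=64, actions=9):
        super().__init__()
        # shared convolutional feature extractor with dilated convolutions
        self.shared = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=4, dilation=4), nn.ReLU(),
        )
        # GRU cell keeps a per-pixel state across iterative denoising steps
        self.gru = nn.GRUCell(channels, channels)
        self.policy = nn.Conv2d(channels, actions, 1)  # per-pixel action logits
        self.value = nn.Conv2d(channels, 1, 1)         # per-pixel state value

    def forward(self, x, hidden):
        feat = self.shared(x)                              # (B, C, H, W)
        b, c, h, w = feat.shape
        flat = feat.permute(0, 2, 3, 1).reshape(-1, c)     # one row per pixel
        hidden = self.gru(flat, hidden)                    # recurrent state update
        feat = hidden.reshape(b, h, w, c).permute(0, 3, 1, 2)
        return self.policy(feat), self.value(feat), hidden

if __name__ == "__main__":
    net = SharedValuePolicyNet()
    img = torch.randn(1, 1, 64, 64)       # dummy CT patch
    h0 = torch.zeros(64 * 64, 64)         # initial GRU hidden state
    logits, value, h1 = net(img, h0)
    print(logits.shape, value.shape)
```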

The Prediction of Durability Performance for Chloride Ingress in Fly Ash Concrete by Artificial Neural Network Algorithm (인공 신경망 알고리즘을 활용한 플라이애시 콘크리트의 염해 내구성능 예측)

  • Kwon, Seung-Jun;Yoon, Yong-Sik
    • Journal of the Korea Institute for Structural Maintenance and Inspection / v.26 no.5 / pp.127-134 / 2022
  • In this study, rapid chloride penetration tests (RCPT) were performed on fly ash concrete with curing ages of 4 to 6 years. The concrete mixtures were prepared with three levels of water-to-binder (W/B) ratio (0.37, 0.42, and 0.47) and two levels of fly ash substitution ratio (0 and 30%), and the changes in passed charge, which characterize chloride ion penetration behavior, were quantitatively analyzed. The results were then used to train univariate time series models based on the GRU (Gated Recurrent Unit) algorithm, and the predictions from the models were evaluated. The RCPT results showed that the passed charge of fly ash concrete decreased with time and that its resistance to chloride penetration was better than that of OPC concrete. At the final evaluation period (6 years), fly ash concrete showed a 'Very low' grade at all W/B ratios, whereas OPC concrete showed a 'Moderate' grade at the highest W/B ratio (0.47). The GRU algorithm adopted in this study can analyze time series data and offers advantages such as computational efficiency. A deep learning model with four hidden layers was designed and provided reasonable predictions of passed charge. The model is limited to a single univariate time series characteristic, but additional studies are extending it to cover various concrete properties such as strength and diffusion coefficient.
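A minimal sketch of a univariate GRU time series regressor with four hidden layers, as described in this abstract. The input window length, the number of units per layer, and the synthetic data standing in for RCPT passed-charge measurements are assumptions for illustration.

```python
# Illustrative sketch only: univariate GRU regressor for passed-charge prediction.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

STEPS = 8   # hypothetical number of past measurement ages used as input

model = models.Sequential([
    layers.Input(shape=(STEPS, 1)),         # univariate passed-charge history
    layers.GRU(32, return_sequences=True),  # hidden layer 1
    layers.GRU(32, return_sequences=True),  # hidden layer 2
    layers.GRU(32, return_sequences=True),  # hidden layer 3
    layers.GRU(32),                         # hidden layer 4
    layers.Dense(1),                        # predicted passed charge at the next age
])
model.compile(optimizer="adam", loss="mse")

# dummy sequences standing in for RCPT measurements over curing age
x = np.random.rand(64, STEPS, 1).astype("float32")
y = np.random.rand(64, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
```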

A Fuzzy-AHP-based Movie Recommendation System using the GRU Language Model (GRU 언어 모델을 이용한 Fuzzy-AHP 기반 영화 추천 시스템)

  • Oh, Jae-Taek;Lee, Sang-Yong
    • Journal of Digital Convergence / v.19 no.8 / pp.319-325 / 2021
  • With the advancement of wireless technology and the rapid growth of mobile communication infrastructure, systems built on AI-based platforms are drawing attention from users. In particular, systems that understand users' tastes and interests and recommend preferred items are applied to advanced e-commerce customized services and smart homes. However, such recommendation systems have difficulty reflecting the preferences of various users in real time. In this research, we propose a Fuzzy-AHP-based movie recommendation system using a Gated Recurrent Unit (GRU) language model to address this problem. The system applies Fuzzy-AHP to reflect users' tastes and interests in real time, and applies GRU language model-based models to analyze public interest and film content in order to recommend movies similar to the user's preferred factors. To validate the recommendation system, we measured the suitability of the learning model using scraped data from the learning module, and compared its learning performance and training time per epoch against a Long Short-Term Memory (LSTM) language model. The results show that the average cross-validation index of the learning model is a suitable 94.8% and that its learning performance outperforms the LSTM language model.
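A minimal sketch of the GRU language-model side of such a system: a GRU encoder turns movie-related text into fixed-size vectors, which can then be scored against a user preference vector. The vocabulary size, sequence length, embedding dimensions, and cosine-similarity scoring are assumptions for illustration; the Fuzzy-AHP weighting described in the paper is not modeled here.

```python
# Illustrative sketch only: GRU text encoder plus similarity scoring for movies.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, SEQ_LEN = 20000, 200   # hypothetical text-model settings

encoder = models.Sequential([
    layers.Input(shape=(SEQ_LEN,)),
    layers.Embedding(VOCAB, 128, mask_zero=True),
    layers.GRU(128),                      # GRU encodes the movie text
    layers.Dense(64, activation="tanh"),  # fixed-size movie representation
])

def score_movies(user_vec, movie_vecs):
    """Cosine similarity between a user preference vector and movie vectors."""
    user = user_vec / np.linalg.norm(user_vec)
    movies = movie_vecs / np.linalg.norm(movie_vecs, axis=1, keepdims=True)
    return movies @ user

if __name__ == "__main__":
    tokens = np.random.randint(1, VOCAB, size=(5, SEQ_LEN))  # dummy tokenized texts
    movie_vecs = encoder.predict(tokens, verbose=0)
    user_vec = movie_vecs[0]   # stand-in for a user's preferred factors
    print(score_movies(user_vec, movie_vecs))
```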