• Title/Summary/Keyword: 심층 신뢰망 (Deep Belief Network)

A Study Trend on DNN security by using Trusted Execution Environment (신뢰할 수 있는 수행 환경을 활용한 DNN 보안에 관한 연구 동향)

  • Kim, Youngju;Kang, Jeong-Hwan;Kwon, Dong-Hyun
    • Proceedings of the Korea Information Processing Society Conference / 2021.05a / pp.125-127 / 2021
  • Deep neural network technology is being applied in a variety of application domains that require real-time prediction services. Moreover, as sensitive personal data and other critical information are increasingly processed by such networks, interest in their security is growing. This paper surveys studies that protect the DNN computation process by running the network inside a hardware-based trusted execution environment, together with techniques for processing DNNs efficiently within such an environment. Based on these research trends, we then discuss future research directions for protecting DNN computation.

Artificial speech bandwidth extension technique based on opus codec using deep belief network (심층 신뢰 신경망을 이용한 오푸스 코덱 기반 인공 음성 대역 확장 기술)

  • Choi, Yoonsang;Li, Yaxing;Kang, Sangwon
    • The Journal of the Acoustical Society of Korea / v.36 no.1 / pp.70-77 / 2017
  • Bandwidth extension is a technique that improves speech quality, intelligibility, and naturalness by extending 300 ~ 3,400 Hz narrowband speech to 50 ~ 7,000 Hz wideband speech. In this paper, an Artificial Bandwidth Extension (ABE) module embedded in the Opus audio decoder is designed to exploit the narrowband speech information available in the decoder, reducing the computational complexity of LPC (Linear Prediction Coding) and LSF (Line Spectral Frequencies) analysis and the algorithmic delay of the ABE module. We propose a spectral envelope extension method using a DBN (Deep Belief Network), one of the deep learning techniques, and the proposed scheme produces a better extended spectrum than the traditional codebook mapping method.
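For orientation only, the following is a minimal sketch of the kind of DBN-style envelope mapping the abstract describes: a feed-forward network, whose hidden layers would ordinarily be initialized by greedy layer-wise RBM pre-training, fine-tuned to map narrowband features to wideband spectral-envelope parameters. The feature dimensions, layer sizes, and toy data are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: 10 narrowband LSF coefficients in, 16 wideband
# spectral-envelope parameters out (the paper's exact feature set is not given here).
NB_DIM, WB_DIM = 10, 16

class EnvelopeExtender(nn.Module):
    """MLP whose hidden layers would normally be initialized by greedy
    layer-wise RBM pre-training (the DBN step), then fine-tuned with MSE."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NB_DIM, 128), nn.Sigmoid(),   # RBM 1 (pre-trained weights would be loaded here)
            nn.Linear(128, 128), nn.Sigmoid(),      # RBM 2
            nn.Linear(128, WB_DIM),                 # regression output layer
        )
    def forward(self, x):
        return self.net(x)

model = EnvelopeExtender()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy random batch standing in for (narrowband feature, wideband envelope) pairs.
nb = torch.randn(32, NB_DIM)
wb = torch.randn(32, WB_DIM)
for _ in range(5):                      # supervised fine-tuning loop
    opt.zero_grad()
    loss = loss_fn(model(nb), wb)
    loss.backward()
    opt.step()
```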

Fire Detection using Deep Convolutional Neural Networks for Assisting People with Visual Impairments in an Emergency Situation (시각 장애인을 위한 영상 기반 심층 합성곱 신경망을 이용한 화재 감지기)

  • Kong, Borasy;Won, Insu;Kwon, Jangwoo
    • 재활복지 / v.21 no.3 / pp.129-146 / 2017
  • In an emergency such as a building fire, visually impaired and blind people are exposed to greater danger than sighted people because they cannot become aware of it as quickly. Conventional fire detection methods such as smoke detectors are slow and unreliable because they rely on chemical sensors to detect fire particles. Using a vision sensor instead, fire can be detected much faster, as our experiments show. Previous studies applied various image processing and machine learning techniques to detect fire, but they generally perform poorly because they require hand-crafted features that do not generalize well across scenarios. Drawing on recent advances in deep learning, this study addresses the problem with a deep learning-based object detector that detects fire in images from a security camera. Deep learning approaches learn features automatically and therefore tend to generalize well to varied scenes. To maximize detection capability, we applied recent computer vision technology, the YOLO detector, to this task. Considering the trade-off between recall and model complexity, we introduce two convolutional neural networks of slightly different complexity that detect fire at different recall rates. Both models detect fire at 99% average precision; one achieves 76% recall at 30 FPS, and the other 61% recall at 50 FPS. We also compare the two models' memory consumption and demonstrate their robustness by testing on various real-world scenarios.
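As a rough illustration of the image-based approach only (not the paper's YOLO-based detector), a minimal frame-level fire / no-fire classifier might look like the sketch below; the layer sizes and input resolution are assumptions.

```python
import torch
import torch.nn as nn

class FireClassifier(nn.Module):
    """Small frame-level fire / no-fire classifier; the paper itself uses a
    YOLO-style detector, so this is only a simplified stand-in."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)    # logits for {no fire, fire}
    def forward(self, x):
        return self.head(self.features(x).flatten(1))

frames = torch.randn(4, 3, 224, 224)    # a toy batch of camera frames
logits = FireClassifier()(frames)
print(logits.argmax(dim=1))             # predicted class per frame
```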

Designing SNS tourism review rating system through learning of scored review text (평점이 포함된 문장 학습을 통한 SNS 관광지 리뷰 평점 부여 시스템 설계)

  • An, Hyeon Woo;Moon, Nammee
    • Proceedings of the Korea Information Processing Society Conference / 2018.10a / pp.739-741 / 2018
  • Determining the positive or negative polarity of text through sentiment analysis plays an important role in many fields, including decision-support systems. Along this line, sentiment analysis has advanced by combining with other techniques; feature-engineering approaches that extract and exploit in-sentence features, and architectures based on deep belief networks, are among these applications. In this paper, we use a deep neural network based analysis technique to learn from sentences that carry tourist-attraction ratings, and design a system that applies the trained model to SNS reviews of tourist attractions to assign ratings.
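A minimal sketch of a rating-prediction network of this general kind is shown below; the vocabulary size, embedding dimension, and 1-5 star rating scale are assumptions, and the paper's actual deep-belief-network-based architecture is not reproduced here.

```python
import torch
import torch.nn as nn

VOCAB, EMB, CLASSES = 20000, 64, 5      # hypothetical vocabulary size and 1-5 star ratings

class ReviewRater(nn.Module):
    """Bag-of-embeddings classifier that learns ratings from scored review text;
    a simplified stand-in for the deep network used in the paper."""
    def __init__(self):
        super().__init__()
        self.emb = nn.EmbeddingBag(VOCAB, EMB)
        self.mlp = nn.Sequential(nn.Linear(EMB, 128), nn.ReLU(), nn.Linear(128, CLASSES))
    def forward(self, token_ids, offsets):
        return self.mlp(self.emb(token_ids, offsets))

# Two toy tokenized reviews packed into one flat tensor with offsets.
tokens = torch.tensor([3, 15, 7, 2, 9, 4])
offsets = torch.tensor([0, 3])           # review 0 = tokens[0:3], review 1 = tokens[3:]
scores = ReviewRater()(tokens, offsets)
print(scores.argmax(dim=1) + 1)          # predicted 1-5 star rating per review
```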

1D CNN and Machine Learning Methods for Fall Detection (1D CNN과 기계 학습을 사용한 낙상 검출)

  • Kim, Inkyung;Kim, Daehee;Noh, Song;Lee, Jaekoo
    • KIPS Transactions on Software and Data Engineering / v.10 no.3 / pp.85-90 / 2021
  • In this paper, fall detection using individual wearable devices for older people is considered. To design a low-cost wearable device for reliable fall detection, we present a comprehensive analysis of two representative models. One is a machine learning model composed of a decision tree, a random forest, and a Support Vector Machine (SVM). The other is a deep learning model based on a one-dimensional (1D) Convolutional Neural Network (CNN). We also evaluate the validity of both models with respect to the data segmentation, preprocessing, and feature extraction methods applied to the input data. Simulation results verify the efficacy of the deep learning model, which shows improved overall performance.
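The deep model in this comparison could be sketched roughly as follows; the window length, channel count, and layer sizes are assumptions rather than the paper's settings, and the classical baselines would take hand-crafted features instead of raw windows.

```python
import torch
import torch.nn as nn

WIN, CH = 128, 3                         # hypothetical window of 128 samples x 3 accelerometer axes

class FallNet1D(nn.Module):
    """1D CNN over fixed-length accelerometer windows, as in the deep model the paper evaluates."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(CH, 16, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 2)     # {no fall, fall}
    def forward(self, x):                # x: (batch, channels, window)
        return self.head(self.conv(x).flatten(1))

windows = torch.randn(8, CH, WIN)        # toy batch of segmented sensor windows
print(FallNet1D()(windows).argmax(dim=1))

# The classical baselines (decision tree, random forest, SVM) would instead take
# hand-crafted features per window, e.g. with scikit-learn:
#   from sklearn.svm import SVC
#   SVC().fit(features, labels)
```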

Explanation-focused Adaptive Multi-teacher Knowledge Distillation (다중 신경망으로부터 해석 중심의 적응적 지식 증류)

  • Chih-Yun Li;Inwhee Joe
    • Proceedings of the Korea Information Processing Society Conference / 2024.05a / pp.592-595 / 2024
  • Despite their remarkable performance, deep neural networks are criticized for operating as black boxes that offer no explanation of their predictions. Such opaque representations limit trust and hinder scientific understanding of the models. This work proposes improving interpretability through knowledge distillation from multiple teacher networks into an explanation-focused student network. Specifically, each teacher model's sensitivity to human-defined concept activation vectors (CAVs) is quantified using directional derivatives. By weighting the fusion of teacher knowledge in proportion to the sensitivity scores for the target concepts, the distilled student achieves good performance while focusing the network's reasoning on interpretable concepts. In experiments, an ensemble of ResNet50, DenseNet201, and EfficientNetV2-S was compressed into an architecture seven times smaller while improving accuracy by 6%. The method seeks to balance the trade-off between model capacity, predictive power, and interpretability, and should help enable trustworthy AI from mobile platforms to safety-critical domains.
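A hedged sketch of the central idea, weighting each teacher's soft targets by a precomputed concept-sensitivity score before applying the usual distillation loss, might look like this; the temperature, class count, and the way the sensitivities are obtained are assumptions.

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits_list, sensitivities, T=4.0):
    """Blend several teachers' soft targets, weighting each teacher by its
    concept (CAV) sensitivity score, then apply the usual KD loss.
    `sensitivities` is assumed to be precomputed from directional
    derivatives along the concept activation vectors."""
    w = torch.softmax(torch.tensor(sensitivities), dim=0)           # normalize teacher weights
    soft_target = sum(wi * F.softmax(t / T, dim=1)
                      for wi, t in zip(w, teacher_logits_list))     # weighted soft targets
    return F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    soft_target, reduction="batchmean") * T * T

# Toy example: 3 teachers, batch of 2, 10 classes.
teachers = [torch.randn(2, 10) for _ in range(3)]
student = torch.randn(2, 10, requires_grad=True)
loss = distill_loss(student, teachers, sensitivities=[0.7, 0.2, 0.1])
loss.backward()
print(loss.item())
```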

Image based Concrete Compressive Strength Prediction Model using Deep Convolution Neural Network (심층 컨볼루션 신경망을 활용한 영상 기반 콘크리트 압축강도 예측 모델)

  • Jang, Youjin;Ahn, Yong Han;Yoo, Jane;Kim, Ha Young
    • Korean Journal of Construction Engineering and Management / v.19 no.4 / pp.43-51 / 2018
  • As the inventory of aging apartment buildings is expected to increase rapidly, maintenance to improve the durability of concrete facilities is becoming increasingly important. Concrete compressive strength is a representative index of the durability of concrete facilities and an important item in precision safety diagnosis for facility maintenance. However, existing methods for measuring concrete compressive strength and determining facility maintenance have limitations, including facility safety concerns, high cost, and low reliability. In this study, we propose a model that predicts concrete compressive strength from images using a deep convolutional neural network. Training, validation, and testing were conducted on a concrete compressive strength dataset constructed from concrete specimens produced in a laboratory environment. The results show that concrete compressive strength can be learned from images, confirming the validity of the proposed model.
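As an illustration of image-based strength regression in general (not the paper's exact network), a minimal convolutional regressor might look like the sketch below; the layer sizes, input resolution, and toy targets are assumptions.

```python
import torch
import torch.nn as nn

class StrengthRegressor(nn.Module):
    """Convolutional regressor mapping a concrete-surface image to a single
    compressive-strength value (MPa); layer sizes here are illustrative only."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)           # single regression output
    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = StrengthRegressor()
images = torch.randn(4, 3, 128, 128)           # toy batch of specimen images
strengths = torch.rand(4, 1) * 40 + 10         # toy target strengths in MPa
loss = nn.MSELoss()(model(images), strengths)  # trained with a standard regression loss
loss.backward()
```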

Real-Time Estimation of Missile Debris Predicted Impact Point and Dispersion Using Deep Neural Network (심층 신경망을 이용한 실시간 유도탄 파편 탄착점 및 분산 추정)

  • Kang, Tae Young;Park, Kuk-Kwon;Kim, Jeong-Hun;Ryoo, Chang-Kyung
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.49 no.3 / pp.197-204 / 2021
  • If a failure or an abnormal maneuver occurs during a missile flight test, the missile is deliberately self-destructed so that it does not continue the flight. Debris is then produced, and it is important to estimate in real time whether the impact area falls outside the safety zone. In this paper, we propose a method that estimates the debris dispersion area and falling time in real time using a Fully Connected Neural Network (FCNN). We applied the Unscented Transform (UT) to generate a large amount of training data, and selected the UT parameters by comparison with Monte Carlo (MC) simulation to ensure reliability. We also analyzed the performance of the proposed method by comparing its estimates with the MC results.
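The unscented transform's role here is to replace large numbers of Monte Carlo runs with a handful of deterministically chosen sigma points; a minimal sketch of sigma-point generation is given below, with the state dimension, mean, and covariance chosen purely for illustration.

```python
import numpy as np

def sigma_points(mean, cov, kappa=0.0):
    """Generate the 2n+1 sigma points of the unscented transform; propagating
    these through the (expensive) debris ballistic model would produce
    training targets for the fully connected network."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)    # matrix square root of the scaled covariance
    pts = [mean]
    for i in range(n):
        pts.append(mean + S[:, i])
        pts.append(mean - S[:, i])
    return np.array(pts)

# Toy 3-state example (e.g. position/velocity components at self-destruct).
mu = np.array([0.0, 100.0, 50.0])
P = np.diag([1.0, 4.0, 2.0])
X = sigma_points(mu, P)
print(X.shape)    # (7, 3): 2n+1 sigma points for n = 3
```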

Predicting Future ESG Performance using Past Corporate Financial Information: Application of Deep Neural Networks (심층신경망을 활용한 데이터 기반 ESG 성과 예측에 관한 연구: 기업 재무 정보를 중심으로)

  • Min-Seung Kim;Seung-Hwan Moon;Sungwon Choi
    • Journal of Intelligence and Information Systems / v.29 no.2 / pp.85-100 / 2023
  • Corporate ESG performance (environmental, social, and corporate governance), which reflects a company's strategic sustainability, has emerged as one of the main factors in today's investment decisions. The traditional ESG rating process is largely qualitative and subjective, based on institution-specific criteria, which limits reliability, predictability, and timeliness for investment decisions. This study predicts corporate ESG ratings through automated machine learning based on quantitative, publicly disclosed corporate financial information. Using 12 types (21,360 cases) of market-disclosed financial information and 1,780 ESG measures available through the Korea Institute of Corporate Governance and Sustainability for 2019 to 2021, we propose a deep neural network prediction model. The model achieves about 86% classification accuracy in predicting ESG ratings, outperforming the comparison models. This study contributes to the literature by showing that relatively accurate ESG rating predictions can be obtained through an automated process using quantitative, publicly available corporate financial information. In practical terms, general investors can benefit from the prediction accuracy and time efficiency of the proposed model at nominal cost. The study can also be extended by accumulating more Korean and international data and by developing more robust and complex models in the future.
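A minimal sketch of a feed-forward classifier over scaled tabular financial features, standing in for the paper's deep neural network, is shown below; the sample data, hidden-layer sizes, and four-grade rating scheme are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Toy stand-in data: 12 disclosed financial indicators per firm-year, 4 ESG rating grades.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))
y = rng.integers(0, 4, size=500)

# Feed-forward network on scaled tabular features, as a stand-in for the paper's DNN.
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0))
model.fit(X, y)
print(model.score(X, y))    # training accuracy on the toy data
```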

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures such as convolutional neural networks, deep belief networks, and recurrent neural networks. They have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models, and in recent years they have gained more popularity than unsupervised learning models such as deep belief networks because they have produced notable applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks, used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network; the gradient is fed to the optimization method, which uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. Today, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rely on three basic ideas: local receptive fields, shared weights, and pooling. Local receptive fields mean that each neuron in the first (or any) hidden layer is connected to only a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each local receptive field, so all the neurons in the hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers, which simplify the information in the output of the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep learning networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior; unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks, because of the unstable gradient problem (vanishing and exploding gradients): the gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem is worse in RNNs, since gradients are propagated backward not only through layers but also through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from. Incorporating long short-term memory units (LSTMs) into RNNs makes it much easier to get good results when training them, and many recent papers make use of LSTMs or related ideas.
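The three convolutional ideas mentioned above (local receptive fields, shared weights, pooling) and the recurrent/LSTM point can be made concrete with a short sketch; the layer sizes and the 28x28 input are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn as nn

# Local receptive fields + shared weights: one 5x5 kernel per feature map is slid
# over the image, the same weights reused at every location; pooling then
# summarizes each region of the convolutional output.
cnn = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=5),    # 8 feature maps, each detecting one feature everywhere
    nn.ReLU(),
    nn.MaxPool2d(2),                   # pooling layer simplifies the convolutional output
    nn.Flatten(),
    nn.Linear(8 * 12 * 12, 10),        # classifier over 10 classes (e.g. digits)
)
img = torch.randn(1, 1, 28, 28)
print(cnn(img).shape)                  # torch.Size([1, 10])

# An LSTM keeps an internal state across time steps, easing the unstable-gradient
# problem that made early recurrent networks hard to train.
lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
seq = torch.randn(1, 50, 16)           # one sequence of 50 time steps
out, (h, c) = lstm(seq)
print(out.shape)                       # torch.Size([1, 50, 32])
```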