• Title/Summary/Keyword: Learning and Memory

Text Categorization with Improved Deep Learning Methods

  • Wang, Xingfeng; Kim, Hee-Cheol
    • Journal of information and communication convergence engineering / v.16 no.2 / pp.106-113 / 2018
  • Although deep learning methods such as convolutional neural networks (CNNs) and long short-term memory (LSTM) are widely used for text categorization, they still have certain shortcomings: CNNs require that the text retain some order and that pooling lengths be identical, and they do not allow collateral (parallel) analysis, while LSTMs operate only unidirectionally and have very complex inputs and outputs. To address these problems, we improved these traditional deep learning methods in the following ways: we created collateral CNNs that accept disorder and variable-length pooling, and we removed the input/output gates when creating bidirectional LSTMs. We evaluated the proposed methods on four benchmark datasets for topic and sentiment classification. The best results were obtained by combining LSTM regional embeddings with data convolution. Our method outperforms all previous methods, including deep learning methods, on both topic and sentiment classification.
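
The reworked architecture is only described at a high level above; the following PyTorch sketch illustrates the general shape of the idea, combining parallel ("collateral") convolutional branches with adaptive, variable-length pooling and a standard bidirectional LSTM. All layer sizes, kernel widths, and the use of a stock nn.LSTM (rather than the paper's gate-removed variant) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ParallelCNNBiLSTM(nn.Module):
    """Parallel ("collateral") CNN branches + BiLSTM text classifier (illustrative)."""
    def __init__(self, vocab_size, embed_dim=128, num_classes=4,
                 kernel_sizes=(3, 4, 5), channels=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # One convolutional branch per kernel size, run in parallel over the text.
        self.branches = nn.ModuleList(
            nn.Conv1d(embed_dim, channels, k, padding=k // 2) for k in kernel_sizes
        )
        # Adaptive pooling accepts variable-length inputs (the "variable-length pooling" idea).
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.bilstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(len(kernel_sizes) * channels + 2 * hidden, num_classes)

    def forward(self, tokens):               # tokens: (batch, seq_len) int64
        x = self.embed(tokens)                # (batch, seq_len, embed_dim)
        conv_in = x.transpose(1, 2)           # (batch, embed_dim, seq_len)
        conv_feats = [self.pool(torch.relu(b(conv_in))).squeeze(-1) for b in self.branches]
        _, (h, _) = self.bilstm(x)            # h: (2, batch, hidden)
        lstm_feat = torch.cat([h[0], h[1]], dim=1)
        return self.fc(torch.cat(conv_feats + [lstm_feat], dim=1))

# Example: classify a batch of two padded token sequences.
model = ParallelCNNBiLSTM(vocab_size=10000)
logits = model(torch.randint(1, 10000, (2, 50)))
print(logits.shape)  # torch.Size([2, 4])
```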

Detecting A Crypto-mining Malware By Deep Learning Analysis

  • Aljehani, Shahad; Alsuwat, Hatim
    • International Journal of Computer Science & Network Security / v.22 no.6 / pp.172-180 / 2022
  • Crypto-mining malware (known as crypto-jacking) is a novel cyber-attack that exploits the victim's computing resources, such as the CPU and GPU, to generate illegal cryptocurrency. The attacker benefits from crypto-jacking by using someone else's mining hardware and electricity. This research focused on the possibility of detecting potential crypto-mining malware in an environment by analyzing both static and dynamic approaches with deep learning. Program Executable (PE) files were analyzed with a deep learning method, the Long Short-Term Memory (LSTM) network. The findings revealed that LSTM outperformed both SVM and RF in the static and dynamic approaches, with 98% and 96%, respectively. Future studies will focus on detecting the malware with a larger dataset to obtain more accurate and realistic results.
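
As a rough illustration of the kind of model the abstract describes, here is a minimal PyTorch sketch of an LSTM binary classifier over token sequences (e.g., opcodes or API calls) derived from PE files. The feature representation, vocabulary size, and hyperparameters are assumptions for illustration only.

```python
import torch
import torch.nn as nn

class CryptojackingLSTM(nn.Module):
    """Binary LSTM classifier over token sequences (e.g., opcodes or API calls)
    extracted from PE files -- an illustrative stand-in for the paper's model."""
    def __init__(self, vocab_size=5000, embed_dim=64, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)     # one logit: benign vs. crypto-mining

    def forward(self, seq):                  # seq: (batch, seq_len) token ids
        _, (h, _) = self.lstm(self.embed(seq))
        return self.head(h[-1]).squeeze(-1)  # (batch,) raw logits

# Hypothetical usage: sequences of 200 tokens per sample, labels 0/1.
model = CryptojackingLSTM()
x = torch.randint(1, 5000, (8, 200))
y = torch.randint(0, 2, (8,)).float()
loss = nn.BCEWithLogitsLoss()(model(x), y)
loss.backward()
```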

Text Classification on Social Network Platforms Based on Deep Learning Models

  • YA, Chen; Tan, Juan; Hoekyung, Jung
    • Journal of information and communication convergence engineering / v.21 no.1 / pp.9-16 / 2023
  • Natural language on social network platforms has a certain front-to-back dependency in structure, and directly converting Chinese text into a vector makes the dimensionality very high, resulting in the low accuracy of existing text classification methods. To this end, this study establishes a deep learning model that combines a big-data ultra-deep convolutional neural network (UDCNN) with a long short-term memory network (LSTM). The deep structure of the UDCNN is used to extract features for text vector classification, the LSTM stores historical information to capture the context dependency of long texts, and word embedding is introduced to convert the text into low-dimensional vectors. Experiments are conducted on two corpora from social network platforms, the Sogou corpus and the University HowNet Chinese corpus. The results show that, compared with CNN + rand, LSTM, and other models, the hybrid deep learning model effectively improves the accuracy of text classification.
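
A hedged sketch of the hybrid idea follows: a stack of deep 1-D convolutional blocks reduces the embedded text before an LSTM captures the remaining long-range context. The depth, widths, and corpus handling are placeholders, not the UDCNN-LSTM configuration used in the paper.

```python
import torch
import torch.nn as nn

def conv_block(channels):
    """Two stacked 1-D convolutions, as in very-deep CNN text models (illustrative)."""
    return nn.Sequential(
        nn.Conv1d(channels, channels, 3, padding=1), nn.ReLU(),
        nn.Conv1d(channels, channels, 3, padding=1), nn.ReLU(),
        nn.MaxPool1d(2),            # halve the sequence length at each stage
    )

class UDCNNLSTM(nn.Module):
    """Deep CNN feature extractor followed by an LSTM over the reduced sequence.
    A sketch of the hybrid idea; the paper's exact depth/widths are not known here."""
    def __init__(self, vocab_size=20000, embed_dim=64, num_classes=5, depth=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.cnn = nn.Sequential(*[conv_block(embed_dim) for _ in range(depth)])
        self.lstm = nn.LSTM(embed_dim, 128, batch_first=True)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, tokens):                    # (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)    # (batch, embed_dim, seq_len)
        x = self.cnn(x).transpose(1, 2)           # (batch, seq_len / 2**depth, embed_dim)
        _, (h, _) = self.lstm(x)                  # context over the long text
        return self.fc(h[-1])

logits = UDCNNLSTM()(torch.randint(1, 20000, (4, 256)))
print(logits.shape)  # torch.Size([4, 5])
```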

Memristor Bridge Synapse-based Neural Network Circuit Design and Simulation of the Hardware-Implemented Artificial Neuron (멤리스터 브리지 시냅스 기반 신경망 회로 설계 및 하드웨어적으로 구현된 인공뉴런 시뮬레이션)

  • Yang, Chang-ju; Kim, Hyongsuk
    • Journal of Institute of Control, Robotics and Systems / v.21 no.5 / pp.477-481 / 2015
  • The implementation of memristor-based multilayer neural networks and their hardware-based learning architecture is investigated in this paper. Two major functions of neural networks that must be embedded in synapses are programmable memory and analog multiplication, and the memristor, a recently developed device, provides both. In this paper, multilayer neural networks are implemented with memristors, and a Random Weight Change algorithm is adopted and implemented in circuits for learning. The hardware-based learning of these neural networks is two orders of magnitude faster than its software counterpart.
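
The Random Weight Change (RWC) rule mentioned above is attractive for hardware because it needs only random perturbations and an error comparison, with no gradient back-propagation. The NumPy sketch below applies that rule to a tiny software two-layer network on a toy XOR task; the task, step size, and network size are illustrative assumptions, and the memristor bridge circuitry itself is not modeled.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(w1, w2, x):
    """Tiny two-layer network: the kind of multilayer structure a memristor
    bridge synapse array could realize (weights play the role of synapses)."""
    h = np.tanh(x @ w1)
    return np.tanh(h @ w2)

def mse(w1, w2, x, t):
    return np.mean((forward(w1, w2, x) - t) ** 2)

# XOR as a toy task (an assumption; the paper's task is not specified here).
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

w1 = rng.normal(0, 0.5, (2, 4))
w2 = rng.normal(0, 0.5, (4, 1))
d1 = rng.choice([-0.01, 0.01], w1.shape)     # current random change per weight
d2 = rng.choice([-0.01, 0.01], w2.shape)
err = mse(w1, w2, x, t)

for _ in range(20000):
    new_err = mse(w1 + d1, w2 + d2, x, t)
    if new_err < err:                        # improvement: keep applying the same changes
        w1, w2, err = w1 + d1, w2 + d2, new_err
    else:                                    # otherwise draw fresh random changes
        d1 = rng.choice([-0.01, 0.01], w1.shape)
        d2 = rng.choice([-0.01, 0.01], w2.shape)

print(f"final MSE: {err:.4f}")
```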

Development of a Deep Learning Model for Detecting Fake Reviews Using Author Linguistic Features (작성자 언어적 특성 기반 가짜 리뷰 탐지 딥러닝 모델 개발)

  • Shin, Dong Hoon; Shin, Woo Sik; Kim, Hee Woong
    • The Journal of Information Systems / v.31 no.4 / pp.01-23 / 2022
  • Purpose This study aims to propose a deep learning-based fake review detection model that combines authors' linguistic features with the semantic information of reviews. Design/methodology/approach This study used 358,071 Yelp reviews to develop the fake review detection model. We employed Linguistic Inquiry and Word Count (LIWC) to extract 24 linguistic features of authors, and then used deep learning architectures such as the multilayer perceptron (MLP), long short-term memory (LSTM), and the transformer to learn linguistic and semantic features for fake review detection. Findings The results show that detection models using both linguistic and semantic features outperformed models using a single type of feature. In addition, this study confirmed that the differences in linguistic features between fake and authentic reviewers are significant; that is, linguistic features complement the semantic information of reviews and further enhance the predictive power of the fake review detection model.
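
A minimal sketch of the combined-feature idea, assuming an LSTM text encoder fused with a small MLP over 24 LIWC-style numeric features; the actual architectures, preprocessing, and Yelp data handling in the paper are not reproduced here.

```python
import torch
import torch.nn as nn

class FakeReviewDetector(nn.Module):
    """Fuses author linguistic features (e.g., 24 LIWC dimensions) with an
    LSTM encoding of the review text -- a sketch of the combined-feature idea."""
    def __init__(self, vocab_size=30000, embed_dim=100, hidden=128, n_ling=24):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True)
        self.ling_mlp = nn.Sequential(nn.Linear(n_ling, 32), nn.ReLU())
        self.classifier = nn.Linear(hidden + 32, 1)   # fake vs. genuine logit

    def forward(self, tokens, ling_feats):
        _, (h, _) = self.lstm(self.embed(tokens))     # semantic summary of the review
        z = torch.cat([h[-1], self.ling_mlp(ling_feats)], dim=1)
        return self.classifier(z).squeeze(-1)

# Hypothetical batch: 16 reviews of 120 tokens plus 24 LIWC features each.
model = FakeReviewDetector()
logits = model(torch.randint(1, 30000, (16, 120)), torch.randn(16, 24))
```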

Deep reinforcement learning for base station switching scheme with federated LSTM-based traffic predictions

  • Hyebin Park; Seung Hyun Yoon
    • ETRI Journal / v.46 no.3 / pp.379-391 / 2024
  • To meet increasing traffic requirements in mobile networks, small base stations (SBSs) are densely deployed, overlapping existing network architecture and increasing system capacity. However, densely deployed SBSs increase energy consumption and interference. Although these problems already exist because of densely deployed SBSs, even more SBSs are needed to meet increasing traffic demands. Hence, base station (BS) switching operations have been used to minimize energy consumption while guaranteeing quality-of-service (QoS) for users. In this study, to optimize energy efficiency, we propose the use of deep reinforcement learning (DRL) to create a BS switching operation strategy with a traffic prediction model. First, a federated long short-term memory (LSTM) model is introduced to predict user traffic demands from user trajectory information. Next, the DRL-based BS switching operation scheme determines the switching operations for the SBSs using the predicted traffic demand. Experimental results confirm that the proposed scheme outperforms existing approaches in terms of energy efficiency, signal-to-interference noise ratio, handover metrics, and prediction performance.
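
The following sketch shows one plausible shape for the DRL component: a DQN-style Q-network whose state concatenates per-SBS traffic forecasts (as would come from the federated LSTM predictor) with current on/off flags, and whose action toggles a single SBS or keeps the current configuration. The state/action design, network sizes, and epsilon-greedy policy are assumptions for illustration, not the paper's scheme.

```python
import torch
import torch.nn as nn

N_SBS = 8   # number of small base stations (assumption for illustration)

class SwitchingQNet(nn.Module):
    """DQN-style Q-network: state = [predicted traffic per SBS, current on/off flags],
    action = toggle one SBS or keep the current configuration."""
    def __init__(self, n_sbs=N_SBS, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * n_sbs, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_sbs + 1),          # +1 = "keep current configuration"
        )

    def forward(self, state):
        return self.net(state)

def select_action(qnet, predicted_traffic, on_off, eps=0.1):
    """Epsilon-greedy action on the state built from the LSTM traffic forecast."""
    state = torch.cat([predicted_traffic, on_off]).unsqueeze(0)
    if torch.rand(1).item() < eps:
        return torch.randint(0, N_SBS + 1, (1,)).item()
    return qnet(state).argmax(dim=1).item()

qnet = SwitchingQNet()
traffic_forecast = torch.rand(N_SBS)      # would come from the federated LSTM predictor
status = torch.ones(N_SBS)                # all SBSs currently on
print(select_action(qnet, traffic_forecast, status))
```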

Implementation of a Direct Learning Control Law for the Trajectory Tracking Control of a Robot (로봇의 궤적추종제어를 위한 직접학습 제어법칙의 구현)

  • Kim, Jin-Hyoung; Ahn, Hyun-Sik; Kim, Do-Hyun
    • Proceedings of the KIEE Conference / 2000.11d / pp.694-696 / 2000
  • In this paper, Direct Learning Control (DLC) is applied to the trajectory tracking control of a robot to solve problems inherent in existing Iterative Learning Control (ILC); the tracking performance is analyzed and a better approach is sought through computer simulation and experiments. It is assumed that the control input profiles for several periodic output trajectories, obtained beforehand using ILC, are stored in memory. When a new output trajectory has a special relation to the previous output trajectories, DLC has the advantage that the desired control input profile can be obtained without iterative executions. The robot's tracking control system consists of a DSP chip, an A/D converter, a D/A converter, and a high-speed pulse counter on the control board, and its performance is examined by carrying out tracking control for the given output trajectory.
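
A minimal numerical sketch of the DLC idea, under the strong (assumed) simplification that the new desired trajectory is linearly related to the stored ones: the stored ILC input profiles are recombined with the same coefficients that reconstruct the new trajectory, so no further iterations are needed. The robot dynamics, DSP hardware, and the paper's actual DLC law are not modeled.

```python
import numpy as np

def dlc_input(stored_trajs, stored_inputs, new_traj):
    """Direct Learning Control sketch: express the new desired trajectory as a
    least-squares combination of previously learned trajectories and apply the
    same combination to their stored input profiles. Assumes the new trajectory
    is (approximately) linearly related to the stored ones."""
    # stored_trajs: (n_profiles, T), stored_inputs: (n_profiles, T), new_traj: (T,)
    coeffs, *_ = np.linalg.lstsq(stored_trajs.T, new_traj, rcond=None)
    return coeffs @ stored_inputs            # control input profile, no iterations needed

# Toy example with sinusoidal joint trajectories of different amplitudes.
t = np.linspace(0, 2 * np.pi, 200)
stored_trajs = np.vstack([1.0 * np.sin(t), 2.0 * np.sin(t)])
stored_inputs = np.vstack([0.5 * np.sin(t), 1.0 * np.sin(t)])    # from prior ILC runs
u_new = dlc_input(stored_trajs, stored_inputs, 1.5 * np.sin(t))  # new amplitude, no ILC rerun
```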

A Comparative Study of Memory Improving Effects of Taraxaci herba on Scopolamine-induced Amnesia in Mouse (포공영 기원식물의 mouse 기억력 개선효과 비교)

  • Sohn, Moon-Ki; Shin, Yong-Wook
    • The Korea Journal of Herbology / v.27 no.5 / pp.27-35 / 2012
  • Objectives : The purpose of this study was to characterize the effects of fractions of Taraxacum officinale and T. coreanum on learning and memory impairments induced by scopolamine. Methods : The cognition-enhancing effects of Taraxacum officinale and T. coreanum were investigated in mice using a passive avoidance test, the Morris water maze test, and the Y-maze test. Amnesia was induced by treating animals with scopolamine (1 mg/kg, i.p.). Results : The group treated with the August-harvested T. officinale extract (200 mg/kg, p.o.) and the tacrine-treated group (10 mg/kg, p.o.) showed significantly ameliorated scopolamine-induced amnesia in the passive avoidance, Y-maze, and water maze tests, and these results were consistent with the DPPH radical scavenging and acetylcholinesterase inhibition effects. Conclusions : These results suggest that Taraxacum officinale extract may be a useful treatment for cognitive impairment and that its beneficial effects depend on the harvest time and the plant of origin; Taraxacum officinale harvested in August improved memory the most.

Radar Quantitative Precipitation Estimation using Long Short-Term Memory Networks

  • Thi, Linh Dinh; Yoon, Seong-Sim; Bae, Deg-Hyo
    • Proceedings of the Korea Water Resources Association Conference / 2020.06a / pp.183-183 / 2020
  • Accurate quantitative precipitation estimation plays an important role in hydrological modelling and prediction. Instantaneous quantitative precipitation estimation (QPE) from weather radar data has great applicability for operational hydrology in a catchment. Previously, a regression between reflectivity (Z) and rain intensity (R) was commonly used to obtain radar QPEs. A recent approach that can be applied to QPE in hydrology is the Long Short-Term Memory (LSTM) network. LSTM networks are an evolution of Recurrent Neural Networks (RNNs) that overcomes the limited memory capacity of RNNs and allows learning of long-term input-output dependencies; the advantages of LSTM over plain RNNs have been demonstrated in previous works. In this study, LSTM networks are used to estimate quantitative precipitation from weather radar for an urban catchment in South Korea. Radar information and rain-gauge data are used to evaluate and verify the estimates. The results show that the LSTM approach outperforms the Z-R relationship method in accuracy. This study demonstrates the high potential of LSTM and its applications in urban hydrology.
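
For contrast with the conventional approach, the sketch below pairs a simple Marshall-Palmer style Z-R conversion with an LSTM that maps a short reflectivity history at a pixel to a rain rate. The Z-R coefficients, input window, and network size are illustrative assumptions rather than the study's configuration.

```python
import torch
import torch.nn as nn

def zr_rain_rate(dbz, a=200.0, b=1.6):
    """Conventional Z-R estimate (Marshall-Palmer coefficients as an example):
    Z = a * R**b, with Z in mm^6/m^3 converted from reflectivity in dBZ."""
    z = 10.0 ** (dbz / 10.0)
    return (z / a) ** (1.0 / b)              # rain rate R in mm/h

class RadarQPELSTM(nn.Module):
    """LSTM that maps a short history of radar reflectivity at a pixel to the
    gauge rain rate -- an illustrative setup, not the study's exact model."""
    def __init__(self, n_features=1, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                     # x: (batch, time_steps, n_features)
        _, (h, _) = self.lstm(x)
        return self.out(h[-1]).squeeze(-1)    # estimated rain rate (mm/h)

# Hypothetical batch: 10-step reflectivity histories for 32 pixels.
dbz_seq = torch.rand(32, 10, 1) * 50.0
print(zr_rain_rate(torch.tensor(45.0)))       # Z-R baseline for a single 45 dBZ value
print(RadarQPELSTM()(dbz_seq).shape)          # torch.Size([32])
```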

Implications for Memory Reference Analysis and System Design to Execute AI Workloads in Personal Mobile Environments (개인용 모바일 환경의 AI 워크로드 수행을 위한 메모리 참조 분석 및 시스템 설계 방안)

  • Seokmin Kwon; Hyokyung Bahn
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.24 no.1 / pp.31-36 / 2024
  • Recently, mobile apps that utilize AI technologies have been increasing. In personal mobile environments, performance degradation may occur during the training phase of large AI workloads due to limited memory capacity. In this paper, we extract memory reference traces of AI workloads and analyze their characteristics. From this analysis, we observe that AI workloads can cause frequent storage accesses due to weak temporal locality and an irregular popularity bias in memory write operations, which can degrade the performance of mobile devices. Based on this observation, we discuss ways to efficiently manage the memory write operations of AI workloads using persistent memory-based swap devices. Through simulation experiments, we show that the proposed system architecture can improve the I/O time of mobile systems by more than 80%.
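
A toy simulation in the spirit of the analysis described above: replay a synthetic page-reference trace through an LRU-managed memory and total the cost of writing evicted dirty pages to a flash-based versus a persistent-memory swap device. The trace, memory size, and latency numbers are assumptions for illustration and are not the paper's experimental setup.

```python
import random

# Assumed device latencies (illustrative numbers, not from the paper), in microseconds.
FLASH_WRITE_US = 200.0
PMEM_WRITE_US = 2.0

def swap_io_time(trace, mem_pages, write_latency_us):
    """Replay a (page, is_write) reference trace through an LRU-managed memory of
    `mem_pages` frames and total the cost of writing evicted dirty pages to swap."""
    lru, dirty, io_us = [], set(), 0.0
    for page, is_write in trace:
        if page in lru:
            lru.remove(page)                   # refresh recency
        elif len(lru) >= mem_pages:            # miss with full memory: evict the LRU page
            victim = lru.pop(0)
            if victim in dirty:                # dirty victims must be written to the swap device
                io_us += write_latency_us
                dirty.discard(victim)
        lru.append(page)
        if is_write:
            dirty.add(page)
    return io_us

# Hypothetical trace: writes spread over many pages, i.e. weak temporal locality.
random.seed(1)
trace = [(random.randrange(4096), random.random() < 0.4) for _ in range(50000)]

flash = swap_io_time(trace, mem_pages=1024, write_latency_us=FLASH_WRITE_US)
pmem = swap_io_time(trace, mem_pages=1024, write_latency_us=PMEM_WRITE_US)
print(f"flash-swap write time: {flash / 1e6:.2f} s, pmem-swap write time: {pmem / 1e6:.2f} s")
```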