• Title/Summary/Keyword: Memory-Based Learning


Brain Activation Pattern and Functional Connectivity Network during Experimental Design on the Biological Phenomena

  • Lee, Il-Sun;Lee, Jun-Ki;Kwon, Yong-Ju
    • Journal of The Korean Association For Science Education / v.29 no.3 / pp.348-358 / 2009
  • The purpose of this study was to investigate brain activation patterns and the functional connectivity network during experimental design on biological phenomena. Twenty-six right-handed healthy science teachers volunteered for the study. To investigate participants' brain activity during the tasks, a 3.0T fMRI system with a block experimental design was used to measure BOLD signals, and the SPM2 software package was used to analyze the acquired image data. According to the analysis, the superior, middle, and inferior frontal gyri, the superior and inferior parietal lobules, the fusiform gyrus, the lingual gyrus, and the bilateral cerebellum were significantly activated while participants carried out experimental design. The network model consisted of six nodes (ROIs) and six connections. The activation of and connections among these regions suggest that the experimental design process cannot be reduced to a simple memory-retrieval process. These results allow the scientific experimental design process to be examined from a cognitive neuroscience perspective, and may serve as a basis for developing teaching-learning programs for scientific experimental design, such as a brain-based science education curriculum.
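
As a rough illustration of how an ROI-level functional connectivity network of this kind can be derived, the sketch below correlates extracted ROI time series and keeps the strongest links; the ROI labels, placeholder time series, and threshold are illustrative assumptions, not values from the study.

```python
import numpy as np

# Minimal sketch of ROI-to-ROI functional connectivity, assuming each ROI's
# BOLD time series has already been extracted (shape: time points x ROIs).
# ROI names and the correlation threshold are illustrative only.
roi_names = ["SFG", "MFG", "IFG", "SPL", "IPL", "Cerebellum"]
bold = np.random.randn(200, len(roi_names))  # placeholder time series

corr = np.corrcoef(bold, rowvar=False)       # 6 x 6 correlation matrix

threshold = 0.4                              # assumed cutoff for keeping an edge
edges = [(roi_names[i], roi_names[j], corr[i, j])
         for i in range(len(roi_names))
         for j in range(i + 1, len(roi_names))
         if abs(corr[i, j]) >= threshold]

for a, b, r in edges:
    print(f"{a} -- {b}: r = {r:.2f}")
```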

Multi-step wind speed forecasting synergistically using generalized S-transform and improved grey wolf optimizer

  • Ruwei Ma;Zhexuan Zhu;Chunxiang Li;Liyuan Cao
    • Wind and Structures / v.38 no.6 / pp.461-475 / 2024
  • A reliable wind speed forecasting method is crucial for applications in wind engineering. In this study, the generalized S-transform (GST) is applied to wind speed forecasting to uncover the time-frequency characteristics of non-stationary wind speed data. The improved grey wolf optimizer (IGWO) is employed to optimize the adjustable parameters of the GST so as to obtain the best time-frequency resolution. A hybrid method based on the IGWO-optimized GST is then proposed and validated for multi-step non-stationary wind speed forecasting. The historical wind speed is chosen as the first input feature, while the dynamic time-frequency characteristics obtained by the IGWO-optimized GST are chosen as the second input feature. A comparative experiment with six competitors demonstrates that the proposed method performs best in terms of prediction accuracy and stability, and a further experiment shows the superiority of the GST over other time-frequency analysis methods. It can be concluded that introducing the IGWO-optimized GST deeply exploits the time-frequency characteristics and effectively improves the prediction accuracy.
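
For orientation, the sketch below implements a plain grey wolf optimizer minimizing a generic objective, standing in for tuning the GST's adjustable parameters; the objective function, parameter bounds, and population settings are placeholders, and the paper's specific IGWO improvements are not reproduced.

```python
import numpy as np

def gwo_minimize(objective, bounds, n_wolves=20, n_iter=100, seed=0):
    """Plain grey wolf optimizer; the improved variant (IGWO) is not shown here."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    wolves = rng.uniform(lo, hi, size=(n_wolves, len(lo)))
    for t in range(n_iter):
        fitness = np.array([objective(w) for w in wolves])
        alpha, beta, delta = wolves[np.argsort(fitness)[:3]]
        a = 2 - 2 * t / n_iter                      # linearly decreasing coefficient
        for i in range(n_wolves):
            new = np.zeros_like(wolves[i])
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(len(lo)), rng.random(len(lo))
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - wolves[i])
            wolves[i] = np.clip(new / 3, lo, hi)
    fitness = np.array([objective(w) for w in wolves])
    return wolves[np.argmin(fitness)]

# Illustrative use: tune two hypothetical GST shape parameters against a placeholder objective.
best = gwo_minimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] - 0.5) ** 2,
                    bounds=[(0.1, 3.0), (0.1, 3.0)])
print(best)
```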

Verifying Execution Prediction Model based on Learning Algorithm for Real-time Monitoring (실시간 감시를 위한 학습기반 수행 예측모델의 검증)

  • Jeong, Yoon-Seok;Kim, Tae-Wan;Chang, Chun-Hyon
    • The KIPS Transactions:PartA / v.11A no.4 / pp.243-250 / 2004
  • Monitoring is used to check whether a real-time system provides its services on time. Generally, real-time monitoring focuses on investigating the current status of a real-time system. To support stable performance, however, a real-time system needs not only a view of the current status of its processes but also the ability to predict their executions. Legacy prediction models have several limitations when applied to real-time monitoring. First, they perform a static prediction only after a real-time process has finished. Second, they require a statistical pre-analysis before making a prediction. Third, their transition probabilities and clustering data are not based on current data. We propose an execution prediction model based on a learning algorithm to solve these problems and apply it to real-time monitoring. The model eliminates unnecessary pre-processing and supports precise prediction based on current data. In addition, it supports multi-level prediction through trend analysis of past execution data. Above all, the model is designed to support dynamic prediction performed during a real-time process's execution. Experimental results show that the judgment accuracy is greater than 80% when the training-set size is over 10, and, for multi-level prediction, that the prediction error is minimized when the number of executions exceeds the training-set size. The proposed model has the limitations that it uses the simplest learning algorithm and that it does not consider a multi-regional space model managing CPU, memory, and I/O data. The execution prediction model based on a learning algorithm proposed in this paper can be applied in areas related to real-time monitoring and control.
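
As a rough illustration of dynamic, online prediction from current execution data, the sketch below keeps a sliding window of recent execution times and extrapolates the next one with a simple trend adjustment; the window size and trend rule are illustrative assumptions, not the paper's algorithm.

```python
from collections import deque

class ExecutionPredictor:
    """Minimal sketch of a dynamic execution-time predictor that learns online
    from current data; window size and trend weighting are illustrative only."""
    def __init__(self, window=10):
        self.history = deque(maxlen=window)

    def observe(self, exec_time_ms: float) -> None:
        self.history.append(exec_time_ms)

    def predict(self) -> float:
        if not self.history:
            return 0.0
        avg = sum(self.history) / len(self.history)
        # Simple trend adjustment: lean toward the most recent observation.
        trend = self.history[-1] - avg
        return avg + 0.5 * trend

predictor = ExecutionPredictor(window=10)
for t in [12.0, 13.1, 12.7, 14.0, 13.5]:
    predictor.observe(t)
print(f"next execution time estimate: {predictor.predict():.1f} ms")
```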

CAB: Classifying Arrhythmias based on Imbalanced Sensor Data

  • Wang, Yilin;Sun, Le;Subramani, Sudha
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.7 / pp.2304-2320 / 2021
  • Intelligently detecting anomalies in health sensor data streams (e.g., electrocardiogram, ECG) can advance the development of the e-health industry. The physiological signals of patients are collected through sensors, and timely diagnosis and treatment save medical resources, promote physical health, and reduce complications. However, ECG data are difficult to classify automatically because their features are hard to extract, and the volume of labeled ECG data is limited, which degrades classification performance. In this paper, we propose a Generative Adversarial Network (GAN)-based deep learning framework (called CAB) for heart arrhythmia classification. CAB focuses on improving detection accuracy from a small number of labeled samples and is trained on class-imbalanced ECG data. Augmenting the ECG data with a GAN model eliminates the impact of data scarcity. After data augmentation, CAB classifies the ECG data using a Bidirectional Long Short-Term Memory recurrent neural network (Bi-LSTM). Experimental results show that CAB outperforms state-of-the-art methods. The overall classification accuracy of CAB is 99.71%, and the F1-scores for classifying Normal (N), Supraventricular ectopic (S), Ventricular ectopic (V), Fusion (F), and Unclassifiable (Q) beats are 99.86%, 97.66%, 99.05%, 98.57%, and 99.88%, respectively.
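
The sketch below shows only the Bi-LSTM classification stage described in the abstract, written in Keras; the layer sizes and beat-segment length are assumptions, the five output classes follow the abstract's beat types, and the GAN-based augmentation step is omitted.

```python
import numpy as np
from tensorflow.keras import layers, models

# Minimal sketch of a Bi-LSTM heartbeat classifier; layer widths and the
# beat-segment length (187 samples) are assumptions, and the GAN augmentation
# described in the abstract is not shown here.
n_samples, beat_len, n_classes = 1024, 187, 5   # classes: N, S, V, F, Q

model = models.Sequential([
    layers.Input(shape=(beat_len, 1)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dense(64, activation="relu"),
    layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Placeholder data standing in for (GAN-augmented) ECG segments.
x = np.random.randn(n_samples, beat_len, 1).astype("float32")
y = np.random.randint(0, n_classes, size=n_samples)
model.fit(x, y, epochs=1, batch_size=64, verbose=0)
```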

Proposal of a Step-by-Step Optimized Campus Power Forecast Model using CNN-LSTM Deep Learning (CNN-LSTM 딥러닝 기반 캠퍼스 전력 예측 모델 최적화 단계 제시)

  • Kim, Yein;Lee, Seeun;Kwon, Youngsung
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.10 / pp.8-15 / 2020
  • Forecasting methods based on deep learning do not produce consistent results across datasets with different characteristics, even when the same forecasting models and parameters are used. For example, a forecasting model optimized on dataset A will not produce optimal results on another dataset B. The forecasting model therefore needs to be optimized to the characteristics of the dataset to increase its accuracy. This paper proposes optimization steps covering outlier removal, dataset classification, and a CNN-LSTM-based hyperparameter tuning process to forecast the daily power usage of a university campus from hourly-interval data. The proposed model achieves high forecasting accuracy, with a MAPE of 2% using a single power input variable, and can be used in an energy management system (EMS) to suggest improved strategies to users and consequently improve power efficiency.
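
For orientation, the sketch below wires a 1-D convolution in front of an LSTM in Keras, the kind of CNN-LSTM forecaster the abstract tunes; the 24-hour window, filter counts, and layer widths are assumptions rather than the paper's tuned values.

```python
import numpy as np
from tensorflow.keras import layers, models

# Minimal CNN-LSTM sketch: a 1-D convolution extracts local patterns from 24
# hourly readings and an LSTM models the sequence; all sizes are assumptions.
window, n_features = 24, 1

model = models.Sequential([
    layers.Input(shape=(window, n_features)),
    layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),
    layers.Dense(1),                      # next-step power usage
])
model.compile(optimizer="adam", loss="mae")

# Placeholder hourly power data shaped as (samples, window, features).
x = np.random.rand(512, window, n_features).astype("float32")
y = np.random.rand(512, 1).astype("float32")
model.fit(x, y, epochs=1, batch_size=32, verbose=0)
```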

Effect of Treatment with Docosahexaenoic Acid into N-3 Fatty Acid Adequate Diet on Learning Related Brain Function in Rat (N-3계 지방산 적절 함량 식이의 docosahexaenoic acid 첨가가 기억력 관련 뇌 기능에 미치는 영향)

  • Lim, Sun-Young
    • Journal of Life Science / v.19 no.7 / pp.917-922 / 2009
  • The effect of adding docosahexaenoic acid (DHA) to an n-3 fatty acid adequate diet on learning-related brain function was investigated. On the second day after conception, Sprague Dawley dams were fed a diet containing either adequate n-3 fatty acids (Adq, 3.4% linolenic acid) or adequate n-3 fatty acids plus DHA (Adq+DHA, 3.31% linolenic acid plus 9.65% DHA). After weaning, male pups were fed the same diet as their respective dams until adulthood. Motor activity and Morris water maze tests were conducted at 10 weeks. In the motor activity test, there were no statistically significant differences in moving time or moving distance between the Adq and Adq+DHA groups. The Adq+DHA group tended to show shorter escape latency, swimming time, and swimming distance than the Adq group, but the differences were not statistically significant. There was no difference in resting time, but the Adq+DHA group showed a higher swimming speed than the Adq group. In the memory retention trials, the number of crossings of the platform position (region A), where the hidden platform had been placed, was significantly greater than that of the other regions for both the Adq and Adq+DHA groups. Based on these results, adding DHA to the n-3 fatty acid adequate diet from gestation to adulthood tended to induce better spatial learning performance in Sprague Dawley rats, as assessed by the Morris water maze test, although the difference was not significant.

Prediction of Music Generation on Time Series Using Bi-LSTM Model (Bi-LSTM 모델을 이용한 음악 생성 시계열 예측)

  • Kwangjin, Kim;Chilwoo, Lee
    • Smart Media Journal / v.11 no.10 / pp.65-75 / 2022
  • Deep learning is used as a creative tool that can overcome the limitations of existing analysis models and generate various types of output, such as text, images, and music. In this paper, we propose a method for preprocessing audio data, using Niko's MIDI Pack sound source files as the dataset, and for generating music with a Bi-LSTM. Based on the generated root note, multiple hidden layers create new notes suitable for the musical composition, and an attention mechanism applied to the decoder output gate weights the factors that affect the data coming from the encoder. Settings such as the loss function and optimization method are applied as parameters for improving the LSTM model. The proposed model is a multi-channel Bi-LSTM with attention that uses note pitches (separated into treble and bass clefs), note lengths, rests, rest lengths, and chords to improve the efficiency and prediction accuracy of the MIDI deep-learning process. The trained model generates sound that follows the development of a musical scale, as opposed to noise, and we aim to contribute to generating harmonically stable music.
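
As a rough sketch of the core idea, the code below builds a Bi-LSTM next-note predictor with a self-attention layer in Keras; the vocabulary size, sequence length, and layer widths are assumptions, and the paper's multi-channel treble/bass separation and note-length/rest/chord features are not reproduced.

```python
import numpy as np
from tensorflow.keras import layers, models

# Minimal Bi-LSTM note predictor with attention; all sizes are assumptions.
seq_len, vocab = 32, 128          # 32 previous notes, 128 MIDI pitches

inputs = layers.Input(shape=(seq_len,))
x = layers.Embedding(vocab, 64)(inputs)
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
x = layers.Attention()([x, x])    # self-attention over the encoded sequence
x = layers.GlobalAveragePooling1D()(x)
outputs = layers.Dense(vocab, activation="softmax")(x)  # next-note distribution

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Placeholder training data standing in for preprocessed MIDI note sequences.
x_train = np.random.randint(0, vocab, size=(256, seq_len))
y_train = np.random.randint(0, vocab, size=(256,))
model.fit(x_train, y_train, epochs=1, verbose=0)
```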

A Study on the Method of Literacy Education that Increase Interest and Learning Effect of Elderly Learners - A Case Study of Literacy Education in Chungcheongbuk-do - (중고령층 문해학습자의 흥미 유발 및 학습 효과를 높이는 문해교육 방법)

  • Kim, Young-Ok
    • Journal of Korea Entertainment Industry Association / v.13 no.8 / pp.479-493 / 2019
  • The purpose of this study was to present a method of literacy education that generates interest and enhances the effectiveness of literacy education for elderly learners. To that end, the researcher conducted interviews and participant observation with a total of 11 middle-aged and elderly literacy teachers, program operators, lifelong education teachers, and literacy students in North Chungcheong Province. According to the research, elderly literacy learners tend to easily forget what they have learned, learn well through dictation, have a strong competitive spirit, and make studying a top priority in their daily lives. Playful activities for understanding the meaning of writing, learning connected to real life, and dictation are effective in improving their memory and cognition. In addition, using familiar everyday materials, role-playing comedies and poems from textbooks, using large-picture fairy tales, team-based games and activities, learning easy songs and instruments, stage performances and presentations, and field experiences at educational and cultural facilities can increase learners' interest and the effectiveness of literacy education. For literacy education for elderly learners to succeed, support is needed for programs such as presentations and joint events for sharing results, materials and material costs, training and sharing of literacy teaching skills for teachers, and year-round operation of literacy education. In conclusion, the research shows the need to expand literacy teacher training, to use assistant teachers, and to promote accreditation of the literacy curriculum.

Design and Implementation of BNN based Human Identification and Motion Classification System Using CW Radar (연속파 레이다를 활용한 이진 신경망 기반 사람 식별 및 동작 분류 시스템 설계 및 구현)

  • Kim, Kyeong-min;Kim, Seong-jin;NamKoong, Ho-jung;Jung, Yun-ho
    • Journal of Advanced Navigation Technology / v.26 no.4 / pp.211-218 / 2022
  • Continuous-wave (CW) radar has advantages in reliability and accuracy compared to other sensors such as cameras and lidar. In addition, binarized neural networks (BNNs) dramatically reduce memory usage and complexity compared to other deep-learning networks. Therefore, this paper proposes a BNN-based human identification and motion classification system using CW radar. After a signal is received from the CW radar, a spectrogram is generated through a short-time Fourier transform (STFT). Based on this spectrogram, we propose an algorithm that detects whether a person is approaching the radar. We also designed an optimized BNN model that achieves 90.0% accuracy for human identification and 98.3% for motion classification. To accelerate BNN operation, we designed a BNN hardware accelerator on a field-programmable gate array (FPGA). The accelerator was implemented with 1,030 logic elements, 836 registers, and 334.904 Kbit of block memory, and real-time operation was confirmed with a total processing time of 6 ms from inference to result transfer.
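
To illustrate the spectrogram stage that feeds the classifier, the sketch below converts a simulated CW-radar baseband signal with a time-varying Doppler component into a spectrogram via SciPy's STFT; the sampling rate, window length, and simulated Doppler profile are assumptions, and the BNN itself is not shown.

```python
import numpy as np
from scipy.signal import stft

# Simulated CW-radar baseband signal with a slowly varying Doppler frequency;
# all signal parameters here are illustrative placeholders.
fs = 2000                                            # sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
doppler = 50 + 30 * np.sin(2 * np.pi * 0.5 * t)      # Doppler frequency over time (Hz)
signal = np.cos(2 * np.pi * np.cumsum(doppler) / fs) + 0.1 * np.random.randn(len(t))

# Short-time Fourier transform -> magnitude spectrogram used as the BNN input.
f, frames, Zxx = stft(signal, fs=fs, nperseg=256, noverlap=192)
spectrogram = np.abs(Zxx)

print(spectrogram.shape)                             # (frequency bins, time frames)
```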

Korean Sentence Generation Using Phoneme-Level LSTM Language Model (한국어 음소 단위 LSTM 언어모델을 이용한 문장 생성)

  • Ahn, SungMahn;Chung, Yeojin;Lee, Jaejoon;Yang, Jiheon
    • Journal of Intelligence and Information Systems / v.23 no.2 / pp.71-88 / 2017
  • Language models were originally developed for speech recognition and language processing. Using a set of example sentences, a language model predicts the next word or character from sequential input data. N-gram models have been widely used, but they cannot model the correlation between input units efficiently, since they are probabilistic models based on the frequency of each unit in the training set. Recently, as deep learning algorithms have developed, recurrent neural network (RNN) models and long short-term memory (LSTM) models have been widely used as neural language models (Ahn, 2016; Kim et al., 2016; Lee et al., 2016). These models can reflect dependencies between the objects entered sequentially into the model (Gers and Schmidhuber, 2001; Mikolov et al., 2010; Sundermeyer et al., 2012). To train a neural language model, texts need to be decomposed into words or morphemes. However, since a training set of sentences generally includes a huge number of words or morphemes, the dictionary becomes very large, which increases model complexity. In addition, word-level or morpheme-level models can generate only the vocabulary contained in the training set. Furthermore, with highly morphological languages such as Turkish, Hungarian, Russian, Finnish, or Korean, morpheme analyzers are more likely to cause errors in the decomposition process (Lankinen et al., 2016). Therefore, this paper proposes a phoneme-level language model for Korean based on LSTM models. A phoneme, such as a vowel or a consonant, is the smallest unit that composes Korean text. We construct the language model using three or four LSTM layers. Each model was trained using the stochastic gradient algorithm and more advanced optimization algorithms such as Adagrad, RMSprop, Adadelta, Adam, Adamax, and Nadam. A simulation study was conducted on Old Testament texts using the deep-learning package Keras with the Theano backend. After pre-processing, the dataset included 74 unique characters, including vowels, consonants, and punctuation marks. We then constructed input vectors of 20 consecutive characters, with the following 21st character as the output. In total, 1,023,411 input-output pairs were included in the dataset, which we divided into training, validation, and test sets in a 70:15:15 ratio. All simulations were conducted on a system equipped with an Intel Xeon CPU (16 cores) and an NVIDIA GeForce GTX 1080 GPU. We compared the loss evaluated on the validation set, the perplexity evaluated on the test set, and the time taken to train each model. As a result, all optimization algorithms except the stochastic gradient algorithm showed similar validation loss and perplexity, clearly superior to those of the stochastic gradient algorithm, which also took the longest training time for both the 3- and 4-layer LSTM models. On average, the 4-layer LSTM model took 69% longer to train than the 3-layer model; however, its validation loss and perplexity were not significantly improved, and even worsened under certain conditions. On the other hand, when comparing the automatically generated sentences, the 4-layer model tended to generate sentences closer to natural language than the 3-layer model. Although there were slight differences between the models in the completeness of the generated sentences, sentence generation performance was quite satisfactory under all simulation conditions: the models generated only legitimate Korean letters, and the use of postpositions and the conjugation of verbs were almost grammatically perfect. The results of this study are expected to be widely applied to Korean language processing and speech recognition, which are the basis of artificial intelligence systems.
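
The sketch below assembles a 3-layer phoneme-level LSTM language model in Keras consistent with the setup the abstract describes (74 unique symbols, 20-symbol context, next symbol as output); the layer widths, optimizer choice, and modern TensorFlow backend are assumptions, since the study used Keras on Theano.

```python
import numpy as np
from tensorflow.keras import layers, models

# Minimal 3-layer phoneme-level LSTM language model; layer widths are assumptions.
vocab, context = 74, 20

model = models.Sequential([
    layers.Input(shape=(context, vocab)),          # one-hot encoded phoneme window
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256),
    layers.Dense(vocab, activation="softmax"),     # distribution over the 21st symbol
])
model.compile(optimizer="adam", loss="categorical_crossentropy")

# Placeholder one-hot data standing in for the preprocessed Old Testament text.
x = np.eye(vocab)[np.random.randint(0, vocab, size=(512, context))]
y = np.eye(vocab)[np.random.randint(0, vocab, size=512)]
model.fit(x, y, epochs=1, batch_size=64, verbose=0)
```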