• Title/Summary/Keyword: Attention

The effect of focus of attention by electroencephalogram-feedback on balance in young adults

  • Lee, Dong-Yeop;Choi, Won-Jae;Lee, Seung-Won
    • Physical Therapy Rehabilitation Science
    • /
    • v.1 no.1
    • /
    • pp.13-16
    • /
    • 2012
  • Objective: Electroencephalogram (EEG)-feedback is a training procedure aimed at altering brain activity, and is used as a treatment for disorders of attention. The purpose of this study was to determine the effects of an external focus of attention induced by EEG-feedback on balance in young adults. Design: Cross-sectional study. Methods: Subjects were fifty students at Sahmyook University, young adults in their twenties and thirties. Subjects performed normal standing and tandem standing both with and without an external focus of attention induced by EEG-feedback. Under the internal focus of attention condition, participants were instructed to make an effort to maintain a static posture. The Good Balance System, a force-platform device, was used to measure postural consistency. Results: Body sway decreased significantly in both normal standing and tandem standing with an external focus of attention by EEG-feedback (p<0.05). Conclusions: The results demonstrate that the benefits of an external attentional focus are generalizable to young adults. The external focus of attention outperformed the internal focus of attention on postural balance (p<0.05). This suggests that an external focus of attention significantly affects balance by evoking automatic postural control of movement. Furthermore, balance might be improved by training with an external focus. Further study is required to develop this training as a method of preventing falls in elderly people.

Effects of mindfulness-based qigong for children's concentration ability (마음챙김 기공이 소아청소년의 주의집중력에 미치는 영향)

  • Hong, Soon-Sang;Cho, Seung-Hun
    • Journal of Oriental Neuropsychiatry
    • /
    • v.23 no.2
    • /
    • pp.49-58
    • /
    • 2012
  • Objectives: The purpose of this study was to examine the effects of mindfulness-based concentration qigong for children (MBCQ-C) in healthy children with subjectively poor attention. Methods: This study examined the effects of the MBCQ-C on healthy children with subjectively poor attention who visited a Korean medicine hospital neuropsychiatry outpatient clinic. The MBCQ-C was practiced with 11 participants; 2 of them quit in the middle of the program and were therefore excluded from data analysis. The MBCQ-C consisted of 8 sessions, each taking about 60 minutes. The outcome measure was the Frankfurter Aufmerksamkeits-Inventar (FAIR), which measured selective attention, self-control, and sustained attention. Results: Selective attention and sustained attention improved significantly. Self-control also improved, but without statistical significance. These results indicate that the MBCQ-C was effective for improving attention abilities, but self-control, which involves higher cognitive areas, needs more consistent practice. Conclusions: The MBCQ-C, consisting of 8 sessions, was shown to be an effective intervention for improving the attention abilities of healthy children with subjectively poor attention.

Developmental Trajectories of Attention in Normal Korean Population

  • Huh, Han Nah;Kang, Sung Hee;Hwang, Soon Young;Yoo, Hanik K.
    • Journal of the Korean Academy of Child and Adolescent Psychiatry
    • /
    • v.30 no.2
    • /
    • pp.66-73
    • /
    • 2019
  • Objectives: This study aimed to investigate the trajectories of change in diverse kinds of attention and working memory in Koreans from 4 to 40 years of age. Methods: The data of 912 subjects aged 4 to 15 years, obtained from a previous standardization study of a computerized comprehensive attention test, were merged with newly obtained data from 150 subjects aged 16 to 40 years. We evaluated various kinds of attention, with five indicators for each subtest. Working memory, with parameters such as the number of correct responses and span, was also measured. Results: Our findings indicated that attention developed with increasing age and then decreased or plateaued after a certain age. Selective and sustained attention developed rapidly in children and adolescents, ceasing development in the mid-teens or twenties. Divided attention, however, developed up to approximately age 20. In addition, working memory developed until the mid-teens or twenties. Conclusion: We present standardized data on diverse kinds of attention and working memory in children, adolescents, and adults in Korea, from which patterns of change in attention and working memory with increasing age can be recognized.

Time-Series Forecasting Based on Multi-Layer Attention Architecture

  • Na Wang;Xianglian Zhao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.1
    • /
    • pp.1-14
    • /
    • 2024
  • Time-series forecasting is extensively used in the real world. Recent research has shown that Transformers, with a self-attention mechanism at their core, exhibit better performance when dealing with such problems. However, most existing Transformer models used for time-series prediction adopt the traditional encoder-decoder architecture, which is complex and leads to low processing efficiency, limiting the ability to mine deep time dependencies by increasing model depth. In addition, the quadratic computational complexity of the self-attention mechanism increases computational overhead and further reduces processing efficiency. To address these issues, this paper designs an efficient multi-layer attention-based time-series forecasting model with the following characteristics: (i) it abandons the traditional encoder-decoder Transformer architecture and constructs a prediction model based on a multi-layer attention mechanism, improving the model's ability to mine deep time dependencies; (ii) a cross-attention module is designed to enhance information exchange between the historical and predictive sequences; (iii) a recently proposed sparse attention mechanism is applied to the model to reduce computational overhead and improve processing efficiency. Experiments on multiple datasets show that the model significantly outperforms current advanced Transformer methods for time-series forecasting, including LogTrans, Reformer, and Informer.
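
The cross-attention idea in this abstract, letting the predictive sequence query the historical sequence, can be illustrated with a minimal PyTorch sketch. Everything here (the module name, dimensions, and residual-plus-norm wiring) is an assumption for illustration, not the paper's implementation:

```python
# Minimal sketch: a cross-attention block where a predictive (forecast-side)
# sequence queries a historical sequence, as the abstract describes.
import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        # Queries come from the predictive sequence; keys/values from history.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, pred_seq: torch.Tensor, hist_seq: torch.Tensor) -> torch.Tensor:
        # pred_seq: (batch, pred_len, d_model); hist_seq: (batch, hist_len, d_model)
        out, _ = self.attn(query=pred_seq, key=hist_seq, value=hist_seq)
        return self.norm(pred_seq + out)  # residual connection

if __name__ == "__main__":
    hist = torch.randn(2, 96, 64)   # 96 historical time steps
    pred = torch.randn(2, 24, 64)   # 24 steps to forecast
    print(CrossAttentionBlock()(pred, hist).shape)  # torch.Size([2, 24, 64])
```

The sparse attention the paper applies would replace the dense `nn.MultiheadAttention` call above; its exact form is not specified in the abstract.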

Young Children's Behavioral Problems and Attention Ability by Parenting Attitude (부모의 양육태도에 따른 유아의 문제행동과 주의집중력)

  • Lee, Soeun
    • Korean Journal of Child Studies
    • /
    • v.28 no.2
    • /
    • pp.71-89
    • /
    • 2007
  • The subjects in this study of parenting attitudes, children's behavior problems, and attention ability were 111 five-year-old children and their parents. Data were analyzed by means, frequencies, percentages, three-way ANOVA, and Pearson's correlations. Results showed that children's behavior problems and attention ability varied with the parenting attitudes of mothers and fathers. Boys showed more behavior problems than girls, and boys' attention abilities were lower than girls'. Interaction effects between parenting attitudes and gender were found for children's behavior problems and attention ability: fathers' autonomy correlated negatively with boys' behavior problems (r=-.47), task processing speed (r=-.37), and attention inconsistency (r=-.36). Children's behavior problems correlated positively with attention inconsistency (r=.28).

Crack detection based on ResNet with spatial attention

  • Yang, Qiaoning;Jiang, Si;Chen, Juan;Lin, Weiguo
    • Computers and Concrete
    • /
    • v.26 no.5
    • /
    • pp.411-420
    • /
    • 2020
  • Deep convolutional neural networks (DCNNs) have been widely used in the health maintenance of civil infrastructure, and using DCNNs to improve crack detection performance has attracted many researchers' attention. In this paper, a light-weight spatial attention network module is proposed to strengthen the representation capability of ResNet and improve crack detection performance. It uses an attention mechanism to emphasize the objects of interest within the global receptive field of the ResNet convolution layers. Global average spatial information over all channels is used to construct an attention scalar, which is combined with an adaptively weighted sigmoid function to activate the output of each channel's feature maps, so that salient objects in the feature maps are refined. The proposed spatial attention module is stacked in ResNet50 to detect cracks. Experimental results show that the proposed module achieves a significant performance improvement in crack detection.
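
A minimal sketch of the kind of channel-averaged spatial attention this abstract describes, assuming a single learnable scale in place of the paper's "adaptively weighted sigmoid" (whose exact form the abstract does not give):

```python
# Minimal sketch: average feature maps across channels to get a spatial map,
# squash it through a learnably scaled sigmoid, and reweight every channel.
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self):
        super().__init__()
        # Learnable scale standing in for the paper's adaptive weighting.
        self.scale = nn.Parameter(torch.ones(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W)
        spatial = x.mean(dim=1, keepdim=True)        # (batch, 1, H, W)
        attn = torch.sigmoid(self.scale * spatial)   # spatial attention map
        return x * attn                              # refine all channels

if __name__ == "__main__":
    feats = torch.randn(1, 256, 56, 56)  # e.g., a ResNet stage output
    print(SpatialAttention()(feats).shape)  # torch.Size([1, 256, 56, 56])
```

Because the module adds only one parameter and an elementwise multiply, it can be stacked after each residual stage without materially changing ResNet50's cost, which matches the "light-weight" claim.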

Two-Dimensional Attention-Based LSTM Model for Stock Index Prediction

  • Yu, Yeonguk;Kim, Yoon-Joong
    • Journal of Information Processing Systems
    • /
    • v.15 no.5
    • /
    • pp.1231-1242
    • /
    • 2019
  • This paper presents a two-dimensional attention-based long short-term memory (2D-ALSTM) model for stock index prediction, incorporating input attention and temporal attention mechanisms that weight important stocks and important time steps, respectively. The proposed model is designed to overcome the long-term dependency, stock selection, and stock volatility delay problems that negatively affect existing models. The 2D-ALSTM model is validated in a comparative experiment against two attention-based models, multi-input LSTM (MI-LSTM) and the dual-stage attention-based recurrent neural network (DARNN), with real stock data used for training and evaluation. The model achieves performance superior to MI-LSTM and DARNN for stock index prediction on a KOSPI100 dataset.
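
The two attention stages can be sketched roughly as follows; the layer shapes, softmax placements, and all names are assumptions made for illustration, since the abstract does not specify them:

```python
# Minimal sketch: input attention weights stocks before the LSTM, temporal
# attention weights the LSTM's hidden states across time steps.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoDimensionalALSTM(nn.Module):
    def __init__(self, n_stocks: int = 100, hidden: int = 64):
        super().__init__()
        self.input_attn = nn.Linear(n_stocks, n_stocks)   # scores per stock
        self.lstm = nn.LSTM(n_stocks, hidden, batch_first=True)
        self.temporal_attn = nn.Linear(hidden, 1)         # scores per step
        self.out = nn.Linear(hidden, 1)                   # index prediction

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_stocks)
        alpha = F.softmax(self.input_attn(x), dim=-1)     # input attention
        h, _ = self.lstm(alpha * x)                       # (batch, time, hidden)
        beta = F.softmax(self.temporal_attn(h), dim=1)    # temporal attention
        context = (beta * h).sum(dim=1)                   # weighted summary
        return self.out(context)

if __name__ == "__main__":
    prices = torch.randn(8, 20, 100)  # batch of 8, 20 time steps, 100 stocks
    print(TwoDimensionalALSTM()(prices).shape)  # torch.Size([8, 1])
```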

Explaining the Translation Error Factors of Machine Translation Services Using Self-Attention Visualization (Self-Attention 시각화를 사용한 기계번역 서비스의 번역 오류 요인 설명)

  • Zhang, Chenglong;Ahn, Hyunchul
    • Journal of Information Technology Services
    • /
    • v.21 no.2
    • /
    • pp.85-95
    • /
    • 2022
  • This study analyzed the translation error factors of machine translation services such as Naver Papago and Google Translate through Self-Attention path visualization. Self-Attention is a key method of the Transformer and BERT NLP models and is now widely used in machine translation. We propose a method to explain the translation error factors of machine translation algorithms by comparing the Self-Attention paths of an ST (source text) and an ST' (a transformed ST whose meaning is unchanged but whose translation output is more accurate). This method provides the explainability needed to analyze a machine translation algorithm's internal process, which is otherwise invisible, like a black box. In our experiments, it was possible to explore the factors that caused translation errors by analyzing differences in the attention paths of key words. The study used the XLM-RoBERTa multilingual NLP model provided by exBERT for Self-Attention visualization and applied the method to two examples, a Korean-Chinese and a Korean-English translation.
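
Extracting the self-attention matrices needed for this kind of path comparison can be sketched with the public Hugging Face transformers API and the xlm-roberta-base checkpoint; the study itself worked through exBERT's visualization tooling, so this stand-in is only an assumption for illustration:

```python
# Minimal sketch: pull per-layer, per-head self-attention matrices for a
# source text (ST) and a meaning-preserving transformed text (ST').
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base", output_attentions=True)
model.eval()

def attention_stack(text: str) -> torch.Tensor:
    """Return attentions stacked as (layers, heads, seq, seq)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return torch.stack(outputs.attentions).squeeze(1)

st = attention_stack("source sentence")
st_prime = attention_stack("transformed source sentence")
# A real comparison would align tokens between ST and ST' before diffing
# the attention paths of key words; here we just report the shapes.
print(st.shape, st_prime.shape)
```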

Comparison of Pointer Network-based Dependency Parsers Depending on Attention Mechanisms (Attention Mechanism에 따른 포인터 네트워크 기반 의존 구문 분석 모델 비교)

  • Han, Mirae;Park, Seongsik;Kim, Harksoo
    • Annual Conference on Human and Language Technology
    • /
    • 2021.10a
    • /
    • pp.274-277
    • /
    • 2021
Dependency parsing is a natural language processing task that analyzes sentence structure by predicting the relations between dependents and heads within a sentence. Recent deep-learning-based dependency parsing research has mainly used pointer networks, whose performance can vary with the attention mechanism used internally. This paper therefore compares and analyzes the attention mechanisms applied to pointer network models and identifies the most effective attention mechanism for Korean dependency parsing. In experiments on the KLUE dataset, biaffine attention gave the highest UAS of 95.14%, while multi-head attention gave the highest LAS of 92.85%.
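
The biaffine attention scorer that gave the best UAS can be sketched in the standard Dozat-and-Manning style, scoring every dependent against every candidate head with a bilinear term plus a linear term; the shapes and names are illustrative assumptions, not the paper's code:

```python
# Minimal sketch: biaffine scoring of (dependent, head) pairs for parsing.
import torch
import torch.nn as nn

class BiaffineAttention(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.U = nn.Parameter(torch.randn(dim, dim) * 0.01)  # bilinear term
        self.w = nn.Linear(2 * dim, 1)                       # linear term

    def forward(self, dep: torch.Tensor, head: torch.Tensor) -> torch.Tensor:
        # dep, head: (batch, seq, dim); returns (batch, seq, seq) arc scores
        bilinear = dep @ self.U @ head.transpose(1, 2)
        n = dep.size(1)
        pairs = torch.cat(
            [dep.unsqueeze(2).expand(-1, -1, n, -1),
             head.unsqueeze(1).expand(-1, n, -1, -1)], dim=-1)
        return bilinear + self.w(pairs).squeeze(-1)

if __name__ == "__main__":
    dep = torch.randn(2, 10, 128)
    head = torch.randn(2, 10, 128)
    print(BiaffineAttention()(dep, head).shape)  # torch.Size([2, 10, 10])
```

In a pointer network parser, a softmax over the last dimension of these scores would point each dependent at its predicted head.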

Improving Adversarial Robustness via Attention (Attention 기법에 기반한 적대적 공격의 강건성 향상 연구)

  • Jaeuk Kim;Myung Gyo Oh;Leo Hyun Park;Taekyoung Kwon
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.33 no.4
    • /
    • pp.621-631
    • /
    • 2023
  • Adversarial training improves the robustness of deep neural networks against adversarial examples. However, previous adversarial training methods focus only on the adversarial loss function, ignoring the fact that even a small perturbation of the input layer causes a significant change in the hidden layer features. Consequently, the accuracy of a defended model is reduced in various untrained situations, such as clean samples or other attack techniques. An architectural perspective on improving feature representation power is therefore necessary to solve this problem. In this paper, we apply an attention module that generates an attention map of the input image to a general model and perform PGD adversarial training on the augmented model. In our experiments on the CIFAR-10 dataset, the attention-augmented model showed higher accuracy than the general model regardless of the network structure. In particular, the robust accuracy of our approach was consistently higher for various attacks, such as PGD, FGSM, and BIM, and for more powerful adversaries. By visualizing the attention map, we further confirmed that the attention module extracts features of the correct class even for adversarial examples.
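
The PGD adversarial training this abstract builds on relies on a standard inner loop that is easy to sketch. This is the textbook attack, not the authors' training code; the model is assumed to be any image classifier (here, the attention-augmented one):

```python
# Minimal sketch: PGD generates adversarial examples by iterated gradient
# ascent on the loss, projected back into an L-infinity ball around x.
import torch
import torch.nn as nn

def pgd_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
               eps: float = 8 / 255, alpha: float = 2 / 255,
               steps: int = 10) -> torch.Tensor:
    loss_fn = nn.CrossEntropyLoss()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()                # ascend the loss
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
        x_adv = x_adv.clamp(0, 1)                          # keep valid pixels
    return x_adv.detach()
```

During adversarial training, each clean batch would be replaced by `pgd_attack(model, x, y)` before the usual loss-and-backprop step.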