• Title/Summary/Keyword: Memory Improvement

Application of Cognitive Enhancement Protocol Based on Information & Communication Technology Program to Improve Cognitive Level of Older Adults Residents in Small-Sized City Community: A Pilot Study (중소도시 지역사회 거주 노인의 치매예방을 위한 Information & Communication Technology 프로그램 기반 인지향상 프로토콜 적용: 파일럿(Pilot) 연구)

  • Yun, Sohyeon;Lee, Hamin;Kim, Mi Kyeong;Park, Hae Yean
    • Therapeutic Science for Rehabilitation
    • /
    • v.12 no.2
    • /
    • pp.69-83
    • /
    • 2023
  • Objective: As a preliminary study, this research applied an Information & Communication Technology (ICT) home-based program to older adults aged 65 years or older to confirm the effect of the cognitive enhancement program and to explore the possibility of remote rehabilitation. Methods: From August to October 2022, three subjects were selected and the intervention was conducted for about two months. Cognitive improvement was evaluated before and after the program using the Korean version of the Mini-Mental State Examination, the Korean version of the Montreal Cognitive Assessment (MoCA-K), the Computer Cognitive Senior Assessment System (Cotras-pro), and the Center for Epidemiologic Studies Depression Scale. The therapist remotely set the level of cognitive training according to each subject's level through weekly feedback. Results: After the intervention, all subjects showed improved scores on most MoCA-K items. In addition, among the Cotras-pro items, higher-order cognition, language ability, attention, visual perception, and memory improved. Conclusion: Cognitive rehabilitation training using an ICT home-based program not only supported dementia prevention but also helped participants make the training habitual. This study confirmed that remote rehabilitation for older adults is feasible.

Added Value of Chemical Exchange-Dependent Saturation Transfer MRI for the Diagnosis of Dementia

  • Jang-Hoon Oh;Bo Guem Choi;Hak Young Rhee;Jin San Lee;Kyung Mi Lee;Soonchan Park;Ah Rang Cho;Chang-Woo Ryu;Key Chung Park;Eui Jong Kim;Geon-Ho Jahng
    • Korean Journal of Radiology
    • /
    • v.22 no.5
    • /
    • pp.770-781
    • /
    • 2021
  • Objective: Chemical exchange-dependent saturation transfer (CEST) MRI is sensitive for detecting solid-like proteins and may detect changes in the levels of mobile proteins and peptides in tissues. The objective of this study was to evaluate the characteristics of chemical exchange proton pools using the CEST MRI technique in patients with dementia. Materials and Methods: Our institutional review board approved this cross-sectional prospective study, and informed consent was obtained from all participants. This study included 41 subjects (19 with dementia and 22 without dementia). Complete CEST data of the brain were obtained using a three-dimensional gradient and spin-echo sequence to map CEST indices, such as amide, amine, hydroxyl, and magnetization transfer ratio asymmetry (MTRasym) values (the conventional definition is reproduced below), using six-pool Lorentzian fitting. Statistical analyses of the CEST indices were performed to evaluate group comparisons, their correlations with gray matter volume (GMV) and Mini-Mental State Examination (MMSE) scores, and receiver operating characteristic (ROC) curves. Results: Amine signals (0.029 for non-dementia vs. 0.046 for dementia, p = 0.011 at the hippocampus) and MTRasym values at 3 ppm (0.748 vs. 1.138, p = 0.022 at the hippocampus) and 3.5 ppm (0.463 vs. 0.875, p = 0.029 at the hippocampus) were significantly higher in the dementia group than in the non-dementia group. Most CEST indices were not significantly correlated with GMV; however, except for amide, most indices were significantly correlated with the MMSE scores. The classification power of most CEST indices was lower than that of GMV, but adding one of the CEST indices to GMV improved the classification between the subject groups. The largest improvement was seen with the MTRasym values at 2 ppm in the anterior cingulate (area under the ROC curve = 0.981), with a sensitivity of 100% and a specificity of 90.91%. Conclusion: CEST MRI may allow noninvasive imaging of alterations in the Alzheimer's disease brain, without isotope injection, for monitoring different disease states, and may provide a new imaging biomarker in the future.
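For reference, the MTRasym values reported above conventionally follow the standard asymmetry definition, computed from the saturated signal S at offsets ±Δω relative to water and the unsaturated signal S0. The abstract does not state whether a different normalization was used, so this is the textbook form rather than the authors' exact expression:

```latex
\mathrm{MTR}_{\mathrm{asym}}(\Delta\omega) \;=\; \frac{S(-\Delta\omega) - S(+\Delta\omega)}{S_{0}}
```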

An efficient interconnection network topology in dual-link CC-NUMA systems (이중 연결 구조 CC-NUMA 시스템의 효율적인 상호 연결망 구성 기법)

  • Suh, Hyo-Joong
    • The KIPS Transactions:PartA
    • /
    • v.11A no.1
    • /
    • pp.49-56
    • /
    • 2004
  • The performance of multiprocessor systems is limited by several factors: processor speed, memory delay, and interconnection network bandwidth/latency. With the evolution of semiconductor technology, off-the-shelf microprocessor speeds have moved beyond the gigahertz mark, and processors can be scaled up to multiprocessor systems by connecting them through interconnection networks. In this situation, system performance is bound by the latency and bandwidth of the interconnection network. SCI, Myrinet, and Gigabit Ethernet are widely adopted as high-speed interconnection links for high-performance cluster systems. Interconnection network performance can be improved by extending bandwidth and minimizing latency. Raising the operating clock speed is a simple way to achieve both, but the physical link distance makes a high-frequency clock difficult to attain, so system performance and scalability suffer from the interconnection network limitation. Duplicating the interconnection links is one solution to this bottleneck in scalable systems; the dual-ring SCI link structure is one example of such an improvement. In this paper, I propose a network topology and a transaction path algorithm that optimize latency and efficiency over the duplicated links (a generic illustration of path selection over duplicated ring links is sketched below). Simulation results show that the proposed structure achieves 1.05 to 1.11 times lower latency and 1.42 to 2.1 times faster execution compared to dual-ring systems.
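As background to the latency comparison, a minimal sketch of how a transaction path can be chosen over duplicated ring links in the baseline dual-ring case; the node count and hop-based latency model are illustrative assumptions, and this is not the paper's proposed topology or path algorithm, which the abstract does not detail.

```python
# Hedged sketch: choosing the shorter of two opposite-direction rings for a
# transaction in a dual-ring interconnect. Node count and the hop-latency
# model are illustrative assumptions only.
def dual_ring_hops(src: int, dst: int, n_nodes: int) -> int:
    """Return the minimum hop count using either of the two rings."""
    clockwise = (dst - src) % n_nodes          # hops on the clockwise ring
    counterclockwise = (src - dst) % n_nodes   # hops on the counter-clockwise ring
    return min(clockwise, counterclockwise)

# Example: in a 16-node dual-ring system, node 2 -> node 13 takes 5 hops
# on the counter-clockwise ring instead of 11 on the clockwise ring.
print(dual_ring_hops(2, 13, 16))  # -> 5
```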

Timing Driven Analytic Placement for FPGAs (타이밍 구동 FPGA 분석적 배치)

  • Kim, Kyosun
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.54 no.7
    • /
    • pp.21-28
    • /
    • 2017
  • Practical models of FPGA architectures, which include performance- and/or density-enhancing components such as carry chains, wide-function multiplexers, and memory/multiplier blocks, are being applied to academic FPGA placement tools that used to rely on simple imaginary models. Previously, techniques such as pre-packing and multi-layer density analysis were proposed to remedy issues related to such practical models, and wire length is effectively minimized during the initial analytic placement. Since timing, rather than wire length, should be optimized, most previous work takes timing constraints into account. However, instead of the initial analytic placement, timing-driven techniques are mostly applied to subsequent steps such as placement legalization and iterative improvement. This paper incorporates timing-driven techniques, which check whether the placement meets the timing constraints given in the standard SDC format and minimize the detected violations, into an existing analytic placer that implements pre-packing and multi-layer density analysis. First, a static timing analyzer is used to check the timing of the wire-length-minimized placement results. To minimize the detected violations, a function that minimizes the largest arrival time at the end points is added to the objective function of the analytic placer. Since each clock has a different period, this function is evaluated for each clock and added to the objective function. Because this function can unnecessarily shorten paths that have no violations, a second function that calculates and minimizes the largest negative slack at the end points is also proposed and compared (see the sketch below). Since the existing, non-timing-driven legalization is used before the timing analysis, any improvement in timing is entirely due to the functions added to the objective function. Experiments on twelve industrial examples show that the minimum-arrival-time function improves the worst negative slack by 15% on average, whereas the minimum-worst-negative-slack function improves the negative slacks by an additional 6% on average.
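The two timing penalty terms described above could be evaluated roughly as follows; how the terms are smoothed and weighted inside the analytic placer's objective is not given in the abstract, and the end-point data layout here is an illustrative assumption.

```python
# Hedged sketch of the two timing terms: (1) the largest arrival time at timing
# end points, evaluated per clock, and (2) the worst negative slack at end points.
def max_arrival_per_clock(endpoints):
    """endpoints: list of dicts with 'clock', 'arrival', 'required' times (ns)."""
    worst = {}
    for ep in endpoints:
        worst[ep["clock"]] = max(worst.get(ep["clock"], 0.0), ep["arrival"])
    return sum(worst.values())      # one max-arrival term per clock, summed

def worst_negative_slack(endpoints):
    slacks = [ep["required"] - ep["arrival"] for ep in endpoints]
    wns = min(slacks)
    return max(0.0, -wns)           # penalize only when a violation exists

endpoints = [{"clock": "clk0", "arrival": 4.2, "required": 5.0},
             {"clock": "clk1", "arrival": 9.6, "required": 8.0}]
# e.g., worst_negative_slack(endpoints) == 1.6 ns for the clk1 end point
```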

Neuroprotective Effects of Modified Yuldahanso-tang (MYH) in a Parkinson's Disease Mouse Model (MPTP로 유도된 Parkinson's disease 동물 모델에서 열다한소탕 가감방 (MYH)의 신경 세포 보호 효과)

  • Go, Ga-Yeon;Kim, Yoon-Ha;Ahn, Taek-Won
    • Journal of Sasang Constitutional Medicine
    • /
    • v.27 no.2
    • /
    • pp.270-287
    • /
    • 2015
  • Objectives: To evaluate the neuroprotective effects of modified Yuldahanso-tang (MYH) in a Parkinson's disease mouse model. Methods: 1) Four groups (8 mice per group) were used in this study. 2) The neuroprotective effect of MYH was examined in a Parkinson's disease mouse model: C57BL/6 mice were treated with 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP, 30 mg/kg/day, intraperitoneally) for 5 days. 3) The brains of two mice per group were removed and frozen at -20°C, and the striatum-substantia nigra region was separated. Protein content was measured by the Bradford method using a Bio-Rad protein assay kit, and dopamine was measured using a Mouse/Rat Dopamine ELISA Assay Kit. 4) The brains of two mice per group were removed, and TH immunohistochemistry was performed in the MPTP-induced Parkinson's disease mice to evaluate the neuroprotective effects of MYH on the striatum (ST) and substantia nigra pars compacta (SNpc). 5) Two mice from each group were anesthetized and the skulls were opened from the occipital to the frontal direction to remove the brains, which were stained in TTC solution for 20 minutes. 6) The water tank used for the Morris water maze test was filled with 28°C water, and a round platform 10 cm in diameter was installed for the mice to step onto. The trial was carried out once a day within 30 seconds, training the mice to climb onto the platform in the pool. 7) The brains of two mice from each group were fixed in 10% formaldehyde solution and infiltrated with paraffin. They were sectioned with a microtome and observed under an optical microscope after hematoxylin and eosin staining. 8) A round acrylic cylinder, open at the top, was filled with clean water, and the depression-model mice were forced to swim for 15 minutes. After 24 hours, the animals were placed in the same equipment and forced to swim for 5 minutes. 9) A convenient, simple, and accurate high-performance liquid chromatography (HPLC) method was established for the simultaneous determination of neurotransmitters in the MPTP-MYH group. Results: 1) MYH has a dopaminergic cell-protective effect against MPTP-induced injury in the striatum and substantia nigra pars compacta. 2) MYH inhibits the loss of tyrosine hydroxylase-immunoreactive (TH-IR) cells in the striatum and substantia nigra pars compacta after MPTP-induced injury in C57BL/6 mice. 3) MYH improves MPTP-induced memory deterioration in C57BL/6 mice, reducing the escape latency (time to find the platform) prolonged by MPTP injection in the Morris water maze test. 4) MYH has a hippocampal neuron-protective effect against MPTP-induced injury in C57BL/6 mice. 5) MYH improves MPTP-induced motor behavior deficits and depression in C57BL/6 mice, reducing the immobility prolonged by MPTP injection in the forced swimming test. 6) MYH increases serotonin production after MPTP-induced injury in C57BL/6 mice. Conclusions: This experiment suggests that the neuroprotective effect of MYH is mediated by increases in dopamine, TH-IR cells, hippocampal neurons, and serotonin. Furthermore, MYH may serve as a potential preventive or therapeutic agent for Parkinson's disease.

Feasibility of Deep Learning Algorithms for Binary Classification Problems (이진 분류문제에서의 딥러닝 알고리즘의 활용 가능성 평가)

  • Kim, Kitae;Lee, Bomi;Kim, Jong Woo
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.1
    • /
    • pp.95-108
    • /
    • 2017
  • Recently, AlphaGo, the Go-playing artificial intelligence program by Google DeepMind, won a landmark match against Lee Sedol. Many people thought that machines would not be able to beat a human at Go because, unlike chess, the number of possible move sequences exceeds the number of atoms in the universe, but the result was the opposite of what people predicted. After the match, artificial intelligence came into focus as a core technology of the fourth industrial revolution and attracted attention from various application domains. In particular, deep learning attracted attention as the core artificial intelligence technique used in the AlphaGo algorithm. Deep learning is already being applied to many problems and shows especially good performance in image recognition. It also performs well on high-dimensional data such as voice, images, and natural language, where it was difficult to obtain good performance with existing machine learning techniques. In contrast, it is difficult to find deep learning research on traditional business data and structured data analysis. In this study, we investigated whether the deep learning techniques studied so far can be used not only for recognition of high-dimensional data but also for binary classification problems in traditional business data analysis, such as customer churn analysis, marketing response prediction, and default prediction, and we compared the performance of deep learning techniques with that of traditional artificial neural network models. The experimental data are the telemarketing response data of a bank in Portugal. They contain input variables such as age, occupation, loan status, and the number of previous telemarketing contacts, and a binary target variable recording whether the customer intends to open an account. To evaluate the applicability of deep learning algorithms and techniques to binary classification, we compared the performance of various models using the CNN and LSTM algorithms and dropout, which are widely used in deep learning, with that of MLP models, a traditional artificial neural network architecture. However, since all network design alternatives cannot be tested, the experiment was conducted with restricted settings for the number of hidden layers, the number of neurons per hidden layer, the number of output filters, and the application of the dropout technique. The F1 score was used to evaluate how well the models classify the class of interest, instead of overall accuracy. The detailed methods for applying each deep learning technique were as follows (an illustrative sketch is given below). The CNN algorithm reads adjacent values around a given value and recognizes features, but the distance between business data fields matters little because each field is usually independent. In this experiment, we therefore set the CNN filter size to the number of fields so that the whole record is learned at once, and added a hidden layer to make decisions based on the extracted features. For the model with two LSTM layers, the second layer reads the input in the reverse direction of the first layer in order to reduce the influence of each field's position. For the dropout technique, neurons were dropped with a probability of 0.5 in each hidden layer. The experimental results show that the model with the highest F1 score was the CNN model using dropout, followed by the MLP model with two hidden layers using dropout. Several findings emerged from the experiment. First, models using dropout make slightly more conservative predictions than those without it and generally show better classification performance. Second, CNN models show better classification performance than MLP models; this is interesting because CNN performed well in binary classification problems, to which it has rarely been applied, as well as in fields where its effectiveness is already proven. Third, the LSTM algorithm seems unsuitable for binary classification problems because its training time is too long relative to the performance improvement. From these results, we can confirm that some deep learning algorithms can be applied to business binary classification problems.
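The configuration described above (a convolution whose filter spans all input fields, an extra hidden layer, and dropout with probability 0.5) might look roughly like the following Keras-style sketch; the field count, layer widths, optimizer, and placeholder data are illustrative assumptions rather than the paper's exact setup.

```python
# Hedged sketch of the CNN-with-dropout binary classifier described in the
# abstract. Field count, layer widths, and training settings are assumptions.
import numpy as np
from sklearn.metrics import f1_score
from tensorflow.keras import layers, models

n_fields = 16                                              # assumed number of input fields
X = np.random.rand(1000, n_fields, 1).astype("float32")    # placeholder features
y = np.random.randint(0, 2, size=(1000,))                  # placeholder binary target

model = models.Sequential([
    # The filter spans all fields at once, so one filter sees the whole record.
    layers.Conv1D(filters=32, kernel_size=n_fields, activation="relu",
                  input_shape=(n_fields, 1)),
    layers.Flatten(),
    layers.Dense(32, activation="relu"),   # extra hidden layer on the extracted features
    layers.Dropout(0.5),                   # dropout probability 0.5, as in the abstract
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

# Evaluate with the F1 score, as the study does, rather than overall accuracy.
pred = (model.predict(X, verbose=0) > 0.5).astype(int).ravel()
print("F1:", f1_score(y, pred))
```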

A Study of Anomaly Detection for ICT Infrastructure using Conditional Multimodal Autoencoder (ICT 인프라 이상탐지를 위한 조건부 멀티모달 오토인코더에 관한 연구)

  • Shin, Byungjin;Lee, Jonghoon;Han, Sangjin;Park, Choong-Shik
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.57-73
    • /
    • 2021
  • Maintenance and failure prevention through anomaly detection of ICT infrastructure are becoming important. System monitoring data is multidimensional time series data, and it is difficult to consider both the characteristics of multidimensional data and those of time series data. With multidimensional data, the correlation between variables must be considered, and existing probability-based, linear, and distance-based methods degrade due to the curse of dimensionality. In addition, time series data is usually preprocessed with sliding windows and time series decomposition for autocorrelation analysis; these techniques increase the dimensionality of the data, so they need to be supplemented. Anomaly detection is an old research field: statistical methods and regression analysis were used in the early days, and there are now active studies applying machine learning and artificial neural networks. Statistically based methods are difficult to apply when data is non-homogeneous and do not detect local outliers well. The regression approach learns a regression formula based on parametric statistics and detects anomalies by comparing predicted and actual values; its disadvantage is that performance drops when the model is not robust or the data contain noise or outliers, and it is restricted in that training data containing noise or outliers should not be used. An autoencoder based on artificial neural networks is trained to reproduce its input as closely as possible at its output. It has many advantages over existing probability and linear models, cluster analysis, and supervised learning: it can be applied to data that does not satisfy a probability distribution or linearity assumption, and it can learn without labeled training data. However, it is limited in identifying local outliers in multidimensional data, and the dimensionality of the data increases greatly due to the characteristics of time series data. In this study, we propose a Conditional Multimodal Autoencoder (CMAE) that enhances anomaly detection performance by considering local outliers and time series characteristics (an illustrative sketch is given below). First, we applied a Multimodal Autoencoder (MAE) to alleviate the limitation in identifying local outliers in multidimensional data. Multimodal models are commonly used to learn different types of inputs, such as voice and images; the modalities share the autoencoder's bottleneck and learn their correlations. In addition, a Conditional Autoencoder (CAE) was used to learn the characteristics of time series data effectively without increasing the dimensionality of the data. Conditional inputs usually use categorical variables, but in this study time was used as the condition to learn periodicity. The proposed CMAE model was verified by comparison with a Unimodal Autoencoder (UAE) and a Multimodal Autoencoder (MAE). The restoration performance for 41 variables was checked for the proposed model and the comparison models. Restoration performance differs by variable; the Memory, Disk, and Network modalities are restored well, with small loss values, in all three autoencoder models. The Process modality showed no significant difference across the three models, and the CPU modality showed the best performance in CMAE. ROC curves were prepared to evaluate the anomaly detection performance of the proposed and comparison models, and AUC, accuracy, precision, recall, and F1-score were compared. On all indicators, performance ranked in the order CMAE, MAE, UAE. In particular, the recall of CMAE was 0.9828, confirming that it detects nearly all anomalies. The model's accuracy improved to 87.12%, and the F1-score was 0.8883, which is considered suitable for anomaly detection. From a practical standpoint, the proposed model has an additional advantage beyond the performance improvement: techniques such as time series decomposition and sliding windows require managing extra procedures, and their dimensional increase can slow inference, whereas the proposed model is easy to apply to practical tasks in terms of inference speed and model management.
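A conditional multimodal autoencoder of the kind sketched in the abstract, with per-modality encoders and decoders sharing one bottleneck and a time condition concatenated at the bottleneck, might be structured as below; the modality names, dimensions, layer widths, and time encoding are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of a conditional multimodal autoencoder: each modality has its
# own encoder/decoder, all modalities share one bottleneck, and a time-of-day
# condition is concatenated at the bottleneck. All sizes are assumptions.
from tensorflow.keras import layers, Model

modal_dims = {"cpu": 10, "memory": 12, "disk": 9, "network": 10}  # assumed sizes
cond_dim = 2  # e.g., sin/cos encoding of time of day, used as the condition

inputs, encoded = {}, []
for name, dim in modal_dims.items():
    x_in = layers.Input(shape=(dim,), name=f"{name}_in")
    inputs[name] = x_in
    encoded.append(layers.Dense(8, activation="relu", name=f"{name}_enc")(x_in))

cond_in = layers.Input(shape=(cond_dim,), name="time_condition")
bottleneck = layers.Dense(16, activation="relu", name="shared_bottleneck")(
    layers.Concatenate()(encoded + [cond_in]))

outputs = []
for name, dim in modal_dims.items():
    h = layers.Dense(8, activation="relu", name=f"{name}_dec")(bottleneck)
    outputs.append(layers.Dense(dim, name=f"{name}_out")(h))

cmae = Model(inputs=list(inputs.values()) + [cond_in], outputs=outputs)
cmae.compile(optimizer="adam", loss="mse")
# Anomaly score at inference: reconstruction error summed over the modalities.
```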

Social Network-based Hybrid Collaborative Filtering using Genetic Algorithms (유전자 알고리즘을 활용한 소셜네트워크 기반 하이브리드 협업필터링)

  • Noh, Heeryong;Choi, Seulbi;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.2
    • /
    • pp.19-38
    • /
    • 2017
  • Collaborative filtering (CF) has been a popular algorithm for implementing recommender systems, and many prior studies have tried to improve its accuracy. Among them, some recent studies adopt a 'hybrid recommendation approach', which enhances conventional CF by using additional information. In this research, we propose a new hybrid recommender system that fuses CF with the results of social network analysis on trust and distrust relationship networks among users to enhance prediction accuracy. Our proposed algorithm is based on memory-based CF, but when calculating the similarity between users it considers not only the correlation of the users' numeric rating patterns but also the users' in-degree centrality values derived from the trust and distrust relationship networks. Specifically, it amplifies the similarity between a target user and a neighbor when the neighbor has higher in-degree centrality in the trust relationship network, and attenuates the similarity when the neighbor has higher in-degree centrality in the distrust relationship network (an illustrative sketch is given below). The algorithm considers four types of user relationships in total: direct trust, indirect trust, direct distrust, and indirect distrust. It uses four adjusting coefficients, which set the level of amplification or attenuation for the in-degree centrality values derived from the direct/indirect trust and distrust networks. To determine the optimal adjusting coefficients, genetic algorithms (GA) were adopted. Accordingly, we named the proposed algorithm SNACF-GA (Social Network Analysis-based CF using GA). To validate its performance, we used a real-world data set called the 'Extended Epinions dataset', provided by trustlet.org. The data set contains user responses (rating scores and reviews) after purchasing specific items (e.g., cars, movies, music, books) as well as trust/distrust relationship information indicating whom each user trusts or distrusts. The experimental system was developed mainly in Microsoft Visual Basic for Applications (VBA), with UCINET 6 used to calculate the in-degree centrality of the trust/distrust networks and Palisade Software's Evolver, a commercial genetic algorithm package, used for optimization. To examine the effectiveness of the proposed system more precisely, we adopted two comparison models. The first is conventional CF, which uses only users' explicit numeric ratings when calculating similarities and does not consider trust/distrust relationships at all. The second is SNACF (Social Network Analysis-based CF), which differs from SNACF-GA in that it considers only direct trust/distrust relationships and does not use GA optimization. Performance was evaluated using the average MAE (mean absolute error). The experiments showed that the optimal adjusting coefficients for direct trust, indirect trust, direct distrust, and indirect distrust were 0, 1.4287, 1.5, and 0.4615, respectively, which implies that distrust relationships between users are more important than trust relationships in recommender systems. In terms of recommendation accuracy, SNACF-GA (average MAE = 0.111943), which reflects both direct and indirect trust/distrust relationship information, outperformed conventional CF (average MAE = 0.112638) and also showed better accuracy than SNACF (average MAE = 0.112209). Paired-samples t-tests showed that the difference between SNACF-GA and conventional CF was statistically significant at the 1% level, and the difference between SNACF-GA and SNACF was significant at the 5% level. Our study found that trust/distrust relationships can be important information for improving the performance of recommendation algorithms; in particular, distrust relationship information had a greater impact on performance improvement, implying that we should pay more attention to distrust (negative) relationships than to trust (positive) ones when tracking and managing social relationships between users.
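A multiplicative form of the centrality-adjusted similarity is one plausible way to realize the amplification/attenuation described above; the abstract does not give the exact formula, so the expression below, the helper names, and the data layout are illustrative assumptions (only the four GA-optimized coefficient values are taken from the abstract).

```python
# Hedged sketch: adjusting a Pearson user similarity by the neighbor's in-degree
# centrality in trust/distrust networks with four adjusting coefficients.
# The multiplicative form is an assumption, not the paper's exact formula.
import numpy as np

def pearson_sim(r_u, r_v):
    """Pearson correlation over co-rated items (NaN marks missing ratings)."""
    mask = ~np.isnan(r_u) & ~np.isnan(r_v)
    if mask.sum() < 2:
        return 0.0
    a, b = r_u[mask], r_v[mask]
    if np.std(a) == 0 or np.std(b) == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])

def adjusted_sim(r_u, r_v, cent_v, coef):
    """cent_v: neighbor's in-degree centrality per network type;
    coef: adjusting coefficients (GA-optimized values quoted from the abstract)."""
    base = pearson_sim(r_u, r_v)
    amplify = 1 + coef["direct_trust"] * cent_v["direct_trust"] \
                + coef["indirect_trust"] * cent_v["indirect_trust"]
    attenuate = 1 + coef["direct_distrust"] * cent_v["direct_distrust"] \
                  + coef["indirect_distrust"] * cent_v["indirect_distrust"]
    return base * amplify / attenuate

coef = {"direct_trust": 0, "indirect_trust": 1.4287,
        "direct_distrust": 1.5, "indirect_distrust": 0.4615}
```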

Functional recovery after transplantation of mouse bone marrow-derived mesenchymal stem cells for hypoxic-ischemic brain injury in immature rats (저산소 허혈 뇌 손상을 유발시킨 미성숙 흰쥐에서 마우스 골수 기원 중간엽 줄기 세포 이식 후 기능 회복)

  • Choi, Wooksun;Shin, Hye Kyung;Eun, So-Hee;Kang, Hoon Chul;Park, Sung Won;Yoo, Kee Hwan;Hong, Young Sook;Lee, Joo Won;Eun, Baik-Lin
    • Clinical and Experimental Pediatrics
    • /
    • v.52 no.7
    • /
    • pp.824-831
    • /
    • 2009
  • Purpose: We aimed to investigate the efficacy of, and functional recovery after, intracerebral transplantation of different doses of mouse mesenchymal stem cells (mMSCs) in the immature rat brain with hypoxic-ischemic encephalopathy (HIE). Methods: Postnatal day 7 Sprague-Dawley rats that had undergone a unilateral HI operation were given stereotaxic intracerebral injections of either vehicle or mMSCs and then tested for locomotor activity in the 2nd, 4th, 6th, and 8th weeks after the stem cell injection. In the 8th week, the Morris water maze test was performed for one week to evaluate learning and memory dysfunction. Results: In the open field test, no differences were observed in the total distance/total duration (F=0.412, P=0.745) among the 4 study groups. In the invisible-platform Morris water maze test, significant differences were observed in escape latency (F=380.319, P<0.01) among the 4 groups. The escape latency in the control group differed significantly from that in the high-dose mMSC and/or sham groups on training days 2-5 (Scheffe's test, P<0.05), and the difference became more prominent with time (F=6.034, P<0.01). In the spatial probe trial and the visible-platform Morris water maze test, no significant improvement was observed in the transplanted rats. Conclusion: Although only the rats that received a high dose of mMSCs showed significant recovery, and only in the learning-related behavioral test, our data support that mMSCs may be a valuable source for improving outcome in HIE. Further study is necessary to identify the optimal dose with maximal efficacy for HIE treatment.

Ethyl acetate fraction from Pteridium aquilinum ameliorates cognitive impairment in high-fat diet-induced diabetic mice (고지방 식이로 유도된 실험동물의 당뇨성 인지기능 장애에 대한 고사리 아세트산에틸 분획물의 개선효과)

  • Kwon, Bong Seok;Guo, Tian Jiao;Park, Seon Kyeong;Kim, Jong Min;Kang, Jin Yong;Park, Sang Hyun;Kang, Jeong Eun;Lee, Chang Jun;Lee, Uk;Heo, Ho Jin
    • Korean Journal of Food Science and Technology
    • /
    • v.49 no.6
    • /
    • pp.649-658
    • /
    • 2017
  • The potential of the ethyl acetate fraction from Pteridium aquilinum (EFPA) to improve cognitive function in high-fat diet (HFD)-induced diabetic mice was investigated. EFPA treatment resulted in a significant improvement in spatial learning and memory abilities compared to the HFD group in behavioral tests, including the Y-maze, passive avoidance, and Morris water maze tests. Diabetic symptoms in the EFPA-treated groups, such as fasting glucose and glucose tolerance, were alleviated. The administration of EFPA reduced acetylcholinesterase (AChE) activity and malondialdehyde (MDA) content in the mouse brain, while increasing acetylcholine (ACh) and superoxide dismutase (SOD) levels. Finally, kaempferol-3-O-glucoside, a major physiological component of EFPA, was identified using high-performance liquid chromatography coupled with a hybrid triple quadrupole-linear ion trap mass spectrometer (QTRAP LC-MS/MS).