• Title/Summary/Keyword: data memory


Effects of a High-Intensity Interval Physical Exercise Program on Cognition, Physical Performance, and Electroencephalogram Patterns in Korean Elderly People: A Pilot Study

  • Sun Min Lee;Muncheong Choi;Buong-O Chun;Kyunghwa Sun;Ki Sub Kim;Seung Wan Kang;Hong-Sun Song;So Young Moon
    • Dementia and Neurocognitive Disorders
    • /
    • v.21 no.3
    • /
    • pp.93-102
    • /
    • 2022
  • Background and Purpose: The effects of high-intensity interval training (HIIT) interventions on functional brain changes in older adults remain unclear. This preliminary study aimed to explore the effect of a physical exercise intervention (PEI), including HIIT, on cognitive function, physical performance, and electroencephalogram patterns in elderly Korean people. Methods: We enrolled six participants without dementia, aged >65 years, from a community health center. The PEI was conducted at the community health center for 4 weeks, three times/week, 50 min/day. The PEI, including HIIT, involved aerobic exercise, resistance training (muscle strength), flexibility, and balance. The Wilcoxon signed-rank test was used for data analysis. Results: After the PEI, there was improvement in the 30-second sit-to-stand test result (16.2±7.0 times vs. 24.8±5.5 times, p=0.027), 2-minute stationary march result (98.3±27.2 times vs. 143.7±36.9 times, p=0.027), T-wall response time (104.2±55.8 seconds vs. 71.0±19.4 seconds, p=0.028), memory score (89.6±21.6 vs. 111.0±19.1, p=0.028), executive function score (33.3±5.3 vs. 37.0±5.1, p=0.046), and total Literacy Independent Cognitive Assessment score (214.6±30.6 vs. 241.6±22.8, p=0.028). Electroencephalography demonstrated that beta power in the frontal region increased, while theta power in the temporal region decreased (all p<0.05). Conclusions: Our HIIT PEI program effectively improved cognitive function, physical fitness, and electroencephalographic markers in elderly individuals; thus, it could be beneficial for improving functional brain activity in this population.
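
As a quick illustration of the analysis named in the abstract above (not the authors' code), the sketch below runs a paired Wilcoxon signed-rank test in Python on placeholder pre/post scores for six hypothetical participants.

```python
# Hedged sketch: a paired, non-parametric pre/post comparison with the
# Wilcoxon signed-rank test. The numbers are illustrative placeholders.
import numpy as np
from scipy.stats import wilcoxon

pre = np.array([15, 10, 20, 18, 12, 22])    # hypothetical pre-intervention scores
post = np.array([24, 19, 27, 25, 21, 33])   # hypothetical post-intervention scores

stat, p_value = wilcoxon(pre, post)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.3f}")
```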

Development of a Real-Time Mobile GIS using the HBR-Tree (HBR-Tree를 이용한 실시간 모바일 GIS의 개발)

  • Lee, Ki-Yang;Yun, Jae-Kwan;Han, Ki-Joon
    • Journal of Korea Spatial Information System Society
    • /
    • v.6 no.1 s.11
    • /
    • pp.73-85
    • /
    • 2004
  • Recently, with the growth of the wireless Internet, PDAs, and HPCs, the focus of research and development related to GIS (Geographic Information System) has shifted to the Real-Time Mobile GIS for serving LBS (location-based services). To offer LBS efficiently, there must be a Real-Time GIS platform that can deal with the dynamic status of moving objects and a location index that can deal with the characteristics of location data. Location data can use the same data types (e.g., point) as GIS, but the management of location data is very different. Therefore, in this paper, we studied a Real-Time Mobile GIS that uses the HBR-tree to manage a mass of location data efficiently. The Real-Time Mobile GIS developed in this paper consists of the HBR-tree and the Real-Time GIS platform. The HBR-tree proposed in this paper is a combined index of the R-tree and a spatial hash. Although location data are updated frequently, update operations are done within the same hash table in the HBR-tree, so they cost less than in other tree-based indexes. Since the HBR-tree uses the same search mechanism as the R-tree, it is possible to search location data quickly. The Real-Time GIS platform consists of a Real-Time GIS engine extended from a main-memory database system, a middleware that can transfer spatial and aspatial data to clients and receive location data from clients, and a mobile client that operates on mobile devices. In particular, this paper describes the performance evaluation conducted with practical tests of the HBR-tree and the Real-Time GIS engine, respectively.
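
The abstract above combines a spatial hash with an R-tree so that frequent location updates stay inside one hash bucket. The sketch below is a simplified stand-in, not the paper's HBR-tree: a uniform grid hash whose buckets hold plain dictionaries instead of R-tree nodes, enough to show why in-bucket updates are cheap and how a range search visits only overlapping cells. The cell size and object ids are assumptions.

```python
# Hedged sketch of a hash-bucketed location index (assumption, not the HBR-tree).
from collections import defaultdict

CELL = 100.0  # grid cell size in map units (hypothetical)

class GridLocationIndex:
    def __init__(self):
        self.buckets = defaultdict(dict)   # (cx, cy) -> {object_id: (x, y)}
        self.where = {}                    # object_id -> (cx, cy)

    def _cell(self, x, y):
        return (int(x // CELL), int(y // CELL))

    def update(self, obj_id, x, y):
        """Move an object; cheap when it stays within the same bucket."""
        new_cell = self._cell(x, y)
        old_cell = self.where.get(obj_id)
        if old_cell is not None and old_cell != new_cell:
            del self.buckets[old_cell][obj_id]
        self.buckets[new_cell][obj_id] = (x, y)
        self.where[obj_id] = new_cell

    def range_search(self, xmin, ymin, xmax, ymax):
        """Return ids of objects inside the query rectangle."""
        cx0, cy0 = self._cell(xmin, ymin)
        cx1, cy1 = self._cell(xmax, ymax)
        hits = []
        for cx in range(cx0, cx1 + 1):
            for cy in range(cy0, cy1 + 1):
                for obj_id, (x, y) in self.buckets.get((cx, cy), {}).items():
                    if xmin <= x <= xmax and ymin <= y <= ymax:
                        hits.append(obj_id)
        return hits

index = GridLocationIndex()
index.update("bus-7", 130.0, 240.0)
index.update("bus-7", 135.0, 242.0)        # same cell: no bucket move needed
print(index.range_search(100, 200, 200, 300))
```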


EEG based Cognitive Load Measurement for e-learning Application (이러닝 적용을 위한 뇌파기반 인지부하 측정)

  • Kim, Jun;Song, Ki-Sang
    • Korean Journal of Cognitive Science
    • /
    • v.20 no.2
    • /
    • pp.125-154
    • /
    • 2009
  • This paper describes the possibility of using human physiological data, especially brain-wave activity, to detect cognitive overload, a phenomenon that may occur while a learner uses an e-learning system. If cognitive overload is found to be detectable, providing appropriate feedback to learners may be possible. To illustrate this possibility, cognitive load levels were measured by EEG (electroencephalogram) while participants engaged in cognitive activities, with the aim of detecting cognitive overload. The task given to learners was a computerized listening and recall test designed to measure working memory capacity, with four progressively increasing degrees of difficulty. Eight male, right-handed university students were asked to answer 4 sets of tests, and each test took from 61 to 198 seconds. Correct-answer ratios were then calculated and the EEG results analyzed. The correct-answer ratios of the listening and recall tests were 84.5%, 90.6%, 62.5%, and 56.3%, respectively, and the effect of the degree of difficulty was statistically significant. These data highlighted learner cognitive overload at test levels 3 and 4, the higher-level tests. Second, the SEF-95% value was greater on tests 3 and 4 than on tests 1 and 2, indicating that tests 3 and 4 imposed a greater cognitive load on participants. Third, the relative power of the EEG gamma wave increased rapidly on the 3rd and 4th tests, and signals from channels F3, F4, C4, F7, and F8 showed statistical significance. These five channels surround the brain's Broca area, and a brain-mapping analysis found that F8, in the right half of the brain, was activated in relation to the degree of difficulty. Lastly, a cross-correlation analysis showed a greater increase in synchronization on tests 3 and 4 than on tests 1 and 2. From these findings, it is possible to measure brain cognitive load and cognitive overload via brain activity, which may provide a timely feedback scheme for e-learning systems.
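
For readers who want to see how two of the EEG measures mentioned above (relative gamma-band power and the SEF-95% value) can be computed, the sketch below uses a Welch power spectral density from SciPy on a synthetic single-channel signal; the sampling rate, band limits, and channel choice are assumptions, not details from the paper.

```python
# Hedged sketch: relative gamma power and SEF-95 from one EEG channel.
import numpy as np
from scipy.signal import welch

fs = 256                                   # hypothetical sampling rate (Hz)
eeg = np.random.randn(fs * 60)             # placeholder for one channel (e.g., F8)

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

gamma = (freqs >= 30) & (freqs <= 45)      # assumed gamma band limits
relative_gamma = psd[gamma].sum() / psd.sum()

cum = np.cumsum(psd) / psd.sum()
sef95 = freqs[np.searchsorted(cum, 0.95)]  # spectral edge frequency (95%)

print(f"relative gamma power = {relative_gamma:.3f}, SEF-95 = {sef95:.1f} Hz")
```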


A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems (방출단층촬영 시스템을 위한 GPU 기반 반복적 기댓값 최대화 재구성 알고리즘 연구)

  • Ha, Woo-Seok;Kim, Soo-Mee;Park, Min-Jae;Lee, Dong-Soo;Lee, Jae-Sung
    • Nuclear Medicine and Molecular Imaging
    • /
    • v.43 no.5
    • /
    • pp.459-467
    • /
    • 2009
  • Purpose: Maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Materials and Methods: Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), NVIDIA's parallel computing technology, the projection and backprojection steps of the ML-EM algorithm were parallelized. The time spent per iteration on computing the projection, the errors between measured and estimated data, and the backprojection was measured. Total time included the latency of data transmission between RAM and GPU memory. Results: The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case, the computing speed was improved about 15-fold on the GPU. When the number of iterations was increased to 1,024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. The improvement was about 135-fold and was caused by delays in the CPU-based computation after a certain number of iterations. On the other hand, the GPU-based computation showed very little variation in time per iteration, owing to the use of shared memory. Conclusion: GPU-based parallel computation for ML-EM significantly improved computing speed and stability. The developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries.
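
The sketch below is not the paper's CUDA implementation; it writes the ML-EM update on the CPU with NumPy to make explicit the projection, ratio, and backprojection steps that the authors parallelize on the GPU. The system-matrix size and data are toy placeholders.

```python
# Hedged sketch: CPU ML-EM iteration in NumPy (toy geometry, not ECT-specific).
import numpy as np

def ml_em(A, y, n_iter=32, eps=1e-12):
    """A: system matrix (detector bins x image voxels), y: measured counts."""
    x = np.ones(A.shape[1])                # initial image estimate
    sensitivity = A.sum(axis=0) + eps      # backprojection of ones
    for _ in range(n_iter):
        projection = A @ x + eps           # forward projection
        ratio = y / projection             # measured / estimated data
        x *= (A.T @ ratio) / sensitivity   # backproject ratio and normalize
    return x

# Tiny toy problem (hypothetical sizes).
rng = np.random.default_rng(0)
A = rng.random((64, 16))
true_image = rng.random(16)
y = rng.poisson(A @ true_image * 50)
print(ml_em(A, y)[:4])
```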

The study of stereoscopic editing process with applying depth information (깊이정보를 활용한 입체 편집 프로세스 연구)

  • Baek, Kwang-Ho;Kim, Min-Seo;Han, Myung-Hee
    • Journal of Digital Contents Society
    • /
    • v.13 no.2
    • /
    • pp.225-233
    • /
    • 2012
  • 3D stereoscopic image contents have been emerging as the blue chip of the next-generation contents market. However, all the 3D contents created commercially in Korea have failed at the box office, both because the quality of Korean 3D contents is much lower than that of overseas contents and because the current 3D post-production process is based on 2D. Considering these facts, the 3D editing process is closely connected with the quality of the contents. The current 3D editing process edits on a 2D-based system, then checks the result on a 3D display system and makes corrections if problems are found. To improve this situation, I suggest that the 3D editing process gain more objectivity by visualizing the depth data already used in compositing work, such as the disparity map and depth map, within the current 3D editing process. The proposed process was applied to a music drama and compared with the workflow of a film. The 3D values could be checked among cuts that had changed considerably compared with those of the film, while the 3D value of the music drama was generally consistent. Since the current process depends on an artist's subjective sense of 3D, its result can change with the artist's condition and state. Furthermore, the positive parallax range cannot be predicted, so the cubic effect of a space may be distorted when cuts in the same or a limited space show different 3D values. In contrast, objective 3D editing based on the visualization of depth data can keep the cubic effect consistent within the same space and across the whole content, which will enrich the 3D contents and can even resolve problems such as distortion of the cubic effect and visual fatigue.
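
As a hedged illustration of what "visualizing the depth data" can mean in practice (not the author's pipeline), the sketch below summarizes the disparity range of each cut so that the 3D values of cuts in the same space can be compared numerically; the frame resolution, disparity ranges, and sign convention are assumptions.

```python
# Hedged sketch: per-cut disparity statistics for objective 3D-value checks.
import numpy as np

def disparity_summary(disparity_map):
    """Return min/mean/max disparity in pixels for one frame or cut."""
    d = np.asarray(disparity_map, dtype=float)
    return float(d.min()), float(d.mean()), float(d.max())

# Placeholder disparity maps for two cuts of the same scene (sign convention hypothetical).
cut_a = np.random.uniform(-8, 20, size=(540, 960))
cut_b = np.random.uniform(-6, 35, size=(540, 960))

for name, cut in [("cut A", cut_a), ("cut B", cut_b)]:
    lo, mean, hi = disparity_summary(cut)
    print(f"{name}: disparity {lo:.1f} .. {hi:.1f} px (mean {mean:.1f})")
```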

Incremental Ensemble Learning for The Combination of Multiple Models of Locally Weighted Regression Using Genetic Algorithm (유전 알고리즘을 이용한 국소가중회귀의 다중모델 결합을 위한 점진적 앙상블 학습)

  • Kim, Sang Hun;Chung, Byung Hee;Lee, Gun Ho
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.7 no.9
    • /
    • pp.351-360
    • /
    • 2018
  • The LWR (locally weighted regression) model is traditionally a lazy learning model designed to obtain a prediction for an input variable, the query point; it is a kind of regression equation over a short interval, learned by giving higher weights to points closer to the query point. We study an incremental ensemble learning approach for LWR, a form of lazy, memory-based learning. The proposed incremental ensemble learning method sequentially generates and integrates LWR models over time, using a genetic algorithm to obtain the solution at a specific query point. A weakness of existing LWR approaches is that multiple LWR models can be generated depending on the indicator function and the selected data samples, and the quality of the predictions can vary with the chosen model; however, no research has addressed the selection or combination of multiple LWR models. In this study, after generating an initial LWR model according to an indicator function and a sample data set, we iterate an evolutionary learning process to obtain a proper indicator function and assess the LWR models on other sample data sets to overcome data-set bias. We adopt an eager learning approach, gradually generating and storing an LWR model whenever data are generated for each section. To obtain a prediction at a specific point in time, an LWR model is generated from newly generated data within a predetermined interval and then combined with the existing LWR models of that section using a genetic algorithm. The proposed method shows better results than combining multiple LWR models by a simple averaging method. The results of this study are compared with predictions from multiple regression analysis using real data such as hourly traffic volume in a specific area and hourly sales at a highway rest area.
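
The sketch below illustrates the building blocks the abstract refers to rather than the proposed method itself: a Gaussian-kernel locally weighted regression fit at a query point, and the simple-average combination of several LWR models that the GA-based combination is compared against. Bandwidths and data are hypothetical.

```python
# Hedged sketch: LWR prediction at a query point plus a simple-average ensemble.
import numpy as np

def lwr_predict(x_query, X, y, bandwidth=1.0):
    """Predict y at x_query with Gaussian-kernel locally weighted regression."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.exp(-((X - x_query) ** 2) / (2 * bandwidth ** 2))   # kernel weights
    Xd = np.column_stack([np.ones_like(X), X])                 # design matrix
    W = np.diag(w)
    beta = np.linalg.pinv(Xd.T @ W @ Xd) @ Xd.T @ W @ y        # weighted least squares
    return beta[0] + beta[1] * x_query

# Hypothetical hourly-traffic-style data; three LWR "models" differ only by bandwidth.
rng = np.random.default_rng(1)
X = np.linspace(0, 24, 100)
y = 50 + 30 * np.sin(X / 24 * 2 * np.pi) + rng.normal(0, 5, X.size)

query = 13.5
preds = [lwr_predict(query, X, y, bandwidth=b) for b in (0.5, 1.0, 2.0)]
print("simple-average ensemble prediction:", np.mean(preds))
```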

Data collection strategy for building rainfall-runoff LSTM model predicting daily runoff (강수-일유출량 추정 LSTM 모형의 구축을 위한 자료 수집 방안)

  • Kim, Dongkyun;Kang, Seokkoo
    • Journal of Korea Water Resources Association
    • /
    • v.54 no.10
    • /
    • pp.795-805
    • /
    • 2021
  • In this study, after developing an LSTM-based deep learning model for estimating daily runoff in the Soyang River Dam basin, the accuracy of the model was investigated for various combinations of model structure and input data. The model was built on a database consisting of daily average precipitation, daily average temperature, and daily average wind speed (inputs) and daily average flow rate (output) during the first 12 years (1997.1.1-2008.12.31). The Nash-Sutcliffe model efficiency coefficient (NSE) and RMSE were examined for validation using the flow discharge data of the latter 12 years (2009.1.1-2020.12.31). The combination that showed the highest accuracy used all available input data (12 years of daily precipitation, temperature, and wind speed) with an LSTM structure of 64 hidden units; the NSE and RMSE for the verification period were 0.862 and 76.8 m³/s, respectively. When the number of LSTM hidden units exceeds 500, performance degradation due to overfitting begins to appear, and when it exceeds 1,000, the overfitting problem becomes prominent. A model with very high performance (NSE = 0.8-0.84) could be obtained when only 12 years of daily precipitation were used for training, and a model with reasonably high performance (NSE = 0.63-0.85) could be obtained when only one year of input data was used. In particular, an accurate model (NSE = 0.85) could be obtained if that one year of training data contained a wide range of flow events, such as extreme flows and droughts as well as normal events. When the training data included both normal and extreme flow rates, input data longer than 5 years did not significantly improve model performance.
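
A minimal sketch (not the authors' code) of the kind of model the abstract describes: a Keras LSTM with 64 hidden units, three daily inputs (precipitation, temperature, wind speed), and one runoff output, plus the NSE metric used for validation. The 30-day window length and the random training data are assumptions.

```python
# Hedged sketch: rainfall-runoff LSTM with 64 hidden units and an NSE metric.
import numpy as np
import tensorflow as tf

SEQ_LEN, N_FEATURES = 30, 3                       # hypothetical 30-day input window

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, N_FEATURES)),  # precip, temperature, wind speed
    tf.keras.layers.LSTM(64),                     # 64 hidden units as in the paper
    tf.keras.layers.Dense(1),                     # daily runoff
])
model.compile(optimizer="adam", loss="mse")

def nse(obs, sim):
    """Nash-Sutcliffe model efficiency coefficient."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Placeholder training call with random data, just to show the tensor shapes.
X = np.random.rand(256, SEQ_LEN, N_FEATURES)
y = np.random.rand(256, 1)
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
print("NSE on placeholder data:", nse(y.ravel(), model.predict(X, verbose=0).ravel()))
```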

A Proposal for Archives Securing Community Memory: The Achievements and Limitations of the GPH Archives (공동체의 기억을 담는 아카이브를 지향하며 20세기민중생활사연구단 아카이브의 성과와 과제)

  • Kim, Joo-Kwan
    • The Korean Journal of Archival Studies
    • /
    • no.33
    • /
    • pp.85-112
    • /
    • 2012
  • The Group for the People without History (GPH) was launched in September 2002 and worked for around five years with the following purposes. First, GPH collects first-hand data on people's everyday lives based on fieldwork. Second, GPH constructs digital archives of the collected data. Third, GPH guarantees people's access to the archives. Lastly, GPH promotes the use of the archived data at various levels. GPH has influenced the construction of archives on everyday life history as well as research areas such as anthropology and social history. What is important is that GPH tried to construct digital archives even before awareness of archives had spread widely in Korea beyond the formal sector. Furthermore, the GPH archives proposed a model of open archives that encouraged people's participation in and use of the archives. GPH also showed the ways in which archived data could be used: on the basis of the archived data, it published forty-seven books of people's life histories and five photographic books and held six photographic exhibitions. Although the GPH archives, as a leading civilian archive, contributed to igniting discussions on archives in various areas, they have a few limitations. The most important problem is that the data are vanishing too fast for researchers to collect; it is impossible for researchers to collect all of the data. Second, the physical space and hardware for data storage must be secured. One alternative for solving the problems revealed in the work of GPH is to construct community archives: decentralized archives run by the people themselves to preserve their own voices and history. This will help guarantee the democratization of archives.

Design and Implementation of MongoDB-based Unstructured Log Processing System over Cloud Computing Environment (클라우드 환경에서 MongoDB 기반의 비정형 로그 처리 시스템 설계 및 구현)

  • Kim, Myoungjin;Han, Seungho;Cui, Yun;Lee, Hanku
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.71-84
    • /
    • 2013
  • Log data, which record the multitude of information created when operating computer systems, are utilized in many processes, from carrying out computer system inspection and process optimization to providing customized user optimization. In this paper, we propose a MongoDB-based unstructured log processing system in a cloud environment for processing the massive amount of log data of banks. Most of the log data generated during banking operations come from handling a client's business. Therefore, in order to gather, store, categorize, and analyze the log data generated while processing the client's business, a separate log data processing system needs to be established. However, the realization of flexible storage expansion functions for processing a massive amount of unstructured log data and executing a considerable number of functions to categorize and analyze the stored unstructured log data is difficult in existing computer environments. Thus, in this study, we use cloud computing technology to realize a cloud-based log data processing system for processing unstructured log data that are difficult to process using the existing computing infrastructure's analysis tools and management system. The proposed system uses the IaaS (Infrastructure as a Service) cloud environment to provide a flexible expansion of computing resources and includes the ability to flexibly expand resources such as storage space and memory under conditions such as extended storage or rapid increase in log data. Moreover, to overcome the processing limits of the existing analysis tool when a real-time analysis of the aggregated unstructured log data is required, the proposed system includes a Hadoop-based analysis module for quick and reliable parallel-distributed processing of the massive amount of log data. Furthermore, because the HDFS (Hadoop Distributed File System) stores data by generating copies of the block units of the aggregated log data, the proposed system offers automatic restore functions for the system to continually operate after it recovers from a malfunction. Finally, by establishing a distributed database using the NoSQL-based Mongo DB, the proposed system provides methods of effectively processing unstructured log data. Relational databases such as the MySQL databases have complex schemas that are inappropriate for processing unstructured log data. Further, strict schemas like those of relational databases cannot expand nodes in the case wherein the stored data are distributed to various nodes when the amount of data rapidly increases. NoSQL does not provide the complex computations that relational databases may provide but can easily expand the database through node dispersion when the amount of data increases rapidly; it is a non-relational database with an appropriate structure for processing unstructured data. The data models of the NoSQL are usually classified as Key-Value, column-oriented, and document-oriented types. Of these, the representative document-oriented data model, MongoDB, which has a free schema structure, is used in the proposed system. MongoDB is introduced to the proposed system because it makes it easy to process unstructured log data through a flexible schema structure, facilitates flexible node expansion when the amount of data is rapidly increasing, and provides an Auto-Sharding function that automatically expands storage. 
The proposed system is composed of a log collector module, a log graph generator module, a MongoDB module, a Hadoop-based analysis module, and a MySQL module. When the log data generated over the entire client business process of each bank are sent to the cloud server, the log collector module collects and classifies data according to the type of log data and distributes it to the MongoDB module and the MySQL module. The log graph generator module generates the results of the log analysis of the MongoDB module, Hadoop-based analysis module, and the MySQL module per analysis time and type of the aggregated log data, and provides them to the user through a web interface. Log data that require a real-time log data analysis are stored in the MySQL module and provided real-time by the log graph generator module. The aggregated log data per unit time are stored in the MongoDB module and plotted in a graph according to the user's various analysis conditions. The aggregated log data in the MongoDB module are parallel-distributed and processed by the Hadoop-based analysis module. A comparative evaluation is carried out against a log data processing system that uses only MySQL for inserting log data and estimating query performance; this evaluation proves the proposed system's superiority. Moreover, an optimal chunk size is confirmed through the log data insert performance evaluation of MongoDB for various chunk sizes.
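
As a hedged sketch of the property that motivates the document store here (schema-free log records), the snippet below writes two differently shaped bank-log documents to MongoDB with pymongo and runs a small per-type aggregation of the kind the log graph generator module could plot. The host, database, collection, and field names are hypothetical.

```python
# Hedged sketch: schema-free log documents and a per-type aggregation in MongoDB.
import datetime
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # hypothetical host
logs = client["bank_logs"]["raw_logs"]              # hypothetical database/collection

# Two log records with different shapes -- no fixed schema is required.
logs.insert_one({
    "ts": datetime.datetime.utcnow(),
    "type": "transfer",
    "branch": "seoul-01",
    "amount": 150000,
})
logs.insert_one({
    "ts": datetime.datetime.utcnow(),
    "type": "login_failure",
    "client_ip": "10.0.0.7",
    "retries": 3,
})

# Per-type counts for the last hour.
since = datetime.datetime.utcnow() - datetime.timedelta(hours=1)
pipeline = [
    {"$match": {"ts": {"$gte": since}}},
    {"$group": {"_id": "$type", "count": {"$sum": 1}}},
]
for row in logs.aggregate(pipeline):
    print(row)
```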

Design of Low-Noise and High-Reliability Differential Paired eFuse OTP Memory (저잡음 · 고신뢰성 Differential Paired eFuse OTP 메모리 설계)

  • Kim, Min-Sung;Jin, Liyan;Hao, Wenchao;Ha, Pan-Bong;Kim, Young-Hee
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.17 no.10
    • /
    • pp.2359-2368
    • /
    • 2013
  • In this paper, an IRD (internal read data) circuit is proposed that prevents re-entry into the read mode while keeping the read-out DOUT datum at power-up, even if noise such as glitches occurs at signal ports such as the input read signal RD when a power IC is turned on. Also, a pulsed WL (word line) driving method is used to prevent a DC current of several tens of microamperes from flowing through the read transistor of a differential paired eFuse OTP cell; reliability is thus secured by preventing non-blown eFuse links from being blown by electromigration (EM). Furthermore, the comparison result between a programmed datum and a read-out datum is output to the PFb (pass/fail bar) pin while performing a sensing-margin test with a variable pull-up load, taking into account the resistance variation of a programmed eFuse in the program-verify-read mode. The layout size of the 8-bit eFuse OTP IP in a 0.18 μm process is 189.625 μm × 138.850 μm (0.0263 mm²).