• Title/Summary/Keyword: e-Learning process


Suggestion of Urban Regeneration Type Recommendation System Based on Local Characteristics Using Text Mining (텍스트 마이닝을 활용한 지역 특성 기반 도시재생 유형 추천 시스템 제안)

  • Kim, Ikjun;Lee, Junho;Kim, Hyomin;Kang, Juyoung
    • Journal of Intelligence and Information Systems, v.26 no.3, pp.149-169, 2020
  • "The Urban Regeneration New Deal project", one of the government's major national projects, aims to develop underdeveloped areas by investing 50 trillion won in 100 locations in the first year and 500 locations over the following four years, and is drawing keen attention from the media and local governments. However, the project model fails to reflect the original characteristics of each area, as it divides project areas into just five categories: "Our Neighborhood Restoration, Housing Maintenance Support Type, General Neighborhood Type, Central Urban Type, and Economic Base Type." Judging from the keywords for successful urban regeneration in Korea, "resident participation," "regional specialization," "ministerial cooperation," and "public-private cooperation," when local governments propose urban regeneration projects to the government, it is most important that they accurately understand the characteristics of the city and push ahead with projects suited to those characteristics with the help of local residents and private companies. In addition, considering gentrification, one of the side effects of urban regeneration projects, it is important to select and implement urban regeneration types suitable for the characteristics of the area. To supplement the limitations of the "Urban Regeneration New Deal project" methodology, this study proposes a system that recommends urban regeneration types suitable for urban regeneration sites by utilizing various machine learning algorithms, referring to the urban regeneration types of the "2025 Seoul Metropolitan Government Urban Regeneration Strategy Plan," which was promoted based on regional characteristics. There are four types of urban regeneration in Seoul: "Low-use Low-level Development, Abandonment, Deteriorated Housing, and Specialization of Historical and Cultural Resources" (Shon and Park, 2017). 
To identify regional characteristics, approximately 100,000 text documents were collected for 22 regions where projects of the four urban regeneration types had been carried out. Using the collected data, we extracted key keywords for each region according to urban regeneration type and conducted topic modeling to explore whether there were differences between the types. As a result, it was confirmed that many topics related to real estate and the economy appeared in old residential areas, while declining and underdeveloped areas showed topics reflecting their past as regions of active industrial activity. In historical and cultural resource areas, which contain traces of the past, many keywords related to the government appeared, along with political topics and cultural topics arising from various events. Finally, low-use and underdeveloped areas showed many topics on real estate and accessibility, characterizing well-connected regions where development is planned or likely. Furthermore, a model was implemented that proposes urban regeneration types tailored to regional characteristics for regions other than Seoul. Machine learning was used to implement the model, with training and test data randomly extracted at an 8:2 ratio. To compare performance across models, the input features were prepared in two ways, as Count Vectors and TF-IDF Vectors, and five classifiers were applied: SVM (Support Vector Machine), Decision Tree, Random Forest, Logistic Regression, and Gradient Boosting, yielding a performance comparison over a total of 10 models. The model with the highest performance was Gradient Boosting on TF-IDF Vector input, with an accuracy of 97%. 
Therefore, the recommendation system proposed in this study is expected to recommend urban regeneration types based on the regional characteristics of new project sites in the process of carrying out urban regeneration projects.
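The two input representations compared in the study, raw count vectors versus TF-IDF vectors, can be illustrated with a minimal sketch (a simplified illustration, not the authors' implementation; the toy documents and tokens below are hypothetical):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build TF-IDF vectors for tokenized documents.

    TF is the raw term count and IDF is log(N / df). This mirrors the
    TF-IDF input representation compared against raw count vectors in
    the study, not the authors' exact implementation.
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))           # document frequency per term
    vocab = sorted(df)
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append([tf[t] * math.log(n / df[t]) for t in vocab])
    return vocab, vectors

# Hypothetical region keyword documents (toy tokens, not the collected corpus)
docs = [["regeneration", "housing", "realestate"],
        ["regeneration", "history", "culture"],
        ["realestate", "access", "development"]]
vocab, vectors = tfidf_vectors(docs)
```

A term that appears in every document gets weight log(N/N) = 0, so TF-IDF downweights ubiquitous keywords relative to a plain count vector, which is one reason it can separate regeneration types better.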

User Access Patterns Discovery based on Apriori Algorithm under Web Logs (웹 로그에서의 Apriori 알고리즘 기반 사용자 액세스 패턴 발견)

  • Ran, Cong-Lin;Joung, Suck-Tae
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology, v.12 no.6, pp.681-689, 2019
  • Web usage pattern discovery is an advanced means of using web log data, and a specific application of data mining technology to Web logs. Educational data mining (EDM) is the application of data mining techniques to educational data (such as university Web logs, e-learning systems, adaptive hypermedia, and intelligent tutoring systems), and its objective is to analyze such data in order to resolve educational research issues. In this paper, the Web log data of a university are used as the research object of data mining. Using database OLAP technology, the Web log data are preprocessed into a format suitable for data mining, and the results are stored in MSSQL. Basic statistics and analysis are then computed from the processed Web log records. In addition, we introduce the Apriori algorithm for Web usage pattern mining and its implementation process, develop an Apriori program in a Python development environment, evaluate its performance, and realize the mining of Web user access patterns. The results have theoretical significance for applying the discovered patterns in the development of teaching systems. Future research will explore improvements to the Apriori algorithm in distributed computing environments.
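The candidate-generation-and-pruning loop at the heart of Apriori can be sketched in Python as follows (a minimal illustration of the classic algorithm, not the paper's program; the toy page-visit transactions are hypothetical):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal Apriori frequent-itemset miner.

    Candidate k-itemsets are built by joining frequent (k-1)-itemsets,
    then pruned by the subset property and by minimum support.
    """
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / n

    items = {i for t in transactions for i in t}
    current = [s for s in (frozenset([i]) for i in items)
               if support(s) >= min_support]
    frequent = {}
    k = 1
    while current:
        frequent.update({s: support(s) for s in current})
        k += 1
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        current = [c for c in candidates
                   if all(frozenset(sub) in frequent
                          for sub in combinations(c, k - 1))
                   and support(c) >= min_support]
    return frequent

# Hypothetical page-visit sessions from preprocessed university Web logs
sessions = [{"index", "login"}, {"index", "login", "course"},
            {"index", "course"}, {"login", "course"}]
frequent_sets = apriori(sessions, min_support=0.5)
```

With min_support = 0.5 the three single pages and all three page pairs are frequent, while the triple {index, login, course} (support 0.25) is pruned, which is exactly the downward-closure pruning the algorithm relies on.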

K-means clustering analysis and differential protection policy according to 3D NAND flash memory error rate to improve SSD reliability

  • Son, Seung-Woo;Kim, Jae-Ho
    • Journal of the Korea Society of Computer and Information, v.26 no.11, pp.1-9, 2021
  • 3D NAND flash memory provides high capacity per unit area by stacking planar 2D NAND cells. However, due to the nature of the stacking process, the frequency of error occurrence may vary with each layer or physical cell location, and this phenomenon becomes more pronounced as the number of program/erase (P/E) cycles of the flash memory increases. Most flash-based storage devices such as SSDs use ECC for error correction. Since this method provides a fixed strength of data protection for all flash memory pages, it has limitations in 3D NAND flash memory, where the error rate varies with physical location. Therefore, in this paper, pages and layers with different error rates are classified into clusters using the K-means machine learning algorithm, and differentiated data protection strength is applied to each cluster. We classify pages and layers based on the number of errors measured after an endurance test, in which the error rate varies significantly per page and layer, and add parity data to stripes covering error-prone areas to provide differentiated protection strength. We show that this differentiated data protection policy can contribute to improving the reliability and lifespan of 3D NAND flash memory compared to RAID-like techniques or ECC alone.
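The clustering step described above can be sketched with a plain one-dimensional K-means over per-page error counts (an illustration only; the paper's actual features, number of clusters, and measured data are not reproduced here, and the error counts below are hypothetical):

```python
def kmeans_1d(values, k, iters=20):
    """Plain 1-D K-means over scalar error counts.

    Sketches how pages or layers could be grouped by measured error count
    so that stronger parity protection is applied to the worst cluster.
    """
    lo, hi = min(values), max(values)
    # deterministic init: centers spread evenly across the value range
    centers = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # move each center to the mean of its assigned values
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical per-page bit-error counts measured after an endurance test
errors = [2, 3, 2, 40, 38, 41, 90, 95]
centers, clusters = kmeans_1d(errors, k=3)
```

The highest-center cluster would then be assigned the strongest parity scheme, the lowest-center cluster ECC alone, matching the differentiated-protection idea.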

A Study on the Educational Meaning of eXplainable Artificial Intelligence for Elementary Artificial Intelligence Education (초등 인공지능 교육을 위한 설명 가능한 인공지능의 교육적 의미 연구)

  • Park, Dabin;Shin, Seungki
    • Journal of The Korean Association of Information Education, v.25 no.5, pp.803-812, 2021
  • This study explored the concept of eXplainable Artificial Intelligence (XAI) and its problem-solving process through literature research, and presents the educational meaning of XAI and a plan for applying it. XAI education is human-centered artificial intelligence education that deals with AI problems related to people, through which students can cultivate problem-solving skills. In addition, through algorithm education, students can understand the principles of artificial intelligence, explain AI models related to real-life problem situations, and extend this understanding to fields where AI is applied. For such XAI education to be applied in elementary schools, examples related to the real world must be used, and it is recommended to utilize algorithms that are themselves interpretable. Various teaching and learning methods and tools should also be used so that understanding can progress toward explanation. Ahead of the introduction of artificial intelligence in the 2022 revised curriculum, we hope that this study will serve as a meaningful basis for actual classes.

Research Status of Satellite-based Evapotranspiration and Soil Moisture Estimations in South Korea (위성기반 증발산량 및 토양수분량 산정 국내 연구동향)

  • Choi, Ga-young;Cho, Younghyun
    • Korean Journal of Remote Sensing, v.38 no.6_1, pp.1141-1180, 2022
  • The application of satellite imagery in the field of hydrology and water resources has increased in recent years. However, obtaining accurate evapotranspiration and soil moisture estimates remains challenging, and recent studies have emphasized the need for satellite-based estimation of these quantities and for related development research. In this study, we present the research status in Korea by investigating current trends and methodologies for evapotranspiration and soil moisture. Examining the detailed methodologies, we found that evapotranspiration is generally estimated using energy balance models such as the Surface Energy Balance Algorithm for Land (SEBAL) and Mapping Evapotranspiration with Internalized Calibration (METRIC); the Penman-Monteith and Priestley-Taylor equations are also used. For soil moisture, passive sensors (AMSR-E, AMSR2, MIRAS, and SMAP) and active sensors (ASCAT and SAR) are generally used for estimation. On the statistical side, linear regression, artificial neural networks, and deep learning are used to estimate these parameters. In a number of studies, various indices were calculated from satellite-based data and applied to drought characterization, and in some cases the hydrological cycle components of evapotranspiration and soil moisture were calculated with a Land Surface Model (LSM). By comparing, reviewing, and presenting the major detailed methodologies, we intend these references to be used in related research and to lay the foundation for advancing the estimation of satellite-based hydrological cycle data.
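For reference, the Penman-Monteith approach mentioned above is most commonly applied in its FAO-56 reference-crop form (reproduced here from the standard FAO-56 formulation, not from any of the surveyed papers):

```latex
ET_0 = \frac{0.408\,\Delta\,(R_n - G) + \gamma\,\dfrac{900}{T + 273}\,u_2\,(e_s - e_a)}
            {\Delta + \gamma\,(1 + 0.34\,u_2)}
```

where ET_0 is the reference evapotranspiration (mm day^-1), R_n the net radiation, G the soil heat flux, T the mean daily air temperature at 2 m, u_2 the wind speed at 2 m, (e_s - e_a) the vapour pressure deficit, Delta the slope of the saturation vapour pressure curve, and gamma the psychrometric constant.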

Automatic gasometer reading system using selective optical character recognition (관심 문자열 인식 기술을 이용한 가스계량기 자동 검침 시스템)

  • Lee, Kyohyuk;Kim, Taeyeon;Kim, Wooju
    • Journal of Intelligence and Information Systems, v.26 no.2, pp.1-25, 2020
  • In this paper, we suggest an application system architecture which provides accurate, fast, and efficient automatic gasometer reading. The system captures a gasometer image using a mobile device camera, transmits the image to a cloud server over a private LTE network, and analyzes the image to extract the device ID and gas usage amount by selective optical character recognition based on deep learning. In general, an image contains many types of characters, and optical character recognition extracts all of them. However, some applications need to ignore character types that are not of interest and focus only on specific types. For example, an automatic gasometer reading system only needs to extract the device ID and gas usage amount from gasometer images in order to send bills to users; character strings that are not of interest, such as device type, manufacturer, manufacturing date, and specification, are not valuable to the application. Thus, the application has to analyze only the region of interest and specific character types to extract valuable information. We adopted CNN (Convolutional Neural Network) based object detection and CRNN (Convolutional Recurrent Neural Network) technology for selective optical character recognition, which analyzes only the region of interest for character extraction. We built three neural networks for the application system. 
The first is a convolutional neural network which detects the regions of interest containing the gas usage amount and device ID character strings; the second is another convolutional neural network which transforms the spatial information of a region of interest into sequential feature vectors; and the third is a bi-directional long short-term memory (LSTM) network which converts that sequential information into character strings via time-series mapping from feature vectors to characters. In this research, the character strings of interest are the device ID and the gas usage amount: a device ID consists of 12 Arabic numerals and a gas usage amount consists of 4-5 Arabic numerals. All system components are implemented in the Amazon Web Services cloud with Intel Xeon E5-2686 v4 CPUs and NVIDIA Tesla V100 GPUs. The architecture adopts a master-slave processing structure for efficient, fast parallel processing, coping with about 700,000 requests per day. A mobile device captures a gasometer image and transmits it to the master process in the AWS cloud. The master process runs on an Intel Xeon CPU and pushes each reading request from a mobile device into an input queue with FIFO (First In, First Out) structure. A slave process consists of the three deep neural networks which conduct the character recognition and runs on an NVIDIA GPU module. A slave process continually polls the input queue for recognition requests. When a request from the master process is present, the slave process converts the image into the device ID string, the gas usage amount string, and the position information of the strings, returns the information to the output queue, and switches back to idle mode to poll the input queue. The master process gets the final information from the output queue and delivers it to the mobile device. We used a total of 27,120 gasometer images for training, validation, and testing of the three deep neural networks. 
22,985 images were used for training and validation, and 4,135 images were used for testing. For each training epoch, we randomly split the 22,985 images at an 8:2 ratio into training and validation sets. The 4,135 test images were categorized into five types (normal, noise, reflex, scale, and slant): normal data are clean images, noise means images with noise signals, reflex means images with light reflection in the gasometer region, scale means images with small object size due to long-distance capturing, and slant means images that are not horizontally level. Final character string recognition accuracies for device ID and gas usage amount on normal data are 0.960 and 0.864, respectively.
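The master-slave queue structure described above can be sketched as follows (a simplified single-process illustration using Python's standard FIFO queue; the recognizer stub and its output fields are hypothetical stand-ins for the three-network pipeline, and real slaves would run in separate GPU workers):

```python
from queue import Queue

def master_enqueue(input_q, request):
    """Master: push an incoming reading request into the FIFO input queue."""
    input_q.put(request)

def slave_step(input_q, output_q, recognize):
    """Slave: take one request off the input queue, recognize it, and
    push the result to the output queue.

    `recognize` stands in for the three-network pipeline (detector,
    feature extractor, BiLSTM decoder); the stub used below is hypothetical.
    """
    request = input_q.get()  # blocks until a request is available
    output_q.put(recognize(request))

input_q, output_q = Queue(), Queue()
master_enqueue(input_q, "gasometer.jpg")
slave_step(input_q, output_q,
           recognize=lambda image: {"device_id": "123456789012",
                                    "usage": "0742"})
result = output_q.get()  # master delivers this back to the mobile device
```

Because `Queue.get` blocks until work arrives, idle slaves wait on the input queue rather than busy-polling, which is a common way to realize the polling behavior the abstract describes.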

Development of Curriculum for the Emergency Clinical Nurse Specialist (응급전문간호사의 교육과정안 개발)

  • 김광주;이향련;김귀분
    • Journal of Korean Academy of Nursing, v.26 no.1, pp.194-222, 1996
  • Various accidents and injuries are currently occurring in Korea at increasingly high rates, and good quality emergency care service is urgently needed to cope with them. In order to develop a sound emergency care system, there needs to be a plan to educate and train professionals specifically in emergency care. One solution to this ongoing problem would be to educate and train emergency clinical nurse specialists. This study on a strategy for curriculum development for the emergency clinical nurse specialist was based on the following five content areas, developed from literature related to emergency nursing curricula and the emergency care situation: 1. Nurses working in the emergency rooms of three university hospitals were observed for six days to identify categories of nursing activities. 2. Two hundred eleven nurses working in the emergency rooms of 12 university hospitals were surveyed to identify needs for educational content that should be included in a curriculum for the clinical nurse specialist. 3. Examination of the environment in which emergency management is provided. 4. Identification of characteristics of patients in the emergency room. 5. The role of the emergency clinical nurse specialist was identified through literature, recent data, and research materials. The following curriculum was formulated using the process described above. 1. The philosophy of education for the emergency clinical nurse specialist was established through a realistic philosophical framework, in which client, environment, health, nursing, and learning are defined. 2. The purpose of education is framed around individual development, social structure, and the nursing process and responsibility, along with the role and function of the emergency clinical nurse specialist. 3. The central theme was based on human, environment, health, and nursing. 4. The elements of structure in the curriculum content were divided into two major threads, i.e., vertical and horizontal: the vertical thread consists of the client, life cycle, education, research, leadership, and consultation, and the horizontal thread consists of the level of nursing (prevention to rehabilitation) and health to illness, based on the health care system of the Betty Neuman systems model. 5. Behavioral objectives for education were structured according to the role and function of the emergency clinical nurse specialist as a master's-degree-prepared nurse in various emergency settings. 6. The content of the curriculum consists of three core courses (9 credits), five major courses (15 credits), six elective courses (12 credits), and six prerequisite courses (12 credits); thus 48 credits are required. Recommendations: 1. To promote the quality of the emergency care system, the number of emergency professionals has to be expanded, and the role and function of the emergency clinical nurse specialist needs to be specified in both the medical law and the Nursing Practice Act. 2. In order to upgrade the qualification of emergency clinical nurse specialists, the course should be given as part of a graduate program. 3. Certification should be issued through the Korean Nurses Association.


Estimation of GARCH Models and Performance Analysis of Volatility Trading System using Support Vector Regression (Support Vector Regression을 이용한 GARCH 모형의 추정과 투자전략의 성과분석)

  • Kim, Sun Woong;Choi, Heung Sik
    • Journal of Intelligence and Information Systems, v.23 no.2, pp.107-122, 2017
  • Volatility in stock market returns is a measure of investment risk. It plays a central role in portfolio optimization, asset pricing, and risk management, as well as in most theoretical financial models. Engle (1982) presented a pioneering paper on stock market volatility that explains the time-varying characteristics embedded in the volatility of stock market returns. His model, Autoregressive Conditional Heteroscedasticity (ARCH), was generalized by Bollerslev (1986) as the GARCH models. Empirical studies have shown that GARCH models describe well the fat-tailed return distributions and the volatility clustering phenomenon appearing in stock prices. The parameters of GARCH models are generally estimated by maximum likelihood estimation (MLE) based on the standard normal density. However, since Black Monday in 1987, stock market prices have become very complex and noisy, and recent studies have started to apply artificial intelligence approaches to estimating the GARCH parameters as a substitute for MLE. This paper presents an SVR-based GARCH process and compares it with the MLE-based GARCH process for estimating the parameters of GARCH models, which are known to forecast stock market volatility well. The kernel functions used in the SVR estimation process are linear, polynomial, and radial. We analyzed the suggested models with the KOSPI 200 Index, which is constituted by 200 blue chip stocks listed on the Korea Exchange. We sampled KOSPI 200 daily closing values from 2010 to 2015, giving 1,487 observations; 1,187 days were used to train the suggested GARCH models and the remaining 300 days were used as testing data. First, symmetric and asymmetric GARCH models were estimated by MLE. We forecasted KOSPI 200 Index return volatility, and the MSE metric shows better results for asymmetric GARCH models such as E-GARCH and GJR-GARCH, consistent with the documented non-normal return distribution characteristics of fat tails and leptokurtosis. 
Compared with the MLE estimation process, SVR-based GARCH models outperform the MLE methodology in forecasting KOSPI 200 Index return volatility; the polynomial kernel function shows exceptionally lower forecasting accuracy. We suggest an Intelligent Volatility Trading System (IVTS) that utilizes the forecasted volatility. The IVTS entry rules are as follows: if tomorrow's forecasted volatility will increase, buy volatility today; if it will decrease, sell volatility today; if the forecasted volatility direction does not change, hold the existing buy or sell position. IVTS is assumed to buy and sell historical volatility values. This is somewhat unrealistic because we cannot trade historical volatility values themselves, but our simulation results are meaningful since the Korea Exchange introduced a volatility futures contract in November 2014 that traders can trade. The trading systems with SVR-based GARCH models show higher returns than MLE-based GARCH in the testing period. The profitable-trade percentages of MLE-based GARCH IVTS models range from 47.5% to 50.0%, while those of SVR-based GARCH IVTS models range from 51.8% to 59.7%. MLE-based symmetric S-GARCH shows a +150.2% return and SVR-based symmetric S-GARCH shows +526.4%; MLE-based asymmetric E-GARCH shows -72% and SVR-based asymmetric E-GARCH shows +245.6%; MLE-based asymmetric GJR-GARCH shows -98.7% and SVR-based asymmetric GJR-GARCH shows +126.3%. The linear kernel function shows higher trading returns than the radial kernel function. The best performance of SVR-based IVTS is +526.4%, versus +150.2% for MLE-based IVTS; SVR-based GARCH IVTS also shows higher trading frequency. This study has some limitations. Our models are based solely on SVR, and other artificial intelligence models should be explored for better performance. We do not consider costs incurred in the trading process, including brokerage commissions and slippage. 
The IVTS trading performance is also unrealistic in that we use historical volatility values as trading objects. Exact forecasting of stock market volatility is essential in real trading as well as in asset pricing models. Further studies on other machine learning-based GARCH models can give better information to stock market investors.
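The conditional-variance recursion underlying all the GARCH variants compared above can be sketched as follows (the parameter values and return series below are illustrative only; the paper estimates the parameters by MLE or by SVR with linear/polynomial/radial kernels rather than fixing them):

```python
def garch11_variance(returns, omega, alpha, beta):
    """Conditional variance recursion of a GARCH(1,1) model:

        sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1}

    Returns the variance path, one value per return observation.
    """
    # start from the unconditional variance omega / (1 - alpha - beta)
    sigma2 = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        sigma2.append(omega + alpha * r ** 2 + beta * sigma2[-1])
    return sigma2

daily_returns = [0.01, -0.02, 0.015, -0.005]   # hypothetical KOSPI 200 returns
variance_path = garch11_variance(daily_returns,
                                 omega=1e-6, alpha=0.1, beta=0.85)
```

The IVTS entry rules then compare consecutive forecasts from such a path: buy volatility when tomorrow's forecast exceeds today's, sell when it falls, and hold otherwise.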

A Case Study on the Growth of Learners through the Changemaker TEMPS Program (체인지메이커(Changemaker) TEMPS 프로그램을 통한 학습자의 성장에 대한 사례연구)

  • Kim, Nam Eun;Heo, Young Sun
    • Journal of Korean Home Economics Education Association, v.31 no.3, pp.91-116, 2019
  • The purpose of this study is to examine the meaning of Changemaker education and to investigate its significance in home economics education through a study of the growth of learners in the TEMPS program. To this end, first, the concept of Changemaker education was defined: Changemaker education is education that changes society in a positive direction through a process of thinking about, learning about, making, and participating in (playing with) the various problems we face in real life, drawing out solutions, and sharing those solutions with others. Second, in this research, the direction of Changemaker education is to make students interested in social problems and able to solve them, and to make both family and career life happy and healthy through collaboration with other people. The scope of the content is defined as the selection of content elements from the five domains of child/family, diet/nutrition, clothing, housing, and consumer life. As a teaching method, we suggested following the TEMPS phases so that each session's purpose is achieved. Third, the Changemaker program consists of the five TEMPS steps drawn from the five key ideas of Changemaker education: T (Thinking) is the step of understanding the problem and thinking about how to solve it; E (Education) is the step of acquiring the background knowledge needed for the following steps; M (Making) is the step of creating an artifact for problem solving; P (Participation) and P (Play) are the steps of participating and enjoying; and S (Share) is the step of changing society through result exhibitions, SNS sharing, and class presentations. In this study, 12 programs for middle school and 15 programs for high school were developed on the basis of the TEMPS phases. Each program consists of 2 to 12 unit hours, which add up to 68 hours in the middle school program and 68 in the high school program. 
The learners who participated in the Changemaker program for one year (March 2, 2018 to December 31, 2018) showed improvement in many aspects, including the linkage of life and education, practical ability, self-directed learning, self-esteem, sense of achievement, self-reflection, and sensory observation.

Measuring the Public Service Quality Using Process Mining: Focusing on N City's Building Licensing Complaint Service (프로세스 마이닝을 이용한 공공서비스의 품질 측정: N시의 건축 인허가 민원 서비스를 중심으로)

  • Lee, Jung Seung
    • Journal of Intelligence and Information Systems, v.25 no.4, pp.35-52, 2019
  • As public services are provided in various forms, including e-government, public demand for public service quality is increasing. Although continuous measurement and improvement of quality is needed, traditional surveys are costly and time-consuming and have limitations. Therefore, there is a need for an analytical technique that can measure the quality of public services quickly and accurately at any time, based on the data those services generate. In this study, we analyzed the quality of a public service using process mining techniques on the building licensing complaint service of N city, chosen because it can secure the data necessary for analysis and because the approach can be spread to other institutions for public service quality management. We conducted process mining on a total of 3,678 building license complaint cases in N city over two years from January 2014, and identified process maps and the departments with high frequency and long processing times. According to the results, some departments were crowded at certain points in time while others were relatively idle, and there was reasonable suspicion that an increase in the number of complaints increases the time required to complete them. The time required to complete a complaint varied from same-day completion to one year and 146 days. The cumulative frequency of the top four departments (the Sewage Treatment Division, Waterworks Division, Urban Design Division, and Green Growth Division) exceeded 50%, and that of the top nine departments exceeded 70%; a small number of departments carried most of the load, which was highly unbalanced across departments. Most complaint services followed a variety of different process patterns. 
The analysis shows that the number of 'complement' (supplementation) decisions has the greatest impact on the length of a complaint. This is because a 'complement' decision requires a physical period in which the complainant supplements and resubmits the documents, lengthening the time until the entire complaint is completed. The overall processing time can therefore be drastically reduced by preparing thoroughly before or while filing the complaint. By clarifying and disclosing the causes of and solutions to 'complement' decisions, which are among the important data in the system, the service helps complainants prepare in advance and gives them confidence that documents prepared using the disclosed information will pass, making the handling of complaints sufficiently predictable. Documents prepared with pre-disclosed information are likely to be processed without problems, which not only shortens the processing period but also improves work efficiency from the processor's point of view by eliminating renegotiation and duplicated tasks. The results of this study can be used to find departments with high complaint burdens at certain points in time and to manage workforce allocation between departments flexibly. In addition, by analyzing the patterns of departments participating in consultations by complaint characteristics, the results can be used for automation or recommendation when selecting a consultation department. Furthermore, by applying machine learning techniques to the various data generated during the complaint process, the patterns of the complaint process can be discovered and applied to the system for automated, intelligent complaint processing. 
This study is expected to inform future public service quality improvement through process mining analysis of civil services.
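The kind of per-department frequency and processing-time profile reported above can be computed from an event log with a simple aggregation (a sketch with hypothetical field names and toy records, not N city's actual schema or data):

```python
from collections import defaultdict
from datetime import date

def department_stats(events):
    """Aggregate an event log into per-department (frequency, mean days).

    Each event is (case_id, department, start_date, end_date); real
    process-mining tools derive richer process maps from the same fields.
    """
    freq = defaultdict(int)
    total_days = defaultdict(int)
    for case_id, dept, start, end in events:
        freq[dept] += 1
        total_days[dept] += (end - start).days
    return {d: (freq[d], total_days[d] / freq[d]) for d in freq}

# Toy complaint-handling events (hypothetical cases and dates)
event_log = [
    ("C1", "Sewage Treatment", date(2014, 1, 2), date(2014, 1, 9)),
    ("C1", "Waterworks",       date(2014, 1, 9), date(2014, 1, 12)),
    ("C2", "Sewage Treatment", date(2014, 1, 3), date(2014, 1, 17)),
]
stats = department_stats(event_log)
```

Sorting such statistics by frequency or mean duration is how the study's high-load departments and long-processing steps would surface from the raw log.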