• Title/Summary/Keyword: Generate Data

Computational Analysis on Twitter Users' Attitudes towards COVID-19 Policy Intervention

  • Joohee Kim;Yoomi Kim
    • International Journal of Advanced Culture Technology
    • /
    • v.11 no.4
    • /
    • pp.358-377
    • /
    • 2023
  • During the initial period of the COVID-19 pandemic, governments around the world implemented non-pharmaceutical interventions. For these policy interventions to be effective, authorities engaged in political discourse to legitimise their activity and generate positive public attitudes. To understand effective COVID-19 policy, this study investigates public attitudes in South Korea, the United Kingdom, and the United States and how they reflect different legitimisations of policy intervention. We adopt a big data approach to analyse public attitudes, drawing from public comments posted on Twitter during selected periods. We track the number of tweets related to COVID-19 policy intervention and conduct a sentiment analysis using a deep learning method. Public attitudes and sentiments in the three countries show different patterns according to how policy interventions were implemented. Overall concern about policy intervention is higher in South Korea than in the other two countries. However, public sentiments in all three countries tend to improve following the implementation of policy intervention. The findings suggest that governments can achieve policy effectiveness when consistent and transparent communication takes place during the initial period of the pandemic. This study contributes to the existing literature by applying big data analysis to explain which policies engender positive public attitudes.
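
The abstract names a deep learning sentiment method without specifying the model. Below is a minimal sketch of how tweet-level sentiment could be scored with a pretrained transformer via the Hugging Face `transformers` library; the model choice, example tweets, and the positive-share index are assumptions, not the authors' pipeline.

```python
# Illustrative sketch, not the authors' pipeline: score tweet sentiment with a
# pretrained transformer from the Hugging Face `transformers` library.
from transformers import pipeline

# Hypothetical tweets collected for one country and period.
tweets = [
    "The new distancing rules finally make sense, good job.",
    "Another lockdown announcement with zero explanation. Frustrating.",
]

# Default general-purpose sentiment model; the study's actual model is unspecified.
classifier = pipeline("sentiment-analysis")
results = classifier(tweets)
for text, res in zip(tweets, results):
    print(f"{res['label']:>8}  {res['score']:.3f}  {text}")

# One simple aggregate attitude index: the share of positive tweets.
positive_share = sum(r["label"] == "POSITIVE" for r in results) / len(results)
print(f"Positive share: {positive_share:.2f}")
```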

Time-Series Estimation based AI Algorithm for Energy Management in a Virtual Power Plant System

  • Yeonwoo LEE
    • Korean Journal of Artificial Intelligence
    • /
    • v.12 no.1
    • /
    • pp.17-24
    • /
    • 2024
  • This paper introduces a novel approach to time-series estimation for energy load forecasting within Virtual Power Plant (VPP) systems, leveraging advanced artificial intelligence (AI) algorithms, namely Long Short-Term Memory (LSTM) and Seasonal Autoregressive Integrated Moving Average (SARIMA). Virtual power plants, which integrate diverse microgrids managed by Energy Management Systems (EMS), require precise forecasting techniques to balance energy supply and demand efficiently. The paper presents a hybrid forecasting model combining a parametric statistical technique with an AI algorithm. The LSTM algorithm is employed to discern pattern correlations over fixed intervals, crucial for predicting accurate future energy loads. SARIMA is applied to generate time-series forecasts, accounting for non-stationary and seasonal variations. The forecasting model incorporates a broad spectrum of distributed energy resources, including renewable energy sources and conventional power plants. Data spanning a decade, sourced from the Korea Power Exchange (KPX) Electrical Power Statistical Information System (EPSIS), were utilized to validate the model. The proposed hybrid LSTM-SARIMA model with parameter sets (1, 1, 1, 12) and (2, 1, 1, 12) demonstrated high fidelity to the actual observed data. Thus, it is concluded that the optimized system notably surpasses traditional forecasting methods, indicating that this model offers a viable solution for EMS to enhance short-term load forecasting.
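
A minimal sketch of how a SARIMA component and an LSTM component could be combined for short-term load forecasting, assuming a hypothetical kpx_load.csv file, a 24-step window, and a plain average of the two forecasts; only the cited seasonal order (1, 1, 1, 12) comes from the abstract, the rest is illustrative.

```python
# Minimal sketch (assumed file name, window length, and a simple averaging
# hybrid) of a SARIMA + LSTM load-forecasting pipeline like the one described.
import numpy as np
import pandas as pd
import tensorflow as tf
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical hourly load series; the paper uses KPX EPSIS statistics.
load = pd.read_csv("kpx_load.csv", index_col=0, parse_dates=True)["load_mw"]

# SARIMA component with one of the cited seasonal parameter sets, (1, 1, 1, 12).
sarima = SARIMAX(load, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12)).fit(disp=False)
sarima_next = sarima.forecast(steps=1).iloc[0]

# LSTM component learns pattern correlations over fixed-length windows.
window = 24
values = load.values.astype("float32")
X = np.stack([values[i:i + window] for i in range(len(values) - window)])[..., None]
y = values[window:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
lstm_next = model.predict(values[-window:][None, :, None], verbose=0)[0, 0]

# One simple way to combine the two components; the paper does not spell out
# its exact hybrid scheme, so this plain average is only illustrative.
hybrid_next = 0.5 * (sarima_next + lstm_next)
print(f"next-step load forecast: {hybrid_next:.1f} MW")
```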

Predicting Learning Achievements with Indicators of Perceived Affordances Based on Different Levels of Content Complexity in Video-based Learning

  • Dasom KIM;Gyeoun JEONG
    • Educational Technology International
    • /
    • v.25 no.1
    • /
    • pp.27-65
    • /
    • 2024
  • The purpose of this study was to identify differences in learning patterns according to content complexity in video-based learning environments and to derive the variables that have an important effect on learning achievement within particular learning contexts. To achieve our aims, we observed and collected data on learners' cognitive processes through perceived affordances, using behavioral logs and eye movements as specific indicators. These two types of reaction data were collected from 67 male and female university students who watched, through the video learning player, two learning videos classified according to their task complexity. The results showed that when the content complexity level was low, learners tended to navigate using other learners' digital logs, but when it was high, students tended to control the learning process and directly generate their own logs. In addition, using prediction models derived for each level of content complexity, we identified the important variables influencing learning achievement in the low content complexity group as those related to video playback and annotation. In comparison, in the high content complexity group, the important variables were related to active navigation of the learning video. This study attempted not only to apply novel variables in the field of educational technology, but also to provide qualitative observations on the learning process based on a quantitative approach.
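
As a rough illustration of the prediction-model step, the sketch below fits a random forest to hypothetical log and eye-movement indicators and ranks them by feature importance; the variable names, file, and model choice are assumptions, not the study's actual procedure.

```python
# Illustrative sketch (hypothetical indicator names and file) of deriving the
# variables most predictive of achievement with a random forest and its feature
# importances; this is not the study's own modelling procedure.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

data = pd.read_csv("video_learning_indicators.csv")   # hypothetical export
features = ["pause_count", "replay_count", "annotation_count",
            "navigation_jumps", "fixation_duration_mean"]

# Fit one model per content-complexity group and rank indicator importance.
for group in ["low", "high"]:
    subset = data[data["complexity"] == group]
    X, y = subset[features], subset["achievement"]
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    score = cross_val_score(model, X, y, cv=5).mean()  # rough predictive check
    model.fit(X, y)
    ranking = pd.Series(model.feature_importances_, index=features)
    print(group, round(score, 3))
    print(ranking.sort_values(ascending=False), end="\n\n")
```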

Research on High-resolution Seafloor Topography Generation using Feature Extraction Algorithm Based on Deep Learning (딥러닝 기반의 특징점 추출 알고리즘을 활용한 고해상도 해저지형 생성기법 연구)

  • Hyun Seung Kim;Jae Deok Jang;Chul Hyun;Sung Kyun Lee
    • Journal of the Korean Society of Systems Engineering
    • /
    • v.20 no.spc1
    • /
    • pp.90-96
    • /
    • 2024
  • In this paper, we propose a technique for modeling high-resolution seafloor topography at 1 m intervals using actual water depth data, measured at 1.6 km distance intervals, near the east coast of Korea. Using a deep learning-based Harris corner feature point extraction algorithm, the locations of the centers of seafloor mountains were calculated and the surrounding topography was modeled. The modeled high-resolution seafloor topography based on deep learning was verified to be within a 1.1 m mean error of the actual water depth data, and the average error of the deep learning-based calculation was reduced by 54.4% compared to the case where deep learning was not applied. The proposed algorithm is expected to generate high-resolution underwater topography for the entire Korean Peninsula and to be used in establishing path plans for the autonomous navigation of underwater vehicles.
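
To illustrate the feature-point idea, the sketch below applies classical Harris corner detection to a gridded bathymetry raster with OpenCV; the paper's extractor is deep learning based, and the input file and thresholds here are hypothetical.

```python
# Minimal sketch of classical Harris corner detection on a gridded bathymetry
# raster, to illustrate the feature-point idea; the paper's own extractor is
# deep learning based, and the file name here is hypothetical.
import cv2
import numpy as np

depth = np.load("bathymetry_grid.npy").astype(np.float32)  # depth in metres

# Normalise to [0, 1] so the corner response is not dominated by depth scale.
norm = (depth - depth.min()) / (depth.max() - depth.min())

# Harris response; peaks tend to sit on sharp local relief such as seamount crests.
response = cv2.cornerHarris(norm, blockSize=5, ksize=3, k=0.04)

# Keep the strongest responses as candidate feature points (row, col indices).
threshold = 0.01 * response.max()
rows, cols = np.where(response > threshold)
print(list(zip(rows[:10], cols[:10])))
```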

Real-time online damage localisation using vibration measurements of structures under variable environmental conditions

  • K. Lakshmi
    • Smart Structures and Systems
    • /
    • v.33 no.3
    • /
    • pp.227-241
    • /
    • 2024
  • Safety and structural integrity of civil structures, like bridges and buildings, can be substantially enhanced by employing appropriate structural health monitoring (SHM) techniques for timely diagnosis of incipient damages. The information gathered from health monitoring of important infrastructure helps in making informed decisions on their maintenance. This ensures smooth, uninterrupted operation of the civil infrastructure and also cuts down the overall maintenance cost. With an early warning system, SHM can protect human life during major structural failures. In this paper, a real-time online damage localization technique that uses only vibration measurements is proposed. The concept of the 'Degree of Scatter' (DoS) of the vibration measurements is used to generate a spatial profile, and fractal dimension theory is used for damage detection and localization in the proposed two-phase algorithm. Further, it ensures robustness against environmental and operational variability (EoV). The proposed method works with output-only responses and does not require correlated finite element models. Investigations are carried out to test the presented algorithm, using the synthetic data generated from a simply supported beam, a 25-storey shear building model, and also experimental data obtained from the lab-level experiments on a steel I-beam and a ten-storey framed structure. The investigations suggest that the proposed damage localization algorithm is capable of isolating the influence of the confounding factors associated with EoV while detecting and localizing damage even with noisy measurements.
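
A minimal sketch of a fractal-dimension indicator computed over a spatial scatter profile; Katz's estimator and the synthetic profile are assumptions, since the abstract does not state which fractal dimension definition the two-phase algorithm adopts.

```python
# Illustrative sketch of a fractal-dimension indicator over a spatial profile of
# vibration "scatter"; Katz's estimator is used here as an assumption, since the
# abstract does not specify the fractal dimension definition.
import numpy as np

def katz_fd(signal: np.ndarray) -> float:
    """Katz fractal dimension of a 1-D sequence."""
    x = np.asarray(signal, dtype=float)
    n = len(x) - 1
    dists = np.abs(np.diff(x))          # step lengths along the profile
    L = dists.sum()                     # total curve length
    d = np.abs(x - x[0]).max()          # maximal excursion from the first value
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

# Hypothetical degree-of-scatter profile across sensor locations: a localized
# irregularity (e.g., near a damaged element) raises the fractal dimension of
# a sliding window centred on that sensor.
profile = np.sin(np.linspace(0, 3 * np.pi, 50))
profile[24] += 0.8                      # injected local anomaly
window = 7
fd = [katz_fd(profile[i:i + window]) for i in range(len(profile) - window + 1)]
print(int(np.argmax(fd)))               # window index flagging the anomaly
```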

Standard operating procedures for the collection, processing, and storage of oral biospecimens at the Korea Oral Biobank Network

  • Young-Dan Cho;Eunae Sandra Cho;Je Seon Song;Young-Youn Kim;Inseong Hwang;Sun-Young Kim
    • Journal of Periodontal and Implant Science
    • /
    • v.53 no.5
    • /
    • pp.336-346
    • /
    • 2023
  • Purpose: The Korea Oral Biobank Network (KOBN) was established in 2021 as a branch of the Korea Biobank Network under the Korea Centers for Disease Control and Prevention to provide infrastructure for the collection, management, storage, and utilization of human bioresources from the oral cavity and associated clinical data for basic research and clinical studies. Methods: To address the need for the unification of the biobanking process, the KOBN organized the concept review for all the processes. Results: The KOBN established standard operating procedures for the collection, processing, and storage of oral samples. Conclusions: The importance of collecting high-quality bioresources to generate accurate and reproducible research results has always been emphasized. A standardized procedure is a basic prerequisite for implementing comprehensive quality management of biological resources and accurate data production.

Analysis of signal cable noise currents in nuclear reactors under high neutron flux irradiation

  • Xiong Wu;Li Cai;Xiangju Zhang;Tingyu Wu;Jieqiong Jiang
    • Nuclear Engineering and Technology
    • /
    • v.55 no.12
    • /
    • pp.4628-4636
    • /
    • 2023
  • Cables are indispensable in nuclear power plants for transmitting data measured by various types of detectors, such as self-powered neutron detectors (SPNDs). These cables will generate disturbing signals that must be accurately distinguished and eliminated. Given that the cable current is not very significant, previous research has focused on SPND, with little attention paid to cable evaluation and validation. This paper specifically focuses on the quantitative analysis of cables and proposes a theoretical model to predict cable noise. In this model, the reaction characteristics between irradiated neutrons and cables were discussed thoroughly. Based on the Monte Carlo method, a comprehensive simulation approach of neutron sensitivity was introduced and long-term irradiation experiments in a heavy water reactor (HWR) were designed to verify this model. The theoretical results of this method agree quite well with the experimental measurements, proving that the model is reliable and exhibits excellent accuracy. The experimental data also show that the cable current accounts for approximately 0.2% of the total current at the initial moment, but as the detector gradually depletes, it will contribute more than 2%, making it a non-negligible proportion of the total signal current.

Morphological analysis of virtual teeth generated by deep learning (딥러닝으로 생성된 가상 치아의 형태학적 분석 연구)

  • Eun-Jeong Bae
    • Journal of Technologic Dentistry
    • /
    • v.46 no.3
    • /
    • pp.93-100
    • /
    • 2024
  • Purpose: This study aimed to generate virtual mandibular first molars using deep learning technology, specifically a deep convolutional generative adversarial network (DCGAN), and to evaluate the accuracy and reliability of these virtual teeth by analyzing their morphological characteristics. These morphological characteristics were classified based on various evaluation criteria, facilitating the assessment of the practical applicability of deep learning-based dental prosthesis production. Methods: Based on our previous research, 1,000 virtual mandibular first molars were generated and, based on morphological criteria, categorized as matching, non-matching, or partially matching. The generated first molars and their categorization were analyzed through the expert judgment of dental technicians. Results: Among the 1,000 generated virtual teeth, 143 (14.3%) met all five evaluation criteria, whereas 76 (7.6%) were judged as completely non-matching. The most frequent issue, found in 781 (78.1%) instances including some overlapping instances, was related to occlusal buccal cusp discrepancies. Conclusion: The study reveals the potential of DCGAN-generated virtual teeth as substitutes for real teeth; however, additional research and improvements in data quality are necessary to enhance accuracy. Continued data collection and refinement of generation methods can maximize the practicality and utility of deep learning-based dental prosthesis production.
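
A minimal DCGAN generator sketch in PyTorch of the kind that could synthesise tooth images; the layer sizes, single-channel 64x64 output, and latent dimension are assumptions, not the configuration reported in the study.

```python
# Minimal DCGAN generator sketch (PyTorch); layer widths and the 64x64 output
# resolution are assumptions rather than the study's reported settings.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim: int = 100, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(latent_dim, 256, 4, 1, 0, bias=False),  # 4x4
            nn.BatchNorm2d(256), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),         # 8x8
            nn.BatchNorm2d(128), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),          # 16x16
            nn.BatchNorm2d(64), nn.ReLU(True),
            nn.ConvTranspose2d(64, 32, 4, 2, 1, bias=False),           # 32x32
            nn.BatchNorm2d(32), nn.ReLU(True),
            nn.ConvTranspose2d(32, channels, 4, 2, 1, bias=False),     # 64x64
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

# Draw 1,000 latent vectors and generate candidate virtual molar images, which
# would then be screened against morphological criteria by experts.
g = Generator()
z = torch.randn(1000, 100, 1, 1)
with torch.no_grad():
    virtual_teeth = g(z)
print(virtual_teeth.shape)  # torch.Size([1000, 1, 64, 64])
```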

Revisiting diaphragmatic hernia of Joseon period Korean mummy by three-dimensional liver and heart segmentation and model reconstruction

  • Ensung Koh;Da Yeong Lee;Dongsoo Yoo;Myeung Ju Kim;In Sun Lee;Jong Ha Hong;Sang Joon Park;Jieun Kim;Soon Chul Cha;Hyejin Lee;Chang Seok Oh;Dong Hoon Shin
    • Anatomy and Cell Biology
    • /
    • v.55 no.4
    • /
    • pp.507-511
    • /
    • 2022
  • A three-dimensional (3D) segmentation and model reconstruction is a specialized tool to reveal the spatial interrelationship between multiple internal organs by generating images without overlapping structures. This technique is also applicable to mummy studies, but related reports have so far been very rare. In this study, we applied 3D segmentation and model reconstruction to computed tomography images of a Korean mummy with congenital diaphragmatic hernia. As originally revealed by the autopsy in 2013, the current 3D reconstruction shows that the mummy's heart is shifted to the left because the liver pushes up into the thoracic cavity through the diaphragmatic hernia defect. We can generate 3D images by calling up the data exclusively from the mummy's target organs, thus minimizing the diagnostic confusion that could be caused by overlapping organs.
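
One common route from a CT volume to a 3D organ model is to threshold a segmentation mask and extract a surface mesh with marching cubes, sketched below with scikit-image; the study's actual segmentation workflow and software are not specified, and the file name, voxel spacing, and HU window here are hypothetical.

```python
# Illustrative sketch of building a 3-D organ model from a CT volume:
# threshold a (hypothetical) mask, then extract a surface mesh with marching
# cubes. Not the study's actual workflow.
import numpy as np
from skimage import measure

ct = np.load("mummy_ct_volume.npy")          # hypothetical CT volume (HU)
liver_mask = (ct > 20) & (ct < 80)           # crude HU window for soft tissue

# Surface mesh of the masked organ; spacing gives the physical voxel size in mm.
verts, faces, normals, values = measure.marching_cubes(
    liver_mask.astype(np.float32), level=0.5, spacing=(1.0, 0.7, 0.7)
)
print(verts.shape, faces.shape)              # vertices and triangles of the model
```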

Development of Market Growth Pattern Map Based on Growth Model and Self-organizing Map Algorithm: Focusing on ICT products (자기조직화 지도를 활용한 성장모형 기반의 시장 성장패턴 지도 구축: ICT제품을 중심으로)

  • Park, Do-Hyung;Chung, Jaekwon;Chung, Yeo Jin;Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.20 no.4
    • /
    • pp.1-23
    • /
    • 2014
  • Market forecasting aims to estimate the sales volume of a product or service that is sold to consumers over a specific selling period. From the perspective of the enterprise, accurate market forecasting assists in determining the timing of new product introduction and product design, and in establishing production plans and marketing strategies, enabling a more efficient decision-making process. Moreover, accurate market forecasting enables governments to organize the national budget efficiently. This study aims to generate market growth curves for ICT (information and communication technology) goods using past time series data; categorize products showing similar growth patterns; understand markets in the industry; and forecast the future outlook of such products. The study suggests a useful and meaningful process (or methodology) for identifying market growth patterns with quantitative growth models and a data mining algorithm. The study employs the following methodology. At the first stage, past time series data are collected for the target products or services of the categorized industry. The data, such as the volume of sales and domestic consumption for a specific product or service, are collected from the relevant government ministry, the National Statistical Office, and other relevant government organizations. For collected data that cannot be analyzed directly, owing to the lack of past data or the alteration of code names, data pre-processing should be performed. At the second stage of this process, an optimal model for market forecasting should be selected. This model can vary on the basis of the characteristics of each categorized industry. As this study focuses on the ICT industry, in which new technologies appear more frequently and change the market structure, the Logistic, Gompertz, and Bass models are selected. A hybrid model that combines different models can also be considered. The hybrid model considered for use in this study analyzes the size of the market potential through the Logistic and Gompertz models, and these figures are then used for the Bass model. The third stage of this process is to evaluate which model most accurately explains the data. To do this, the parameters are estimated from the collected past time series data to generate each model's predicted values and to calculate the root mean squared error (RMSE). The model that shows the lowest average RMSE value for every product type is considered the best model. At the fourth stage of this process, based on the parameter values estimated by the best model, a market growth pattern map is constructed with the self-organizing map algorithm. The self-organizing map is trained with the market pattern parameters of all products or services as input data, and the products or services are organized onto an N×N map. The number of clusters increases from 2 to M, depending on the characteristics of the nodes on the map. The clusters are divided into zones, and the clusters that provide the most meaningful explanation are selected. Based on the final selection of clusters, the boundaries between the nodes are set and, ultimately, the market growth pattern map is completed. The last step is to determine the final characteristics of the clusters as well as the market growth curve. The average of the market growth pattern parameters in each cluster is taken as a representative figure. Using this figure, a growth curve is drawn for each cluster and its characteristics are analyzed. Also, taking into consideration the product types in each cluster, their characteristics can be described qualitatively. We expect that the process and system suggested in this paper can be used as a tool for forecasting demand in the ICT and other industries.
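
A minimal sketch of the growth-model fitting and selection step on synthetic data: Logistic and Bass curves are fitted with scipy, compared by RMSE, and the winning model's parameters are kept as the vector that would later feed the self-organizing map. The functional forms, starting values, and data are assumptions for illustration, not the study's estimates.

```python
# Minimal sketch (synthetic data, assumed parameterisations) of fitting growth
# models to a cumulative sales series and selecting the one with the lowest RMSE.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, m, k, t0):
    # m: market potential, k: growth rate, t0: inflection time
    return m / (1.0 + np.exp(-k * (t - t0)))

def bass(t, m, p, q):
    # m: market potential, p: innovation coefficient, q: imitation coefficient
    e = np.exp(-(p + q) * t)
    return m * (1.0 - e) / (1.0 + (q / p) * e)

t = np.arange(1, 21, dtype=float)                       # 20 selling periods
sales = 1000 / (1 + np.exp(-0.5 * (t - 10)))            # synthetic cumulative sales
sales += np.random.default_rng(0).normal(0, 10, t.size)

fits = {}
for name, f, p0 in [("logistic", logistic, (1000, 0.5, 10)),
                    ("bass", bass, (1000, 0.01, 0.4))]:
    params, _ = curve_fit(f, t, sales, p0=p0, maxfev=10000)
    rmse = np.sqrt(np.mean((f(t, *params) - sales) ** 2))
    fits[name] = (params, rmse)
    print(f"{name}: RMSE = {rmse:.1f}, params = {np.round(params, 3)}")

best = min(fits, key=lambda k: fits[k][1])               # lowest-RMSE model wins
print("best model:", best)
# The fitted parameter vectors of all products would then be the input rows for
# training the self-organizing map that groups similar growth patterns.
```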