• Title/Summary/Keyword: deep learning strategy

Search Results: 139

Research of Riemannian Procrustes Analysis on EEG Based SPD-Net (EEG 기반 SPD-Net에서 리만 프로크루스테스 분석에 대한 연구)

  • Isaac Yoon Seock Bang;Byung Hyung Kim
    • Journal of Biomedical Engineering Research
    • /
    • v.45 no.4
    • /
    • pp.179-186
    • /
    • 2024
  • This paper investigates the impact of Riemannian Procrustes Analysis (RPA) on enhancing the classification performance of SPD-Net when applied to EEG signals across different sessions and subjects. EEG signals, known for their inherent individual variability, are initially transformed into Symmetric Positive Definite (SPD) matrices, which are naturally represented on a Riemannian manifold. To mitigate the variability between sessions and subjects, we employ RPA, a method that geometrically aligns the statistical distributions of these matrices on the manifold. This alignment is designed to reduce individual differences and improve the accuracy of EEG signal classification. SPD-Net, a deep learning architecture that maintains the Riemannian structure of the data, is then used for classification. We compare its performance with the Minimum Distance to Mean (MDM) classifier, a conventional method rooted in Riemannian geometry. The experimental results demonstrate that incorporating RPA as a preprocessing step enhances the classification accuracy of SPD-Net, validating that the alignment of statistical distributions on the Riemannian manifold is an effective strategy for improving EEG-based BCI systems. These findings suggest that RPA can play a role in addressing individual variability, thereby increasing the robustness and generalization capability of EEG signal classification in practical BCI applications.

Understanding the Artificial Intelligence Business Ecosystem for Digital Transformation: A Multi-actor Network Perspective (디지털 트랜스포메이션을 위한 인공지능 비즈니스 생태계 연구: 다행위자 네트워크 관점에서)

  • Yoon Min Hwang;Sung Won Hong
    • Information Systems Review
    • /
    • v.21 no.4
    • /
    • pp.125-141
    • /
    • 2019
  • With the advent of deep learning technology, exemplified by AlphaGo, artificial intelligence (A.I.) has quickly emerged as a key theme of digital transformation for securing competitive advantage in business. To understand the trends of A.I.-based digital transformation, a clear comprehension of the A.I. business ecosystem must come first. This study therefore analyzed the A.I. business ecosystem from a multi-actor network perspective and identified A.I. platform strategy types. Within the three internal layers of the A.I. business ecosystem (infrastructure & hardware, software & application, and service & data), this study identified four types of A.I. platform strategy (Tech. vertical × Biz. horizontal, Tech. vertical × Biz. vertical, Tech. horizontal × Biz. horizontal, Tech. horizontal × Biz. vertical). Outside the A.I. platform, this study presented five actors (users, investors, policy makers, consortiums & innovators, CSOs/NGOs) and their roles in supporting a sustainable A.I. business ecosystem in symbiosis with humans. The roles of government and academia in creating a sustainable A.I. business ecosystem were also suggested. These results will help to find the proper strategic direction for the A.I. business ecosystem and digital transformation.

Design of Distributed Hadoop Full Stack Platform for Big Data Collection and Processing (빅데이터 수집 처리를 위한 분산 하둡 풀스택 플랫폼의 설계)

  • Lee, Myeong-Ho
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.7
    • /
    • pp.45-51
    • /
    • 2021
  • With the rapid shift to non-face-to-face services and mobile-first strategies, the explosive annual growth of structured and unstructured data demands new decision-making and services based on big data in every field. However, there have been few reference cases that use the Hadoop Ecosystem to collect and load this rapidly growing big data onto a standard platform applicable in a practical environment, and then store the well-organized big data in a relational database. Therefore, in this study, unstructured data retrieved by keyword from social network services was collected on Hadoop 2.0 across three virtual machine servers in the Spring Framework environment; the collected data was loaded into the Hadoop Distributed File System and HBase, and a system was designed and implemented that uses a morpheme analyzer on the loaded unstructured data to store standardized big data in a relational database. In the future, research on clustering, classification, and analysis with machine learning using Hive or Mahout for deeper data analysis should continue.
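The described flow (keyword collection from SNS, loading into HDFS/HBase, morpheme analysis, relational storage) can be sketched in miniature. The crawler, morpheme analyzer, and storage steps below are pure-Python stand-ins for the real Hadoop 2.0 / HBase components, and all names are hypothetical.

```python
def collect_posts(keyword, source):
    # stand-in for an SNS crawler filtering posts by search keyword
    return [post for post in source if keyword in post]

def analyze_morphemes(text):
    # stand-in tokenizer; the real system uses a Korean morpheme analyzer
    return text.split()

def run_pipeline(keyword, source):
    raw = collect_posts(keyword, source)        # real design: write to HDFS
    rows = [{"text": p, "tokens": analyze_morphemes(p)}  # real design: HBase
            for p in raw]
    return rows  # standardized rows, ready for a relational database
```

The value of the design lies in distributing the middle steps over the Hadoop cluster; this sketch only shows the data shape at each stage.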

Performance Improvement of Context-Sensitive Spelling Error Correction Techniques using Knowledge Graph Embedding of Korean WordNet (alias. KorLex) (한국어 어휘 의미망(alias. KorLex)의 지식 그래프 임베딩을 이용한 문맥의존 철자오류 교정 기법의 성능 향상)

  • Lee, Jung-Hun;Cho, Sanghyun;Kwon, Hyuk-Chul
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.3
    • /
    • pp.493-501
    • /
    • 2022
  • This paper studies context-sensitive spelling error correction, using the Korean WordNet (KorLex)[1], which represents relationships between words as a graph, to improve the performance of a correction technique[2] based on embedded word vector information. KorLex was constructed for Korean on the basis of WordNet[3], developed at Princeton University in the United States. To learn a semantic network in graph form, or to use its learned vector information, the graph must be transformed into vector form through embedding learning. For this transformation, a limited number of nodes in the network graph are listed in a line, like words in a sentence, before being used as training input. One learning technique that uses this strategy is DeepWalk[4], which we use to learn the graph of words in the Korean WordNet. The graph embedding information is concatenated with the word vector information of a learned language model for correction, and the final correction word is determined by the cosine distance between the vectors. To test whether graph embedding information improves the performance of context-sensitive spelling error correction, confused word pairs were constructed and tested from the perspective of Word Sense Disambiguation (WSD). In the experimental results, the average correction performance over all confused word pairs improved by 2.24% compared to the baseline.
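The two concrete steps, linearizing the graph into node "sentences" (as in DeepWalk) and choosing a correction by cosine similarity over concatenated vectors, can be sketched as follows. The graph, vectors, and function names are illustrative, and the word2vec-style training of the walks is omitted.

```python
import random
import math

def random_walks(graph, walk_len=5, walks_per_node=2, seed=0):
    # DeepWalk-style: turn the word graph into node "sentences" for a
    # word2vec-style embedding trainer (training itself not shown)
    rng = random.Random(seed)
    walks = []
    for start in graph:
        for _ in range(walks_per_node):
            walk, node = [start], start
            for _ in range(walk_len - 1):
                neighbors = graph[node]
                if not neighbors:
                    break
                node = rng.choice(neighbors)
                walk.append(node)
            walks.append(walk)
    return walks

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def pick_correction(context_vec, candidates):
    # candidates: {word: concatenated LM vector + graph-embedding vector};
    # the most similar candidate to the context is chosen
    return max(candidates, key=lambda w: cosine(context_vec, candidates[w]))
```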

Training a semantic segmentation model for cracks in the concrete lining of tunnel (터널 콘크리트 라이닝 균열 분석을 위한 의미론적 분할 모델 학습)

  • Ham, Sangwoo;Bae, Soohyeon;Kim, Hwiyoung;Lee, Impyeong;Lee, Gyu-Phil;Kim, Donggyou
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.23 no.6
    • /
    • pp.549-558
    • /
    • 2021
  • To keep infrastructure such as tunnels and underground facilities safe, cracks in the concrete lining of tunnels must be detected through regular inspections. Because regular inspections are carried out manually using maintenance lift vehicles, they cause traffic jams, expose workers to dangerous conditions, and degrade the consistency of crack inspection data. This study aims to provide a methodology to automatically extract cracks from tunnel concrete lining images produced by the existing tunnel image acquisition system. Specifically, we train a deep learning based semantic segmentation model on an open dataset and evaluate its performance on a dataset from the existing tunnel image acquisition system. In particular, we compare model performance when training on the full public dataset, on the subset of the public dataset related to tunnel surfaces, and on the tunnel-related subset augmented with negative examples. As a result, the model trained on the tunnel-related subset with negative examples reached the best performance. We expect this research to be useful for planning efficient model training strategies for crack detection.
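The three training-set configurations compared above can be sketched as a simple selection function; the sample fields and configuration names here are hypothetical, not from the paper.

```python
def build_training_set(samples, config):
    # samples: dicts with hypothetical keys "surface" and "has_crack"
    # config mirrors the three settings compared in the study:
    #   "all"        - the full public dataset
    #   "tunnel"     - only tunnel-related crack images
    #   "tunnel+neg" - tunnel-related images plus crack-free negatives
    if config == "all":
        return samples
    tunnel = [s for s in samples if s["surface"] == "tunnel-like"]
    if config == "tunnel":
        return [s for s in tunnel if s["has_crack"]]
    return tunnel  # tunnel-related subset including negatives
```

The point of the third setting is that crack-free negatives teach the model what the target surface looks like without cracks, reducing false positives.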

Classification of Industrial Parks and Quarries Using U-Net from KOMPSAT-3/3A Imagery (KOMPSAT-3/3A 영상으로부터 U-Net을 이용한 산업단지와 채석장 분류)

  • Che-Won Park;Hyung-Sup Jung;Won-Jin Lee;Kwang-Jae Lee;Kwan-Young Oh;Jae-Young Chang;Moung-Jin Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_3
    • /
    • pp.1679-1692
    • /
    • 2023
  • South Korea is a country that emits a large amount of pollutants as a result of population growth and industrial development, and it is also severely affected by transboundary air pollution due to its geographical location. As pollutants from both domestic and foreign sources contribute to air pollution in Korea, the locations of air pollutant emission sources are crucial for understanding the movement and distribution of pollutants in the atmosphere and for establishing national-level air pollution management and response strategies. Against this background, this study aims to effectively acquire spatial information on domestic and international air pollutant emission sources, which is essential for analyzing air pollution status, by utilizing high-resolution optical satellite images and deep learning based image segmentation models. In particular, industrial parks and quarries, which have been evaluated as contributing significantly to transboundary air pollution, were selected as the main research subjects, and KOMPSAT-3 and KOMPSAT-3A images of these areas were collected, preprocessed, and converted into input and label data for model training. Training the U-Net model on this data achieved an overall accuracy of 0.8484 and a mean Intersection over Union (mIoU) of 0.6490, and the predicted maps extracted object boundaries more accurately than the label data created by coarse annotations.
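For reference, the reported mIoU metric averages per-class intersection-over-union. A minimal sketch over flattened per-pixel label lists (assuming, as is common, that classes absent from both prediction and label are skipped):

```python
def iou_per_class(pred, label, n_classes):
    # pred, label: flattened per-pixel class ids of equal length
    ious = []
    for c in range(n_classes):
        inter = sum(1 for p, l in zip(pred, label) if p == c and l == c)
        union = sum(1 for p, l in zip(pred, label) if p == c or l == c)
        ious.append(inter / union if union else float("nan"))
    return ious

def mean_iou(pred, label, n_classes):
    # average over classes, skipping classes absent from both maps (nan)
    ious = [i for i in iou_per_class(pred, label, n_classes) if i == i]
    return sum(ious) / len(ious)
```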

Text Mining-Based Emerging Trend Analysis for e-Learning Contents Targeting for CEO (텍스트마이닝을 통한 최고경영자 대상 이러닝 콘텐츠 트렌드 분석)

  • Kyung-Hoon Kim;Myungsin Chae;Byungtae Lee
    • Information Systems Review
    • /
    • v.19 no.2
    • /
    • pp.1-19
    • /
    • 2017
  • Original scripts of e-learning lectures for the CEOs of corporation S were analyzed using topic analysis, a text mining method. Twenty-two topics were extracted based on keywords chosen from five years of records, from 2011 to 2015, and various issues were then analyzed. Promising topics were selected through evaluation and element analysis of the members of each topic. In management and economics, members showed high satisfaction with and interest in topics on marketing strategy, human resource management, and communication. Philosophy, the history of war, and history drew high interest and satisfaction in the humanities, while mental health did so in the lifestyle field. Some topics accounted for a large proportion of content but failed to increase member satisfaction. In the field of IT, educational content responds sensitively to the changing times but does not necessarily raise members' interest and satisfaction. The present study found that content production for CEOs should draw out deep implications for value innovation through technology application rather than stopping at the technical delivery of information. Previous studies classified content superficially by program name when analyzing the status of content operation, whereas text mining can derive deeper content and subject classifications from unstructured script data. This approach can reveal current shortages and needed fields when the service contents of each theme are displayed by year. This study was based on data obtained from an influential e-learning company in Korea; obtaining broader results was difficult because data were not acquired from portal sites or social networking services. The e-learning content trends of CEOs were analyzed, and data analysis was also conducted on the intellectual interests of CEOs in each field.
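One simple way to display topic presence by year, as the abstract suggests, is to count topic-keyword frequencies per year. This sketch uses hand-picked keyword lists as a stand-in for the topic model actually used in the study.

```python
from collections import Counter

def topic_trends(scripts_by_year, topic_keywords):
    # scripts_by_year: {year: [lecture script text, ...]}
    # topic_keywords: {topic: [keyword, ...]} -- hand-picked here; the
    # study derives 22 topics from the 2011-2015 scripts via topic analysis
    trends = {}
    for year, scripts in scripts_by_year.items():
        words = Counter(w for s in scripts for w in s.lower().split())
        trends[year] = {t: sum(words[k] for k in kws)
                        for t, kws in topic_keywords.items()}
    return trends
```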

The Effect of Changes in Airbnb Host's Marketing Strategy on Listing Performance in the COVID-19 Pandemic (COVID-19 팬데믹에서 Airbnb 호스트의 마케팅 전략의 변화가 공유성과에 미치는 영향)

  • Kim, So Yeong;Sim, Ji Hwan;Chung, Yeo Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.1-27
    • /
    • 2021
  • The entire tourism industry has been hit hard by the global COVID-19 pandemic. Accommodation sharing services such as Airbnb, which expanded with the spread of the sharing economy, are particularly affected because transactions rely on trust and communication between consumer and supplier. As the pandemic changes individuals' perceptions and travel behavior, strategies for the recovery of the tourism industry have been discussed. However, since most studies present macro strategies from the standpoint of traditional lodging providers and the government, there is a significant lack of discussion on differentiated pandemic response strategies that consider the peculiarity of a sharing economy centered on peer-to-peer transactions. This study discusses marketing strategies for individual Airbnb hosts during COVID-19. We empirically analyze the effect of changes in the listing descriptions posted by Airbnb hosts on listing performance after the outbreak of COVID-19. We extract nine aspects described in the listing descriptions using the Attention-Based Aspect Extraction model, a deep learning based aspect extraction method, and model the effect of aspect changes on listing performance after COVID-19 by observing the frequency with which each aspect appears in the text. In addition, we compare these effects across types of Airbnb listing. This study thereby presents an idea for a pandemic crisis response strategy that individual providers of accommodation sharing services can adopt depending on their listing type.
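The frequency-based modeling step can be illustrated with a small sketch that counts aspect-related terms in a listing description before and after a cutoff. The aspect lexicon here is a hypothetical stand-in for the aspects extracted by the Attention-Based Aspect Extraction model.

```python
def aspect_frequencies(description, aspect_terms):
    # aspect_terms: {aspect: [term, ...]} -- a hand-written stand-in for
    # the nine aspects the paper extracts with a neural aspect model
    tokens = description.lower().split()
    return {a: sum(tokens.count(t) for t in terms)
            for a, terms in aspect_terms.items()}

def aspect_change(desc_before, desc_after, aspect_terms):
    # per-aspect change in emphasis between two versions of a listing
    fb = aspect_frequencies(desc_before, aspect_terms)
    fa = aspect_frequencies(desc_after, aspect_terms)
    return {a: fa[a] - fb[a] for a in aspect_terms}
```

These per-aspect changes would then serve as covariates in a model of listing performance, as the abstract describes.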

Lesion Detection in Chest X-ray Images based on Coreset of Patch Features (패치 특징 코어세트 기반의 흉부 X-Ray 영상에서의 병변 유무 감지)

  • Kim, Hyun-bin;Chun, Jun-Chul
    • Journal of Internet Computing and Services
    • /
    • v.23 no.3
    • /
    • pp.35-45
    • /
    • 2022
  • Even in recent years, treatment of emergency patients is still often delayed by the shortage of medical resources in marginalized areas. Research on automating the analysis of medical data, to address the inaccessibility of medical services and the shortage of medical personnel, is ongoing. Computer vision based automation of medical inspection requires substantial cost for collecting and labeling training data. These problems are pronounced when classifying lesions that are rare, or pathological features and pathogeneses that are difficult to define clearly by visual inspection. Anomaly detection is attracting attention as a method that can significantly reduce the cost of data collection by adopting an unsupervised learning strategy. In this paper, building on existing anomaly detection techniques, we propose a method for detecting abnormal chest X-ray images as follows: (1) normalize the brightness range of medical images resampled to an optimal resolution; (2) select feature vectors with high representative power from the set of intermediate-level patch features extracted from lesion-free images; (3) measure the difference of each test image from the selected lesion-free feature vectors using a nearest neighbor search algorithm. The proposed system can simultaneously perform anomaly classification and localization for each image. We measure and report the anomaly detection performance of the proposed system on chest X-ray images of PA projection under detailed conditions, and demonstrate the effect of anomaly detection for medical images with a classification AUROC of 0.705 on a random subset extracted from the PadChest dataset. The proposed system can help improve the clinical diagnosis workflow of medical institutions and can effectively support early diagnosis in medically underserved areas.
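Steps (2) and (3) amount to coreset selection over normal patch features followed by nearest-neighbor scoring. A minimal pure-Python sketch, with greedy farthest-point selection standing in for whichever coreset sampler the paper uses:

```python
import math

def dist(u, v):
    # Euclidean distance between two feature vectors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def greedy_coreset(feats, m):
    # farthest-point greedy selection of m representative patch features
    # from lesion-free training images
    core = [feats[0]]
    while len(core) < m:
        far = max(feats, key=lambda f: min(dist(f, c) for c in core))
        core.append(far)
    return core

def anomaly_score(patch_feats, core):
    # image-level score: max over patches of the nearest-coreset distance;
    # the per-patch distances themselves localize the anomaly
    return max(min(dist(p, c) for c in core) for p in patch_feats)
```

Images whose patches all lie near the lesion-free coreset score low; a patch far from every representative feature drives the score up and marks the anomalous region.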

Adversarial Framework for Joint Light Field Super-resolution and Deblurring (라이트필드 초해상도와 블러 제거의 동시 수행을 위한 적대적 신경망 모델)

  • Lumentut, Jonathan Samuel;Baek, Hyungsun;Park, In Kyu
    • Journal of Broadcast Engineering
    • /
    • v.25 no.5
    • /
    • pp.672-684
    • /
    • 2020
  • Restoring low-resolution, motion-blurred light fields has become essential given the growing body of work on parallax-based image processing. These tasks are known as light field enhancement. Unfortunately, only a few state-of-the-art methods address these problems jointly. In this work, we design a framework that jointly solves the light field spatial super-resolution and motion deblurring tasks. In particular, we build a straightforward neural network trained on a low-resolution, 6-degree-of-freedom (6-DOF) motion-blurred light field dataset. Furthermore, we propose a local region optimization strategy on the adversarial network to boost performance. We evaluate our method with both quantitative and qualitative measurements and exhibit superior performance compared to state-of-the-art methods.
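The local region optimization idea presupposes splitting an image into regions on which an adversarial loss can be computed separately. A minimal sketch of such a region split (the region size and list-of-lists image representation are illustrative, not the paper's):

```python
def local_regions(image, size):
    # split a 2-D image (list of rows) into non-overlapping size x size
    # regions, so a discriminator loss can be applied per region
    h, w = len(image), len(image[0])
    return [[row[x:x + size] for row in image[y:y + size]]
            for y in range(0, h, size)
            for x in range(0, w, size)]
```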