• Title/Summary/Keyword: Computing system

Search Results: 5,906

Cyber Threats Analysis of AI Voice Recognition-based Services with Automatic Speaker Verification (화자식별 기반의 AI 음성인식 서비스에 대한 사이버 위협 분석)

  • Hong, Chunho;Cho, Youngho
    • Journal of Internet Computing and Services / v.22 no.6 / pp.33-40 / 2021
  • Automatic Speech Recognition (ASR) is a technology that converts human speech into speech signals and then automatically transcribes them into character strings that humans can understand. Speech recognition technology has evolved from the basic level of recognizing a single word to the advanced level of recognizing sentences consisting of multiple words. In real-time voice conversation, a high recognition rate improves the convenience of natural information delivery and expands the scope of voice-based applications. On the other hand, as speech recognition technology is actively deployed, concerns about related cyber attacks and threats are also increasing. According to existing studies, research on the technology itself, such as the design of Automatic Speaker Verification (ASV) techniques and improvement of their accuracy, is being actively conducted, but in-depth and varied analyses of attacks and threats remain scarce. In this study, we propose a cyber attack model that bypasses voice authentication by simply manipulating voice frequency and voice speed against AI voice recognition services equipped with automatic speaker verification, and we analyze the resulting cyber threats through extensive experiments on the speaker verification systems of commercial smartphones. Through this, we intend to highlight the seriousness of the related cyber threats and raise interest in research on effective countermeasures.
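
As a rough illustration of the attack surface the abstract describes, the sketch below manipulates a voice sample's pitch (frequency) and speed with librosa. The file names, semitone shift, and stretch rate are illustrative assumptions, not the parameters used in the paper.

```python
# Hedged sketch: simple frequency/speed manipulation of a voice probe.
# Parameter values here are assumptions for illustration only.
import librosa
import soundfile as sf

y, sr = librosa.load("enrolled_speaker.wav", sr=None)  # hypothetical recording

# Shift the voice frequency by a small number of semitones.
y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=1.5)

# Change the speaking speed without further altering the pitch.
y_manipulated = librosa.effects.time_stretch(y_shifted, rate=1.1)

sf.write("manipulated_probe.wav", y_manipulated, sr)
# The manipulated probe would then be replayed against the speaker
# verification system to test whether authentication still succeeds.
```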

Development of a modified model for predicting cabbage yield based on soil properties using GIS (GIS를 이용한 토양정보 기반의 배추 생산량 예측 수정모델 개발)

  • Choi, Yeon Oh;Lee, Jaehyeon;Sim, Jae Hoo;Lee, Seung Woo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.40 no.5 / pp.449-456 / 2022
  • This study proposes a deep learning algorithm that predicts crop yield using GIS (Geographic Information System) to extract soil properties from Soilgrids and soil suitability class maps. The proposed model modifies the structure of a published CNN-RNN (Convolutional Neural Network-Recurrent Neural Network) crop yield prediction model to suit the domestic crop environment. The existing model has two characteristics: first, it replaces the original yield with the average yield of the year, and second, it trains on data from the year being predicted. The new model uses the original field values to ensure accuracy, and its network structure is improved so that it trains only on data from years prior to the one being predicted. The proposed model predicted the yield per unit area of autumn kimchi cabbage by region, based on weather, soil, soil suitability class, and yield data from 1980 to 2020. Predicting each of the four years from 2018 to 2021, the error on the test data set was about 10%, enabling accurate yield prediction, especially in regions that account for a large share of total yield. In addition, for both the proposed and existing models, the error gradually decreases as the number of years of training data increases, indicating improved generalization with more training data.
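
The key protocol change the abstract describes, training only on years strictly before the prediction year, can be sketched as a rolling-origin evaluation. The file, column names, and the stand-in regressor below are assumptions; the paper's actual model is a modified CNN-RNN.

```python
# Hedged sketch of the "train only on earlier years" protocol; the file,
# column names, and stand-in model are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor  # stand-in for the CNN-RNN

df = pd.read_csv("cabbage_yield.csv")  # hypothetical region-year table
features = [c for c in df.columns if c not in ("year", "region", "yield")]

for target_year in (2018, 2019, 2020, 2021):
    train = df[df["year"] < target_year]      # strictly earlier years only
    test = df[df["year"] == target_year]
    model = RandomForestRegressor(random_state=0)
    model.fit(train[features], train["yield"])
    pred = model.predict(test[features])
    mape = (abs(pred - test["yield"]) / test["yield"]).mean() * 100
    print(f"{target_year}: error {mape:.1f}%")
```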

Real-Time GPU Task Monitoring and Node List Management Techniques for Container Deployment in a Cluster-Based Container Environment (클러스터 기반 컨테이너 환경에서 실시간 GPU 작업 모니터링 및 컨테이너 배치를 위한 노드 리스트 관리기법)

  • Kang, Jihun;Gil, Joon-Min
    • KIPS Transactions on Computer and Communication Systems / v.11 no.11 / pp.381-394 / 2022
  • Recently, with the personalization and customization of data, Internet-based services face growing requirements for real-time processing, such as real-time AI inference and data analysis that must be handled immediately according to the user's situation or request. A real-time task has a deadline set from its start to the return of its results, and guaranteeing that deadline is directly linked to service quality. However, traditional container systems are limited in operating real-time tasks because they do not provide the ability to allocate and manage deadlines for tasks executed in containers. In addition, tasks such as AI inference and data analysis typically utilize graphics processing units (GPUs), and because performance isolation is not provided between containers, co-located containers affect each other's performance. Moreover, node resource usage alone cannot determine each container's deadline guarantee rate or whether a new real-time container can be deployed. In this paper, we propose a monitoring technique that tracks and manages deadlines and the execution status of real-time GPU tasks in containers to support their real-time processing, together with a node list management technique that places containers on appropriate nodes so that deadlines are guaranteed. Furthermore, we demonstrate experimentally that the proposed technique has a very small impact on the system.
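
To make the node list idea concrete, here is a minimal sketch of how per-node deadline statistics from the monitoring technique might feed a placement decision. All field names, thresholds, and the sort order are assumptions for illustration; the paper's actual criteria are not specified in the abstract.

```python
# Hedged sketch: rank candidate nodes for a new real-time GPU container by
# monitored deadline guarantee rate and GPU utilization. Thresholds are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    gpu_util: float        # 0.0-1.0, from periodic GPU monitoring
    deadline_met: int      # GPU tasks that met their deadline
    deadline_missed: int   # GPU tasks that missed their deadline

    @property
    def guarantee_rate(self) -> float:
        total = self.deadline_met + self.deadline_missed
        return self.deadline_met / total if total else 1.0

def placement_candidates(nodes, min_rate=0.95, max_util=0.8):
    """Nodes likely to keep deadlines for one more real-time container."""
    ok = [n for n in nodes if n.guarantee_rate >= min_rate and n.gpu_util <= max_util]
    return sorted(ok, key=lambda n: (n.gpu_util, -n.guarantee_rate))

nodes = [Node("n1", 0.35, 98, 2), Node("n2", 0.90, 70, 30), Node("n3", 0.20, 50, 0)]
print([n.name for n in placement_candidates(nodes)])  # ['n3', 'n1']; n2 is excluded
```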

Comparative Analysis and Implications of Command and Control(C2)-related Information Exchange Models (지휘통제 관련 정보교환모델 비교분석 및 시사점)

  • Kim, Kunyoung;Park, Gyudong;Sohn, Mye
    • Journal of Internet Computing and Services / v.23 no.6 / pp.59-69 / 2022
  • For effective battlefield situation awareness and command decision-making, seamless information exchange between systems is essential. However, since each system was developed independently for its own purposes, interoperability between systems must be ensured for information to be exchanged effectively. In the case of the Korean military, semantic interoperability is pursued by utilizing a common message format for data exchange. However, simply standardizing the data exchange format cannot sufficiently guarantee interoperability between systems. Currently, the U.S. and NATO develop and utilize information exchange models to achieve semantic interoperability beyond a guaranteed data exchange format. Information exchange models are common vocabularies or reference models used to ensure the exchange of information between systems at the level of content and meaning. The information exchange models developed and utilized in the United States initially focused on exchanging information directly related to the battlefield situation, but they have developed into a universal form that can be used by all government departments and related organizations. NATO, on the other hand, focused on strictly expressing the concepts necessary to carry out joint military operations among member countries, and the scope of its models was limited to concepts related to command and control. In this paper, the background, purpose, and characteristics of the information exchange models developed and used by the United States and NATO are identified and comparatively analyzed. Through this, we intend to present implications for the future development of a Korean information exchange model.

Detection of Signs of Hostile Cyber Activity against External Networks based on Autoencoder (오토인코더 기반의 외부망 적대적 사이버 활동 징후 감지)

  • Park, Hansol;Kim, Kookjin;Jeong, Jaeyeong;Jang, Jisu;Youn, Jaepil;Shin, Dongkyoo
    • Journal of Internet Computing and Services / v.23 no.6 / pp.39-48 / 2022
  • Cyberattacks around the world continue to increase, and their damage extends beyond government facilities to civilians. These issues have emphasized the importance of developing a system that can identify and detect cyber anomalies early. Accordingly, several studies have trained machine learning models on BGP (Border Gateway Protocol) data to identify such anomalies effectively. However, BGP data is imbalanced: abnormal records are far fewer than normal ones. This biases the model's training and reduces the reliability of its results. In addition, in an actual cyber situation, raw machine learning outputs alone do not let security personnel grasp the situation. Therefore, in this paper, we investigate BGP data, which records routing behavior across the global network, and solve the data imbalance problem using SMOTE. Then, assuming a cyber range situation, an autoencoder classifies cyber anomalies and the classified data is visualized. By learning the patterns of normal data, the model classifies abnormal data with 92.4% accuracy, and the auxiliary metrics also reach about 90%, ensuring the reliability of the results. In addition, because visualizing the congested cyberspace makes the situation easy to recognize, the approach is expected to support effective defense against cyber attacks.
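
A compact sketch of the pipeline the abstract outlines, SMOTE for rebalancing followed by an autoencoder trained on normal data only, might look like the following. The feature files, network size, and the 95th-percentile threshold are assumptions; a small scikit-learn regressor stands in for the paper's autoencoder.

```python
# Hedged sketch: rebalance BGP-derived features with SMOTE, train an
# autoencoder on normal records, and flag high reconstruction error.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.neural_network import MLPRegressor  # small stand-in autoencoder

X = np.load("bgp_features.npy")   # hypothetical feature matrix
y = np.load("bgp_labels.npy")     # hypothetical labels: 0 = normal, 1 = anomaly
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

X_normal = X_bal[y_bal == 0]
ae = MLPRegressor(hidden_layer_sizes=(16, 4, 16), max_iter=500, random_state=0)
ae.fit(X_normal, X_normal)        # learn to reconstruct normal patterns only

errors = ((ae.predict(X_bal) - X_bal) ** 2).mean(axis=1)
threshold = np.percentile(errors[y_bal == 0], 95)  # illustrative cut-off
anomalous = errors > threshold    # candidates to visualize for the analyst
```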

LSTM-based Fire and Odor Prediction Model for Edge System (엣지 시스템을 위한 LSTM 기반 화재 및 악취 예측 모델)

  • Youn, Joosang;Lee, TaeJin
    • KIPS Transactions on Computer and Communication Systems / v.11 no.2 / pp.67-72 / 2022
  • Recently, various intelligent application services using artificial intelligence have been actively developed. In particular, research on AI-based real-time prediction services is active in the manufacturing industry, where demand is very high for AI services that can detect and predict fires and odors. However, most existing systems do not predict the occurrence of fires and odors in advance; they only provide detection services after the fact, because AI-based prediction technology has not been applied to them. In addition, fire prediction and odor detection/prediction are services with ultra-low-latency requirements. To provide such ultra-low-latency prediction services, edge computing is being combined with artificial intelligence models so that inference results can be applied in the field faster than with the cloud. Therefore, in this paper, we propose an LSTM-based learning model that can be used for fire prediction and odor detection/prediction, the capabilities most needed in the manufacturing industry. The proposed learning model is designed to be implemented on edge devices: it receives real-time sensor data from IoT terminals and applies this data to the inference model to predict fire and odor conditions in real time. The model's prediction accuracy was evaluated using three performance indicators, showing an average performance of over 90%.
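
A minimal sketch of the kind of edge-sized LSTM inference loop the abstract describes is shown below. The window length, feature set, and layer sizes are assumptions; the paper's actual architecture and sensors are not given in the abstract.

```python
# Hedged sketch: small LSTM mapping a window of IoT sensor readings to a
# fire/odor event probability; sizes are illustrative assumptions.
import numpy as np
import tensorflow as tf

WINDOW, FEATURES = 30, 4  # e.g. temperature, smoke, gas, humidity readings

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, FEATURES)),
    tf.keras.layers.LSTM(16),                        # small enough for an edge device
    tf.keras.layers.Dense(1, activation="sigmoid"),  # event probability
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Streaming use: keep the latest WINDOW readings from the IoT terminal and
# run inference in real time (placeholder data shown here).
window = np.random.rand(1, WINDOW, FEATURES).astype("float32")
risk = float(model.predict(window, verbose=0)[0, 0])
print(f"predicted event probability: {risk:.2f}")
# For deployment, the trained model could be converted with
# tf.lite.TFLiteConverter to run on the edge device itself.
```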

Case study of information curriculum for upper-grade students of elementary school (초등학교 고학년 정보 교육과정 사례 연구)

  • Kang, Seol-Joo;Park, Phanwoo;Kim, Wooyeol;Bae, Youngkwon
    • Journal of The Korean Association of Information Education / v.26 no.4 / pp.229-238 / 2022
  • At a time when the 2022 revised curriculum is being discussed, demand for the normalization of information education is increasing. This study examines a case of an information curriculum for the upper elementary grades that responds to such needs. For fourteen sixth-grade students of Elementary School B in K Metropolitan City, the four core areas of the information curriculum (computing systems, data, algorithms & programming, and digital culture) were covered in classes. Collaborative lessons among students were conducted using cloud-based applications suited to each class. In addition, the curriculum was supplemented with ideas for the artificial intelligence education area, and the study was deepened with an additional investigation of foreign information education cases. However, the need for an independently organized information curriculum was strongly confirmed: under the current curriculum, information classes lacked sufficient school hours and, for this case study, had to be operated in combination with other subjects in the form of a project. It is hoped that this study will serve as a small foundation for establishing an information curriculum for the upper elementary grades in the future.

Introduction to the Technology of Digital Groundwater (Digital Groundwater의 기술 소개)

  • Hyeon-Sik Kim
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.10-10 / 2023
  • The water supply systems of Korea, spanning inherently complex and diverse settings (cities, farming and fishing villages, remote mountain areas, islands, etc.), are very complicated to operate and manage given rising living standards, climate change and drought crises, consumption-centered demands, and limited water resources. As countermeasures against the resulting depletion of water resources and drought crises, plans for utilizing groundwater as an alternative water resource have been proposed. Digital technology for water management systems is thus realizing the Fourth Industrial Revolution, in which networks and the virtual world are integrated through the introduction of platforms and digital twins. As new digital technologies applied to water management systems, such as BDA (Big Data Analytics), CPS (Cyber Physical Systems), IoT (Internet of Things), CC (Cloud Computing), and AI (Artificial Intelligence), continue to grow, a sustainable groundwater supply must be managed effectively in order to respond to the drought crisis and to meet the growing consumer-centered demand on urban groundwater circulation systems. With the growth of Fourth Industrial Revolution technologies, the water sector is moving toward full digitalization to improve the sustainability of its systems. The core of this digital transformation is data, and to create value from it, "Digital Groundwater Technology/Twin (DGT)" should be leveraged to the fullest. Various responses to the water resource disasters arising from the current climate crisis (drought, floods, algal blooms, turbid water, alternative water resources, etc.) and technologies for securing water resources are being discussed. Accordingly, together with an understanding of the "water circulation system", this special session seeks to derive problem-solving approaches by building platform concurrency for water resource data (groundwater quantity and quality, purification, monitoring, modeling, operation/management, etc.), digital-twinning it through a dynamic "DGT", and exploring, from multiple angles, vertical profiling observation technologies specialized for the surface water-soil-groundwater domain. "Digital Groundwater (DG)" is advancing toward a composite digitalization stage, combining groundwater platform concurrency, ChatGPT, CPS, and DT, for the integrated linkage of groundwater circulation, quantity and quality management, surface water-groundwater circulation and monitoring, and groundwater prediction modeling. Developing smart groundwater observation technology that collects quantity and quality data from groundwater networks in order to understand, manage, and conserve the complex underground environment is a major challenge, because such observation can be affected by data quality and by the complexity of the algorithms used to analyze the acquired data required by digital technologies such as big data analytics, AI, and cloud computing. "DG" can create a vision for integrated management of the surface water-soil layer-groundwater network by employing digital tools for groundwater informatization and for automated, intelligent network operation and management. In addition, DGT can link digital twins with groundwater platform concurrency using one-dimensional data fusion from groundwater observation sensors.


A Study on Effective Adversarial Attack Creation for Robustness Improvement of AI Models (AI 모델의 Robustness 향상을 위한 효율적인 Adversarial Attack 생성 방안 연구)

  • Si-on Jeong;Tae-hyun Han;Seung-bum Lim;Tae-jin Lee
    • Journal of Internet Computing and Services / v.24 no.4 / pp.25-36 / 2023
  • Today, as AI (Artificial Intelligence) technology is introduced in various fields, including security, its development is accelerating. At the same time, attack techniques that cleverly bypass malicious behavior detection are also developing. In the classification process of AI models, adversarial attacks have emerged that induce misclassification and reduce reliability through fine perturbations of input values. The attacks that will appear in the future are often not new attacks created from scratch but methods of evading detection systems by slightly modifying existing attacks, as adversarial attacks do, so it is necessary to develop robust models that can respond to such malware variants. In this paper, we propose two efficient adversarial attack generation techniques for improving the robustness of AI models: an XAI-based attack that uses XAI techniques, and a reference-based attack that searches the model's decision boundary. We then built a classification model on a malicious code dataset to compare performance with PGD, one of the existing adversarial attacks. In terms of generation speed, the XAI-based and reference-based attacks take 0.35 and 0.47 seconds, respectively, compared to roughly 20 minutes for the existing PGD attack. The reference-based attack also achieves a generation success rate of 97.7%, higher than PGD's 75.5%. The proposed techniques therefore enable more efficient adversarial attack generation and are expected to contribute to future research on building robust AI models.
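
The abstract does not give the authors' algorithms, but the reference-based idea of searching the decision boundary can be sketched generically: bisect along the line between a malicious sample and a benign reference until the smallest blend that flips the classifier is found. The scikit-learn-style `model.predict` interface below is an assumption.

```python
# Hedged sketch of a decision-boundary search between a malicious sample and
# a benign reference; a generic reconstruction, not the authors' algorithm.
import numpy as np

def reference_based_attack(model, x_mal, x_ref, steps=25):
    """Binary search along x_mal -> x_ref for the decision boundary."""
    assert model.predict(x_ref[None])[0] == 0, "reference must be classified benign"
    lo, hi = 0.0, 1.0  # blend factor: 0 = original sample, 1 = benign reference
    for _ in range(steps):
        mid = (lo + hi) / 2
        x_adv = (1 - mid) * x_mal + mid * x_ref
        if model.predict(x_adv[None])[0] == 0:  # already evades detection
            hi = mid                             # try a smaller perturbation
        else:
            lo = mid
    return (1 - hi) * x_mal + hi * x_ref         # minimal evading blend found
```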

Acceleration of Viewport Extraction for Multi-Object Tracking Results in 360-degree Video (360도 영상에서 다중 객체 추적 결과에 대한 뷰포트 추출 가속화)

  • Heesu Park;Seok Ho Baek;Seokwon Lee;Myeong-jin Lee
    • Journal of Advanced Navigation Technology / v.27 no.3 / pp.306-313 / 2023
  • Realistic, graphics-based virtual reality content builds on 360-degree videos, and viewport extraction driven by the viewer's intent or an automatic recommendation function is essential. This paper designs a viewport extraction system based on multiple object tracking in 360-degree videos and proposes the parallel computing structure needed for extracting multiple viewports. The viewport extraction process is parallelized with pixel-wise threads that transform ERP coordinates to 3D spherical surface coordinates and then map those spherical coordinates to 2D coordinates within the viewport. The proposed structure was evaluated on the computation time for up to 30 viewport extraction processes on aerial 360-degree video sequences, confirming up to 5,240 times acceleration over CPU-based computation, whose time grows in proportion to the number of viewports. When high-speed I/O or memory buffers that reduce ERP frame I/O time are used, viewport extraction can be accelerated by a further 7.82 times. The proposed parallelized viewport extraction structure can be applied to simultaneous multi-access services for 360-degree videos or virtual reality content and to video summarization services for individual users.
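
The per-pixel coordinate chain the abstract describes (viewport pixel to unit-sphere ray to ERP image coordinates) can be sketched with vectorized NumPy in place of the paper's pixel-wise GPU threads. The field of view, viewport size, and nearest-neighbour sampling below are illustrative assumptions.

```python
# Hedged sketch: extract a perspective viewport from an equirectangular (ERP)
# frame; NumPy vectorization stands in for the paper's pixel-wise GPU threads.
import numpy as np

def viewport_from_erp(erp, vp_w=640, vp_h=480, fov=np.pi / 2, yaw=0.0, pitch=0.0):
    H, W = erp.shape[:2]
    f = (vp_w / 2) / np.tan(fov / 2)  # pinhole focal length for the given FOV
    u, v = np.meshgrid(np.arange(vp_w) - vp_w / 2, np.arange(vp_h) - vp_h / 2)
    d = np.stack([u, v, np.full_like(u, f, dtype=float)], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)   # rays on the unit sphere

    # Rotate rays by the viewport orientation (yaw about y, pitch about x).
    cy, sy, cp, sp = np.cos(yaw), np.sin(yaw), np.cos(pitch), np.sin(pitch)
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    d = d @ (Ry @ Rx).T

    lon = np.arctan2(d[..., 0], d[..., 2])           # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1, 1))       # latitude in [-pi/2, pi/2]
    x = ((lon / np.pi + 1) / 2 * (W - 1)).astype(int)          # ERP column
    y = ((lat / (np.pi / 2) + 1) / 2 * (H - 1)).astype(int)    # ERP row
    return erp[y, x]                                 # nearest-neighbour sampling

erp = np.zeros((960, 1920, 3), dtype=np.uint8)  # placeholder ERP frame
vp = viewport_from_erp(erp)                      # (480, 640, 3) viewport image
```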