• Title/Summary/Keyword: Cloud Environment


Analysis of news bigdata on 'Gather Town' using the Bigkinds system

  • Choi, Sui
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.3
    • /
    • pp.53-61
    • /
    • 2022
  • Recent years have seen great attention paid to Generation MZ and the Metaverse, owing to the Fourth Industrial Revolution and the development of a digital environment that blurs the boundary between reality and virtual reality. Generation MZ approaches information very differently from previous generations and uses distinct communication methods. In terms of learning, they have different motivations, types, and skills, and they build relationships differently. Meanwhile, the Metaverse is drawing great attention as a teaching method suited to the traits of Generation MZ. Thus, the current research aimed to investigate how to increase the use of the Metaverse in educational technology. Specifically, this research examined the antecedents of the popularity of Gather Town, a Metaverse platform. Big data from news articles was collected and analyzed using the Bigkinds system provided by the Korea Press Foundation. The analysis revealed, first, a rapidly increasing trend in media exposure of Gather Town since July 2021, suggesting greater utilization of Gather Town in the field of education after the COVID-19 pandemic. Second, word association analysis and word cloud analysis showed high weights on education-related words such as 'remote', 'university', and 'freshman', while words such as 'Metaverse', 'Metaverse platform', 'COVID-19', and 'avatar' were also prominent. Third, network analysis extracted 'COVID-19', 'avatar', 'university student', 'career', and 'YouTube' as keywords. These findings suggest the potential value of Gather Town as an educational tool under the COVID-19 pandemic. This research can therefore contribute to the application and utilization of Gather Town in the field of education.
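The word-association step described in the abstract can be illustrated with a minimal co-occurrence count in Python. The keyword lists below are hypothetical stand-ins for the Bigkinds news corpus, not actual article data:

```python
from collections import Counter

# Toy corpus standing in for Bigkinds news articles (hypothetical examples).
articles = [
    ["gather town", "metaverse", "remote", "university"],
    ["gather town", "covid-19", "remote", "freshman"],
    ["metaverse", "avatar", "university", "gather town"],
]

# Word association: count how often each keyword co-occurs with the target.
target = "gather town"
cooc = Counter()
for doc in articles:
    if target in doc:
        for w in doc:
            if w != target:
                cooc[w] += 1

# Rank associated keywords by co-occurrence weight.
ranked = cooc.most_common()
print(ranked)
```

Bigkinds computes association weights over full morphological analyses of Korean articles; this sketch only shows the underlying co-occurrence counting idea.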

Anomaly Detection Methodology Based on Multimodal Deep Learning (멀티모달 딥 러닝 기반 이상 상황 탐지 방법론)

  • Lee, DongHoon;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.101-125
    • /
    • 2022
  • Recently, with the development of computing technology and the improvement of cloud environments, deep learning technology has advanced, and attempts to apply deep learning to various fields are increasing. A typical example is anomaly detection, a technique for identifying values or patterns that deviate from normal data. Among the representative types of anomaly detection, contextual anomalies, which require an understanding of the overall situation, are especially difficult to detect. In general, anomaly detection in image data is performed using a model pre-trained on large datasets. However, since such pre-trained models were created with a focus on object classification, their applicability to anomaly detection that must understand complex situations created by various objects is limited. Therefore, in this study, we propose a new two-step pre-trained model for detecting abnormal situations. Our methodology performs additional learning through image captioning to understand not only individual objects but also the complex situations they create. Specifically, the proposed methodology transfers the knowledge of a pre-trained model that has learned object classification on ImageNet data to an image captioning model, and uses the captions that describe the situation represented by each image. Afterwards, the weights obtained by learning situational characteristics through images and captions are extracted, and fine-tuning is performed to generate an anomaly detection model. To evaluate the performance of the proposed methodology, an anomaly detection experiment was performed on 400 situational images; the results showed that the proposed methodology was superior in terms of anomaly detection accuracy and F1-score compared to the existing traditional pre-trained model.
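The general transfer pattern here (a frozen pre-trained representation plus a fine-tuned detection head) can be sketched with a toy NumPy model. The random projection below merely stands in for the ImageNet-to-captioning transferred features, and all data and labels are synthetic; this is not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for step 1: "pre-trained" feature extractor weights, here a random
# projection playing the role of the transferred situational representation.
W_pretrained = rng.normal(size=(8, 4))

def extract(x):
    # Frozen feature extractor (the transferred representation).
    return np.tanh(x @ W_pretrained)

# Toy data: synthetic "situation" vectors with an illustrative anomaly label.
X = rng.normal(size=(64, 8))
y = (X.sum(axis=1) > 0).astype(float)

# Step 2: fine-tune only a small logistic head on top of the frozen features.
w, b, lr = np.zeros(4), 0.0, 0.5
F = extract(X)
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # sigmoid predictions
    g = p - y                                 # logistic-loss gradient
    w -= lr * F.T @ g / len(y)
    b -= lr * g.mean()

acc = ((1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5) == y).mean()
print(f"training accuracy of fine-tuned head: {acc:.2f}")
```

In the actual methodology the frozen part is a deep network fine-tuned end-to-end; the sketch only isolates the freeze-then-fine-tune idea.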

Development of Registration Post-Processing Technology to Homogenize the Density of the Scan Data of Earthwork Sites (토공현장 스캔데이터 밀도 균일화를 위한 정합 후처리 기술 개발)

  • Kim, Yonggun;Park, Suyeul;Kim, Seok
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.42 no.5
    • /
    • pp.689-699
    • /
    • 2022
  • Recently, productivity has improved in various industries through the application of advanced technologies, but in the construction industry, productivity improvements have been relatively low. Research on advanced technology for the construction industry is being conducted rapidly to overcome this low productivity. Among advanced technologies, 3D scanning is widely used for creating 3D digital terrain models of construction sites. In particular, the 3D digital terrain model provides basic data for construction automation processes, such as earthwork machine guidance and control. The quality of the 3D digital terrain model depends not only on the performance of the 3D scanner and the acquisition environment, but also on the denoising, registration, and merging process, the preprocessing performed after terrain scan data are acquired. It is therefore necessary to improve the performance of terrain scan data processing. This study addresses the problem of density inhomogeneity in terrain scan data that arises during the preprocessing step. The study proposes a 'pixel-based point cloud comparison algorithm' and verifies its performance using terrain scan data obtained at an actual earthwork site.
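A minimal version of pixel-based density homogenization might look like the following sketch, which grids the XY plane into pixels and caps the number of points kept per cell. The cell size, cap, and synthetic data are illustrative assumptions, not the paper's parameters:

```python
from collections import defaultdict

import numpy as np

rng = np.random.default_rng(42)

# Synthetic terrain scan: a dense patch near the scanner, sparse farther away
# (a hypothetical stand-in for merged earthwork scan data).
dense = rng.uniform([0, 0], [1, 1], size=(900, 2))
sparse = rng.uniform([1, 0], [2, 1], size=(100, 2))
points = np.vstack([dense, sparse])

def homogenize(points, cell=0.25, cap=10, seed=0):
    """Pixel-based thinning: grid the XY plane and keep at most `cap`
    points per cell so the density becomes roughly uniform."""
    rng = np.random.default_rng(seed)
    keys = np.floor(points / cell).astype(int)
    buckets = defaultdict(list)
    for i, k in enumerate(map(tuple, keys)):
        buckets[k].append(i)
    kept = []
    for idx in buckets.values():
        if len(idx) > cap:
            idx = list(rng.choice(idx, size=cap, replace=False))
        kept.extend(idx)
    return points[np.array(kept)]

thinned = homogenize(points)
print(len(points), "->", len(thinned))
```

The paper's algorithm compares point clouds per pixel rather than simply thinning, so this sketch shows only the grid-and-cap principle behind density equalization.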

National Agenda Service Model Development Research of Policy Information Portal of National Sejong Library (국립세종도서관 정책정보포털 국정과제 서비스 모형개발 연구)

  • Younghee, Noh;Inho, Chang;Hyojung, Sim
    • Journal of the Korean BIBLIA Society for library and Information Science
    • /
    • v.33 no.4
    • /
    • pp.73-92
    • /
    • 2022
  • This study intends to design a model that can effectively serve the policy data necessary for implementing the new national agenda, in order to provide higher-quality policy information services than those of the existing Policy Information Portal (POINT) of the National Sejong Library. To this end, it was determined that an integrated search environment, in lieu of data searches through individual access, was necessary. Subsequently, four components of a national agenda service model were presented. First, for operating the national agenda service system, designing a computerized system covering both the interface and electronic information sources was proposed. Second, for operating the national agenda service information sources, designing a Linked Open Data system and a time-series service for national policy information, providing translation of overseas source materials, and securing the data researchers want were presented. Third, for public relations for the national agenda service, strengthening outreach to policy users, building and promoting the site brand, operating SNS channels, and improving the use of supplementary materials and the accessibility of external services were proposed. Fourth, for collaboration and cooperation on the agenda service, expanding the information network through Open APIs, cloud services, and overseas libraries was proposed.

Research on Case Analysis of Library E-learning Platforms: Focusing on Learning Contents and Functions (도서관 이러닝 플랫폼 사례분석 연구 - 학습 내용 및 기능을 중심으로 -)

  • SangEun, Cho;KyungMook, Oh
    • Journal of the Korean BIBLIA Society for library and Information Science
    • /
    • v.34 no.1
    • /
    • pp.209-238
    • /
    • 2023
  • This study aims to propose the main learning contents, functions, and activation plans for building an e-learning platform for libraries through a literature review, case analysis, and expert survey. The literature review found that libraries must play a role in providing high-quality online education for users in the e-learning ecosystem. Based on previous studies, a learning-function analysis tool was developed for analyzing library e-learning platforms. Using this tool, the learning contents, learning functions, and characteristics of library e-learning platforms were analyzed, and expert surveys and interviews were conducted. As a result, building a platform that effectively applies learning processes and technology is essential for a library's sustainable e-learning services. The contents that should be provided, reflecting the characteristics of library education, include reading guidance, information literacy instruction, library usage instruction, and the latest IT technologies. The main learning functions include video lectures and real-time classes among the learning types, along with learning activity support, cloud platform support, and personalized environment support functions. Additionally, the study suggested re-education for library staff to improve their technical skills and the formation of a dedicated e-learning team.

Real-Time GPU Task Monitoring and Node List Management Techniques for Container Deployment in a Cluster-Based Container Environment (클러스터 기반 컨테이너 환경에서 실시간 GPU 작업 모니터링 및 컨테이너 배치를 위한 노드 리스트 관리기법)

  • Jihun, Kang;Joon-Min, Gil
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.11
    • /
    • pp.381-394
    • /
    • 2022
  • Recently, due to the personalization and customization of data, Internet-based services have seen increased requirements for real-time processing, such as real-time AI inference and data analysis that must be handled immediately according to the user's situation or requirements. A real-time task has a deadline set from its start to the return of its results, and guaranteeing that deadline is directly linked to service quality. However, traditional container systems are limited in operating real-time tasks because they do not provide the ability to allocate and manage deadlines for tasks executed in containers. In addition, tasks such as AI inference and data analysis typically utilize graphics processing units (GPUs), and containers generally affect each other's performance because performance isolation is not provided between them. Moreover, node resource usage alone cannot determine each container's deadline guarantee rate or whether a new real-time container can be deployed. In this paper, we propose a monitoring technique that tracks and manages the deadlines and execution status of real-time GPU tasks in containers to support their real-time processing, and a node-list management technique for placing containers on appropriate nodes to ensure deadlines. Furthermore, we demonstrate experimentally that the proposed technique has a very small impact on the system.
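The node-list idea, tracking each node's deadline guarantee rate and filtering candidate nodes for a new real-time container, can be sketched as follows. The class names, thresholds, and timing numbers are hypothetical, not the paper's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class GPUTask:
    name: str
    deadline_ms: float
    expected_ms: float  # monitored execution-time estimate

@dataclass
class Node:
    name: str
    tasks: list = field(default_factory=list)

    def guarantee_rate(self):
        # Fraction of tasks whose monitored runtime fits its deadline.
        if not self.tasks:
            return 1.0
        ok = sum(t.expected_ms <= t.deadline_ms for t in self.tasks)
        return ok / len(self.tasks)

def eligible_nodes(nodes, threshold=1.0):
    """Node-list management sketch: keep only nodes whose current
    deadline guarantee rate meets the threshold, least-loaded first."""
    good = [n for n in nodes if n.guarantee_rate() >= threshold]
    return sorted(good, key=lambda n: len(n.tasks))

n1 = Node("node-1", [GPUTask("infer-a", 50, 40), GPUTask("infer-b", 30, 35)])
n2 = Node("node-2", [GPUTask("infer-c", 100, 60)])
placement = eligible_nodes([n1, n2])
print([n.name for n in placement])  # node-1 is excluded: one task misses its deadline
```

The actual technique monitors GPU tasks in real time inside containers; this sketch covers only the list-filtering step used for placement decisions.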

Computer Vision-based Continuous Large-scale Site Monitoring System through Edge Computing and Small-Object Detection

  • Kim, Yeonjoo;Kim, Siyeon;Hwang, Sungjoo;Hong, Seok Hwan
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.1243-1244
    • /
    • 2022
  • In recent years, the growing interest in off-site construction has led factories to scale up their manufacturing and production processes in the construction sector. Consequently, continuous large-scale site monitoring in low-variability environments, such as prefabricated component production plants (precast concrete production), has gained increasing importance. Although many studies on computer vision-based site monitoring have been conducted, challenges remain in deploying this technology for large-scale field applications. One issue is collecting and transmitting vast amounts of video data. Continuous site monitoring systems are based on real-time video data collection and analysis, which requires excessive computational resources and network traffic. In addition, it is difficult to integrate information about objects of different sizes and scales into a single scene. Objects of various sizes and types (e.g., workers, heavy equipment, and materials) exist in a plant production environment, and these objects should be detected simultaneously for effective site monitoring. However, with existing object detection algorithms, it is difficult to simultaneously detect objects with significant differences in size, because collecting and training on massive amounts of object image data at various scales would be necessary. This study therefore developed a large-scale site monitoring system using edge computing and a small-object detection system to solve these problems. Edge computing is a distributed information technology architecture in which image or video data is processed near its originating source rather than on a centralized server or in the cloud. By running inference on AI computing modules attached to the CCTVs and communicating only the processed information to the server, excessive network traffic can be reduced.
Small-object detection is a method for detecting objects of different sizes by cropping the raw image, with the number of rows and columns for image splitting set according to the target object size. This enables the detection of small objects in the cropped, magnified images, and the detected objects can then be expressed in the coordinates of the original image. In the inference process, this study used the YOLOv5 algorithm, known for its fast processing speed and widely used for real-time object detection. This method could effectively detect large and even small objects that were difficult to detect with existing object detection algorithms. When the large-scale site monitoring system was tested, it performed well in detecting small objects, such as workers in a wide view of a construction site, which existing algorithms detected inaccurately. Our next goal is to incorporate various safety monitoring and risk analysis algorithms into this system, such as collision risk estimation based on the time-to-collision concept, enabling the optimization of safety routes by accumulating workers' paths and inferring risky areas from workers' trajectory patterns. Through such developments, this continuous large-scale site monitoring system can guide a construction plant's safety management more effectively.
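The tiling scheme described above (split the image into rows and columns, detect per crop, then map boxes back to original-image coordinates) can be sketched independently of any particular detector; `dummy_detector` below is a hypothetical stand-in for a YOLOv5 inference call:

```python
def split_boxes(width, height, rows, cols):
    """Tile an image into rows x cols crops, returning crop rectangles."""
    tw, th = width // cols, height // rows
    return [(c * tw, r * th, tw, th) for r in range(rows) for c in range(cols)]

def detect_tiled(image_size, detector, rows=2, cols=2):
    """Run `detector` on each crop and map boxes back to full-image coords."""
    w, h = image_size
    results = []
    for (ox, oy, tw, th) in split_boxes(w, h, rows, cols):
        for (x, y, bw, bh, label) in detector(ox, oy, tw, th):
            # Shift the per-crop box by the crop's origin in the full image.
            results.append((ox + x, oy + y, bw, bh, label))
    return results

# Hypothetical stand-in detector: "finds" a small worker in the top-left tile.
def dummy_detector(ox, oy, tw, th):
    return [(10, 20, 30, 60, "worker")] if (ox, oy) == (0, 0) else []

dets = detect_tiled((1920, 1080), dummy_detector)
print(dets)  # boxes expressed in original-image coordinates
```

A real deployment would also merge duplicate detections along tile borders (e.g., with non-maximum suppression), which this sketch omits.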


SHVC-based Texture Map Coding for Scalable Dynamic Mesh Compression (스케일러블 동적 메쉬 압축을 위한 SHVC 기반 텍스처 맵 부호화 방법)

  • Naseong Kwon;Joohyung Byeon;Hansol Choi;Donggyu Sim
    • Journal of Broadcast Engineering
    • /
    • v.28 no.3
    • /
    • pp.314-328
    • /
    • 2023
  • In this paper, we propose a texture map compression method based on the hierarchical coding of SHVC to support scalability in dynamic mesh compression. The proposed method generates multiple-resolution texture maps by downsampling a high-resolution texture map, and effectively eliminates redundancy among them by encoding them with SHVC. The dynamic mesh decoder supports the scalability of mesh data by decoding a texture map at a resolution appropriate to the receiver's performance and network environment. To evaluate the proposed method, it was applied to the V-DMC (Video-based Dynamic Mesh Coding) reference software TMMv1.0, and the scalable encoder/decoder proposed in this paper was compared with a TMMv1.0-based simulcast method. Experimental results show that the proposed method achieves average gains of -7.7% and -5.7% in point cloud-based BD-rate (luma PSNR) under AI and LD conditions, respectively, compared to the simulcast method, confirming that it can effectively support texture map scalability for dynamic mesh data.
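The layered-coding principle behind SHVC's hierarchy (a downsampled base layer plus an enhancement residual) can be illustrated on a toy single-channel texture map. Real SHVC encodes quantized video layers with inter-layer prediction, whereas this sketch keeps the residual lossless:

```python
import numpy as np

def downsample(img):
    # 2x2 average pooling as a stand-in for the texture-map downsampler.
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    # Nearest-neighbour upsampling back to the higher resolution.
    return img.repeat(2, axis=0).repeat(2, axis=1)

# Toy "texture map" (luma only); real V-DMC texture maps are RGB images.
rng = np.random.default_rng(1)
hi = rng.uniform(0, 255, size=(8, 8))

# Layered coding sketch: base layer plus enhancement residual.
base = downsample(hi)            # low-resolution layer for weak receivers
residual = hi - upsample(base)   # enhancement-layer information

# A receiver with enough bandwidth reconstructs the full resolution.
recon = upsample(base) + residual
print(float(np.abs(recon - hi).max()))  # lossless here; SHVC would quantize
```

A low-capability receiver decodes only `base`, which is the scalability property the paper exploits for texture maps.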

An Improvement of Kubernetes Auto-Scaling Based on Multivariate Time Series Analysis (다변량 시계열 분석에 기반한 쿠버네티스 오토-스케일링 개선)

  • Kim, Yong Hae;Kim, Young Han
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.11 no.3
    • /
    • pp.73-82
    • /
    • 2022
  • Auto-scaling is one of the most important functions in cloud computing technology. Even if the number of users or service requests increases or decreases explosively, system resources and service instances can be appropriately expanded or reduced to provide services suitable for the situation, improving stability and cost-effectiveness. However, since the existing policy is based on a single metric observed when monitoring a specific system resource, the service may already be affected by the time scaling occurs, and the service instances actually needed cannot be managed in detail. To solve this problem, in this paper we propose a method that predicts system resources and service response time using a multivariate time-series analysis model and establishes an auto-scaling policy based on the predictions. To verify the method, we implemented it as a custom scheduler in a Kubernetes environment and compared it with Kubernetes' default auto-scaling method through experiments. The proposed method uses predictions that capture the relationship between system resources and response time to execute auto-scaling preemptively for expected situations, securing system stability and provisioning only as many instances as necessary without degrading service quality.
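The preemptive scaling decision can be sketched with a simple per-metric linear trend as a stand-in for the paper's multivariate time-series model; the monitoring samples, CPU target, and SLA below are hypothetical:

```python
import numpy as np

# Recent monitoring samples (hypothetical): [cpu_percent, response_ms] per step.
history = np.array([
    [40, 110], [45, 120], [52, 135], [60, 155], [69, 180],
], dtype=float)

def forecast_next(history):
    """Minimal multivariate sketch: fit a linear trend per metric over time
    and extrapolate one step ahead."""
    t = np.arange(len(history))
    preds = []
    for col in history.T:
        slope, intercept = np.polyfit(t, col, 1)
        preds.append(slope * len(history) + intercept)
    return preds

def decide_replicas(current, history, cpu_target=60.0, sla_ms=200.0):
    cpu_pred, resp_pred = forecast_next(history)
    if cpu_pred > cpu_target or resp_pred > sla_ms:
        return current + 1   # scale out before the target is breached
    return current

replicas = decide_replicas(current=2, history=history)
print("replicas:", replicas)
```

Unlike a reactive policy that waits for the current CPU reading to cross the threshold, the forecast here triggers scale-out one step early, which is the preemptive behavior the paper's custom scheduler aims for (its actual model is a multivariate time-series analysis, not a per-metric line fit).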

Efficient QoS Policy Implementation Using DSCP Redefinition: Towards Network Load Balancing (DSCP 재정의를 통한 효율적인 QoS 정책 구현: 네트워크 부하 분산을 위해)

  • Hanwoo Lee;Suhwan Kim;Gunwoo Park
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.3
    • /
    • pp.715-720
    • /
    • 2023
  • The military is driving innovative changes such as AI, cloud computing, and drone operation through the Fourth Industrial Revolution. It is expected that such changes will lead to a rapid increase in the demand for information exchange requirements, reaching all lower-ranking soldiers, as networking based on IoT occurs. The flow of such information must ensure efficient information distribution through various infrastructures such as ground networks, stationary satellites, and low-earth orbit small communication satellites, and the demand for information exchange that is distributed through them must be appropriately dispersed. In this study, we redefined the DSCP, which is closely related to QoS (Quality of Service) in information dissemination, into 11 categories and performed research to map each cluster group identified by cluster analysis to the defense "information exchange requirement list" on a one-to-one basis. The purpose of the research is to ensure efficient information dissemination within a multi-layer integrated network (ground network, stationary satellite network, low-earth orbit small communication satellite network) with limited bandwidth by re-establishing QoS policies that prioritize important information exchange requirements so that they are routed in priority. In this paper, we evaluated how well the information exchange requirement lists classified by cluster analysis were assigned to DSCP through M&S, and confirmed that reclassifying DSCP can lead to more efficient information distribution in a network environment with limited bandwidth.