• Title/Summary/Keyword: Processing Software

Search Results: 5,893

Stochastic Self-similarity Analysis and Visualization of Earthquakes on the Korean Peninsula (한반도에서 발생한 지진의 통계적 자기 유사성 분석 및 시각화)

  • JaeMin Hwang;Jiyoung Lim;Hae-Duck J. Jeong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.11
    • /
    • pp.493-504
    • /
    • 2023
  • The Republic of Korea is located far from tectonic plate boundaries, and intraplate earthquakes occurring in such regions are generally smaller and less frequent than interplate earthquakes. Nevertheless, an investigation and analysis of earthquakes recorded on the Korean Peninsula from the year 2 to 1904, together with recently observed earthquakes, found that earthquakes as large as magnitude 9 have occurred. In this paper, the Korean Peninsula Historical Earthquake Record (years 2 to 1904) published by the National Meteorological Research Institute is used to analyze the relationship between earthquakes on the Korean Peninsula and statistical self-similarity. This paper is also the first to investigate the relationship between earthquake data from the Korean Peninsula and statistical self-similarity. Measuring the degree of self-similarity of Korean Peninsula earthquakes with three quantitative estimation methods, the self-similarity parameter H (0.5 < H < 1) was found to be above 0.8 on average, indicating a high degree of self-similarity. Graph visualization also makes it easy to see in which regions earthquakes occur most often, and the results are expected to be useful for earthquake data analysis and modeling research, as well as for developing prediction systems that can anticipate damage from future earthquakes and minimize harm to people and property. Based on these findings, the self-similar process is expected to help in understanding the patterns and statistical characteristics of seismic activity, in grouping and classifying similar seismic events, and in seismic activity prediction, seismic risk assessment, and earthquake engineering.
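
The abstract does not name the three quantitative estimation methods; one common estimator of the self-similarity parameter H is rescaled-range (R/S) analysis. A minimal sketch, assuming the input is a numeric event series (e.g., inter-event counts or magnitudes); this is a generic illustration, not the paper's code:

```python
import numpy as np

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent H via rescaled-range (R/S) analysis.
    H in (0.5, 1) indicates statistical self-similarity (long-range
    dependence); the paper reports H above 0.8 on average."""
    series = np.asarray(series, dtype=float)
    n = len(series)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs_per_chunk = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviation
            r = dev.max() - dev.min()               # range of deviations
            s = chunk.std()                         # standard deviation
            if s > 0:
                rs_per_chunk.append(r / s)
        if rs_per_chunk:
            sizes.append(size)
            rs_vals.append(np.mean(rs_per_chunk))
        size *= 2
    # H is the slope of log(R/S) against log(chunk size)
    h, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return h
```

For white noise this yields H near 0.5, while a strongly persistent series (e.g., a cumulative sum) yields H near 1.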

Gear Fault Diagnosis Based on Residual Patterns of Current and Vibration Data by Collaborative Robot's Motions Using LSTM (LSTM을 이용한 협동 로봇 동작별 전류 및 진동 데이터 잔차 패턴 기반 기어 결함진단)

  • Baek Ji Hoon;Yoo Dong Yeon;Lee Jung Won
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.10
    • /
    • pp.445-454
    • /
    • 2023
  • Recently, various fault diagnosis studies have been conducted using data from collaborative robots. Existing studies perform fault diagnosis on static data collected under the assumed operation of predefined devices, so the diagnosis model becomes increasingly dependent on the learned data patterns. In addition, because experiments used a single motor, diagnoses could not reflect the characteristics of collaborative robots that operate with multiple joints. This paper proposes an LSTM diagnostic model that overcomes both limitations. The proposed method selects representative normal patterns using correlation analysis of vibration and current data in single-axis and multi-axis work environments, and generates residual patterns as the differences from these representative normal patterns. An LSTM model that performs gear-wear diagnosis for each axis is then trained using the generated residual patterns as inputs. This fault diagnosis model not only reduces dependence on the training data patterns through per-motion representative patterns, but also diagnoses faults occurring during multi-axis operation. By reflecting both internal and external data characteristics, the model achieved a high diagnostic performance of 98.57%.
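
The representative-pattern and residual steps described above can be sketched roughly as follows (function names and the selection criterion are assumptions for illustration, not the paper's code; the LSTM itself is omitted):

```python
import numpy as np

def representative_pattern(runs):
    """Pick, from several normal-operation runs of one motion, the run
    most correlated on average with all the others, as the
    representative normal pattern."""
    runs = np.asarray(runs, dtype=float)
    corr = np.corrcoef(runs)          # pairwise correlation between runs
    avg_corr = corr.mean(axis=1)      # average similarity of each run
    return runs[np.argmax(avg_corr)]

def residual_pattern(signal, representative):
    """Residual = observed signal minus the representative normal
    pattern; these residuals are the LSTM's inputs for per-axis
    gear-wear diagnosis."""
    return np.asarray(signal, dtype=float) - representative
```

A healthy signal close to the representative pattern yields a residual near zero; wear shows up as a systematic residual shape.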

TAGS: Text Augmentation with Generation and Selection (생성-선정을 통한 텍스트 증강 프레임워크)

  • Kim Kyung Min;Dong Hwan Kim;Seongung Jo;Heung-Seon Oh;Myeong-Ha Hwang
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.10
    • /
    • pp.455-460
    • /
    • 2023
  • Text augmentation is a methodology that creates new augmented texts by transforming or generating original texts, in order to improve the performance of NLP models. However, existing text augmentation techniques have limitations such as a lack of expressive diversity, semantic distortion, and a limited number of augmented texts. Recently, text augmentation using large language models and few-shot learning has been able to overcome these limitations, but it carries a risk of noise from incorrect generations. In this paper, we propose a text augmentation method called TAGS that generates multiple candidate texts and selects the appropriate ones as augmented texts. TAGS generates varied expressions using few-shot learning, while contrastive learning and similarity comparison allow it to select suitable data effectively even from a small amount of original text. We applied this method to task-oriented chatbot data and achieved a more than sixty-fold quantitative increase. Analysis of the generated texts confirms that they are semantically and expressively more diverse than the originals. Moreover, a classification model trained and evaluated on the augmented texts improved performance by more than 0.1915, confirming that the method helps improve actual model performance.
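
The selection stage can be sketched as a similarity comparison over embeddings, assuming candidates and originals have already been encoded as vectors (the LLM generation step and the contrastive encoder are outside this sketch; the function name is hypothetical):

```python
import numpy as np

def select_augmented(candidate_vecs, original_vecs, k=2):
    """Keep the k candidate texts whose embeddings are most similar on
    average to the original texts, discarding noisy generations."""
    c = np.asarray(candidate_vecs, dtype=float)
    o = np.asarray(original_vecs, dtype=float)
    c = c / np.linalg.norm(c, axis=1, keepdims=True)   # unit-normalize
    o = o / np.linalg.norm(o, axis=1, keepdims=True)
    sim = c @ o.T                  # cosine similarity: candidates x originals
    scores = sim.mean(axis=1)      # mean similarity to the original set
    top = np.argsort(scores)[::-1][:k]
    return sorted(top.tolist())    # indices of the selected candidates
```

Candidates semantically far from every original score low and are dropped, which is the noise-filtering role the selection step plays.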

A Study on Korean Speech Animation Generation Employing Deep Learning (딥러닝을 활용한 한국어 스피치 애니메이션 생성에 관한 고찰)

  • Suk Chan Kang;Dong Ju Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.10
    • /
    • pp.461-470
    • /
    • 2023
  • While speech animation generation employing deep learning has been actively researched for English, there has been no prior work for Korean. This paper is the first to employ supervised deep learning to generate Korean speech animation. In doing so, we identify a significant effect of deep learning: it reduces speech animation research to speech recognition research, the predominant underlying technique, and we study how to make the best use of this effect for Korean speech animation generation. The effect can help revitalize the recently inactive Korean speech animation research efficiently and effectively by clarifying the top-priority research target. This paper proceeds as follows: (i) it chooses the blendshape animation technique, (ii) implements the deep learning model as a master-servant pipeline of an automatic speech recognition (ASR) module and a facial action coding (FAC) module, (iii) builds a Korean speech facial motion capture dataset, (iv) prepares two comparison deep learning models (one adopting an English ASR module, the other a Korean ASR module, with both using the same basic structure for their FAC modules), and (v) trains the FAC modules of both models dependently on their ASR modules. A user study demonstrates that the model adopting the Korean ASR module with a dependently trained FAC module (4.2/5.0 points) generates decisively more natural Korean speech animation than the model adopting the English ASR module (2.7/5.0 points). The result confirms the aforementioned effect: the quality of Korean speech animation comes down to the accuracy of Korean ASR.
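
Step (i)'s blendshape technique animates a face as a base mesh plus weighted per-shape vertex offsets, with the weights predicted by the FAC module. A minimal sketch of that arithmetic (names are illustrative, not the paper's code):

```python
import numpy as np

def blend(base, deltas, weights):
    """Blendshape animation: final vertices = base mesh + sum of
    per-shape vertex offsets (deltas) scaled by predicted weights."""
    out = np.asarray(base, dtype=float).copy()
    for delta, w in zip(deltas, weights):
        out += w * np.asarray(delta, dtype=float)   # weighted offset
    return out
```

With all weights zero the face stays at the neutral base mesh; the FAC module's per-frame weights drive the mouth shapes over time.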

Comparative analysis of the digital circuit designing ability of ChatGPT (ChatGPT을 활용한 디지털회로 설계 능력에 대한 비교 분석)

  • Kihun Nam
    • The Journal of the Convergence on Culture Technology
    • /
    • v.9 no.6
    • /
    • pp.967-971
    • /
    • 2023
  • Recently, a variety of AI-based platform services have become available; one of them is ChatGPT, which processes large quantities of natural language data and generates answers after self-learning. ChatGPT can perform various tasks in the IT sector, including software programming. In particular, it can help generate simple programs and correct errors in C, a major programming language. Accordingly, ChatGPT can be expected to use Verilog HDL effectively, a hardware description language with C-like syntax. Verilog HDL synthesis, however, turns imperative statements into a logic circuit, so it must be verified that the generated circuits execute properly. In this paper, we select small logic circuits for ease of experimentation and verify the results of circuits generated by ChatGPT against human-designed circuits. For the experimental environment, Xilinx ISE 14.7 was used for module modeling, and the xc3s1000 FPGA chip was used for module implementation. Comparative analysis of the FPGA area used and the processing time was performed to compare the performance of the ChatGPT-generated and human-written Verilog HDL designs.
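
As an illustration of the kind of small combinational circuit and functional check involved, a half adder can be modeled and verified exhaustively in a few lines (a Python stand-in for Verilog simulation, not the paper's setup; the circuit choice is an assumption):

```python
def half_adder(a, b):
    """Gate-level half adder: sum is an XOR gate, carry is an AND gate."""
    s = a ^ b
    c = a & b
    return s, c

def exhaustive_check(dut, ref, n_inputs=2):
    """Verify a combinational circuit against a reference over every
    input combination, analogous to functional verification performed
    before FPGA synthesis."""
    for v in range(2 ** n_inputs):
        bits = [(v >> i) & 1 for i in range(n_inputs)]
        if dut(*bits) != ref(*bits):
            return False
    return True
```

For a 2-input block only four vectors exist, so exhaustive checking is trivial; this is why small circuits make verification of generated designs easy.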

Efficient Stack Smashing Attack Detection Method Using DSLR (DSLR을 이용한 효율적인 스택스매싱 공격탐지 방법)

  • Do Yeong Hwang;Dong-Young Yoo
    • KIPS Transactions on Computer and Communication Systems
    • /
    • v.12 no.9
    • /
    • pp.283-290
    • /
    • 2023
  • With the steady development of IoT technology, it is now widely used in medical systems, smart TVs, and watches. About 66% of software is developed in the C language, which is vulnerable to memory attacks, and this poses a threat to IoT devices written in C. A stack smashing overflow attack writes a value larger than the user-defined buffer size, overwriting the area where the return address is stored and preventing the program from operating normally. IoT devices with low memory capacity are vulnerable to such attacks, and applying an existing vaccine program as-is can also keep an IoT device from operating normally. To defend IoT devices against stack smashing overflow attacks, we used canaries among several detection methods, setting conditions with random values, checksums, and DSLR (random storage locations). Two canaries were placed: one in front of the return address at the end of the buffer, and the other at a random location within the buffer. Because a canary stored at a fixed location is easy for an attacker to predict, storing the canary at a random location makes its position difficult to guess. While the detection program runs, if a stack smashing overflow attack occurs and any of the set conditions is triggered, the program is terminated. The conditions were combined into eight cases and tested. The results show that, for IoT devices, a detection method using DSLR is more efficient than one using multiple conditions.
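
A conceptual sketch of the two-canary scheme with a random in-buffer location (a Python stand-in for the C-level mechanism; the class and field names are hypothetical, and real canaries live in stack memory, not a list):

```python
import secrets

class CanaryBuffer:
    """Two canaries: one guarding the return-address slot past the end
    of the buffer, one at a random in-buffer position (DSLR), so an
    overflowing write is detected when either value changes."""
    def __init__(self, size):
        self.canary = secrets.randbits(32)
        self.rand_pos = secrets.randbelow(size)   # DSLR: random slot
        self.buf = [0] * size
        self.buf[self.rand_pos] = self.canary
        self.guard = self.canary                  # before return address

    def write(self, offset, values):
        # Unchecked copy, emulating an overflow-prone C write
        for i, v in enumerate(values):
            idx = offset + i
            if idx < len(self.buf):
                self.buf[idx] = v
            else:
                self.guard = v                    # smashes the guard

    def intact(self):
        # Detection condition: both canaries still hold their value
        return (self.buf[self.rand_pos] == self.canary
                and self.guard == self.canary)
```

An attacker who can predict a fixed canary location can write around it; randomizing the in-buffer position removes that predictability, which is the point of DSLR.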

Effective Multi-Modal Feature Fusion for 3D Semantic Segmentation with Multi-View Images (멀티-뷰 영상들을 활용하는 3차원 의미적 분할을 위한 효과적인 멀티-모달 특징 융합)

  • Hye-Lim Bae;Incheol Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.12
    • /
    • pp.505-518
    • /
    • 2023
  • 3D point cloud semantic segmentation is a computer vision task that divides a point cloud into different objects and regions by predicting the class label of each point. Existing 3D semantic segmentation models are limited in performing sufficient fusion of multi-modal features while preserving both the 2D visual features extracted from RGB images and the 3D geometric features extracted from the point cloud. In this paper, we therefore propose MMCA-Net, a novel 3D semantic segmentation model using 2D-3D multi-modal features. The proposed model effectively fuses the heterogeneous 2D visual features and 3D geometric features through an intermediate fusion strategy and a multi-modal cross-attention-based fusion operation. It also extracts context-rich 3D geometric features from input point clouds of irregularly distributed points by adopting PTv2 as the 3D geometric encoder. We conducted both quantitative and qualitative experiments on the benchmark dataset ScanNetv2 to analyze the performance of the proposed model. In terms of mIoU, the proposed model showed a 9.2% improvement over the PTv2 model using only 3D geometric features, and a 12.12% improvement over the MVPNet model using 2D-3D multi-modal features, demonstrating its effectiveness and usefulness.
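
The cross-attention fusion operation can be sketched in its generic single-head scaled-dot-product form, with 3D geometric features as queries attending to 2D visual features (MMCA-Net's actual operation is more elaborate; this is only the core mechanism):

```python
import numpy as np

def cross_attention(q_feats, kv_feats):
    """Single-head cross attention: each query (3D geometric feature)
    takes a softmax-weighted average of the key/value set
    (2D visual features)."""
    d = q_feats.shape[-1]
    scores = q_feats @ kv_feats.T / np.sqrt(d)      # scaled dot product
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)        # softmax over 2D feats
    return attn @ kv_feats                          # fused features
```

Because the attention weights in each row sum to 1, every fused feature is a convex combination of the 2D visual features, conditioned on the 3D query.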

Comparative Analysis of Self-supervised Deephashing Models for Efficient Image Retrieval System (효율적인 이미지 검색 시스템을 위한 자기 감독 딥해싱 모델의 비교 분석)

  • Kim Soo In;Jeon Young Jin;Lee Sang Bum;Kim Won Gyum
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.12
    • /
    • pp.519-524
    • /
    • 2023
  • In hashing-based image retrieval, the hash code of a manipulated image differs from that of the original, making it difficult to retrieve the same image. This paper proposes and evaluates a self-supervised deep hashing model that generates perceptual hash codes from feature information such as the texture, shape, and color of images. The comparison models are autoencoder-based variational inference models whose encoders are designed with fully connected, convolutional neural network, and transformer modules, respectively. The proposed model is a variational inference model that includes a SimAM module for extracting geometric patterns and positional relationships within images. The SimAM module can learn latent vectors that highlight objects or local regions through an energy function over the activation values of neurons and their neighbors. The proposed method is a representation learning model that generates low-dimensional latent vectors from high-dimensional input images, and the latent vectors are binarized into distinguishable hash codes. Experimental results on public datasets such as CIFAR-10, ImageNet, and NUS-WIDE show that the proposed model is superior to the comparison models and performs on par with supervised deep hashing models. The proposed model can be used in application systems that require low-dimensional image representations, such as image search or copyright image determination.
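
The final binarization and retrieval steps can be sketched as sign-thresholding the latent vectors plus Hamming-distance ranking (a generic sketch with hypothetical function names, not the paper's pipeline):

```python
import numpy as np

def to_hash(latent):
    """Binarize latent vectors into hash codes by sign thresholding."""
    return (np.asarray(latent) > 0).astype(np.uint8)

def hamming_search(query_code, db_codes):
    """Rank database images by Hamming distance (count of differing
    bits) to the query's hash code; closest first."""
    dists = (db_codes != query_code).sum(axis=1)
    return np.argsort(dists, kind="stable")
```

A perceptual hash is useful exactly when a manipulated image's latent vector stays close to the original's, so the two binarize to nearby codes and the original ranks first.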

Effect of Sound Velocity on Bathymetric Data Acquired by EM120 (multi-beam echo sounder) (EM120(multi-beam echo sounder)을 이용한 지형조사 시 적용되는 해수 중 음속 측정의 중요성; 수중음속 측정장비의 특성 비교)

  • Ham, Dong-Jin;Kim, Hyun-Sub;Lee, Gun-Chang
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.13 no.3
    • /
    • pp.295-301
    • /
    • 2008
  • Bathymetric data collected using a multi-beam echo sounder during marine scientific surveys is essential for geologic and oceanographic research. Accurate measurement of the sound velocity profile (SVP) in the water column is important for bathymetric data processing. The SVP can vary across locations during a survey covering a wide area. In addition, observational errors can occur when different instruments (Sound Velocity Profiler, Conductivity-Temperature-Depth, eXpendable BathyThermograph) are used to measure the SVP in the same water column. In this study, we used the MB-System software to show changes in bathymetry caused by variation of the SVP. The analyses showed that sound velocity (SV) changes due to the depth and thickness of the thermocline had more significant effects on the resulting bathymetric data than those of the surface mixed layer. Observational errors between SVP measuring instruments did not cause much difference in the processed bathymetric data. Bathymetric survey lines are best established in the direction that minimizes temperature change, so as to reduce variation of the SVP during data acquisition along the line.
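
The role of the SVP in depth processing can be sketched by stepping a one-way echo travel time through velocity layers (an idealized vertical-ray sketch that ignores beam angle and ray bending; the function name is illustrative):

```python
def depth_from_travel_time(svp, one_way_time):
    """Convert a one-way echo travel time to depth using a layered
    sound velocity profile: svp is a list of
    (layer_thickness_m, velocity_m_per_s) pairs, top to bottom.
    An inaccurate SVP (e.g., a mislocated thermocline) biases the
    computed depth, which is the error source examined here."""
    t_left, depth = one_way_time, 0.0
    for thickness, v in svp:
        t_layer = thickness / v            # time to cross this layer
        if t_left <= t_layer:
            return depth + v * t_left      # echo turns within this layer
        depth += thickness
        t_left -= t_layer
    # Below the deepest measured layer: extrapolate with last velocity
    return depth + svp[-1][1] * t_left
```

Perturbing the thermocline layer's velocity while keeping the travel time fixed shifts the computed depth, mirroring the bathymetric changes shown with MB-System.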

Analysis of the Effectiveness of Big Data-Based Six Sigma Methodology: Focus on DX SS (빅데이터 기반 6시그마 방법론의 유효성 분석: DX SS를 중심으로)

  • Kim Jung Hyuk;Kim Yoon Ki
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.13 no.1
    • /
    • pp.1-16
    • /
    • 2024
  • In recent years, Six Sigma has become a key methodology in manufacturing for quality improvement and cost reduction. However, challenges have arisen from the difficulty of analyzing the large-scale data generated by smart factories and from the methodology's traditional, formalized application. To address these limitations, a big data-based Six Sigma approach has been developed that integrates the strengths of Six Sigma and big data analysis, including statistical verification, mathematical optimization, interpretability, and machine learning. Despite its potential, the practical impact of big data-based Six Sigma on manufacturing processes and management performance has not been adequately verified, limiting its reliability and leading to underutilization in practice. This study investigates the efficiency impact of DX SS, a big data-based Six Sigma methodology, on manufacturing processes, and identifies key success policies for its effective introduction and implementation in enterprises. The study highlights the importance of involving all executives and employees and of researching key success policies, as demonstrated by cases where implementation failed due to incorrect policies. This research aims to help manufacturing companies achieve successful outcomes by actively adopting and utilizing the methodologies presented.
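
The statistical verification that Six Sigma (and by extension DX SS) relies on rests on process-capability metrics; a minimal sketch of the Cpk index (a standard Six Sigma quantity shown for illustration, not part of the DX SS methodology itself):

```python
def cpk(mean, std, lsl, usl):
    """Process capability index Cpk: the distance from the process mean
    to the nearer specification limit, in units of 3*sigma. A centered
    process with limits at +/-6 sigma gives Cpk = 2.0, the classic
    'six sigma' capability."""
    return min(usl - mean, mean - lsl) / (3 * std)
```

In a big data setting this same statistic can be recomputed continuously from streaming process measurements rather than from periodic samples.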