• Title/Summary/Keyword: experiment training


A Study on the Improvement of Source Code Static Analysis Using Machine Learning (기계학습을 이용한 소스코드 정적 분석 개선에 관한 연구)

  • Park, Yang-Hwan;Choi, Jin-Young
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.6 / pp.1131-1139 / 2020
  • Static analysis of source code is performed to find remaining security weaknesses across a wide range of source code. A static analysis tool produces the findings, and a static analysis expert then classifies each finding as a true or false positive. Because the volume of findings is large and the false-positive rate is high, this process demands considerable time and effort, so a more efficient analysis method is needed. Moreover, when judging true and false positives, experts rarely examine only the line where the defect was reported; depending on the defect type, they analyze the surrounding source code as well before delivering the final result. To ease the burden of this expert judgment, this paper proposes a method that uses machine learning, rather than an expert, to decide whether a security weakness reported by a static analysis tool is a true positive. An experiment was also conducted to see how the size of the training data (the source code surrounding each defect) affects performance, and the optimal size was identified. These results are expected to assist static analysis experts in classifying true and false positives after static analysis.
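
The context-window experiment described in this abstract can be illustrated with a short, hypothetical sketch: labeled findings are vectorized from windows of surrounding source lines of varying size, and a simple classifier is scored for each window size. The file layout, column names, and the TF-IDF-plus-logistic-regression pipeline are assumptions for illustration, not the authors' actual system.

```python
# Hypothetical sketch: classify static-analysis findings as true/false positives
# using a variable-size window of source lines around the flagged line.
# File names, column names, and the TF-IDF + logistic regression pipeline are
# assumptions for illustration, not the authors' pipeline.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

findings = pd.read_csv("findings.csv")  # assumed columns: file, line, label (1 = true positive)

def context_window(source_lines, line_no, radius):
    """Return the source text within `radius` lines of the flagged line."""
    lo = max(0, line_no - radius)
    hi = min(len(source_lines), line_no + radius + 1)
    return "\n".join(source_lines[lo:hi])

for radius in (1, 5, 10, 20):  # candidate training-window sizes
    texts = [
        context_window(open(row.file, encoding="utf-8").read().splitlines(),
                       row.line, radius)
        for row in findings.itertuples()
    ]
    X = TfidfVectorizer(token_pattern=r"\S+").fit_transform(texts)
    score = cross_val_score(LogisticRegression(max_iter=1000),
                            X, findings.label, cv=5).mean()
    print(f"window ±{radius} lines: mean CV accuracy {score:.3f}")
```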

Comparison of Deep Learning-based Unsupervised Domain Adaptation Models for Crop Classification (작물 분류를 위한 딥러닝 기반 비지도 도메인 적응 모델 비교)

  • Kwak, Geun-Ho;Park, No-Wook
    • Korean Journal of Remote Sensing / v.38 no.2 / pp.199-213 / 2022
  • Unsupervised domain adaptation can resolve the impractical requirement of repeatedly collecting high-quality training data every year for annual crop classification. This study evaluates the applicability of deep learning-based unsupervised domain adaptation models to crop classification. Three unsupervised domain adaptation models, a deep adaptation network (DAN), a deep reconstruction-classification network (DRCN), and a domain adversarial neural network (DANN), are quantitatively compared in a crop classification experiment using unmanned aerial vehicle images of Hapcheon-gun and Changnyeong-gun, the major garlic and onion cultivation areas in Korea. Convolutional neural networks (CNNs) are additionally applied as source and target baseline models to evaluate the classification performance of the unsupervised domain adaptation models. All three unsupervised domain adaptation models outperformed the source baseline CNN, but their classification performance differed depending on the degree of mismatch between the data distributions of the source and target images. The classification accuracy of DAN was higher than that of the other two models when the mismatch between source and target images was low, whereas DANN had the best classification performance when the mismatch was high. Therefore, the extent to which the data distributions of the source and target images match should be considered when selecting the best unsupervised domain adaptation model for reliable classification results.
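
As a generic illustration of one of the compared techniques, the sketch below shows the gradient reversal layer at the core of DANN in PyTorch; it is a standard textbook construct, not the authors' crop-classification network, and the feature dimension and head sizes are arbitrary.

```python
# Minimal sketch of DANN's gradient reversal layer (generic, not the paper's model).
import torch
from torch import nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients flowing back to the feature extractor,
        # pushing features to become indistinguishable across source and target domains.
        return -ctx.lambd * grad_output, None

class DomainClassifier(nn.Module):
    def __init__(self, feat_dim=256, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.head = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, features):
        # Features pass through the reversal layer before domain prediction.
        return self.head(GradReverse.apply(features, self.lambd))
```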

Image Matching for Orthophotos by Using HRNet Model (HRNet 모델을 이용한 항공정사영상간 영상 매칭)

  • Seong, Seonkyeong;Choi, Jaewan
    • Korean Journal of Remote Sensing / v.38 no.5_1 / pp.597-608 / 2022
  • Remotely sensed data have been used in various fields, such as disaster response, agriculture, urban planning, and the military, and demand for multitemporal datasets with high spatial resolution has recently increased. This manuscript proposes an automatic image matching algorithm that uses deep learning to exploit multitemporal remotely sensed datasets. The proposed model is based on High Resolution Net (HRNet), which is widely used for image segmentation, with a dense block added to compute the correlation map between images effectively and to increase learning efficiency. The model was trained on multitemporal orthophotos from the National Geographic Information Institute (NGII). To evaluate image matching with the deep learning model, a comparative evaluation was performed: the average horizontal error of the proposed algorithm at an 80% image matching rate was 3 pixels, whereas that of Zero-mean Normalized Cross-Correlation (ZNCC) was 25 pixels. In particular, the proposed method proved effective even in mountainous and farmland areas, where images change with vegetation growth. Therefore, the proposed deep learning algorithm is expected to perform relative image registration and image matching for multitemporal remotely sensed datasets.
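
The correlation map mentioned above is a common building block in matching networks; the following rough sketch shows how a reference-patch feature could be correlated with a search-area feature map. The shapes and the plain sliding-window correlation are illustrative assumptions and do not reproduce the paper's HRNet-plus-dense-block architecture.

```python
# Rough sketch: correlation map between a reference patch feature and a target
# feature map, as used in matching networks (shapes and features are illustrative only).
import torch
import torch.nn.functional as F

def correlation_map(ref_feat, tgt_feat):
    """ref_feat: (C, h, w) patch features; tgt_feat: (C, H, W) search-area features.
    Returns an (H-h+1, W-w+1) map of correlation scores."""
    ref = F.normalize(ref_feat.reshape(1, -1), dim=1).reshape(1, *ref_feat.shape)
    corr = F.conv2d(tgt_feat.unsqueeze(0), ref)  # slide the reference patch over the target
    return corr.squeeze(0).squeeze(0)

ref = torch.randn(64, 16, 16)   # e.g. features of a patch from the reference orthophoto
tgt = torch.randn(64, 64, 64)   # features of the corresponding search window
peak = correlation_map(ref, tgt).argmax()
print("best-matching offset (flattened index):", int(peak))
```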

A study on performance improvement considering the balance between corpus in Neural Machine Translation (인공신경망 기계번역에서 말뭉치 간의 균형성을 고려한 성능 향상 연구)

  • Park, Chanjun;Park, Kinam;Moon, Hyeonseok;Eo, Sugyeong;Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.12 no.5 / pp.23-29 / 2021
  • Recent deep learning-based natural language processing studies attempt to improve performance by training on large amounts of data gathered from various sources. However, simply merging data from various sources into a single training set may actually hinder performance. In machine translation, corpora deviate from one another because of differences in translation style (liberal vs. literal), register (colloquial, written, formal, etc.), domain, and so on, and combining such corpora into one for training can adversely affect performance. This paper proposes a new Corpus Weight Balance (CWB) method that accounts for the balance between parallel corpora in machine translation. In the experiment, the model trained with the balanced corpora outperformed the existing model. In addition, we propose a supplementary corpus construction process that can build a high-quality parallel corpus even from a monolingual corpus while enabling coexistence with the human translation market.
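
The abstract does not detail how CWB weights the corpora, so the sketch below only illustrates the general idea of per-corpus weighted sampling during training; the inverse-size smoothing rule, the toy corpora, and the batch construction are assumptions, not the proposed method.

```python
# Hedged sketch: drawing training examples from several parallel corpora with
# per-corpus weights so that no single corpus dominates a batch. The weighting
# rule (size-smoothing exponent) is an assumption for illustration; the paper's
# CWB method may differ.
import random

corpora = {
    "colloquial": [("안녕하세요", "Hello")] * 1000,   # toy sentence pairs
    "written":    [("보고서", "Report")] * 10000,
    "formal":     [("공지사항", "Notice")] * 500,
}

alpha = 0.5  # 1.0 = sample proportionally to corpus size, 0.0 = uniform over corpora
weights = {name: len(pairs) ** alpha for name, pairs in corpora.items()}
total = sum(weights.values())
probs = {name: w / total for name, w in weights.items()}

def sample_batch(batch_size=8):
    names = random.choices(list(probs), weights=list(probs.values()), k=batch_size)
    return [random.choice(corpora[n]) for n in names]

print(probs)
print(sample_batch(4))
```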

MF sampler: Sampling method for improving the performance of a video based fashion retrieval model (MF sampler: 동영상 기반 패션 검색 모델의 성능 향상을 위한 샘플링 방법)

  • Baek, Sanghun;Park, Jonghyuk
    • Journal of Intelligence and Information Systems / v.28 no.4 / pp.329-346 / 2022
  • Recently, as the market for short-form videos (Instagram, TikTok, YouTube) on social media has grown, research that uses them has become active in the artificial intelligence field. A representative research area is Video-to-Shop, which detects fashion products in videos and retrieves the corresponding product images. In such video-based models, product features are extracted with convolution operations, but because of limited computational resources, extracting features from every frame of a video is practically impossible. Existing studies have therefore improved model performance by sampling only a fraction of the frames or by developing sampling methods tailored to the subject's characteristics. In existing Video-to-Shop studies, frames are sampled either randomly or at even intervals; however, this degrades the performance of the fashion product retrieval model because noise frames in which no product appears are also sampled. This paper therefore proposes the MF (Missing Fashion items on frame) sampler, a sampling method that removes noise frames and improves retrieval performance. The MF sampler mitigates the resource limitation through a keyframe mechanism and improves retrieval performance by removing noise frames with a noise detection model. The experiments confirmed that the proposed method improves model performance and makes model training more effective.
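
A minimal sketch of the general idea, even-interval candidate sampling followed by discarding frames that a noise detector flags as containing no fashion item, is given below; the placeholder detector and the frame counts are assumptions and do not reproduce the MF sampler's actual keyframe mechanism.

```python
# Illustrative sketch: even-interval candidate sampling, then dropping frames that
# a binary "no fashion item" detector flags as noise. The detector is a placeholder;
# the MF sampler's actual keyframe mechanism and noise model are not reproduced here.
import numpy as np

def has_fashion_item(frame) -> bool:
    """Placeholder for a trained noise-detection model."""
    return frame.mean() > 0.1   # dummy rule for illustration only

def mf_style_sample(frames, n_candidates=32, n_keep=8):
    idx = np.linspace(0, len(frames) - 1, n_candidates, dtype=int)
    kept = [i for i in idx if has_fashion_item(frames[i])]          # remove noise frames
    if len(kept) > n_keep:                                          # subsample evenly
        kept = [kept[j] for j in np.linspace(0, len(kept) - 1, n_keep, dtype=int)]
    return kept

video = np.random.rand(300, 224, 224, 3)   # dummy video: 300 RGB frames
print(mf_style_sample(video))
```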

Performance Analysis of Trading Strategy using Gradient Boosting Machine Learning and Genetic Algorithm

  • Jang, Phil-Sik
    • Journal of the Korea Society of Computer and Information / v.27 no.11 / pp.147-155 / 2022
  • In this study, we developed a system that dynamically rebalances a daily stock portfolio and performed trading simulations using gradient boosting and genetic algorithms. We collected various stock market data, including investor-specific transaction data, for stocks listed on the KOSPI and KOSDAQ markets. We then indexed the data as a preprocessing step and used feature engineering to modify and generate variables for training. First, we experimentally compared three popular gradient boosting algorithms, XGBoost, LightGBM, and CatBoost, in terms of accuracy, precision, recall, and F1-score. Based on the results, in a second experiment we used a LightGBM model trained on the collected data, together with a genetic algorithm, to predict and select stocks with a high daily probability of profit. We also simulated trading over the test period and compared the proposed approach with the KOSPI and KOSDAQ indices in terms of CAGR (compound annual growth rate), MDD (maximum drawdown), Sharpe ratio, and volatility. The results showed that the proposed strategies outperformed the market indices on all performance metrics, and the LightGBM model combined with the genetic algorithm exhibited competitive performance in predicting stock price movements.
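
For orientation, the sketch below shows a LightGBM classifier being trained and scored with the four metrics reported in the first experiment; the synthetic features, the label definition, and the omission of the genetic-algorithm stage are simplifications for illustration.

```python
# Hedged sketch: training a LightGBM classifier on engineered stock features and
# scoring it with accuracy, precision, recall, and F1. Features and labels are
# synthetic stand-ins; the genetic-algorithm stage is omitted.
import numpy as np
import lightgbm as lgb
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))                                  # stand-in for engineered features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 0).astype(int)  # "profitable day" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, shuffle=False)
model = lgb.LGBMClassifier(n_estimators=300, learning_rate=0.05)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
for name, fn in [("accuracy", accuracy_score), ("precision", precision_score),
                 ("recall", recall_score), ("f1", f1_score)]:
    print(name, round(fn(y_te, pred), 3))
```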

Sign Language Dataset Built from S. Korean Government Briefing on COVID-19 (대한민국 정부의 코로나 19 브리핑을 기반으로 구축된 수어 데이터셋 연구)

  • Sim, Hohyun;Sung, Horyeol;Lee, Seungjae;Cho, Hyeonjoong
    • KIPS Transactions on Software and Data Engineering / v.11 no.8 / pp.325-330 / 2022
  • This paper describes the collection and experimental evaluation of a dataset for deep learning research on Korean sign language, covering tasks such as sign language recognition, translation, and segmentation. Deep learning research on sign language faces several difficulties. First, sign languages are hard to recognize because they involve multiple modalities, including hand movements, hand orientations, and facial expressions. Second, training data are scarce: the KETI dataset is currently the only known Korean sign language dataset for deep learning. Sign language datasets for deep learning fall into two categories, isolated sign language and continuous sign language, and although several foreign sign language datasets have been collected over time, they too are insufficient for deep learning research. We therefore collected a large-scale Korean sign language dataset and evaluated it with TSPNet, a baseline model with state-of-the-art performance in sign language translation. The collected dataset consists of a total of 11,402 image and text pairs. Our experiment with the baseline model on this dataset yields a BLEU-4 score of 3.63, which can serve as the baseline performance for the Korean sign language dataset. We hope that our experience in collecting this dataset helps facilitate further research on Korean sign language.
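
The reported BLEU-4 figure can be reproduced in form (not in value) with a few lines; the sketch below uses sacrebleu on dummy sentences purely to show how such a score is typically computed for translation outputs.

```python
# Small sketch of how a BLEU-4 score like the one reported could be computed for
# sign-language translation outputs; the sentences are dummies, and sacrebleu is
# only one of several libraries that could be used.
import sacrebleu

hypotheses = ["wear a mask and wash your hands"]
references = [["please wear a mask and wash your hands often"]]  # one reference stream

bleu = sacrebleu.corpus_bleu(hypotheses, references)  # 4-gram BLEU by default
print(f"BLEU-4: {bleu.score:.2f}")
```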

A Study on the Use of Contrast Agent and the Improvement of Body Part Classification Performance through Deep Learning-Based CT Scan Reconstruction (딥러닝 기반 CT 스캔 재구성을 통한 조영제 사용 및 신체 부위 분류 성능 향상 연구)

  • Seongwon Na;Yousun Ko;Kyung Won Kim
    • Journal of Broadcast Engineering / v.28 no.3 / pp.293-301 / 2023
  • Unstandardized medical data are still collected and managed manually, and studies are being conducted to classify CT data with deep learning to solve this problem. However, most studies develop models based only on the axial plane, the basic CT slice orientation. Because CT scans, unlike ordinary images, depict only human anatomy, reconstructing the scans in other ways can provide richer anatomical features. This study therefore explores whether higher performance can be achieved by converting CT scans into 2D views beyond the axial plane. Training used 1,042 CT scans covering five body parts, and 179 test scans plus 448 scans from external datasets were collected for model evaluation. For the deep learning model, InceptionResNetV2 pre-trained on ImageNet was used as the backbone, and every layer of the model was re-trained. In the experiment, the reconstruction-data model achieved 99.33% accuracy in body-part classification, 1.12% higher than the axial model, while in contrast-agent classification the axial model was better only for the brain and neck. In conclusion, more accurate performance could be achieved by training on data that exposes better anatomical features than by training on axial slices alone.
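
A hedged sketch of the backbone setup described here, InceptionResNetV2 pre-trained on ImageNet with all layers left trainable and a five-class head, is shown below; the input size, optimizer, and head design are assumptions rather than the authors' exact configuration.

```python
# Hedged sketch of the described backbone: ImageNet-pretrained InceptionResNetV2
# with the entire backbone re-trained and a 5-way body-part classification head.
# Input size, optimizer, and head design are assumptions for illustration.
import tensorflow as tf

base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = True                                  # re-train every layer of the backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),    # five body-part classes
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```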

Research on ANN based on Simulated Annealing in Parameter Optimization of Micro-scaled Flow Channels Electrochemical Machining (미세 유동채널의 전기화학적 가공 파라미터 최적화를 위한 어닐링 시뮬레이션에 근거한 인공 뉴럴 네트워크에 관한 연구)

  • Byung-Won Min
    • Journal of Internet of Things and Convergence / v.9 no.3 / pp.93-98 / 2023
  • In this paper, an artificial neural network based on simulated annealing was constructed. By training on sample data, it establishes the mapping between the electrochemical machining parameters of micro-scaled flow channels and the channel shape. The depth and width of micro-scaled flow channels machined electrochemically on a stainless steel surface were predicted, and flow-channel experiments were carried out with a pulsed power supply in NaNO3 solution to verify the established network model. The results show that the channel depth and width predicted by the simulated-annealing artificial neural network with a "4-7-2" structure are very close to the experimental values, with an error of less than 5.3%. The predicted and experimental data also show that the degree of etching during electrochemical machining of the channels is closely related to the voltage and current density: when the voltage is below 5 V, a "small island" forms in the channel; when the voltage exceeds 40 V, lateral etching is relatively large and the "dam" between channels disappears; the machining morphology of the channel is best at a voltage of 25 V.
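
The "4-7-2" structure and simulated-annealing training can be illustrated with a small self-contained sketch; the dummy data, cooling schedule, and perturbation scale below are assumptions for illustration only.

```python
# Rough sketch of a "4-7-2" feed-forward network (4 machining parameters in, channel
# depth and width out) whose weights are tuned by simulated annealing. The data,
# cooling schedule, and perturbation scale are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(size=(40, 4))                                       # stand-in machining parameters
Y = np.c_[X @ [0.3, 0.2, 0.1, 0.4], X @ [0.1, 0.5, 0.2, 0.2]]       # dummy depth/width targets

def forward(w, x):
    W1, b1, W2, b2 = w
    h = np.tanh(x @ W1 + b1)          # 4 inputs -> 7 hidden units
    return h @ W2 + b2                # 7 hidden units -> 2 outputs

def loss(w):
    return np.mean((forward(w, X) - Y) ** 2)

cur = [rng.normal(scale=0.5, size=s) for s in [(4, 7), (7,), (7, 2), (2,)]]
cur_loss = loss(cur)
best, best_loss, T = cur, cur_loss, 1.0
for step in range(5000):
    cand = [p + rng.normal(scale=0.05, size=p.shape) for p in cur]   # perturb weights
    cand_loss = loss(cand)
    # Accept better moves always, worse moves with probability exp(-Δ/T).
    if cand_loss < cur_loss or rng.random() < np.exp((cur_loss - cand_loss) / T):
        cur, cur_loss = cand, cand_loss
        if cur_loss < best_loss:
            best, best_loss = cur, cur_loss
    T *= 0.999                                                       # cooling schedule
print("best MSE found:", round(best_loss, 5))
```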

A Comparison of Image Classification System for Building Waste Data based on Deep Learning (딥러닝기반 건축폐기물 이미지 분류 시스템 비교)

  • Jae-Kyung Sung;Mincheol Yang;Kyungnam Moon;Yong-Guk Kim
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.23 no.3 / pp.199-206 / 2023
  • This study uses deep learning algorithms to automatically classify construction waste into three categories: wood, plastic, and concrete waste. Two models were compared: VGG-16, a convolutional neural network image classifier, and ViT (Vision Transformer), an NLP-inspired model that processes an image as a sequence of patches. Image data for construction waste were collected by crawling images from search engines worldwide; after excluding images that were hard to distinguish with the naked eye or that were duplicates likely to interfere with the experiment, 3,000 images were obtained, 1,000 per category. In addition, to improve the accuracy of the models, data augmentation was applied during training, yielding a total of 30,000 images. Despite the unstructured nature of the collected image data, the experimental results showed that VGG-16 achieved an accuracy of 91.5% and ViT achieved 92.7%, which suggests the possibility of practical application to actual construction-waste data management. If object detection or semantic segmentation techniques are applied on top of this study, even more precise classification will be possible within a single image, resulting in more accurate waste classification.
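
As a pointer for readers who want to reproduce a comparison of this kind, the sketch below instantiates the two architectures for a three-class task using timm; the checkpoint names are assumptions, and training and augmentation are omitted.

```python
# Hedged sketch: setting up the two compared architectures for a 3-class task
# (wood, plastic, concrete waste) via timm. The checkpoint names are assumptions;
# training, augmentation, and evaluation are omitted.
import timm
import torch

models = {
    "vgg16": timm.create_model("vgg16", pretrained=True, num_classes=3),
    "vit":   timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=3),
}

dummy = torch.randn(1, 3, 224, 224)        # one RGB image, 224x224
for name, m in models.items():
    m.eval()
    with torch.no_grad():
        print(name, "logits:", m(dummy).squeeze().tolist())
```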