• Title/Summary/Keyword: ground truth

Unified Labeling and Fine-Grained Verification for Improving Ground-Truth of Malware Analysis (악성코드 분석의 Ground-Truth 향상을 위한 Unified Labeling과 Fine-Grained 검증)

  • Oh, Sang-Jin;Park, Leo-Hyun;Kwon, Tae-Kyoung
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.29 no.3
    • /
    • pp.549-555
    • /
    • 2019
  • According to recent reports by anti-virus vendors, the number of new and modified malware samples has increased exponentially. Malware analysis using machine learning has therefore been actively researched as a replacement for manual analysis, which is slow. However, many studies that use supervised machine learning rely on the low-reliability malware family names provided by antivirus vendors as labels. To address this label-reliability problem, this paper introduces a new labeling technique, "Unified Labeling", and further verifies malicious-behavior similarity through fine-grained feature analysis. To validate the approach, various clustering algorithms were applied and the results were compared with existing labeling techniques.
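Agreement between a clustering result and a reference labeling is typically quantified with an external cluster-validity index; the abstract does not name a specific metric, so the Adjusted Rand Index below is an assumed choice, sketched in plain Python:

```python
from math import comb
from collections import Counter

def adjusted_rand_index(labels_a, labels_b):
    """Chance-corrected agreement between two labelings of the same samples."""
    n = len(labels_a)
    pairs = Counter(zip(labels_a, labels_b))            # contingency table cells
    sum_ij = sum(comb(c, 2) for c in pairs.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)

# Identical labelings (up to renaming of families) score 1.0
print(adjusted_rand_index([0, 0, 1, 1], ["x", "x", "y", "y"]))  # → 1.0
```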

SIFT Image Feature Extraction based on Deep Learning (딥 러닝 기반의 SIFT 이미지 특징 추출)

  • Lee, Jae-Eun;Moon, Won-Jun;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of Broadcast Engineering
    • /
    • v.24 no.2
    • /
    • pp.234-242
    • /
    • 2019
  • In this paper, we propose a deep neural network that extracts SIFT feature points by determining whether the center pixel of a cropped image is a SIFT feature point. The dataset for this network consists of the DIV2K dataset cut into 33×33 patches, and it uses RGB images, unlike SIFT, which uses grayscale images. The ground truth consists of RobHess SIFT features extracted with the octave (scale) set to 0, sigma to 1.6, and intervals to 3. Based on VGG-16, we construct increasingly deep networks of 13, 23, and 33 convolution layers and experiment with different methods of increasing the image scale. Results using the sigmoid function as the activation of the output layer are compared with results using the softmax function. Experimental results show that the proposed network not only achieves more than 99% extraction accuracy but also high extraction repeatability for distorted images.
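The patch-labeling setup described in the abstract can be sketched as follows; the keypoint coordinates are assumed to come from an external SIFT detector (RobHess SIFT in the paper), and `make_patch_labels` is a hypothetical helper name, not the authors' code:

```python
import numpy as np

def make_patch_labels(image, keypoints, size=33):
    """Crop size×size patches around every interior pixel and label each by
    whether its center pixel is a SIFT keypoint.
    keypoints: iterable of (row, col) locations from a SIFT detector."""
    half = size // 2
    kp = {(int(round(r)), int(round(c))) for r, c in keypoints}
    h, w = image.shape[:2]
    patches, labels = [], []
    for r in range(half, h - half):
        for c in range(half, w - half):
            patches.append(image[r - half:r + half + 1, c - half:c + half + 1])
            labels.append(1 if (r, c) in kp else 0)
    return np.stack(patches), np.array(labels)
```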

KOMPSAT-3A Urban Classification Using Machine Learning Algorithm - Focusing on Yang-jae in Seoul - (기계학습 기법에 따른 KOMPSAT-3A 시가화 영상 분류 - 서울시 양재 지역을 중심으로 -)

  • Youn, Hyoungjin;Jeong, Jongchul
    • Korean Journal of Remote Sensing
    • /
    • v.36 no.6_2
    • /
    • pp.1567-1577
    • /
    • 2020
  • Urban land cover classification plays a key role in urban planning and management, so improving classification accuracy in urban areas is important. In this paper, two machine learning models, a Support Vector Machine (SVM) and an Artificial Neural Network (ANN), are applied to urban land cover classification based on high-resolution satellite imagery (KOMPSAT-3A). Training data were created from the satellite image on a 25 m rectangular grid, and the trained models were used to classify the test area. For validation, we present a confusion matrix for each result using 250 Ground Truth Points (GTP). Of the four SVM kernels and two ANN activation functions, the SVM polynomial kernel model achieved the highest accuracy of 86%. In the GTP-based comparison, the SVM model was more effective than the ANN model for KOMPSAT-3A classification. Among the four classes (building, road, vegetation, and bare soil), the building class showed the lowest classification accuracy due to shadows cast by high-rise buildings.
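The confusion-matrix validation can be illustrated with a small sketch; the counts below are invented to sum to 250 GTP and to work out to the reported 86% overall accuracy, and are not taken from the paper:

```python
import numpy as np

# Hypothetical 4-class confusion matrix (rows: truth, cols: predicted)
# for building, road, vegetation, bare soil -- illustrative counts only.
cm = np.array([[48,  7,  2,  5],
               [ 4, 55,  1,  3],
               [ 1,  2, 60,  2],
               [ 3,  2,  3, 52]])

overall_accuracy = np.trace(cm) / cm.sum()            # correct / total GTP
producers_accuracy = np.diag(cm) / cm.sum(axis=1)     # per-class recall
print(f"overall accuracy: {overall_accuracy:.1%}")
```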

Classification of Summer Paddy and Winter Cropping Fields Using Sentinel-2 Images (Sentinel-2 위성영상을 이용한 하계 논벼와 동계작물 재배 필지 분류 및 정확도 평가)

  • Hong, Joo-Pyo;Jang, Seong-Ju;Park, Jin-Seok;Shin, Hyung-Jin;Song, In-Hong
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.64 no.1
    • /
    • pp.51-63
    • /
    • 2022
  • Up-to-date statistics on crop cultivation status are essential for farmland management planning, and advances in remote sensing technology allow rapid updates of farming information. The objective of this study was to develop a classification model for rice paddy and winter crop fields based on NDWI, NDVI, and HSV indices using Sentinel-2 satellite images. Eighteen locations in central Korea were selected as target areas and photographed once each in summer and winter with an eBee drone to establish ground-truth crop cultivation. NDWI was used to classify summer paddy fields, while NDVI and HSV were used and compared for identifying winter crop cultivation areas. The summer paddy field classification with the criteria of -0.195
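The indices named in the abstract have standard definitions; a minimal sketch assuming the usual Sentinel-2 band choices (B3 green, B4 red, B8 NIR) and a placeholder threshold, since the paper's calibrated criterion is truncated in this abstract:

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI from Sentinel-2 B3 (green) and B8 (NIR) reflectance."""
    return (green - nir) / (green + nir)

def ndvi(red, nir):
    """NDVI from Sentinel-2 B4 (red) and B8 (NIR) reflectance."""
    return (nir - red) / (nir + red)

# Flooded paddies keep relatively high NDWI in summer; the threshold here
# is a placeholder, not the paper's criterion.
green = np.array([0.08, 0.12, 0.05])
nir   = np.array([0.30, 0.10, 0.35])
is_paddy = ndwi(green, nir) > -0.1
```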

Machine Learning-based Classification of Hyperspectral Imagery

  • Haq, Mohd Anul;Rehman, Ziaur;Ahmed, Ahsan;Khan, Mohd Abdul Rahim
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.4
    • /
    • pp.193-202
    • /
    • 2022
  • The classification of hyperspectral imagery (HSI) is essential in Earth surface observation. Due to their large number of contiguous bands, HSI data provide rich information about the object of study, but they suffer from the curse of dimensionality. Dimensionality reduction is an essential aspect of machine learning classification, and feature-extraction algorithms can overcome the dimensionality issue, allowing classifiers to use comprehensive models at reduced computational cost. This paper assesses and compares three HSI classification techniques: the first is based on the Joint Spatial-Spectral Stacked Autoencoder (JSSSA) method, the second on a shallow Artificial Neural Network (SNN), and the third on an SVM model. The JSSSA technique outperforms the SNN in overall accuracy and Kappa coefficient, surpassing it with an overall accuracy of 96.13% and a Kappa coefficient of 0.95. The SNN also achieved a good accuracy of 92.40% with a Kappa coefficient of 0.90, while the SVM achieved an accuracy of 82.87%. The study suggests that both the JSSSA- and SNN-based techniques are efficient methods for hyperspectral classification of snow features. This work classified labeled/ground-truth snow datasets into multiple classes; such labeled data can be valuable for applying deep neural networks such as CNNs, hybrid CNNs, and RNNs to glaciology and snow-related hazard applications.
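The Kappa coefficient reported alongside overall accuracy is computable directly from a confusion matrix; a minimal sketch:

```python
import numpy as np

def cohens_kappa(cm):
    """Cohen's kappa from a confusion matrix (rows: truth, cols: predicted):
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    p_observed = np.trace(cm) / n
    p_chance = (cm.sum(axis=0) @ cm.sum(axis=1)) / n**2
    return (p_observed - p_chance) / (1 - p_chance)

# Illustrative 2-class matrix: 90% agreement, 50% expected by chance -> 0.8
print(cohens_kappa([[45, 5], [5, 45]]))  # → 0.8
```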

Deep Learning-based Real-time Heart Rate Measurement System Using Mobile Facial Videos (딥러닝 기반의 모바일 얼굴 영상을 이용한 실시간 심박수 측정 시스템)

  • Ji, Yerim;Lim, Seoyeon;Park, Soyeon;Kim, Sangha;Dong, Suh-Yeon
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.11
    • /
    • pp.1481-1491
    • /
    • 2021
  • Since most biosignals rely on contact-based measurement, it remains difficult to apply them conveniently in daily life. In this paper, we present a mobile application that estimates heart rate with a deep learning model. The proposed application measures heart rate by capturing real-time face images in a non-contact manner. We trained a three-dimensional convolutional neural network to predict photoplethysmography (PPG) signals from face images; the training images were taken under various movements and situations. To evaluate the proposed system, we used a pulse oximeter to measure ground-truth PPG. The root mean square error between the heart rate from the remote PPG measured by the proposed system and the ground-truth heart rate was about 1.14, showing no significant difference. Our findings suggest that heart rate measurement by mobile applications is accurate enough to help manage health in daily life.
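The reported error metric is a root mean square error between remote-PPG heart rates and the oximeter ground truth; a minimal sketch with illustrative beats-per-minute values, not the paper's data:

```python
import numpy as np

def rmse(predicted, reference):
    """Root mean square error between estimated and ground-truth heart rates."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return float(np.sqrt(np.mean((predicted - reference) ** 2)))

# Hypothetical per-window heart rates (BPM): rPPG estimate vs. pulse oximeter
print(rmse([72, 80, 65, 90], [71, 82, 66, 89]))
```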

A Self-Supervised Detector Scheduler for Efficient Tracking-by-Detection Mechanism

  • Park, Dae-Hyeon;Lee, Seong-Ho;Bae, Seung-Hwan
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.10
    • /
    • pp.19-28
    • /
    • 2022
  • In this paper, we propose the Detector Scheduler, which determines the best tracking-by-detection (TBD) mechanism for real-time, high-accuracy multi-object tracking (MOT). The Detector Scheduler decides whether to run a detector by measuring the dissimilarity of features between frames. Because it is difficult to generate ground truth (GT) for training the Detector Scheduler, we further propose a self-supervision method that learns from tracking results: pseudo labels for running the detector are generated when the dissimilarity of object cardinality or appearance between frames increases. To this end, we propose a Detector Scheduling Loss to train the Detector Scheduler. As a result, the proposed method achieves real-time, high-accuracy multi-object tracking by boosting overall tracking speed while largely preserving tracking accuracy.
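The pseudo-label rule described in the abstract can be sketched as a simple trigger; the function name and threshold values are illustrative assumptions, not the paper's implementation:

```python
def should_run_detector(prev_count, curr_count, appearance_dissimilarity,
                        count_tol=0, dissim_thresh=0.5):
    """Pseudo-label sketch: trigger the detector when object cardinality
    changes or inter-frame appearance dissimilarity exceeds a threshold."""
    cardinality_changed = abs(curr_count - prev_count) > count_tol
    appearance_changed = appearance_dissimilarity > dissim_thresh
    return cardinality_changed or appearance_changed
```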

Panorama Image Stitching Using Synthetic Fisheye Images (Synthetic fisheye 이미지를 이용한 360° 파노라마 이미지 스티칭)

  • Kweon, Hyeok-Joon;Cho, Donghyeon
    • Journal of Broadcast Engineering
    • /
    • v.27 no.1
    • /
    • pp.20-30
    • /
    • 2022
  • Recently, as VR (Virtual Reality) technology has come into the spotlight, 360° panoramic images that enable immersive viewing of VR content are attracting a lot of attention. Image stitching is a key technology for producing 360° panoramic images, and many studies are being actively conducted. Typical algorithms are feature-point-based, but conventional feature-point-based stitching has the problem that results depend heavily on the detected feature points. To address this, deep learning-based stitching has recently been studied, but problems remain when images have little overlap or large parallax. In addition, fully supervised learning is limited because labeled ground-truth panoramic images cannot be obtained in real environments. We therefore produced three fisheye images with different camera centers, together with a corresponding ground-truth image, using the CARLA simulator widely used in autonomous driving, and we propose an image stitching model that creates a 360° panoramic image from the produced fisheye images. Experiments on virtual datasets configured to resemble real environments verify that the stitching results are robust to varied environments and large parallax.

3D feature point extraction technique using a mobile device (모바일 디바이스를 이용한 3차원 특징점 추출 기법)

  • Kim, Jin-Kyum;Seo, Young-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.256-257
    • /
    • 2022
  • In this paper, we introduce a method for extracting three-dimensional feature points through the movement of a single mobile device. Using a monocular camera, 2D images are acquired as the camera moves and a baseline is estimated. Feature points and descriptors are acquired and matched, stereo matching is performed on the matched feature points, and the resulting disparity is used to generate depth values. The 3D feature points are updated as the camera moves, and feature points are reset on scene changes using scene change detection. Through this process, an average of 73.5% additional storage space can be secured in the keypoint database. Applying the proposed algorithm to the depth ground-truth values and RGB images of the TUM Dataset confirmed an average distance difference of 26.88 mm from the 3D feature point results.
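The disparity-to-depth step follows the standard pinhole stereo relation depth = f·B/d; a minimal sketch (function name and units chosen for illustration):

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Triangulated depth for a stereo pair formed by camera motion.
    depth = focal_length * baseline / disparity; the result takes the
    baseline's units (mm here), while focal length and disparity are in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# Example: f = 500 px, baseline = 100 mm, disparity = 10 px → depth = 5000 mm
print(depth_from_disparity(500, 100, 10))  # → 5000.0
```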


A study on age distortion reduction in facial expression image generation using StyleGAN Encoder (StyleGAN Encoder를 활용한 표정 이미지 생성에서의 연령 왜곡 감소에 대한 연구)

  • Hee-Yeol Lee;Seung-Ho Lee
    • Journal of IKEEE
    • /
    • v.27 no.4
    • /
    • pp.464-471
    • /
    • 2023
  • In this paper, we propose a method to reduce age distortion in facial expression image generation using StyleGAN Encoder. The facial expression image generation process first creates a face image with StyleGAN Encoder, then changes the expression by applying a boundary learned with an SVM to the latent vector. However, when learning the boundary of a smiling expression, age distortion occurs: the smile boundary learned by the SVM includes the wrinkles caused by the expression change as learning elements, so age characteristics are learned as well. To solve this problem, the proposed method calculates the correlation coefficient between the smile boundary and the age boundary and adjusts the smile boundary by the age boundary in proportion to this coefficient. To confirm the effectiveness of the proposed method, experiments were conducted on the FFHQ dataset, a publicly available standard face dataset, with FID scores measured as follows. For smile images, compared to the existing method, the FID score between the ground truth and the images generated by the proposed method improved by about 0.46, and the FID score between the images generated by StyleGAN Encoder and the smile images generated by the proposed method improved by about 1.031. For non-smile images, the FID score between the ground truth and the images generated by the proposed method improved by about 2.25, and the FID score between the images generated by StyleGAN Encoder and the non-smile images generated by the proposed method improved by about 1.908. Meanwhile, estimating the age of each generated facial expression image and measuring the MSE against the age of the image generated with StyleGAN Encoder shows that, compared to the existing method, the proposed method improved the age MSE by an average of about 1.5 for smile images and about 1.63 for non-smile images, demonstrating its effectiveness.