• Title/Abstract/Keyword: Image Learning

Search results: 3,114 items

Malware Classification using Dynamic Analysis with Deep Learning

  • Asad Amin;Muhammad Nauman Durrani;Nadeem Kafi;Fahad Samad;Abdul Aziz
    • International Journal of Computer Science & Network Security / Vol. 23, No. 8 / pp. 49-62 / 2023
  • There has been a rapid increase in the creation and alteration of new malware samples, which poses a major financial risk to many organizations. Existing classification and detection mechanisms need substantial improvement: older strategies, such as classification with machine learning algorithms, have proven useful but do not scale well when features must be extracted automatically. To overcome this, a mechanism is needed that analyzes malware based on an automatic feature-extraction process. For this purpose, dynamic analysis of real malware executables was performed to extract useful features such as API call sequences and opcode sequences. Different hashing techniques were analyzed to convert these features into an image representation, which allows large numbers of samples to be classified with more advanced deep learning approaches. Convolutional neural networks enable malware classification by operating on the generated images; when the grayscale images are fed into the CNN, classification remains comparatively robust to small dynamic changes in malware code, since such changes alter only a few pixels of the image. In this work, the VGG-16 CNN architecture was used for experimentation.
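
As an illustration of the pipeline sketched in the abstract above, the snippet below hashes an API-call sequence into a grayscale image and feeds it to a VGG-16 classifier. This is a minimal sketch assuming PyTorch and torchvision (>= 0.13); the MD5 hashing, the 224x224 image size, and the number of malware families are illustrative choices, not the authors' exact configuration.

    # Minimal sketch: API-call sequence -> grayscale image -> VGG-16 (illustrative settings).
    import hashlib
    import numpy as np
    import torch
    import torch.nn as nn
    from torchvision import models

    def sequence_to_image(api_calls, size=224):
        """Hash an API-call sequence into a fixed-size grayscale image in [0, 1]."""
        buf = bytearray()
        for call in api_calls:
            buf.extend(hashlib.md5(call.encode()).digest())   # 16 bytes per call
        raw = np.frombuffer(bytes(buf), dtype=np.uint8)
        raw = np.resize(raw, size * size)                     # repeat/trim to size*size bytes
        return raw.reshape(size, size).astype(np.float32) / 255.0

    num_families = 9                                          # hypothetical number of classes
    model = models.vgg16(weights=None)                        # VGG-16 backbone, trained from scratch here
    model.features[0] = nn.Conv2d(1, 64, kernel_size=3, padding=1)   # accept 1-channel grayscale input
    model.classifier[6] = nn.Linear(4096, num_families)              # malware-family output layer

    img = sequence_to_image(["CreateFileW", "WriteFile", "RegSetValueExW"])
    x = torch.from_numpy(img).unsqueeze(0).unsqueeze(0)       # shape (1, 1, 224, 224)
    print(model(x).shape)                                     # torch.Size([1, 9])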

Proper Base-model and Optimizer Combination Improves Transfer Learning Performance for Ultrasound Breast Cancer Classification

  • 겔란 아야나;박진형;최세운
    • 한국정보통신학회:학술대회논문집 / 2021 Fall Conference / pp. 655-657 / 2021
  • Research on the early diagnosis of breast cancer using artificial intelligence algorithms has been active in recent years, but existing approaches show various limitations in processing speed and accuracy relative to the user's purpose. To address this problem, this paper proposes multi-stage transfer learning in which a ResNet model trained on ImageNet is first transferred to microscopy-based cancer-cell images and then transferred again to classify ultrasound breast cancer images as benign or malignant. The proposed multi-stage transfer learning algorithm achieved an accuracy of over 96% when classifying ultrasound breast cancer images, and higher utility and accuracy are expected in the future through the addition of cancer cell lines and real-time image processing.

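
The multi-stage transfer learning described above can be sketched as repeated fine-tuning of a single backbone. The snippet below is a minimal illustration assuming PyTorch/torchvision (>= 0.13); the intermediate microscopy stage, the data loaders, the epochs, the learning rate, and the class counts are hypothetical placeholders rather than the paper's setup.

    # Minimal multi-stage transfer-learning sketch (illustrative stages and hyperparameters).
    import torch
    import torch.nn as nn
    from torchvision import models

    def fine_tune(model, loader, num_classes, epochs=3, lr=1e-4):
        """Swap in a new classification head and fine-tune the whole network."""
        model.fc = nn.Linear(model.fc.in_features, num_classes)
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        model.train()
        for _ in range(epochs):
            for images, labels in loader:
                optimizer.zero_grad()
                loss = loss_fn(model(images), labels)
                loss.backward()
                optimizer.step()
        return model

    # Stage 0: ImageNet-pretrained backbone (weights are downloaded on first use).
    model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    # Stage 1: fine-tune on an intermediate domain (hypothetical microscopy cancer-cell loader).
    # model = fine_tune(model, microscopy_loader, num_classes=4)
    # Stage 2: fine-tune again on the target domain (hypothetical ultrasound loader, benign vs. malignant).
    # model = fine_tune(model, ultrasound_loader, num_classes=2)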

Facial Feature Based Image-to-Image Translation Method

  • Kang, Shinjin
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 12 / pp. 4835-4848 / 2020
  • The recent expansion of the digital content market is increasing the technical demand for various facial image transformations within virtual environments. Recent image translation technology enables changes between various domains. However, current image-to-image translation techniques do not provide stable performance through unsupervised learning, especially for shape learning in face transition. This is because the face is a highly sensitive feature, and the quality of the resulting image is significantly degraded if the transitions of the eyes, nose, and mouth are not performed effectively. We herein propose a new unsupervised method that can transform an in-the-wild face image into another face style through radical transformation. Specifically, the proposed method applies two face-specific feature loss functions to a generative adversarial network. The proposed technique shows that stable conversion to other domains is possible while maintaining the image characteristics of the eyes, nose, and mouth.
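
To make the loss design concrete, the sketch below shows how two additional face-related terms could be folded into a generator objective next to the usual adversarial term. It assumes PyTorch; G, D, feat_extractor, and landmark_mask are placeholders, the loss weights are illustrative, and the paper's actual face-specific loss functions are not reproduced here.

    # Sketch: generator objective with two extra face-related consistency terms (illustrative).
    import torch
    import torch.nn.functional as F

    def generator_loss(G, D, feat_extractor, src, landmark_mask,
                       w_adv=1.0, w_feat=10.0, w_region=5.0):
        """Adversarial term plus two face-related consistency terms (placeholder modules)."""
        fake = G(src)
        logits = D(fake)
        # Adversarial term: the generator tries to make D label fakes as real.
        adv = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
        # Feature term: keep deep features of input and output close.
        feat = F.l1_loss(feat_extractor(fake), feat_extractor(src))
        # Region term: penalize changes inside eye/nose/mouth regions given by a 0/1 mask.
        region = F.l1_loss(fake * landmark_mask, src * landmark_mask)
        return w_adv * adv + w_feat * feat + w_region * region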

Discriminative Manifold Learning Network using Adversarial Examples for Image Classification

  • Zhang, Yuan;Shi, Biming
    • Journal of Electrical Engineering and Technology / Vol. 13, No. 5 / pp. 2099-2106 / 2018
  • This study presents a novel approach that derives discriminative feature vectors through manifold learning with a nonlinear dimension reduction (DR) technique to improve the loss function, and combines it with adversarial examples to regularize the objective function for image classification. Traditional convolutional neural networks (CNNs) with many new regularization approaches have been used successfully for image classification and achieve good results, but at a high computational and time cost. In contrast to traditional CNNs, the proposed method discriminates feature vectors for objects without empirically tuned parameters; these discriminative features are intended to preserve, after projection from high to low dimension, the relationships of the corresponding high-dimensional manifold. The constraints that preserve local manifold structure are optimized so that mapped features from the same class are pulled together and those from different classes are pushed apart. Using adversarial examples, the improved loss function with an additional regularization term is intended to boost the robustness and generalization of the neural network. Experimental results indicate that the approach based on discriminative manifold-learning features is not only valid but also more efficient for image classification tasks. Furthermore, the proposed approach achieves competitive classification performance on three benchmark datasets: MNIST, CIFAR-10, and SVHN.
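
The adversarial-example regularization mentioned above can be illustrated with a single-step FGSM attack added to the training loss. This is a minimal sketch assuming PyTorch; the step size eps and the weighting lam are illustrative, and the paper's manifold-preserving discriminative term is not reproduced here.

    # Sketch: FGSM adversarial examples used as a regularization term (illustrative settings).
    import torch.nn.functional as F

    def fgsm(model, x, y, eps=0.03):
        """Single-step FGSM: perturb x in the direction that increases the loss."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        x_adv = (x_adv + eps * x_adv.grad.sign()).detach()
        model.zero_grad()          # discard gradients accumulated while crafting x_adv
        return x_adv

    def regularized_loss(model, x, y, lam=0.5):
        """Clean cross-entropy plus an adversarial regularization term."""
        clean = F.cross_entropy(model(x), y)
        adv = F.cross_entropy(model(fgsm(model, x, y)), y)
        return clean + lam * adv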

Management Software Development of Hyperspectral Image Data for Deep Learning Training

  • 이다빈;김홍락;박진호;황선정;신정섭
    • 한국인터넷방송통신학회논문지 / Vol. 21, No. 6 / pp. 111-116 / 2021
  • A hyperspectral image is data in which the infrared portion of the electromagnetic spectrum is divided into hundreds of wavelength bands and imaged; it is used to detect or classify objects in a variety of fields. Recently, classification methods using deep learning have drawn attention, but due to the characteristics of hyperspectral data, processing techniques different from those for conventional visible-light images are required before hyperspectral images can be used as training data. To this end, software was developed that selects an image at a specific wavelength from the hyperspectral cube, supports ground-truth annotation on it, and manages the data together with environmental information. This paper describes the structure and functions of the software.
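
The band-selection step described above (choosing one wavelength from the hyperspectral cube for ground-truth annotation) can be sketched as follows. The snippet assumes NumPy, a cube laid out as (height, width, bands), and a made-up wavelength axis; none of these reflect the actual software's internals.

    # Sketch: pull one wavelength band out of a hyperspectral cube for annotation (illustrative layout).
    import numpy as np

    def band_image(cube, wavelengths, target_nm):
        """Return the single-band image whose wavelength is closest to target_nm."""
        idx = int(np.argmin(np.abs(np.asarray(wavelengths) - target_nm)))
        band = cube[:, :, idx].astype(np.float32)
        # Normalize to 0-255 so the band can be saved/annotated like an ordinary image.
        band = 255.0 * (band - band.min()) / (band.max() - band.min() + 1e-8)
        return band.astype(np.uint8), idx

    cube = np.random.rand(128, 128, 200)          # dummy cube with 200 spectral bands
    wavelengths = np.linspace(900, 1700, 200)     # e.g. a SWIR range, in nanometres
    img, idx = band_image(cube, wavelengths, 1550)
    print(img.shape, idx)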

Evaluation of Deep Learning Model for Scoliosis Pre-Screening Using Preprocessed Chest X-ray Images

  • Min Gu Jang;Jin Woong Yi;Hyun Ju Lee;Ki Sik Tae
    • 대한의용생체공학회:의공학회지 / Vol. 44, No. 4 / pp. 293-301 / 2023
  • Scoliosis is a three-dimensional deformation of the spine, induced by physical or disease-related causes, in which the spine is rotated abnormally. Early detection strongly influences the possibility of nonsurgical treatment. The aim of this study was to train a deep learning model with preprocessed images and to evaluate the results with and without data augmentation, so that scoliosis can be diagnosed from a chest X-ray image alone. Preprocessed images, in which only the spine, rib contours, and some hard tissues were retained from the original chest image, were used for training along with the original images, and three CNN (convolutional neural network) models (VGG16, ResNet152, and EfficientNet) were selected for training. Training with the preprocessed images yielded higher accuracy than training with the original images. When scoliosis images were added through data augmentation, the accuracy improved further, ultimately reaching a classification accuracy of 93.56% on test data with the ResNet152 model. With further research, the method proposed herein is expected to allow early diagnosis of scoliosis as well as cost reduction by reducing the burden of additional radiographic imaging for disease detection.
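
A minimal version of the training setup described above (augmented chest X-rays fed to a ResNet152 classifier) might look like the sketch below. It assumes PyTorch/torchvision (>= 0.13); the transforms, image size, and two-class head are illustrative and not the study's exact configuration.

    # Sketch: augmentation pipeline and ResNet152 head for chest X-ray classification (illustrative).
    import torch.nn as nn
    from torchvision import models, transforms

    train_tf = transforms.Compose([
        transforms.Grayscale(num_output_channels=3),   # X-rays are single-channel; ResNet expects 3
        transforms.Resize((224, 224)),
        transforms.RandomRotation(10),                  # mild augmentation
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
    ])

    # ResNet152 with a two-class head (scoliosis vs. normal); ImageNet weights as a starting point.
    model = models.resnet152(weights=models.ResNet152_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)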

Accuracy Improvement of Pig Detection Using Image Processing and Deep Learning Techniques on an Embedded Board

  • 유승현;손승욱;안한세;이세준;백화평;정용화;박대희
    • 한국멀티미디어학회논문지 / Vol. 25, No. 4 / pp. 583-599 / 2022
  • Although object detection accuracy on single images has improved significantly with advances in deep learning, detection accuracy for pig monitoring is challenged by occlusion caused by the complex structure of a pig room, such as feeding facilities. These difficulties with single images can be mitigated by using video data. In this research, we propose a pig detection method for a video monitoring environment with a static camera. By using both image processing and deep learning techniques, the complex structure of the pig room can be recognized, and this information can then be used to improve the detection accuracy of pigs in the monitored room. Furthermore, we reduce the execution time overhead by applying a pruning technique for real-time video monitoring on an embedded board. Based on experimental results with a video dataset obtained from a commercial pig farm, we confirmed that pigs could be detected more accurately in real time, even on an embedded board.
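
The pruning step mentioned above, used to cut execution time on the embedded board, can be illustrated with PyTorch's built-in magnitude pruning. The backbone below (MobileNetV2) and the 30% sparsity are stand-ins; the paper prunes its own pig-detection model.

    # Sketch: L1 magnitude pruning of convolutional weights for a lighter model (illustrative backbone).
    import torch.nn as nn
    import torch.nn.utils.prune as prune
    from torchvision import models

    model = models.mobilenet_v2(weights=None)             # stand-in for the detection backbone
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=0.3)  # zero the 30% smallest weights
            prune.remove(module, "weight")                            # make the sparsity permanent

    zeros = sum((p == 0).sum().item() for p in model.parameters())
    total = sum(p.numel() for p in model.parameters())
    print(f"overall weight sparsity: {zeros / total:.2%}")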

A Study on Deep Learning-Based Attribute Classification of Agricultural Land: Comparison of Accuracy between TIF and ECW Images

  • 김지영;위성승
    • 한국농공학회논문집 / Vol. 65, No. 6 / pp. 15-22 / 2023
  • In this study, we conduct a comparative study of deep learning-based classification of agricultural field attributes using Tagged Image File (TIF) and Enhanced Compression Wavelet (ECW) images. The goal is to interpret and classify the attributes of agricultural fields by analyzing the differences between these two image formats. "FarmMap," initiated by the Ministry of Agriculture, Food and Rural Affairs in 2014, is the first digital map of agricultural land in South Korea. It comprises attributes such as paddy, field, orchard, agricultural facility, and ginseng cultivation areas. For the comparison of deep learning-based agricultural attribute classification, we consider the location and class information of objects as well as the attribute information of FarmMap. We use the ResNet-50 instance segmentation model, which is suitable for this task, to conduct simulated experiments. The classification of agricultural attributes from the two image formats is compared in terms of accuracy. The experimental results indicate an accuracy of 90.44% for TIF images and 91.72% for ECW images, so the ECW model is approximately 1.28 percentage points more accurate. However, statistical validation with Wilcoxon rank-sum tests did not reveal a significant difference in accuracy between the two image formats.
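
The statistical check reported above can be reproduced in outline with SciPy's Wilcoxon rank-sum test. The score lists below are made-up placeholders, not the study's measurements.

    # Sketch: Wilcoxon rank-sum test comparing per-run accuracies of two image formats (placeholder data).
    from scipy.stats import ranksums

    tif_scores = [0.89, 0.91, 0.90, 0.92, 0.88]   # placeholder per-run accuracies (TIF)
    ecw_scores = [0.92, 0.91, 0.93, 0.90, 0.92]   # placeholder per-run accuracies (ECW)

    stat, p_value = ranksums(tif_scores, ecw_scores)
    print(f"statistic={stat:.3f}, p={p_value:.3f}")  # p >= 0.05 -> no significant difference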

Facial Expression Recognition through Self-supervised Learning for Predicting Face Image Sequence

  • Yoon, Yeo-Chan;Kim, Soo Kyun
    • 한국컴퓨터정보학회논문지 / Vol. 27, No. 9 / pp. 41-47 / 2022
  • This paper proposes a new and simple self-supervised learning method for automatic facial expression recognition that predicts the middle image of a face image sequence. Automatic expression recognition can achieve high performance with deep learning models, but it generally requires large datasets that are costly and time-consuming to build, and performance scales with dataset size. The proposed method uses existing datasets, without constructing additional ones, to learn a latent deep representation of the face through self-supervised learning, and then transfers the learned parameters to improve expression-recognition performance. The method showed substantial performance gains on two datasets, CK+ and AFEW 8.0, demonstrating that a simple approach can yield a large benefit.
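
The pretext task described above (predicting the middle image of a face sequence from its neighbours) can be sketched with a tiny encoder-decoder. This assumes PyTorch; the architecture, frame count, and image size are illustrative, and in practice the trained encoder weights would be transferred to the expression-recognition model.

    # Sketch: self-supervised middle-frame prediction as a pretext task (illustrative architecture).
    import torch
    import torch.nn as nn

    class MiddleFramePredictor(nn.Module):
        """Predict the middle frame of a short face sequence from its surrounding frames."""
        def __init__(self, context=4):
            super().__init__()
            self.encoder = nn.Sequential(                     # representation to be transferred later
                nn.Conv2d(3 * context, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            )
            self.decoder = nn.Conv2d(64, 3, 3, padding=1)      # reconstruct the RGB middle frame

        def forward(self, frames):
            # frames: (batch, context, 3, H, W) -> stack the context frames along channels
            b, c, ch, h, w = frames.shape
            x = frames.reshape(b, c * ch, h, w)
            return self.decoder(self.encoder(x))

    frames = torch.randn(2, 4, 3, 64, 64)                      # 4 neighbouring frames per sample
    middle = MiddleFramePredictor()(frames)
    print(middle.shape)                                        # torch.Size([2, 3, 64, 64])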

Digital Twin and Visual Object Tracking Using Deep Reinforcement Learning

  • 박진혁;;최필주;이석환;권기룡
    • 한국멀티미디어학회논문지 / Vol. 25, No. 2 / pp. 145-156 / 2022
  • Object tracking models deployed in hardware applications increasingly need to handle complex and unpredictable tracking environments with versatile algorithms. In this paper, we build a virtual city environment using AirSim (Aerial Informatics and Robotics Simulation, CityEnvironment) and apply a DQN (deep Q-network) model from deep reinforcement learning within this environment. The proposed object tracking DQN observes the environment by taking continuous images produced by the virtual environment simulation system as input and controls the operation of a virtual drone. The deep reinforcement learning model is pre-trained on various existing continuous image sets. Since these existing image sets consist of real environments and objects, the virtual environment and the moving objects within it are implemented in 3D for tracking.
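
A DQN of the kind described above maps stacked observation frames to one Q-value per discrete control action. The sketch below assumes PyTorch; the action count, the 84x84 frame size, and the layer sizes follow the classic Atari DQN layout and are not tied to the paper's AirSim setup.

    # Sketch: DQN taking stacked image observations and producing per-action Q-values (illustrative sizes).
    import torch
    import torch.nn as nn

    class DQN(nn.Module):
        def __init__(self, in_frames=4, num_actions=7):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(in_frames, 32, 8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
                nn.Flatten(),
                nn.Linear(64 * 7 * 7, 512), nn.ReLU(),
                nn.Linear(512, num_actions),               # one Q-value per discrete action
            )

        def forward(self, x):
            return self.net(x)

    obs = torch.zeros(1, 4, 84, 84)                        # stacked grayscale observation frames
    q_values = DQN()(obs)
    action = q_values.argmax(dim=1)                        # greedy action selection
    print(q_values.shape, action)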