• Title/Abstract/Keyword: Image Learning

Adversarial Learning-Based Image Correction Methodology for Deep Learning Analysis of Heterogeneous Images (이질적 이미지의 딥러닝 분석을 위한 적대적 학습기반 이미지 보정 방법론)

  • Kim, Junwoo;Kim, Namgyu
    • KIPS Transactions on Software and Data Engineering
    • /
    • Vol. 10, No. 11
    • /
    • pp.457-464
    • /
    • 2021
  • The advent of the big data era has enabled the rapid development of deep learning, which learns rules by itself from data. In particular, CNN algorithms have reached the level of adjusting the source data themselves. However, existing image processing methods deal only with the image data itself and do not sufficiently consider the heterogeneous environments in which images are generated. Images generated in heterogeneous environments may carry the same information, yet their features may be expressed differently depending on the photographing environment. This means that not only does each image carry different environmental information, but the same information is also represented by different features, which may degrade the performance of an image analysis model. Therefore, in this paper, we propose a method to improve the performance of an image color constancy model based on adversarial learning that uses image data generated in heterogeneous environments simultaneously. Specifically, the proposed methodology operates through the interaction of a 'Domain Discriminator', which predicts the environment in which an image was taken, and an 'Illumination Estimator', which predicts the illumination value. In an experiment on 7,022 images taken in heterogeneous environments, the proposed methodology showed superior performance in terms of angular error compared to existing methods.
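
The angular-error metric reported above is the standard evaluation measure in color constancy: the angle between the estimated and the ground-truth illumination vectors. A minimal sketch (the function name and interface are illustrative, not taken from the paper):

```python
import numpy as np

def angular_error(est, gt):
    """Angular error (degrees) between an estimated and a ground-truth
    illumination vector, the standard color-constancy metric."""
    est = np.asarray(est, dtype=float)
    gt = np.asarray(gt, dtype=float)
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    # Clip to guard against floating-point overshoot outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Identical illumination directions give 0 degrees; orthogonal ones give 90.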

Collaborative Similarity Metric Learning for Semantic Image Annotation and Retrieval

  • Wang, Bin;Liu, Yuncai
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 7, No. 5
    • /
    • pp.1252-1271
    • /
    • 2013
  • Automatic image annotation has become an increasingly important research topic owing to its key role in image retrieval. At the same time, it is highly challenging when facing large-scale datasets with large variance. Practical approaches generally rely on similarity measures defined over images and multi-label prediction methods. More specifically, those approaches usually 1) leverage similarity measures that are predefined or learned by optimizing for ranking or annotation, which might not be adaptive enough to the dataset; and 2) predict labels separately without taking the correlation of labels into account. In this paper, we propose a method for image annotation through collaborative similarity metric learning from the dataset and modeling of its label correlations. The similarity metric is learned by simultaneously optimizing 1) image ranking using a structural SVM (SSVM), and 2) image annotation using correlated label propagation, both with respect to the similarity metric. The learned similarity metric, fully exploiting the available information in the dataset, improves the two collaborative components, ranking and annotation, and consequently the retrieval system itself. We evaluated the proposed method on the Corel5k, Corel30k and EspGame databases. The results for annotation and retrieval show the competitive performance of the proposed method.

Car detection area segmentation using deep learning system

  • Dong-Jin Kwon;Sang-hoon Lee
    • International journal of advanced smart convergence
    • /
    • Vol. 12, No. 4
    • /
    • pp.182-189
    • /
    • 2023
  • Recently, object detection and segmentation have emerged as crucial technologies widely utilized in fields such as autonomous driving, surveillance, and image editing. This paper proposes a program that uses the Qt framework to perform real-time object detection and precise instance segmentation by integrating YOLO (You Only Look Once) and Mask R-CNN. The system provides users with a versatile image editing environment, offering features such as selecting specific modes, drawing masks, inspecting detailed image information, and applying various image processing techniques, including those based on deep learning. The program leverages the efficiency of YOLO to enable fast and accurate object detection, providing bounding-box information. Additionally, it performs precise segmentation using Mask R-CNN, allowing users to accurately distinguish and edit objects within images. The Qt interface ensures an intuitive and user-friendly environment for controlling the program and enhances accessibility. Through experiments and evaluations, the proposed system has been demonstrated to be effective in various scenarios. The program offers convenient yet powerful image processing and editing capabilities to both beginners and experts, smoothly integrating computer vision technology. This paper contributes to the growth of the computer vision application field and shows the potential of integrating various image processing algorithms on a user-friendly platform.

Strawberry Pests and Diseases Detection Technique Optimized for Symptoms Using Deep Learning Algorithm (딥러닝을 이용한 병징에 최적화된 딸기 병충해 검출 기법)

  • Choi, Young-Woo;Kim, Na-eun;Paudel, Bhola;Kim, Hyeon-tae
    • Journal of Bio-Environment Control
    • /
    • Vol. 31, No. 3
    • /
    • pp.255-260
    • /
    • 2022
  • This study aimed to develop a service model that uses a deep learning algorithm to detect diseases and pests in strawberries from image data. In addition, the pest detection performance of deep learning models was further improved by proposing segmented image datasets specialized for disease and pest symptoms. The CNN-based YOLO deep learning model was selected to overcome the slow training and inference speed of existing R-CNN-based models. A general image dataset and the proposed segmented image dataset were prepared to train the pest and disease detection model. When the deep learning model was trained with the general dataset, the pest detection rate was 81.35% and the pest detection reliability was 73.35%. On the other hand, when the deep learning model was trained with the segmented image dataset, the pest detection rate increased to 91.93% and detection reliability increased to 83.41%. This study concludes that the performance of a deep learning model can be improved by using a segmented image dataset instead of a general image dataset.

Research on Local and Global Infrared Image Pre-Processing Methods for Deep Learning Based Guided Weapon Target Detection

  • Jae-Yong Baek;Dae-Hyeon Park;Hyuk-Jin Shin;Yong-Sang Yoo;Deok-Woong Kim;Du-Hwan Hur;SeungHwan Bae;Jun-Ho Cheon;Seung-Hwan Bae
    • Journal of the Korea Society of Computer and Information
    • /
    • Vol. 29, No. 7
    • /
    • pp.41-51
    • /
    • 2024
  • In this paper, we explore how to enhance target detection accuracy in guided weapons using deep learning object detection on infrared (IR) images. Because IR images are influenced by factors such as time and temperature, it is crucial to ensure a consistent representation of object features across environments when training the model. A simple way to address this is to emphasize the features of target objects and reduce noise within the infrared images through appropriate pre-processing. However, previous studies have not sufficiently discussed pre-processing methods for training deep learning models on infrared images. In this paper, we investigate the impact of image pre-processing techniques on infrared-image-based training for object detection. To achieve this, we analyze pre-processing results on infrared images that utilize global or local information from the video and the image. In addition, to confirm how images converted by each pre-processing technique affect detector training, we train the YOLOX detector on images processed by the various pre-processing methods and analyze the results. In particular, the experiments using CLAHE (Contrast Limited Adaptive Histogram Equalization) show the highest detection accuracy, with a mean average precision (mAP) of 81.9%.
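
For context, the "global" end of the pre-processing spectrum discussed above can be sketched as plain histogram equalization; CLAHE additionally clips the histogram and equalizes per local tile, which this simplified sketch does not do:

```python
import numpy as np

def global_hist_equalize(img):
    """Global histogram equalization for an 8-bit grayscale image:
    remap intensities so the cumulative distribution becomes linear.
    Assumes the image is not constant-valued."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                       # first non-zero CDF value
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    return lut.astype(np.uint8)[img]                # apply the lookup table
```

A local variant would apply this per tile and interpolate between tile mappings, which is the adaptive part of CLAHE.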

A Study on the Evaluation of Optimal Program Applicability for Face Recognition Using Machine Learning (기계학습을 이용한 얼굴 인식을 위한 최적 프로그램 적용성 평가에 대한 연구)

  • Kim, Min-Ho;Jo, Ki-Yong;You, Hee-Won;Lee, Jung-Yeal;Baek, Un-Bae
    • Korean Journal of Artificial Intelligence
    • /
    • Vol. 5, No. 1
    • /
    • pp.10-17
    • /
    • 2017
  • This study is a first attempt to improve face recognition through machine learning algorithms and apply it to CRM information gathering, analysis, and application. In other words, through face recognition of VIP customers in the distribution field, prompter and more finely customized services can be provided. Interest in machine learning, which is used to implement artificial intelligence, has increased, and the field has moved beyond hand-modeled object recognition processes toward automation through machine learning. Among these approaches, deep learning is regarded as an advanced technology that shows remarkable performance in various fields and is applied to many areas of image recognition. Face recognition, which is widely used in real life, has been developed to recognize criminals' faces and help catch criminals. In this study, two image analysis models likely to be used for criminal face recognition, TF-SLIM and Inception-V3, were selected, analyzed, and implemented. As an evaluation criterion, the image recognition models were evaluated against the accuracy of face recognition programs that are already commercialized. In this experiment, recognition accuracy was judged good when image classification accuracy exceeded 90%. Ways to further improve face recognition are left as subjects for future research.

Research on Deep Learning Performance Improvement for Similar Image Classification (유사 이미지 분류를 위한 딥 러닝 성능 향상 기법 연구)

  • Lim, Dong-Jin;Kim, Taehong
    • The Journal of the Korea Contents Association
    • /
    • Vol. 21, No. 8
    • /
    • pp.1-9
    • /
    • 2021
  • Deep learning in computer vision has improved at an accelerated pace over a short period, but large-scale training data and computing power are still essential, and time-consuming trial-and-error is required to derive an optimal network model. In this study, we propose a method for improving similar-image classification performance based on the CR (Confusion Rate), which considers only the characteristics of the data itself, independent of network optimization or data augmentation. The proposed method improves the performance of a deep learning model by calculating the CRs for images in a dataset with similar characteristics and reflecting them in the weights of the loss function. The CR-based recognition method is also advantageous for identifying highly similar images because it enables recognition that takes the similarity between classes into account. Applied to a ResNet-18 model, the proposed method showed a performance improvement of 0.22% on HanDB and 3.38% on Animal-10N. The proposed method is expected to serve as a basis for artificial intelligence research using the noisy labeled data that accompanies large-scale training data.
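
As a rough illustration of the idea, a per-class confusion rate can be computed from a confusion matrix and folded into a class-weighted cross-entropy. This is a generic sketch under assumed definitions, not the paper's exact CR formulation:

```python
import numpy as np

def confusion_rates(conf_matrix):
    """Fraction of each class's samples predicted as some other class
    (one simple way to define a per-class confusion rate)."""
    cm = np.asarray(conf_matrix, dtype=float)
    return 1.0 - np.diag(cm) / cm.sum(axis=1)

def cr_weighted_cross_entropy(probs, labels, cr, base=1.0):
    """Cross-entropy where each sample's loss is scaled by
    (base + confusion rate of its true class), so frequently
    confused classes contribute more to the loss."""
    probs = np.asarray(probs, dtype=float)
    weights = base + cr[labels]
    nll = -np.log(probs[np.arange(len(labels)), labels])
    return float(np.mean(weights * nll))
```

The `base` offset keeps never-confused classes from vanishing out of the loss entirely; the exact weighting scheme is an assumption of this sketch.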

Transfer Learning-based Generated Synthetic Images Identification Model (전이 학습 기반의 생성 이미지 판별 모델 설계)

  • Chaewon Kim;Sungyeon Yoon;Myeongeun Han;Minseo Park
    • The Journal of the Convergence on Culture Technology
    • /
    • Vol. 10, No. 2
    • /
    • pp.465-470
    • /
    • 2024
  • The advancement of AI-based image generation technology has resulted in the creation of a wide variety of images, emphasizing the need for technology capable of accurately discerning them. The amount of generated-image data is limited, and to achieve high performance with a limited dataset, this study proposes a model for discriminating generated images using transfer learning. We apply models pre-trained on the ImageNet dataset directly to the CIFAKE input dataset to reduce training time, then add three hidden layers and one output layer to fine-tune the model. The modeling results revealed an improvement in performance when the final layers were adjusted. By using transfer learning and then adjusting the layers close to the output layer, accuracy problems associated with small image datasets can be reduced and generated images can be classified.
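
The freeze-the-base, train-the-head pattern described above can be illustrated with a toy linear model in NumPy. Everything here (layer sizes, learning rate) is invented for illustration; the sketch only demonstrates that fine-tuning updates the new head while the pre-trained weights stay fixed:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" base: a fixed feature extractor whose weights stay frozen.
W_base = rng.normal(size=(4, 3))

# Newly added head: the only part updated during fine-tuning.
W_head = np.zeros((3, 1))

def forward(x):
    return (x @ W_base) @ W_head              # frozen features -> head output

def fine_tune_step(x, y, lr=0.05):
    """One gradient step on the head only; W_base is never touched."""
    global W_head
    feats = x @ W_base                        # features from the frozen base
    err = feats @ W_head - y                  # prediction error
    W_head -= lr * feats.T @ err / len(x)     # update the head weights only
```

In a real framework this corresponds to marking the base layers non-trainable and passing only the head's parameters to the optimizer.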

Learning Free Energy Kernel for Image Retrieval

  • Wang, Cungang;Wang, Bin;Zheng, Liping
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 8, No. 8
    • /
    • pp.2895-2912
    • /
    • 2014
  • Content-based image retrieval has been the most important technique for managing huge numbers of images. The fundamental yet highly challenging problem in this field is how to measure content-level similarity based on low-level image features. The primary difficulty lies in the great variance within images, e.g. background, illumination, viewpoint and pose. Intuitively, an ideal similarity measure should be able to adapt to the data distribution, discover and highlight content-level information, and be robust to those variances. Motivated by these observations, in this paper we propose a probabilistic similarity learning approach. We first model the distribution of low-level image features and derive the free energy kernel (FEK), i.e., a similarity measure, from the distribution. Then, we propose a learning approach for the derived kernel under the criterion that the kernel outputs high similarity for images sharing the same class label and low similarity for those that do not. The advantages of the proposed approach over previous approaches are threefold. (1) With the ability inherited from probabilistic models, the similarity measure can adapt well to the data distribution. (2) Benefiting from the content-level hidden variables within the probabilistic models, the similarity measure is able to capture content-level cues. (3) It fully exploits class labels in the supervised learning procedure. The proposed approach is extensively evaluated on two well-known databases. It achieves highly competitive performance on most experiments, which validates its advantages.

Defect Classification of Cross-section of Additive Manufacturing Using Image-Labeling (이미지 라벨링을 이용한 적층제조 단면의 결함 분류)

  • Lee, Jeong-Seong;Choi, Byung-Joo;Lee, Moon-Gu;Kim, Jung-Sub;Lee, Sang-Won;Jeon, Yong-Ho
    • Journal of the Korean Society of Manufacturing Process Engineers
    • /
    • Vol. 19, No. 7
    • /
    • pp.7-15
    • /
    • 2020
  • Recently, the fourth industrial revolution has been presented as a new paradigm, and additive manufacturing (AM) has become one of its most important topics. For this reason, process monitoring of each cross-sectional layer in additive metal manufacturing is important. In particular, deep learning can train a machine to analyze, optimize, and repair defects. In this paper, image classification is performed by learning images of defects in metal cross sections using a convolutional neural network (CNN) image-labeling algorithm. Defects were classified into three categories: crack, porosity, and hole. To overcome a lack of data, the amount of training data was augmented using a data augmentation algorithm that transforms one image into 180 images, increasing learning accuracy. The numbers of training and validation images were 25,920 (80%) and 6,480 (20%), respectively. An optimized case with a combination of fully connected layers, an optimizer, and a loss function showed a model accuracy of 99.7% and a success rate of 97.8% on 180 test images. In conclusion, image labeling was successfully performed, and it is expected to be applied to automated AM process inspection and repair systems in the future.
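
A simplified sketch of the kind of geometric augmentation described above, using only the eight right-angle rotation/flip variants (the paper's algorithm produces 180 variants per image; the 8-variant dihedral set here is just a stand-in):

```python
import numpy as np

def augment_right_angle(img):
    """Generate the 8 distinct right-angle rotation/flip variants of a
    square image (the dihedral group D4) -- a simplified stand-in for a
    many-variant geometric augmentation pipeline."""
    variants = []
    for k in range(4):                   # 0, 90, 180, 270 degree rotations
        rot = np.rot90(img, k)
        variants.append(rot)
        variants.append(np.fliplr(rot))  # mirrored copy of each rotation
    return variants
```

A 180-variant scheme would additionally use finer rotation angles and other transforms; for label-preserving defect images, each variant keeps the original class label.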