• Title/Summary/Keyword: Image Deep Learning

Object detection technology trend and development direction using deep learning

  • Kwak, NaeJoung;Kim, DongJu
    • International Journal of Advanced Culture Technology
    • /
    • v.8 no.4
    • /
    • pp.119-128
    • /
    • 2020
  • Object detection is an important field of computer vision and is applied in areas such as security, autonomous driving, and face recognition. Recently, as artificial intelligence technology, including deep learning, has been applied in various fields, it has become a powerful tool that can learn meaningful, higher-level features and solve difficult problems that previously could not be solved. Deep learning techniques are therefore also being studied in the field of object detection, and algorithms with excellent performance have been introduced. In this paper, deep learning-based object detection algorithms used to detect multiple objects in an image are surveyed, and future development directions are presented.
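The survey above concerns detectors that localize multiple objects in a single image. As a purely illustrative sketch (not taken from the paper), the following snippet runs a pretrained Faster R-CNN from torchvision on one image and keeps detections above an assumed 0.5 confidence threshold; the model choice, file name, and threshold are all assumptions.

    import torch
    import torchvision
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    # A pretrained two-stage detector, one of the families such surveys cover.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = Image.open("street.jpg").convert("RGB")  # hypothetical input image
    with torch.no_grad():
        prediction = model([to_tensor(image)])[0]    # boxes, labels, scores

    keep = prediction["scores"] > 0.5                # assumed score threshold
    for box, label, score in zip(prediction["boxes"][keep],
                                 prediction["labels"][keep],
                                 prediction["scores"][keep]):
        print(int(label), float(score), box.tolist())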

A Review of 3D Object Tracking Methods Using Deep Learning (딥러닝 기술을 이용한 3차원 객체 추적 기술 리뷰)

  • Park, Hanhoon
    • Journal of the Institute of Convergence Signal Processing
    • /
    • v.22 no.1
    • /
    • pp.30-37
    • /
    • 2021
  • Accurate 3D object tracking with camera images is a key enabling technology for augmented reality applications. Motivated by the impressive success of convolutional neural networks (CNNs) in computer vision tasks such as image classification, object detection, and image segmentation, recent studies on 3D object tracking have focused on leveraging deep learning. In this paper, we review deep learning approaches for 3D object tracking. We describe key methods in this field and discuss potential future research directions.

Synthetic Image Generation for Military Vehicle Detection (군용물체탐지 연구를 위한 가상 이미지 데이터 생성)

  • Se-Yoon Oh;Hunmin Yang
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.26 no.5
    • /
    • pp.392-399
    • /
    • 2023
  • This research paper investigates the effectiveness of using computer graphics(CG) based synthetic data for deep learning in military vehicle detection. In particular, we explore the use of synthetic image generation techniques to train deep neural networks for object detection tasks. Our approach involves the generation of a large dataset of synthetic images of military vehicles, which is then used to train a deep learning model. The resulting model is then evaluated on real-world images to measure its effectiveness. Our experimental results show that synthetic training data alone can achieve effective results in object detection. Our findings demonstrate the potential of CG-based synthetic data for deep learning and suggest its value as a tool for training models in a variety of applications, including military vehicle detection.
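A minimal sketch of the synthetic-to-real training idea described above, assuming PyTorch/torchvision, a hypothetical dataset wrapper around CG-rendered images, and illustrative hyperparameters; the paper's actual rendering pipeline, detector, and training setup are not specified in the abstract.

    import torch
    import torchvision
    from torch.utils.data import DataLoader, Dataset

    class SyntheticVehicleDataset(Dataset):
        """Hypothetical wrapper around CG-rendered images with box annotations."""
        def __init__(self, samples):
            self.samples = samples  # list of (image_tensor, boxes, labels) tuples

        def __len__(self):
            return len(self.samples)

        def __getitem__(self, idx):
            image, boxes, labels = self.samples[idx]
            return image, {"boxes": boxes, "labels": labels}

    def collate(batch):
        return tuple(zip(*batch))

    # Assumed setup: fine-tune a standard detector purely on synthetic data,
    # then evaluate on real photographs (evaluation loop omitted). For real
    # vehicle classes the box-predictor head would normally be replaced.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

    # One dummy CG-rendered sample just to make the sketch runnable end to end.
    dummy = (torch.rand(3, 480, 640),
             torch.tensor([[100.0, 120.0, 300.0, 260.0]]),  # one bounding box
             torch.tensor([1]))                             # one class label
    synthetic = SyntheticVehicleDataset(samples=[dummy])
    loader = DataLoader(synthetic, batch_size=2, shuffle=True, collate_fn=collate)

    model.train()
    for images, targets in loader:
        losses = model(list(images), list(targets))  # dict of detection losses
        loss = sum(losses.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()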

Deep Learning-Based Low-Light Imaging Considering Image Signal Processing

  • Kwon, Minsu
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.2
    • /
    • pp.19-25
    • /
    • 2023
  • In this paper, we propose a deep learning-based method for improving raw images captured in low-light conditions that takes the image signal processing into account. In a smartphone camera, compared to a DSLR camera, the size of the lens or sensor is limited, so noise increases and image quality degrades in low-light conditions. Existing deep learning-based low-light image processing methods create unnatural images in some cases because they do not consider the lens shading effect and white balance, which are major factors in the image signal processing pipeline. In this paper, pixel distances from the image center and channel average values are used so that the deep learning model can account for the lens shading effect and white balance. Experiments with low-light images taken with a smartphone demonstrate that the proposed method achieves a higher peak signal-to-noise ratio and structural similarity index measure than the existing method, producing high-quality low-light images.
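A minimal sketch of the two cues mentioned above, assuming they are appended to the raw input as extra channels before being fed to the network; the abstract does not specify the exact formulation, so the normalization and tensor layout here are assumptions.

    import torch

    def add_isp_priors(raw):
        """Append lens-shading and white-balance cues to a raw image tensor.

        raw: (C, H, W) float tensor. How the paper's network consumes these
        cues is not given in the abstract; this layout is an assumption.
        """
        c, h, w = raw.shape
        ys = torch.linspace(-1.0, 1.0, h).view(h, 1).expand(h, w)
        xs = torch.linspace(-1.0, 1.0, w).view(1, w).expand(h, w)
        # Normalized distance of each pixel from the image center
        # (a proxy for lens-shading falloff).
        dist = torch.sqrt(xs ** 2 + ys ** 2) / (2.0 ** 0.5)
        # Per-channel mean broadcast over the image (a proxy for white balance).
        channel_means = raw.mean(dim=(1, 2), keepdim=True).expand(c, h, w)
        return torch.cat([raw, dist.unsqueeze(0), channel_means], dim=0)

    x = torch.rand(4, 256, 256)        # e.g. a 4-channel packed Bayer raw patch
    print(add_isp_priors(x).shape)     # torch.Size([9, 256, 256])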

A Study on the Liver and Tumor Segmentation and Hologram Visualization of CT Images Using Deep Learning (딥러닝을 이용한 CT 영상의 간과 종양 분할과 홀로그램 시각화 기법 연구)

  • Kim, Dae Jin;Kim, Young Jae;Jeon, Youngbae;Hwang, Tae-sik;Choi, Seok Won;Baek, Jeong-Heum;Kim, Kwang Gi
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.5
    • /
    • pp.757-768
    • /
    • 2022
  • In this paper, we proposed a system that visualizes CT image segmentation results in 3D on a hologram device using artificial intelligence deep learning. The input axial CT medical image is converted into sagittal and coronal views, and the input image and the converted images are segmented into 3D volumes using ResUNet, a deep learning model. In addition, a volume is created by segmenting the tumor region within the segmented liver image. Each result is integrated into one 3D volume, displayed in a medical image viewer, and converted into a video. When the converted video is transmitted to the hologram device and output from the device, a 3D image with a sense of space can be viewed. Regarding the performance of the deep learning model, on axial slices, the basic input, the DSC was 95.0% for liver region segmentation and 67.5% for liver tumor region segmentation. If the system is applied to a real-world care environment, no additional physical contact is required, so changes before and after surgery can be explained to patients more easily and safely. In addition, it will provide medical staff with the information on the liver and liver tumors needed for treatment or surgery in three dimensions, and help manage patients after surgery by comparing and observing the liver before and after liver resection.
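For reference, the Dice similarity coefficient (DSC) reported above can be computed from binary masks as in the following generic NumPy sketch; this is not the authors' evaluation code, and the toy masks are 2D although the paper evaluates 3D volumes.

    import numpy as np

    def dice_coefficient(pred, truth, eps=1e-7):
        """Dice similarity coefficient (DSC) between two binary masks."""
        pred = np.asarray(pred, dtype=bool)
        truth = np.asarray(truth, dtype=bool)
        intersection = np.logical_and(pred, truth).sum()
        return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

    # Toy 2D example; for volumetric segmentation the masks are simply 3D arrays.
    a = np.zeros((4, 4)); a[1:3, 1:3] = 1
    b = np.zeros((4, 4)); b[1:3, 1:4] = 1
    print(round(float(dice_coefficient(a, b)), 3))  # 0.8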

Deep Learning-Based Computed Tomography Image Standardization to Improve Generalizability of Deep Learning-Based Hepatic Segmentation

  • Seul Bi Lee;Youngtaek Hong;Yeon Jin Cho;Dawun Jeong;Jina Lee;Soon Ho Yoon;Seunghyun Lee;Young Hun Choi;Jung-Eun Cheon
    • Korean Journal of Radiology
    • /
    • v.24 no.4
    • /
    • pp.294-304
    • /
    • 2023
  • Objective: We aimed to investigate whether image standardization using deep learning-based computed tomography (CT) image conversion would improve the performance of deep learning-based automated hepatic segmentation across various reconstruction methods. Materials and Methods: We collected contrast-enhanced dual-energy CT of the abdomen that was obtained using various reconstruction methods, including filtered back projection, iterative reconstruction, optimum contrast, and monoenergetic images with 40, 60, and 80 keV. A deep learning based image conversion algorithm was developed to standardize the CT images using 142 CT examinations (128 for training and 14 for tuning). A separate set of 43 CT examinations from 42 patients (mean age, 10.1 years) was used as the test data. A commercial software program (MEDIP PRO v2.0.0.0, MEDICALIP Co. Ltd.) based on 2D U-NET was used to create liver segmentation masks with liver volume. The original 80 keV images were used as the ground truth. We used the paired t-test to compare the segmentation performance in the Dice similarity coefficient (DSC) and difference ratio of the liver volume relative to the ground truth volume before and after image standardization. The concordance correlation coefficient (CCC) was used to assess the agreement between the segmented liver volume and ground-truth volume. Results: The original CT images showed variable and poor segmentation performances. The standardized images achieved significantly higher DSCs for liver segmentation than the original images (DSC [original, 5.40%-91.27%] vs. [standardized, 93.16%-96.74%], all P < 0.001). The difference ratio of liver volume also decreased significantly after image conversion (original, 9.84%-91.37% vs. standardized, 1.99%-4.41%). In all protocols, CCCs improved after image conversion (original, -0.006-0.964 vs. standardized, 0.990-0.998). Conclusion: Deep learning-based CT image standardization can improve the performance of automated hepatic segmentation using CT images reconstructed using various methods. Deep learning-based CT image conversion may have the potential to improve the generalizability of the segmentation network.
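For reference, the concordance correlation coefficient (CCC) used above can be computed as in the following generic NumPy sketch; the volumes shown are made up for illustration, and this is not the study's analysis code.

    import numpy as np

    def concordance_ccc(x, y):
        """Lin's concordance correlation coefficient between paired measurements."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()            # population variances
        cov = ((x - mx) * (y - my)).mean()
        return 2.0 * cov / (vx + vy + (mx - my) ** 2)

    # Hypothetical segmented vs. ground-truth liver volumes in mL.
    segmented = [1210.0, 980.0, 1540.0, 1105.0]
    reference = [1190.0, 1002.0, 1525.0, 1120.0]
    print(round(concordance_ccc(segmented, reference), 4))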

Development of Location Image Analysis System design using Deep Learning

  • Jang, Jin-Wook
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.1
    • /
    • pp.77-82
    • /
    • 2022
  • This study was conducted to develop an advanced image analysis service system based on deep learning. A CNN (Convolutional Neural Network) is built into the system to learn from data collected from Google and Instagram. The service takes an image of a place in Jeju as input and provides relevant location information based on its learned data. Accuracy improvement plans are applied throughout this study. In conclusion, the implemented system achieves a prediction accuracy of about 79.2%. When the system has more learning data, it is expected to predict various places more accurately.
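A minimal transfer-learning sketch in the spirit of the system described above, assuming a standard ResNet-18 backbone and a hypothetical number of place categories; the paper's actual CNN architecture, dataset size, and training schedule are not detailed in the abstract.

    import torch
    import torch.nn as nn
    import torchvision

    NUM_PLACES = 10  # hypothetical number of Jeju location categories

    # Replace the classifier head of a pretrained backbone with one sized
    # for the place categories (assumed transfer-learning setup).
    model = torchvision.models.resnet18(weights="DEFAULT")
    model.fc = nn.Linear(model.fc.in_features, NUM_PLACES)

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

    def train_step(images, labels):
        """One optimization step on a batch of (N, 3, 224, 224) place images."""
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Dummy batch just to show the expected tensor shapes.
    print(train_step(torch.rand(8, 3, 224, 224),
                     torch.randint(0, NUM_PLACES, (8,))))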

Feature Extraction Using Convolutional Neural Networks for Random Translation (랜덤 변환에 대한 컨볼루션 뉴럴 네트워크를 이용한 특징 추출)

  • Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.23 no.3
    • /
    • pp.515-521
    • /
    • 2020
  • Deep learning methods have been used effectively to provide great improvements in various research fields such as machine learning, image processing, and computer vision. One of the most frequently used deep learning methods in image processing is the convolutional neural network. Compared to traditional artificial neural networks, convolutional neural networks do not use predefined kernels; instead, they learn data-specific kernels. This property also allows them to be used as feature extractors. In this study, we compared the quality of CNN features against traditional texture feature extraction methods. Experimental results demonstrate the superiority of the CNN features. Additionally, the recognition process and results of a pioneering CNN on the MNIST database are presented.
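As a concrete illustration of the point above, the following sketch uses the convolutional layers of a pretrained network as a generic feature extractor; the backbone and input size are assumptions, not the study's exact setup.

    import torch
    import torchvision

    backbone = torchvision.models.resnet18(weights="DEFAULT")
    backbone.fc = torch.nn.Identity()  # drop the classifier, keep 512-d features
    backbone.eval()

    with torch.no_grad():
        batch = torch.rand(4, 3, 224, 224)   # hypothetical image batch
        features = backbone(batch)           # learned features, shape (4, 512)
    print(features.shape)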

Single Image Super Resolution Reconstruction Based on Recursive Residual Convolutional Neural Network

  • Cao, Shuyi;Wee, Seungwoo;Jeong, Jechang
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2019.06a
    • /
    • pp.98-101
    • /
    • 2019
  • At present, deep convolutional neural networks have made very important contributions to single-image super-resolution. Through learning, the networks transform and combine the features of input images to establish a nonlinear mapping from low-resolution images to high-resolution images. Some previous methods are difficult to train and take up a lot of memory. In this paper, we proposed a simple and compact deep recursive residual network that learns features for single-image super-resolution. Global residual learning and local residual learning are used to ease the training of deep neural networks, and the recursive structure controls the number of parameters to save memory. Experimental results show that the proposed method improves image quality compared with previous methods.
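A minimal sketch of a recursive residual network in the sense described above: a single residual block is applied repeatedly with shared weights (local residual learning) and the input is added back at the end (global residual learning). Channel counts, depths, the recursion count, and the assumption of a pre-upsampled (e.g. bicubic) input are illustrative, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class RecursiveResidualSR(nn.Module):
        def __init__(self, channels=64, recursions=5):
            super().__init__()
            self.head = nn.Conv2d(3, channels, 3, padding=1)
            self.block = nn.Sequential(            # weights shared across recursions
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            self.tail = nn.Conv2d(channels, 3, 3, padding=1)
            self.recursions = recursions

        def forward(self, x):
            feat = self.head(x)
            out = feat
            for _ in range(self.recursions):
                out = out + self.block(out)        # local residual learning
            return x + self.tail(out)              # global residual learning

    sr = RecursiveResidualSR()
    print(sr(torch.rand(1, 3, 64, 64)).shape)      # torch.Size([1, 3, 64, 64])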

The Malware Detection Using Deep Learning based R-CNN (딥러닝 기반의 R-CNN을 이용한 악성코드 탐지 기법)

  • Cho, Young-Bok
    • Journal of Digital Contents Society
    • /
    • v.19 no.6
    • /
    • pp.1177-1183
    • /
    • 2018
  • Recent developments in machine learning have attracted a lot of attention for techniques such as machine learning and deep learning that implement artificial intelligence. In this paper, binary malicious code using deep learning based R-CNN is imaged and the feature is extracted from the image to classify the family. In this paper, two steps are used in deep learning to image malicious code using CNN. And classify the characteristics of the family of malicious codes using R-CNN. Generate malicious code as an image, extract features, classify the family, and automatically classify the evolution of malicious code. The detection rate of the proposed method is 93.4% and the accuracy is 98.6%. In addition, the CNN processing speed for image processing of malicious code is 23.3 ms, and the R-CNN processing speed is 4ms to classify one sample.