• Title/Summary/Keyword: Image Map


Detection of Zebra-crossing Areas Based on Deep Learning with Combination of SegNet and ResNet (SegNet과 ResNet을 조합한 딥러닝에 기반한 횡단보도 영역 검출)

  • Liang, Han; Seo, Suyoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography, v.39 no.3, pp.141-148, 2021
  • This paper presents a method to detect zebra-crossings using deep learning that combines SegNet and ResNet. For the blind, a safe crossing system needs to know exactly where the zebra-crossings are. Deep-learning-based zebra-crossing detection can be a good solution to this problem, and robotic vision-based assistive technologies that focus on specific scene objects using monocular detectors have sprung up over the past few years. These traditional methods have achieved significant results and enhanced zebra-crossing perception to a large extent, but with relatively long processing times. Moreover, running all detectors jointly incurs long latency and becomes computationally prohibitive on wearable embedded systems. In this paper, we propose a model for fast and stable segmentation of zebra-crossings from captured images. The model improves on a combination of SegNet and ResNet and consists of three steps. First, the input image is subsampled to extract image features, and the convolutional neural network of ResNet is modified to serve as the new encoder. Second, the abstract features are restored to the original image size through SegNet's original up-sampling network. Finally, the method classifies all pixels and calculates the per-pixel accuracy. The experimental results demonstrate the efficiency of the modified semantic segmentation algorithm at a relatively high computing speed.
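
As an illustration of the encoder-decoder idea sketched in this abstract, below is a minimal PyTorch sketch in which a ResNet-18 backbone acts as the encoder and a SegNet-style upsampling path restores the input resolution. The ResNet variant, layer widths, and two-class output are assumptions for illustration, not the authors' exact architecture.

```python
# Hypothetical sketch: ResNet encoder + SegNet-style upsampling decoder for
# zebra-crossing segmentation. Layer sizes and training details are assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class ZebraCrossingSegNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        backbone = resnet18()  # pretrained weights optional; omitted here
        # Encoder: ResNet layers up to the last residual stage (output stride 32).
        self.encoder = nn.Sequential(
            backbone.conv1, backbone.bn1, backbone.relu, backbone.maxpool,
            backbone.layer1, backbone.layer2, backbone.layer3, backbone.layer4,
        )
        # Decoder: five SegNet-style upsampling blocks restore the original size.
        chans = [512, 256, 128, 64, 64, 32]
        blocks = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            blocks += [
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(inplace=True),
            ]
        self.decoder = nn.Sequential(*blocks)
        self.classifier = nn.Conv2d(chans[-1], num_classes, kernel_size=1)

    def forward(self, x):
        return self.classifier(self.decoder(self.encoder(x)))


if __name__ == "__main__":
    model = ZebraCrossingSegNet()
    logits = model(torch.randn(1, 3, 256, 512))  # H and W divisible by 32
    pixel_classes = logits.argmax(dim=1)         # per-pixel zebra-crossing mask
    print(logits.shape, pixel_classes.shape)
```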

Group-based Adaptive Rendering for 6DoF Immersive Video Streaming (6DoF 몰입형 비디오 스트리밍을 위한 그룹 분할 기반 적응적 렌더링 기법)

  • Lee, Soonbin; Jeong, Jong-Beom; Ryu, Eun-Seok
    • Journal of Broadcast Engineering, v.27 no.2, pp.216-227, 2022
  • The MPEG-I (Immersive) group is working on a standardization project for immersive video that provides 6 degrees of freedom (6DoF). The MPEG Immersive Video (MIV) standard is intended to provide limited 6DoF based on the depth image-based rendering (DIBR) technique. Many efficient coding methods have been suggested for MIV, but efficient transmission strategies have received little attention in MPEG-I. This paper proposes a group-based adaptive rendering method for immersive video streaming. Each group can be transmitted independently using group-based encoding, enabling adaptive transmission depending on the user's viewport. In the rendering process, the proposed method derives per-group weights for view synthesis and allocates the high-quality bitstream according to a given viewport. The proposed method is implemented on the Test Model for Immersive Video (TMIV). In experiments with various end-to-end evaluation metrics, the proposed method demonstrates 17.0% Bjontegaard-delta rate (BD-rate) savings on the peak signal-to-noise ratio (PSNR) and 14.6% on the Immersive Video PSNR (IV-PSNR).
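
The viewport-dependent group weighting and quality allocation described above can be pictured with the following minimal Python sketch. The cosine-similarity weighting, the group view directions, and the allocate_quality helper are hypothetical stand-ins for illustration, not the TMIV implementation.

```python
# Minimal sketch (not TMIV): each independently decodable group gets a weight
# from its agreement with the current viewport, and the highest-weighted groups
# receive the high-quality bitstream.
import math
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]


def normalize(v: Vec3) -> Vec3:
    n = math.sqrt(sum(c * c for c in v)) or 1.0
    return (v[0] / n, v[1] / n, v[2] / n)


def group_weights(view_dir: Vec3, group_dirs: Dict[str, Vec3]) -> Dict[str, float]:
    """Weight each group by the cosine similarity between its representative
    view direction and the user's viewport direction (clamped to >= 0)."""
    v = normalize(view_dir)
    return {name: max(0.0, sum(a * b for a, b in zip(v, normalize(d))))
            for name, d in group_dirs.items()}


def allocate_quality(weights: Dict[str, float], high_quality_slots: int) -> Dict[str, str]:
    """Assign the 'high' bitstream to the best-matching groups, 'low' to the rest."""
    ranked = sorted(weights, key=weights.get, reverse=True)
    return {g: ("high" if i < high_quality_slots else "low")
            for i, g in enumerate(ranked)}


if __name__ == "__main__":
    groups = {"group0": (0, 0, 1), "group1": (1, 0, 0),
              "group2": (0, 0, -1), "group3": (-1, 0, 0)}
    w = group_weights(view_dir=(0.2, 0.0, 1.0), group_dirs=groups)
    print(allocate_quality(w, high_quality_slots=2))
```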

Real-time Segmentation of Black Ice Region in Infrared Road Images

  • Li, Yu-Jie; Kang, Sun-Kyoung; Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information, v.27 no.2, pp.33-42, 2022
  • In this paper, we propose a deep learning model based on multi-scale dilated convolution feature fusion for the segmentation of black ice regions in road images, so that black ice warnings can be sent to drivers in real time. In the proposed multi-scale dilated convolution feature fusion network, convolutions with different dilation rates are connected in parallel in the encoder blocks, different dilation rates are used for feature maps of different resolutions, and multi-layer feature information is fused together. The multi-scale dilated convolution feature fusion improves performance by diversifying and expanding the receptive field of the network, preserving detailed spatial information, and enhancing the effectiveness of dilated convolutions. The performance of the proposed network model improved gradually as the number of dilated convolution branches increased. The mIoU of the proposed method is 96.46%, higher than existing networks such as U-Net, FCN, PSPNet, ENet, and LinkNet. The number of parameters is 1,858K, six times smaller than the existing LinkNet model. In experiments on a Jetson Nano, the proposed method ran at 3.63 FPS, which enables real-time segmentation of black ice regions.
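
A minimal PyTorch sketch of the parallel dilated-convolution fusion idea described in this abstract is shown below. The dilation rates (1, 2, 4, 8), channel counts, and 1x1 fusion convolution are assumptions for illustration rather than the paper's exact configuration.

```python
# Illustrative multi-scale dilated-convolution fusion block: parallel 3x3
# convolutions with different dilation rates run on the same feature map and
# their outputs are concatenated and fused.
import torch
import torch.nn as nn


class DilatedFusionBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        ])
        # 1x1 convolution fuses the concatenated multi-scale features.
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(multi_scale)


if __name__ == "__main__":
    block = DilatedFusionBlock(in_ch=64, out_ch=64)
    y = block(torch.randn(1, 64, 120, 160))
    print(y.shape)  # same spatial size; receptive field enlarged by dilation
```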

Entropy-Based 6 Degrees of Freedom Extraction for the W-band Synthetic Aperture Radar Image Reconstruction (W-band Synthetic Aperture Radar 영상 복원을 위한 엔트로피 기반의 6 Degrees of Freedom 추출)

  • Hyokbeen Lee; Duk-jin Kim; Junwoo Kim; Juyoung Song
    • Korean Journal of Remote Sensing, v.39 no.6_1, pp.1245-1254, 2023
  • Significant research has been conducted on the W-band synthetic aperture radar (SAR) system that utilizes a 77 GHz frequency-modulated continuous wave (FMCW) radar. To reconstruct a high-resolution W-band SAR image, it is necessary to transform the point cloud acquired from stereo cameras or LiDAR along 6 degrees of freedom (DOF) and apply it to the SAR signal processing. However, matching images is difficult because images acquired from different sensors have different geometric structures. In this study, we present a method to extract an optimized depth map by obtaining the 6 DOF of the point cloud with a gradient descent method based on the entropy of the SAR image. An experiment was conducted to reconstruct a tree, a major road-environment object, using the constructed W-band SAR system. Compared to SAR images reconstructed from radar coordinates, the SAR image reconstructed with the entropy-based gradient descent method showed a decrease of 53.2828 in mean square error and an increase of 0.5529 in the structural similarity index.
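
The entropy-driven 6 DOF search described above can be illustrated with the following toy NumPy sketch, in which finite-difference gradient descent minimizes the entropy of a reconstructed image over the six pose parameters. Here reconstruct_sar_image is a hypothetical placeholder for the actual W-band SAR focusing step, and the yaw-only rotation is a simplification; none of this is the authors' implementation.

```python
# Conceptual sketch: adjust the 6 DOF (3 translations, 3 rotations) applied to
# the point cloud by gradient descent on the entropy of the reconstructed image.
import numpy as np


def image_entropy(img: np.ndarray, bins: int = 64) -> float:
    """Shannon entropy of the image intensity histogram (lower = better focus)."""
    hist, _ = np.histogram(img, bins=bins, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    return float(-(p * np.log2(p)).sum())


def reconstruct_sar_image(point_cloud: np.ndarray, dof: np.ndarray) -> np.ndarray:
    """Placeholder: apply the 6 DOF transform and rasterize the points.
    A real pipeline would run SAR back-projection / focusing here."""
    tx, ty, tz, rx, ry, rz = dof
    cz, sz = np.cos(rz), np.sin(rz)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])  # yaw only, for brevity
    pts = point_cloud @ Rz.T + np.array([tx, ty, tz])
    img, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=64,
                               range=[[-5, 5], [-5, 5]])
    return img


def entropy_descent(point_cloud, dof0, lr=0.05, eps=1e-2, iters=100):
    """Finite-difference gradient descent on image entropy over the 6 DOF."""
    dof = np.asarray(dof0, dtype=float)
    for _ in range(iters):
        base = image_entropy(reconstruct_sar_image(point_cloud, dof))
        grad = np.zeros(6)
        for i in range(6):
            step = np.zeros(6)
            step[i] = eps
            grad[i] = (image_entropy(reconstruct_sar_image(point_cloud, dof + step))
                       - base) / eps
        dof -= lr * grad
    return dof


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(2000, 3))
    print(entropy_descent(cloud, dof0=np.zeros(6)))
```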

Evaluation of Roadmap Image Quality by Parameter Change in Angiography (혈관조영검사에서 매개변수 변화에 따른 Roadmap 영상의 화질평가)

  • Kong, Chang gi; Song, Jong Nam; Han, Jae Bok
    • Journal of the Korean Society of Radiology, v.14 no.1, pp.53-60, 2020
  • The purpose of this study is to identify factors affecting image quality in roadmap images by varying the dilution ratio of the contrast medium, the collimation field, and the flow rate. For quantitative evaluation of image quality, a 3 mm vessel-model water phantom was fabricated in-house from acrylic, roadmap images were acquired with this phantom, and the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were analyzed. In the study on dilution ratio, the contrast medium to normal saline (CM:N/S) dilution ratio was varied from 100% CM down to 10% CM, and measurements of the roadmap images taken with the vessel-model phantom showed that SNR and CNR gradually decreased as the proportion of N/S increased. It was confirmed that the higher the N/S proportion, the lower the SNR and CNR, and that usable images could still be obtained down to a CM:N/S dilution ratio of about 70%:30%. The study also showed that SNR and CNR in the roadmap images increased as the collimation field was narrowed toward the center of the vessel phantom; the collimation field was reduced in 2 cm steps from 12 cm to 0 cm. To examine the relationship between roadmap images and flow rate, the volume delivered by the autoinjector was kept constant at 15 and the flow rate was increased stepwise through 1, 2, 3, 4, 5, 6, 7, 8, 9, and 10. The SNR and CNR of the images taken with the water phantom gradually decreased as the flow rate increased, but at flow rates 9 and 10 they increased again. The relationship of the ROI mean value and the background mean value to SNR and CNR could not be confirmed, and further study is needed to evaluate the correlation between roadmap images and flow rate. In conclusion, as the proportion of N/S in the contrast medium increased, SNR and CNR decreased; the narrower the collimation field, the higher the image quality due to increased SNR and CNR; however, the relationship between roadmap images and flow rate was not confirmed. An appropriate contrast medium concentration to minimize effects on the kidney, and a proper collimation field to improve image contrast and reduce X-ray exposure during procedures, are considered necessary.
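
The SNR and CNR measures used for the quantitative evaluation above are commonly computed from ROI statistics. The following small NumPy sketch assumes the usual mean/standard-deviation definitions, which may differ from the exact formulas used in the study, and the ROI values are synthetic stand-ins.

```python
# Sketch of SNR/CNR evaluation from a vessel-phantom ROI and a background ROI,
# assuming the common mean/standard-deviation definitions.
import numpy as np


def snr(roi: np.ndarray, background: np.ndarray) -> float:
    """Signal-to-noise ratio: ROI mean over background standard deviation."""
    return float(roi.mean() / background.std())


def cnr(roi: np.ndarray, background: np.ndarray) -> float:
    """Contrast-to-noise ratio: ROI/background mean difference over background std."""
    return float(abs(roi.mean() - background.mean()) / background.std())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic stand-ins for the vessel ROI and the background of a roadmap image.
    vessel_roi = rng.normal(loc=180.0, scale=8.0, size=(20, 20))
    background = rng.normal(loc=60.0, scale=10.0, size=(20, 20))
    print(f"SNR = {snr(vessel_roi, background):.2f}, "
          f"CNR = {cnr(vessel_roi, background):.2f}")
```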