• Title/Summary/Keyword: image-level fusion

Search Results: 84

Demineralized Bone Matrix (DBM) as a Bone Void Filler in Lumbar Interbody Fusion: A Prospective Pilot Study of Simultaneous DBM and Autologous Bone Grafts

  • Kim, Bum-Joon;Kim, Se-Hoon;Lee, Haebin;Lee, Seung-Hwan;Kim, Won-Hyung;Jin, Sung-Won
    • Journal of Korean Neurosurgical Society
    • /
    • v.60 no.2
    • /
    • pp.225-231
    • /
    • 2017
  • Objective: Solid bone fusion is an essential process in spinal stabilization surgery. Recently, as several minimally invasive spinal surgeries have developed, a need for artificial bone substitutes such as demineralized bone matrix (DBM) has arisen. We investigated the in vivo bone growth rate of DBM as a bone void filler compared to local autologous bone grafts. Methods: From April 2014 to August 2015, 20 patients with one- or two-level spinal stenosis were included. A posterior lumbar interbody fusion using two cages and pedicle screw fixation was performed for every patient, and each cage was packed with either autologous local bone or DBM. Clinical outcomes were assessed using the Numeric Rating Scale (NRS) for leg pain and back pain and the Korean Oswestry Disability Index (K-ODI). Clinical outcome parameters and the range of motion (ROM) of the operated level were collected preoperatively and at 3 months, 6 months, and 1 year postoperatively. Computed tomography was performed 1 year after fusion surgery, and the bone growth of the autologous bone grafts and DBM was analyzed with ImageJ software. Results: Eighteen patients (10 men and 8 women) completed 1 year of follow-up; the mean age was 56.4 years (range, 32-71). The operated levels ranged from L3/4 to L5/S1. Eleven patients had single-level and 7 patients had two-level fusions. The mean back pain NRS improved from 4.61 to 2.78 (p=0.003) and the leg pain NRS improved from 6.89 to 2.39 (p<0.001). The mean K-ODI score also improved from 27.33 to 13.83 (p<0.001). The ROM decreased below 2.0 degrees at the 3-month assessment and remained below 2 degrees through the 1-year postoperative assessment. Every cage packed with local autologous bone graft or DBM showed bone bridge formation. On quantitative analysis, the autologous bone grafts showed significantly higher bone growth than DBM on both coronal and sagittal images (p<0.001 and p=0.028, respectively). Osteoporotic patients showed less bone growth on sagittal images. Conclusion: Although DBM alone can induce favorable bone bridging in lumbar interbody fusion, it is still inferior to autologous bone grafts. Therefore, DBM is recommended as a bone graft extender rather than as a bone void filler, particularly in patients with osteoporosis.

Vision and Lidar Sensor Fusion for VRU Classification and Tracking in the Urban Environment (카메라-라이다 센서 융합을 통한 VRU 분류 및 추적 알고리즘 개발)

  • Kim, Yujin;Lee, Hojun;Yi, Kyongsu
    • Journal of Auto-vehicle Safety Association
    • /
    • v.13 no.4
    • /
    • pp.7-13
    • /
    • 2021
  • This paper presents a vulnerable road user (VRU) classification and tracking algorithm using a vision and LiDAR sensor fusion method for urban autonomous driving. Classification and tracking of vulnerable road users such as pedestrians, bicycles, and motorcycles are essential for autonomous driving in complex urban environments. In this paper, a real-time image object detector, YOLO, and an object tracking algorithm on the LiDAR point cloud are fused at a high level. The proposed algorithm consists of four parts. First, the object bounding boxes in pixel coordinates obtained from YOLO are transformed into the local coordinates of the subject vehicle using a homography matrix. Second, the LiDAR point cloud is clustered based on Euclidean distance, and the clusters are associated using GNN. In addition, the states of the clusters, including position, heading angle, velocity, and acceleration, are estimated in real time using a geometric model-free approach (GMFA). Finally, each LiDAR track is matched with a vision track using the angle information of the transformed vision track and assigned a classification ID. The proposed fusion algorithm is evaluated via real-vehicle tests in an urban environment.
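The first step above, mapping YOLO pixel boxes into the subject vehicle's local ground frame through a homography, can be sketched as follows (the matrix values here are hypothetical calibration numbers, not from the paper):

```python
import numpy as np

def pixel_to_local(points_px, H):
    """Map pixel coordinates to the vehicle's local ground plane
    using a 3x3 homography matrix H.
    points_px: (N, 2) array of (u, v) pixel coordinates."""
    pts = np.hstack([points_px, np.ones((len(points_px), 1))])  # homogeneous
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # divide out the projective scale

# Illustrative homography (hypothetical calibration values)
H = np.array([[0.02, 0.0, -6.4],
              [0.0, 0.05, -12.0],
              [0.0, 0.0, 1.0]])
bbox_bottom_center = np.array([[320.0, 400.0]])  # bottom center of a YOLO box
print(pixel_to_local(bbox_bottom_center, H))     # → [[0. 8.]]
```

In practice the bottom center of the bounding box is the natural point to project, since it is the object's contact point with the ground plane that the homography models.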

Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali;A. Sri Nagesh
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.11
    • /
    • pp.67-72
    • /
    • 2023
  • In the past decade, Autonomous Vehicle Systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on society as well as on road safety and the future of transportation systems. The real-time fusion of light detection and ranging (LiDAR) and camera data is a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. Especially in autonomous vehicles, efficient fusion of data from these two sensor types is important for estimating the depth of objects as well as for classifying objects at short and long distances. This paper presents object classification using CNN-based vision and LiDAR fusion in an autonomous vehicle environment. The method is based on a convolutional neural network (CNN) and image upsampling theory. By upsampling the LiDAR point cloud and converting it into pixel-level depth information, the depth information is concatenated with the red-green-blue (RGB) data and fed into a deep CNN. The proposed method obtains informative feature representations for object classification in the autonomous vehicle environment using the integrated vision and LiDAR data, and is designed to guarantee both classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
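The RGB-D input construction described above (densifying sparse LiDAR depth to pixel level and stacking it with the RGB channels) might be sketched as follows; nearest-neighbour filling is an assumed stand-in for the paper's upsampling scheme:

```python
import numpy as np

def fuse_rgb_depth(rgb, sparse_depth):
    """Densify a sparse LiDAR depth map and concatenate it with the
    RGB channels to form an H x W x 4 network input.
    Nearest-valid-sample filling is used here for illustration;
    the paper's exact upsampling method may differ."""
    h, w, _ = rgb.shape
    ys, xs = np.nonzero(sparse_depth > 0)          # valid LiDAR returns
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    # squared distance from every pixel to every valid sample
    d2 = (grid_y[..., None] - ys) ** 2 + (grid_x[..., None] - xs) ** 2
    nearest = d2.argmin(axis=-1)                   # index of closest sample
    dense = sparse_depth[ys[nearest], xs[nearest]]
    return np.concatenate([rgb, dense[..., None]], axis=-1)

rgb = np.zeros((4, 4, 3))
sparse = np.zeros((4, 4))
sparse[0, 0] = 5.0
sparse[3, 3] = 9.0
fused = fuse_rgb_depth(rgb, sparse)
print(fused.shape)  # → (4, 4, 4)
```

The brute-force distance matrix is O(pixels × samples) and only suitable for small examples; a KD-tree or morphological dilation would replace it at camera resolution.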

A Performance Test of Mobile Cloud Service for Bayesian Image Fusion (베이지안 영상융합을 적용한 모바일 클라우드 성능실험)

  • Kang, Sanggoo;Lee, Kiwon
    • Korean Journal of Remote Sensing
    • /
    • v.30 no.4
    • /
    • pp.445-454
    • /
    • 2014
  • Cloud, big data, and mobile technologies, important marketable keywords and paradigms in Information and Communication Technology (ICT), are now widely used and interrelated across various platforms and web-based services. In particular, the combination of cloud and mobile is recognized as a profitable business model with benefits of its own. Despite these promising aspects, there are few application cases of this model dealing with geo-based data sets or imagery. Among the many considerations for geo-based cloud applications on mobile, this study focused on a performance test of a mobile cloud implementation of a Bayesian image fusion algorithm with satellite images. Two cloud platforms, Amazon and OpenStack, were built for the performance test, measured by CPU time stamps. Because no established scheme yet exists for mobile cloud performance testing, the experimental conditions in this study were limited to time-stamp checks. The results reveal that performance on the two platforms is at almost the same level, implying that open-source mobile cloud services based on OpenStack are sufficient for further applications dealing with geo-based data sets.
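The CPU time-stamp measurement used to compare the two platforms amounts to bracketing the processing task with process-time readings. A minimal sketch, in which the fusion workload is a hypothetical stand-in:

```python
import time

def cpu_time_stamp(task, *args):
    """Measure CPU time for a processing task by taking time stamps
    before and after it, as in the platform comparison above.
    Returns (result, elapsed_cpu_seconds)."""
    t0 = time.process_time()          # CPU time, not wall-clock time
    result = task(*args)
    return result, time.process_time() - t0

# Hypothetical stand-in for the Bayesian image fusion step
def dummy_fusion(n):
    return sum(i * i for i in range(n))

result, elapsed = cpu_time_stamp(dummy_fusion, 100_000)
print(f"CPU time: {elapsed:.4f} s")
```

`process_time` excludes time spent sleeping or waiting on I/O, which makes repeated runs on different hosts more comparable than wall-clock stamps.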

Pose-invariant Face Recognition using Cylindrical Model and Stereo Camera (원통 모델과 스테레오 카메라를 이용한 포즈 변화에 강인한 얼굴인식)

  • ;;David Han
    • Proceedings of the IEEK Conference
    • /
    • 2003.07e
    • /
    • pp.2012-2015
    • /
    • 2003
  • This paper proposes a pose-invariant face recognition method using a cylindrical model and a stereo camera. The method covers two cases: a single input image and a stereo input image. In the single-image case, we normalize the face's yaw pose using the cylindrical model; in the stereo case, we normalize the face's pitch pose using the cylindrical model together with the object's pitch estimated from stereo geometry. Moreover, since two images are acquired at the same time, the overall recognition rate can be increased by decision-level fusion. Experiments confirmed that the recognition rate increases with the proposed methods.
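The decision-level fusion step can be illustrated with a weighted sum of per-identity match scores from the two camera views (the weighted-sum rule is an assumed instance; the abstract does not specify the combination rule):

```python
import numpy as np

def decision_level_fusion(scores_a, scores_b, w=0.5):
    """Combine per-identity match scores from two views by a weighted
    sum and return the winning identity index plus the fused scores."""
    fused = w * np.asarray(scores_a) + (1 - w) * np.asarray(scores_b)
    return int(np.argmax(fused)), fused

# Match scores for 3 enrolled identities from the left/right images
left = [0.2, 0.7, 0.1]
right = [0.3, 0.4, 0.3]
identity, fused = decision_level_fusion(left, right)
print(identity)  # → 1
```

Fusing at the decision level keeps the two recognition pipelines independent, so a pose-normalization failure in one view degrades rather than destroys the combined result.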

Image Fusion Watermarks Using Multiresolution Wavelet Transform (다해상도 웨이블릿 변환을 이용한 영상 융합 워터마킹 기법)

  • Kim Dong-Hyun;Ahn Chi-Hyun;Jun Kye-Suk;Lee Dae-Young
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.42 no.6
    • /
    • pp.83-92
    • /
    • 2005
  • This paper presents a watermarking approach in which the 1-level Discrete Wavelet Transform (DWT) coefficients of a $64{\times}64$ binary logo image are inserted as watermarks into the LL band and other specific frequency bands of the host image, using the Multi-Resolution Analysis (MRA) wavelet transform, for copyright protection of image data. The DWT coefficients of the binary logo are inserted into blocks of the LL band and specific bands of the host image, on which a 3-level DWT has been performed, in the same orientation. We identify Significant Coefficients (SCs) in each block of the frequency areas in order to prevent quality deterioration of the host image, and the watermark is inserted at the SCs. Because the host image is distorted to a different degree in each frequency band, we set SC thresholds per band so that the watermark is completely inserted in each frequency of the host image. To keep the watermark invisible, the Human Visual System (HVS) model is applied to the watermark. Experiments verify the embedding method: the watermark is detected rapidly, and because small watermarks are inserted according to HVS and SCs, the results confirm the superiority of the proposed method in invisibility and robustness.
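The core embedding idea, adding watermark values only at significant coefficients of a wavelet band, can be sketched with a hand-rolled one-level Haar DWT (Haar is an illustrative basis; the paper's block-wise, per-band thresholds and HVS weighting are more elaborate):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar DWT, returning (LL, LH, HL, HH) subbands."""
    a = (img[0::2] + img[1::2]) / 2          # row averages
    d = (img[0::2] - img[1::2]) / 2          # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    HL = (a[:, 0::2] - a[:, 1::2]) / 2
    LH = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def embed_in_significant(band, bits, threshold, alpha=0.1):
    """Add watermark bits only to coefficients whose magnitude exceeds
    the threshold, i.e. the 'significant coefficients' of the band."""
    out = band.copy()
    idx = np.flatnonzero(np.abs(out) > threshold)[: bits.size]
    out.flat[idx] += alpha * bits[: idx.size]
    return out

host = np.arange(64, dtype=float).reshape(8, 8)   # toy host image
LL, LH, HL, HH = haar_dwt2(host)
bits = np.array([1.0, -1.0, 1.0, -1.0])           # toy watermark bits
LL_marked = embed_in_significant(LL, bits, threshold=10.0)
print(np.count_nonzero(LL_marked != LL))  # → 4
```

Detection inverts the process: transform the suspect image, select the same significant-coefficient positions, and correlate the residuals against the known watermark bits.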

DA-Res2Net: a novel Densely connected residual Attention network for image semantic segmentation

  • Zhao, Xiaopin;Liu, Weibin;Xing, Weiwei;Wei, Xiang
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.11
    • /
    • pp.4426-4442
    • /
    • 2020
  • Since scene segmentation is becoming a hot topic in the fields of autonomous driving and medical image analysis, researchers are actively trying new methods to improve segmentation accuracy. At present, the main issues in image semantic segmentation are intra-class inconsistency and inter-class indistinction. From our analysis, the lack of global information and of macroscopic discrimination of objects are the two main causes. In this paper, we propose a Densely connected residual Attention network (DA-Res2Net), which consists of a dense residual network and a channel attention guidance module, to address these problems and improve segmentation accuracy. Specifically, to equip the extracted features with stronger multi-scale characteristics, a densely connected residual network is proposed as the feature extractor. Furthermore, to improve the representativeness of each channel feature, we design a Channel-Attention-Guide module that makes the model focus on high-level semantic features and low-level location features simultaneously. Experimental results show that the method performs strongly on various datasets: compared to other state-of-the-art methods, it reaches a mean IoU of 83.2% on PASCAL VOC 2012 and 79.7% on Cityscapes.
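A channel-attention gate of the general kind described (squeeze by global pooling, excite through a bottleneck, rescale each channel) can be sketched as follows; this is a generic SE-style block written in NumPy for illustration, not the exact DA-Res2Net layer:

```python
import numpy as np

def channel_attention(features, w1, w2):
    """SE-style channel attention: global average pooling produces one
    descriptor per channel, a two-layer bottleneck turns it into a
    sigmoid gate in (0, 1), and each channel is rescaled by its gate.
    features: (C, H, W); w1: (C, C//r); w2: (C//r, C)."""
    squeeze = features.mean(axis=(1, 2))            # (C,) global context
    hidden = np.maximum(squeeze @ w1, 0.0)          # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))     # per-channel sigmoid
    return features * gate[:, None, None]           # rescale channels

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4, 4))              # C=8 feature maps
w1 = rng.standard_normal((8, 2))                    # reduction ratio r=4
w2 = rng.standard_normal((2, 8))
out = channel_attention(feats, w1, w2)
print(out.shape)  # → (8, 4, 4)
```

Because the gate lies strictly inside (0, 1), the block can only attenuate channels, never amplify them, which is what lets it suppress uninformative channels without destabilizing training.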

The Study on the Development of the HD(High Definition) Level Triple Streaming Hybrid Security Camera (HD급 트리플 스트리밍 하이브리드 보안 카메라 개발에 관한 연구)

  • Lee, JaeHee;Cho, TaeKyung;Seo, ChangJin
    • The Transactions of the Korean Institute of Electrical Engineers P
    • /
    • v.66 no.4
    • /
    • pp.252-257
    • /
    • 2017
  • This paper develops and implements an HD-level triple-streaming hybrid security camera that provides three types of video output (HD-SDI, EX-SDI, Analog). We design the hardware and program the firmware supporting the main and sub functions. We use the MN34229PL as the image sensor, the EN778 and EN331 as image processors, the KA909A for the reset, iris, and day&night functions, and the A3901SEJTR-T for zoom/focus control. We requested a performance test of the developed security camera at the broadcasting and communication convergence testing department of the TTA (Telecommunications Technology Association). The developed camera provides the three outputs (HD-SDI, EX-SDI, Analog), attains world-best-level jitter and eye-pattern amplitude values, and exceeds the world-best level in signal-to-noise ratio, minimum illumination, and power consumption. Owing to this performance and functionality, the HD-level triple-streaming hybrid camera presented in this paper is expected to be widely used as a security camera.

WAVELET-BASED FOREST AREAS CLASSIFICATION BY USING HIGH RESOLUTION IMAGERY

  • Yoon Bo-Yeol;Kim Choen
    • Proceedings of the KSRS Conference
    • /
    • 2005.10a
    • /
    • pp.698-701
    • /
    • 2005
  • This paper examines how certain information can be extracted from forest areas in high-resolution imagery based on wavelet transformation. First, study areas are selected as spots where one or more species are distributed, with reference to the forest type map. Next, each study area is cut to 256 x 256 pixels because of the image-processing burden of large data volumes. Prior to the wavelet transformation, five texture parameters (contrast, dissimilarity, entropy, homogeneity, Angular Second Moment (ASM)) are calculated using the Gray Level Co-occurrence Matrix (GLCM). The five texture images use a shifting window of size 3x3, a distance of 1 pixel, and an angle of 45 degrees. The Daubechies 4 wavelet basis is selected as the wavelet function. The results are summarized in three points. First, wavelet-transformed images derived from the contrast and dissimilarity texture parameters support the detection of edge elements and may be usable for forest road detection. Second, wavelet fusion images derived from the texture parameters and the original image can be applied to forest area classification, because homogeneous forest type structures form clusters. Third, for grading the evaluation of forest-fire-damaged areas, effectively combining the established classification method, the GLCM texture extraction concept, and the wavelet transformation technique over forest areas (and other areas) is expected to yield high-accuracy results.
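The GLCM texture parameters above can be computed directly; a minimal sketch for the contrast parameter at distance 1 and a 45-degree offset (the gray-level count and the tiny test image are illustrative):

```python
import numpy as np

def glcm(img, dy, dx, levels):
    """Gray Level Co-occurrence Matrix for one (dy, dx) pixel offset,
    normalized to co-occurrence probabilities."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast: sum over (i, j) of p(i, j) * (i - j)^2, one of
    the five texture parameters listed above."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

# 45-degree neighbour at distance 1 (one pixel up and to the right)
img = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 3]])
p = glcm(img, dy=-1, dx=1, levels=4)
print(contrast(p))  # → 0.5
```

The other four parameters (dissimilarity, entropy, homogeneity, ASM) are computed from the same normalized matrix `p` with different per-cell weightings.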

Watermarking Using Multiresolution Wavelet Transform and Image Fusion (다중 해상도 웨이블릿 변환과 영상 융합을 이용한 워터마킹)

  • Kim Dong-Hyun;Jun Kye-Suk;Lee Dae-Young
    • The KIPS Transactions:PartB
    • /
    • v.12B no.7 s.103
    • /
    • pp.729-736
    • /
    • 2005
  • In this paper, the proposed method for digital watermarking is based on the multiresolution wavelet transform. The 1-level Discrete Wavelet Transform (DWT) coefficients of a $2N_{wx}{\times}2N_{wy}$ binary logo image are used as watermarks. The LL band and middle-frequency bands of the host image, on which a 3-level DWT has been performed, are divided into $N_{wx}{\times}N_{wy}$ blocks, and the large coefficients in the divided blocks are used to set thresholds. We set the thresholds so that the watermark is completely inserted in each frequency of the host image; the thresholds differ from one frequency band to another. Watermarks at the same positions are added to the coefficients larger than the threshold in the blocks of the LL band and middle-frequency bands, in order to prevent quality deterioration of the host image. To keep the watermark invisible, the Human Visual System (HVS) model is applied to the watermark. Experiments verify the embedding method: the watermark is detected rapidly, and because small watermarks are inserted according to HVS, the results confirm the superiority of the proposed method in invisibility and robustness.