• Title/Summary/Keyword: 3D feature value

Security Analysis based on Differential Entropy in 3D Model Hashing (3D 모델 해싱의 미분 엔트로피 기반 보안성 분석)

  • Lee, Suk-Hwan;Kwon, Ki-Ryong
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.12C
    • /
    • pp.995-1003
    • /
    • 2010
  • Content-based hashing for the authentication and copy protection of images, video, and 3D models has to satisfy both robustness and security. For the security analysis of the hash value, a modelling method based on differential entropy had previously been presented, but that modelling applies only to image hashing. This paper presents a differential-entropy-based model for analyzing the security of the hash feature value in 3D model hashing. The proposed security analysis designs two types of feature extraction methods and then analyzes the security of the two feature values using differential entropy modelling. In our experiments, we evaluated the security of the two feature extraction methods and discussed the trade-off between the security and the robustness of the hash value.
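
For reference, the differential entropy of a feature value modeled as a Gaussian random variable has the closed form h(X) = ½·ln(2πeσ²); the sketch below uses that simple Gaussian assumption as an illustration, not necessarily the modelling used in the paper.

```python
import numpy as np

def gaussian_differential_entropy(feature_values):
    """Differential entropy h(X) = 0.5 * ln(2*pi*e*sigma^2) of a feature
    value modeled as a Gaussian random variable."""
    sigma2 = np.var(feature_values)
    return 0.5 * np.log(2 * np.pi * np.e * sigma2)

# Hypothetical feature values from two extraction methods: the one with
# higher differential entropy is harder for an attacker to guess.
rng = np.random.default_rng(0)
feat_a = rng.normal(0.0, 1.0, 10_000)   # wider spread -> higher entropy
feat_b = rng.normal(0.0, 0.1, 10_000)   # narrower spread -> lower entropy
print(gaussian_differential_entropy(feat_a))
print(gaussian_differential_entropy(feat_b))
```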

3D feature point extraction technique using a mobile device (모바일 디바이스를 이용한 3차원 특징점 추출 기법)

  • Kim, Jin-Kyum;Seo, Young-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.256-257
    • /
    • 2022
  • In this paper, we introduce a method of extracting three-dimensional feature points through the movement of a single mobile device. Using a monocular camera, 2D images are acquired as the camera moves and a baseline is estimated. Stereo matching is then performed on feature points: feature points and descriptors are acquired and the feature points are matched. Using the matched feature points, the disparity is calculated and a depth value is generated. The 3D feature points are updated according to the camera movement, and the feature points are reset at scene changes using scene change detection. Through this process, an average of 73.5% of additional storage space can be secured in the keypoint database. By applying the proposed algorithm to the RGB images and depth ground-truth values of the TUM dataset, it was confirmed that there was an average distance difference of 26.88 mm compared with the 3D feature point result.
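
The disparity-to-depth step described in the abstract follows the standard stereo relation depth = f·B/d; a minimal sketch under that assumption (the focal length, baseline, and disparity values below are purely illustrative, not taken from the paper):

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Standard stereo relation: depth = f * B / d, with disparity in
    pixels, focal length in pixels, and baseline in metres."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_length_px * baseline_m / np.clip(disparity_px, 1e-6, None)

# Hypothetical values: focal length 525 px, baseline 0.12 m estimated from
# the device motion, disparities of three matched feature points.
print(disparity_to_depth([12.0, 25.0, 40.0], 525.0, 0.12))
```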

3D Point Cloud Reconstruction Technique from 2D Image Using Efficient Feature Map Extraction Network (효율적인 feature map 추출 네트워크를 이용한 2D 이미지에서의 3D 포인트 클라우드 재구축 기법)

  • Kim, Jeong-Yoon;Lee, Seung-Ho
    • Journal of IKEEE
    • /
    • v.26 no.3
    • /
    • pp.408-415
    • /
    • 2022
  • In this paper, we propose a 3D point cloud reconstruction technique from 2D images using an efficient feature map extraction network. The originality of the proposed method is as follows. First, we use a new feature map extraction network that is about 27% more efficient than existing techniques in terms of memory. The proposed network does not reduce the image size in the middle of the deep learning network, so important information required for 3D point cloud reconstruction is not lost; the memory increase caused by the non-reduced image size is handled by reducing the number of channels and by configuring the deep learning network to be efficiently shallow. Second, by preserving the high-resolution features of the 2D image, the accuracy can be improved over the conventional technique: the feature map extracted from the non-reduced image contains more detailed information than in existing methods, which further improves the reconstruction accuracy of the 3D point cloud. Third, we use a divergence loss that does not require shooting information. Requiring not only the 2D image but also the shooting angle for training means the dataset must contain detailed information, which makes it difficult to construct; in this paper, the reconstruction accuracy of the 3D point cloud is instead increased by increasing the diversity of information through randomness, without additional shooting information. To objectively evaluate the performance of the proposed method, the ShapeNet dataset and the same protocol as the comparative papers were used: the CD value of the proposed method is 5.87, the EMD value is 5.81, and the FLOPs value is 2.9G. Lower CD and EMD values mean the reconstructed 3D point cloud is closer to the original, and fewer FLOPs mean the deep learning network requires less memory. Therefore, the CD, EMD, and FLOPs results of the proposed method show about a 27% improvement in memory and 6.3% in accuracy compared to the methods in other papers, demonstrating objective performance.
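
The CD figure quoted above is the Chamfer distance between the reconstructed and ground-truth point sets; a minimal sketch of the generic symmetric Chamfer distance follows (this is the standard definition, not code from the paper):

```python
import numpy as np

def chamfer_distance(p, q):
    """Symmetric Chamfer distance between point clouds p (N,3) and q (M,3):
    mean nearest-neighbour squared distance in both directions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    d2 = ((p[:, None, :] - q[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Tiny example with hypothetical reconstructed vs. ground-truth points.
rec = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
gt  = np.array([[0.1, 0.0, 0.0], [1.0, 0.1, 0.0], [0.5, 0.5, 0.0]])
print(chamfer_distance(rec, gt))
```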

Implementation of Object-based Multiview 3D Display Using Adaptive Disparity-based Segmentation

  • Park, Jae-Sung;Kim, Seung-Cheol;Bae, Kyung-Hoon;Kim, Eun-Soo
    • Proceedings of the Korean Information Display Society Conference
    • /
    • 2005.07b
    • /
    • pp.1615-1618
    • /
    • 2005
  • In this paper, the implementation of an object-based multiview 3D display using object segmentation and adaptive disparity estimation is proposed, and its performance is analyzed by comparison with conventional disparity estimation algorithms. In the proposed algorithm, segmented objects are first obtained by region growing from the input stereoscopic image pair; then, to effectively synthesize the intermediate view, the matching window size for intermediate view reconstruction (IVR) is adaptively selected according to the magnitude of the feature value extracted from the input stereo image pair. Experimental results on the IVR using the proposed algorithm are also discussed and compared with those of the conventional algorithms.
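
A matching window that adapts to the magnitude of a local feature value might look like the sketch below; the gradient-magnitude feature, the thresholds, and the window sizes are assumptions for illustration, not the paper's actual parameters:

```python
import numpy as np

def adaptive_window_size(block, small=5, medium=9, large=15):
    """Pick a matching window size from a local feature value.
    Here the feature value is approximated by the mean gradient magnitude
    of the block: strong texture -> small window, flat region -> large."""
    gy, gx = np.gradient(block.astype(float))
    feature_value = np.mean(np.hypot(gx, gy))
    if feature_value > 20.0:      # highly textured
        return small
    elif feature_value > 5.0:     # moderately textured
        return medium
    return large                  # nearly homogeneous

# Hypothetical 16x16 image block with random texture.
rng = np.random.default_rng(1)
print(adaptive_window_size(rng.integers(0, 255, (16, 16))))
```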

A Prototype Implementation for 3D Feature Visualization on Cell Phone using M3G API

  • Lee, Ki-Won;Dong, Woo-Cheol
    • Korean Journal of Remote Sensing
    • /
    • v.24 no.3
    • /
    • pp.245-250
    • /
    • 2008
  • In line with public and industrial interest in mobile graphics, a preliminary implementation of a 3D feature visualization system on a cell phone was carried out using the M3G API, one of the de facto standards for mobile 3D graphics APIs. Through this experiment, it was found that the scene graph structure and 3D mobile file format supported by this API are useful for 3D geo-modeling and rendering in a mobile environment. 3D mobile graphics standards should be considered as one component of current mobile GIS service standards in order to provide value-added 3D GIS contents.

LiDAR Data Interpolation Algorithm for 3D-2D Motion Estimation (3D-2D 모션 추정을 위한 LiDAR 정보 보간 알고리즘)

  • Jeon, Hyun Ho;Ko, Yun Ho
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.12
    • /
    • pp.1865-1873
    • /
    • 2017
  • Feature-based visual SLAM requires 3D positions for the extracted feature points to perform 3D-2D motion estimation. LiDAR can provide reliable and accurate 3D position information with a low computational burden, whereas a stereo camera suffers from the impossibility of stereo matching in weakly textured image regions, inaccurate depth values due to errors in the intrinsic and extrinsic camera parameters, and a limited range of depth values restricted by the permissible stereo disparity. However, the sparsity of LiDAR data may increase the inaccuracy of motion estimation and can even cause motion estimation to fail. Therefore, in this paper, we propose three interpolation methods which can be applied to interpolate sparse LiDAR data. Simulation results obtained by applying these three methods to a visual odometry algorithm demonstrate that selective bilinear interpolation shows better performance in terms of computation speed and accuracy.
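
Plain bilinear interpolation of depth values, the building block of the selective bilinear method named above, can be sketched as follows; the selection criterion itself is not reproduced here, and the sketch assumes all four neighbouring depths are valid:

```python
import numpy as np

def bilinear_depth(depth, y, x):
    """Bilinearly interpolate a 2D depth map at subpixel location (y, x),
    weighting the four surrounding grid values by their distance."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, depth.shape[0] - 1), min(x0 + 1, depth.shape[1] - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * depth[y0, x0] +
            (1 - wy) * wx       * depth[y0, x1] +
            wy       * (1 - wx) * depth[y1, x0] +
            wy       * wx       * depth[y1, x1])

# Hypothetical 3x3 neighbourhood of projected LiDAR depths (metres).
d = np.array([[4.0, 4.2, 4.4],
              [4.1, 4.3, 4.5],
              [4.2, 4.4, 4.6]])
print(bilinear_depth(d, 1.25, 0.5))
```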

3D Face Recognition using Local Depth Information

  • 이영학;심재창;이태홍
    • Journal of KIISE: Software and Applications
    • /
    • v.29 no.11
    • /
    • pp.818-825
    • /
    • 2002
  • Depth information is one of the most important factors for the recognition of a digital face image. Range images are very useful when comparing one face with other faces because they contain depth information. As processing the whole face produces a large amount of computation and data, face images can be represented in terms of a vector of feature descriptors for local areas. In this paper, depth areas of a 3-dimensional (3D) face image were extracted by the contour line at a given depth value. These were resampled and stored in consecutive locations in a feature vector using a multiple feature method. A comparison between two faces was made based on their distance in the feature space, using the Euclidean distance. This paper reduced the number of index data in the database and used fewer feature vectors than other methods. The proposed algorithm achieves high recognition rates by using local depth information and fewer feature vectors for the face.
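
Nearest-neighbour matching by Euclidean distance in the feature space, as described in the abstract, reduces to the sketch below; the feature vectors are placeholders rather than the paper's contour-line descriptors:

```python
import numpy as np

def match_face(query_vec, gallery):
    """Return the gallery identity whose feature vector is closest to the
    query in Euclidean distance, together with that distance."""
    best_id, best_dist = None, np.inf
    for identity, vec in gallery.items():
        dist = np.linalg.norm(np.asarray(query_vec) - np.asarray(vec))
        if dist < best_dist:
            best_id, best_dist = identity, dist
    return best_id, best_dist

# Hypothetical 4-D local-depth feature vectors for two enrolled faces.
gallery = {"person_A": [0.2, 0.5, 0.1, 0.9], "person_B": [0.8, 0.1, 0.4, 0.3]}
print(match_face([0.25, 0.45, 0.15, 0.85], gallery))
```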

A study on structural feature and size distribution of swimming fish using a 3-dimensional pattern laser (3차원 패턴 레이저를 이용한 유영어류의 형태 및 크기 측정)

  • YANG, Yongsu;LEE, Kyounghoon;PYEON, Yongbeom;YOON, Eun-A;LEE, Dong-Gil;JO, Hyun-Su
    • Journal of the Korean Society of Fisheries and Ocean Technology
    • /
    • v.52 no.2
    • /
    • pp.103-110
    • /
    • 2016
  • This study aims to estimate the species, size, and shape of fish using a non-contact 3-dimensional pattern laser. A preliminary test was carried out to understand the structural features and length of goldfish according to water turbidity and depth in an aquaculture tank. The 3-D pattern laser could clearly detect the morphological shape of the fish, except for the caudal fin, whose soft tissue is difficult to detect. Since the line laser light retains sufficient sensing strength with depth, it is possible to measure depth and structural features within the detected range. The results showed that the measured error of each individual's fork length in water using the 3-D pattern laser was less than ±1% compared with the value measured in air.

3D Face Modeling from a Frontal Face Image by Mesh-Warping (메쉬 워핑에 의한 정면 영상으로부터의 3D 얼굴 모델링)

  • Kim, Jung-Sik;Kim, Jin-Mo;Cho, Hyung-Je
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.1
    • /
    • pp.108-118
    • /
    • 2013
  • Recently, 3D modeling techniques have developed rapidly owing to advances in computer vision and computer graphics together with improved hardware performance. With the advent of a variety of 3D contents, 3D modeling technology is in greater demand and its quality has increased. 3D face models can be widely applied to such contents with high usability. In this paper, 3D face modeling from a single 2D frontal face image is attempted. To achieve this goal, feature points are first extracted from the input frontal face image using AAM. With the extracted feature points, a generic 3D model is deformed by 2-pass mesh warping, and depth extraction based on intensity values is also attempted. Through these processes, a universal 3D face modeling method with low cost and few restrictions on the application environment was implemented, and its validity was shown through experiments.
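
As a rough illustration of fitting a generic model to detected feature points, the sketch below aligns landmarks with a least-squares similarity (Procrustes) transform; the paper's 2-pass mesh warping is more elaborate than this single-pass alignment, and the landmark coordinates are invented:

```python
import numpy as np

def similarity_fit(src, dst):
    """Least-squares similarity transform (scale s, rotation r, shift t)
    mapping 2D source landmarks onto destination landmarks."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    a, b = src - mu_s, dst - mu_d
    u, sig, vt = np.linalg.svd(a.T @ b)
    r = (u @ vt).T                      # rotation aligning src to dst
    s = sig.sum() / (a ** 2).sum()      # least-squares scale
    t = mu_d - s * mu_s @ r.T           # translation
    return s, r, t

# Hypothetical: three generic-model landmarks vs. three AAM-detected ones.
generic = [[0, 0], [10, 0], [5, 8]]
detected = [[12, 20], [32, 22], [21, 38]]
s, r, t = similarity_fit(generic, detected)
print(s * np.asarray(generic) @ r.T + t)   # warped toward detected points
```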

Curvature and Histogram of oriented Gradients based 3D Face Recognition using Linear Discriminant Analysis

  • Lee, Yeunghak
    • Journal of Multimedia Information System
    • /
    • v.2 no.1
    • /
    • pp.171-178
    • /
    • 2015
  • This article describes a 3-dimensional (3D) face recognition system using histograms of oriented gradients (HOG) based on face curvature. The surface curvatures of the face contain the most important personal feature information. In this paper, 3D face images are recognized by the face components: cheek, eyes, mouth, and nose. In the first step of the proposed approach, the face curvatures that represent the facial features of the 3D face images are computed after normalization using the singular value decomposition (SVD). The Fisherface method is then applied to each component curvature face; the reason for adopting the Fisherface method is that it maintains the surface attributes of the face curvature even while reducing the image dimension. The histogram of oriented gradients (HOG) descriptor is one of the state-of-the-art methods that has been shown to significantly outperform existing feature sets for several object detection and recognition tasks. In the last step, linear discriminant analysis is applied to each component. The experimental results showed that the proposed approach leads to a higher recognition accuracy than other methods.
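
A generic HOG-plus-LDA pipeline along the lines described above can be sketched with scikit-image and scikit-learn; the input images below are synthetic stand-ins, and the curvature computation and per-component (cheek, eyes, mouth, nose) processing of the paper are omitted:

```python
import numpy as np
from skimage.feature import hog
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Synthetic stand-ins for curvature/depth face images: 20 samples, 2 identities.
rng = np.random.default_rng(0)
images = rng.random((20, 64, 64))
labels = np.repeat([0, 1], 10)

# HOG descriptor per image: orientation histograms over gradient cells.
features = np.array([hog(img, orientations=9, pixels_per_cell=(8, 8),
                         cells_per_block=(2, 2)) for img in images])

# LDA (Fisherface-style) projects features to a discriminative subspace
# and classifies by identity.
clf = LinearDiscriminantAnalysis().fit(features, labels)
print(clf.predict(features[:3]))
```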