• Title/Summary/Keyword: View Estimation


Multi-view Semi-supervised Learning-based 3D Human Pose Estimation (다시점 준지도 학습 기반 3차원 휴먼 자세 추정)

  • Kim, Do Yeop;Chang, Ju Yong
    • Journal of Broadcast Engineering / v.27 no.2 / pp.174-184 / 2022
  • 3D human pose estimation models can be classified into multi-view models and single-view models. In general, a multi-view model shows superior pose estimation performance compared to a single-view model. For a single-view model, improving 3D pose estimation performance requires a large amount of training data, but it is not easy to obtain annotations for training 3D pose estimation models. To address this problem, we propose a method to generate pseudo ground-truths of multi-view human pose data from a multi-view model and to exploit the resulting pseudo ground-truths to train a single-view model. In addition, we propose a multi-view consistency loss function that considers the consistency of poses estimated from multi-view images, and show that the proposed loss helps the effective training of single-view models. Experiments using the Human3.6M and MPI-INF-3DHP datasets show that the proposed method is effective for training single-view 3D human pose estimation models.
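
The multi-view consistency idea in this abstract can be pictured with a short sketch. This is a minimal illustration under stated assumptions (the use of NumPy, the alignment into a shared world frame, and the consensus-based penalty are not the authors' implementation): poses predicted independently from each view are mapped into one frame and penalized for disagreeing.

```python
# Minimal sketch of a multi-view consistency penalty (an illustrative assumption,
# not the paper's loss): 3D poses predicted independently from each view are
# mapped into a shared world frame and penalized for disagreeing with each other.
import numpy as np

def to_world(pose_cam, R, t):
    """Map a (J, 3) pose from camera coordinates to world coordinates."""
    return (pose_cam - t) @ R  # row-vector form of world = R^T (cam - t)

def multiview_consistency_loss(poses_cam, extrinsics):
    """poses_cam: list of (J, 3) poses, one per view, in camera coordinates.
    extrinsics: list of (R, t) pairs with R (3, 3), t (3,) mapping world -> camera."""
    world_poses = [to_world(p, R, t) for p, (R, t) in zip(poses_cam, extrinsics)]
    mean_pose = np.mean(world_poses, axis=0)          # consensus pose, (J, 3)
    # Average joint-wise distance of every view's prediction to the consensus.
    return float(np.mean([np.linalg.norm(p - mean_pose, axis=1).mean()
                          for p in world_poses]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.normal(size=(17, 3))                     # hypothetical 17-joint pose
    Rs = [np.eye(3), np.eye(3)]                       # identity rotations for the demo
    ts = [np.zeros(3), np.array([0.5, 0.0, 0.0])]
    views = [gt @ R.T + t + rng.normal(scale=0.01, size=gt.shape)
             for R, t in zip(Rs, ts)]
    print(multiview_consistency_loss(views, list(zip(Rs, ts))))
```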

Improvement of Software Cost Estimation Guideline Using OLAP Multidimensional Model (OLAP 다차원 모델을 이용한 소프트웨어 사업대가기준의 개선)

  • Park, Hye-Ja;Hwang, In-Soo;Kwon, Ki-Tae
    • Journal of Information Technology Services / v.11 no.1 / pp.197-210 / 2012
  • This paper presents ways to improve the Software Cost Estimation Guidelines, both to replace the guidelines expected to be abolished in February 2012 and to solve the problems occurring under the current guidelines. Using OLAP (On-Line Analytical Processing) multidimensional modeling, the paper builds a three-dimensional model that considers the product/service view, the process view, and the skill view. It also presents a method for identifying cost estimation data through the view of each dimension. Furthermore, it defines the software cost estimation process and adapts it to both bottom-up and top-down estimation. Finally, it proposes accessing cost estimation data through OLAP multidimensional analysis.
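
As a rough illustration of the three-dimensional view described in this abstract, the sketch below builds a tiny cost cube over product/service, process, and skill dimensions and then rolls it up and slices it. The column names and cost figures are invented for the example and are not taken from the guideline.

```python
# Toy OLAP-style cube over three hypothetical dimensions (product/service,
# process, skill); all rows and cost figures are invented for illustration.
import pandas as pd

facts = pd.DataFrame([
    {"product": "web portal", "process": "design",         "skill": "senior", "cost": 120},
    {"product": "web portal", "process": "implementation", "skill": "junior", "cost": 300},
    {"product": "mobile app", "process": "design",         "skill": "senior", "cost": 150},
    {"product": "mobile app", "process": "implementation", "skill": "senior", "cost": 400},
])

# Roll-up: total cost along the process dimension (one face of the cube).
print(facts.pivot_table(index="process", values="cost", aggfunc="sum"))

# Slice: costs for one product broken down by process x skill.
web = facts[facts["product"] == "web portal"]
print(web.pivot_table(index="process", columns="skill", values="cost", aggfunc="sum"))
```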

Inter-view Balanced Disparity Estimation for Multiview Video Coding (다시점 영상에서 시점간 균형을 맞추는 변이 추정 알고리듬)

  • Yoon, Jae-Won;Kim, Yong-Tae;Sohn, Kwang-Hoon
    • Proceedings of the IEEK Conference / 2006.06a / pp.435-436 / 2006
  • When working with multi-view images, imbalances between the views cause a serious problem in multi-view video coding because they decrease the performance of disparity estimation. To overcome this problem, we propose inter-view balanced disparity estimation for multi-view video coding. In general, the imbalance problem can be solved by a preprocessing step that transforms the reference images linearly; however, such preprocessing has drawbacks, including the transformation of the original images. In order to obtain a balancing effect among the views, we instead perform block-based disparity estimation that includes several balancing parameters.
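
The balancing idea can be sketched as below. This is an illustrative stand-in for the paper's balancing parameters, not their algorithm: each candidate block in the reference view is gain/offset-compensated toward the current block before the matching cost is computed, so no global preprocessing of the original images is needed.

```python
# Sketch of block-based disparity search with per-block balancing (illustrative
# stand-in for the paper's parameters): the reference block is gain/offset
# compensated to the current block's mean and standard deviation before SAD.
import numpy as np

def balanced_sad(cur, ref):
    """SAD after matching the reference block's mean/variance to the current block."""
    gain = cur.std() / (ref.std() + 1e-6)
    offset = cur.mean() - gain * ref.mean()
    return np.abs(cur - (gain * ref + offset)).sum()

def block_disparity(cur_img, ref_img, block=8, max_disp=16):
    """Horizontal disparity per block between two rectified views."""
    h, w = cur_img.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            cur = cur_img[y:y + block, x:x + block].astype(float)
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, x) + 1):      # search to the left only
                ref = ref_img[y:y + block, x - d:x - d + block].astype(float)
                cost = balanced_sad(cur, ref)
                if cost < best:
                    best, best_d = cost, d
            disp[by, bx] = best_d
    return disp

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    right = rng.integers(0, 255, size=(32, 64)).astype(float)
    left = np.roll(right, 4, axis=1) * 1.1 + 5           # shifted and imbalanced view
    print(block_disparity(left, right))
```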


Motion Estimation Method Based on Correlations of Motion Vectors for Multi-view Video Coding (다시점 비디오 부호화를 위한 움직임 벡터들의 상관성을 이용한 움직임 추정 기법)

  • Yoon, Hyo-Sun;Kim, Mi-Young
    • Journal of Korea Multimedia Society / v.21 no.10 / pp.1131-1141 / 2018
  • Motion estimation, which is used to reduce redundant data, plays an important role in video compression. However, it accounts for a huge share of the encoder's computational complexity, and many fast motion estimation methods have therefore been developed to reduce this complexity. Multi-view video is obtained by using many cameras at different positions, and its complexity increases in proportion to the number of cameras. In this paper, we propose a fast motion estimation method for multi-view video. The proposed method predicts a search start point by using correlated candidate vectors of the current block. According to the motion size at the search start point, the search pattern of the current block is decided adaptively. The proposed method proves to be about 2~5 times faster than existing methods while maintaining similar image quality and bitrates.
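
The general idea of predicting a start point from correlated candidate vectors and adapting the search pattern to the predicted motion size can be pictured as follows. The candidate set, median predictor, thresholds, and patterns are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch (not the authors' algorithm): predict a search start point
# from correlated candidate motion vectors, then pick a small or large search
# pattern depending on how large the predicted motion is.
import numpy as np

def predict_start_point(candidates):
    """Median of candidate motion vectors, e.g. from spatial neighbors and the
    co-located block of the current block (hypothetical candidate set)."""
    return np.median(np.asarray(candidates, dtype=float), axis=0)

def choose_pattern(start_mv, small_thresh=2.0):
    """Small diamond pattern for small predicted motion, larger pattern otherwise."""
    small = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
    large = small + [(2, 0), (-2, 0), (0, 2), (0, -2),
                     (1, 1), (1, -1), (-1, 1), (-1, -1)]
    return small if np.linalg.norm(start_mv) < small_thresh else large

if __name__ == "__main__":
    candidates = [(1, 0), (2, 1), (1, 1), (0, 0)]   # hypothetical neighbor MVs
    start = predict_start_point(candidates)
    offsets = choose_pattern(start)
    # Search positions actually evaluated around the predicted start point.
    print(start, [tuple(start + np.array(o)) for o in offsets])
```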

Development of 'IceView' Program for Estimation of Ice Resistance on Ice-Transiting Vessels (쇄빙선박에 작용하는 빙저항 산정을 위한 'IceView' 프로그램 개발)

  • Choi, Kyung-Sik;Lee, Jin-Kyoung
    • Journal of Ocean Engineering and Technology / v.19 no.6 s.67 / pp.50-57 / 2005
  • Ice resistance on ice-transiting vessels is one of the important issues in the design of ships with ice classes. In this study, the development of GUI software for the estimation of ice resistance on ice-transiting vessels is discussed. Ice resistance estimation equations, based on model tests and full-scale sea trial data from many previous research articles, are studied for two ship categories, i.e., icebreakers/supply/tug vessels and ice-strengthened cargo vessels. The ice resistance estimation equations are summarized in a common format and compared with each other. The GUI software 'IceView', written in MS Visual Basic, can calculate ice resistance for varying ice thickness and ship speed. The software provides the calculated results in tables and graphs for easy comparison of the ice resistance estimation equations.
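
The "common format" described above can be pictured as a set of resistance-versus-thickness-and-speed functions evaluated over the same grid for side-by-side comparison. The sketch below shows only that structure; the power-law formula is a purely illustrative placeholder, not one of the published estimation equations.

```python
# Illustrative structure only: a registry of ice resistance estimation functions
# evaluated on a common (ice thickness, ship speed) grid, in the spirit of the
# program's comparison tables. The formula is a made-up placeholder, not one of
# the published equations.
import numpy as np

def placeholder_equation(h_ice, v_ship, a=1.0, b=1.5, c=0.5):
    """Hypothetical power-law form R = a * h^b * v^c (illustration only)."""
    return a * h_ice**b * v_ship**c

EQUATIONS = {
    "placeholder A": lambda h, v: placeholder_equation(h, v, a=1.0),
    "placeholder B": lambda h, v: placeholder_equation(h, v, a=1.3, c=0.7),
}

def compare(thicknesses, speeds):
    """Tabulate each equation over the same thickness/speed grid."""
    H, V = np.meshgrid(thicknesses, speeds, indexing="ij")
    return {name: eq(H, V) for name, eq in EQUATIONS.items()}

if __name__ == "__main__":
    tables = compare(np.array([0.5, 1.0, 1.5]), np.array([2.0, 4.0, 6.0]))
    for name, table in tables.items():
        print(name, "\n", np.round(table, 2))
```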

Temporal Prediction Structure and Motion Estimation Method based on the Characteristic of the Motion Vectors (시간적 예측 구조와 움직임 벡터의 특성을 이용한 움직임 추정 기법)

  • Yoon, Hyo Sun;Kim, Mi Young
    • Journal of Korea Multimedia Society / v.18 no.10 / pp.1205-1215 / 2015
  • Efficient coding techniques are needed to reduce the complexity of multi-view video, which increases in proportion to the number of cameras. To reduce the complexity while maintaining image quality and bit rates, a motion estimation method and a temporal prediction structure are proposed in this paper. The proposed motion estimation method exploits the characteristics of the motion vector distribution and the motion direction and size of the block to place search points and decide the search pattern adaptively. The proposed prediction structure divides every GOP to decide the maximum index of the hierarchical B layers and the number of pictures in each B layer. Experimental results show that the proposed temporal prediction structure and motion estimation method reduce complexity by up to 45∼70% compared with the hierarchical B pictures prediction structure and the TZ search method used in the JMVC (Joint Multi-view Video Coding) reference model, while maintaining similar video quality and bit rates.
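
The notion of hierarchical B layers within a GOP can be sketched as below. The dyadic splitting rule and GOP size are assumptions for illustration of the general structure, not the paper's proposed division.

```python
# Sketch of a dyadic hierarchical-B layer assignment within one GOP
# (an illustration of the general structure, not the proposed division rule).
def hierarchical_b_layers(gop_size=8):
    """Return {picture index in display order: B-layer index}; index 0 and
    gop_size are the key pictures that anchor the GOP."""
    layers = {0: 0, gop_size: 0}
    def split(lo, hi, layer):
        if hi - lo < 2:
            return
        mid = (lo + hi) // 2
        layers[mid] = layer                   # middle picture of the interval
        split(lo, mid, layer + 1)
        split(mid, hi, layer + 1)
    split(0, gop_size, 1)
    return dict(sorted(layers.items()))

if __name__ == "__main__":
    for poc, layer in hierarchical_b_layers(8).items():
        print(f"picture {poc}: layer {layer}")
```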

View-Invariant Body Pose Estimation based on Biased Manifold Learning (편향된 다양체 학습 기반 시점 변화에 강인한 인체 포즈 추정)

  • Hur, Dong-Cheol;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications / v.36 no.11 / pp.960-966 / 2009
  • A manifold is used to represent the relationship between high-dimensional data samples in a low-dimensional space, and in human pose estimation a manifold is built in a low-dimensional space for processing image and 3D body configuration data. Manifold learning is the process of building such a manifold, but it is vulnerable to silhouette variations, which occur due to view changes, person changes, distance changes, and noise; representing all such variations in a single manifold is impossible. In this paper, we focus on the silhouette variation problem caused by view changes. Previous view-invariant pose estimation methods based on manifold learning took one of two approaches: modeling manifolds for all viewpoints, or extracting view factors from mapping functions. However, because of unsupervised learning, these methods do not support one-to-one mapping between silhouettes and the corresponding body configurations, and modeling the manifolds and extracting view factors are very complex. We therefore propose a method based on three manifolds: a view manifold, a pose manifold, and a body configuration manifold. To build the manifolds, we employ biased manifold learning. After building the manifolds, we learn mapping functions among the spaces (2D image space, pose manifold space, view manifold space, body configuration manifold space, and 3D body configuration space). In our experiments, we could estimate various body poses from 24 viewpoints.
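
The chain of learned mapping functions between spaces can be pictured with a small sketch. The closed-form ridge regressors and random training data below are stand-ins for illustration; they are not the paper's biased manifold learning or its actual mapping functions.

```python
# Illustrative chain of learned mappings between spaces (silhouette features ->
# pose-manifold coordinates -> 3D body configuration). Ridge regression and
# random data are stand-ins, not the paper's biased manifold learning.
import numpy as np

def fit_ridge(X, Y, lam=1e-3):
    """Closed-form ridge regression mapping X (n, d) to Y (n, k)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def apply_map(W, x):
    return x @ W

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    silhouettes = rng.normal(size=(200, 50))      # hypothetical silhouette features
    pose_coords = rng.normal(size=(200, 3))       # hypothetical pose-manifold coords
    body_config = rng.normal(size=(200, 30))      # hypothetical 3D body configuration

    W_img_to_pose = fit_ridge(silhouettes, pose_coords)
    W_pose_to_body = fit_ridge(pose_coords, body_config)

    # Estimate a body configuration for a new silhouette by chaining the maps.
    new_silhouette = rng.normal(size=(1, 50))
    estimate = apply_map(W_pose_to_body, apply_map(W_img_to_pose, new_silhouette))
    print(estimate.shape)
```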

Multi-view Video Coding using View Interpolation (영상 보간을 이용한 다시점 비디오 부호화 방법)

  • Lee, Cheon;Oh, Kwan-Jung;Ho, Yo-Sung
    • Journal of Broadcast Engineering / v.12 no.2 / pp.128-136 / 2007
  • Since multi-view video is a set of video sequences captured by multiple array cameras for the same three-dimensional scene, it can provide multiple viewpoint images using geometric manipulation and intermediate view generation. Although multi-view video provides a more realistic experience with a wide range of images, the amount of data to be processed increases in proportion to the number of cameras, so efficient coding methods are needed. One possible approach to multi-view video coding is to generate an intermediate image using a view interpolation method and to use the interpolated image as an additional reference frame. The previous view interpolation method for multi-view video coding employs fixed-size block matching over a pre-determined disparity search range; however, if the disparity search range is not chosen properly, disparity errors may occur. In this paper, we propose an efficient view interpolation method using initial disparity estimation, variable block-based estimation, and pixel-level estimation with adjusted search ranges. In addition, we propose a multi-view video coding method based on H.264/AVC that exploits the intermediate image. With the proposed method, intermediate images are improved by about 1~4 dB compared to the previous view interpolation method, and the coding efficiency is improved by about 0.5 dB compared to the reference model.
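
A minimal sketch of disparity-compensated intermediate-view generation between two rectified views is shown below. The half-disparity shift and simple averaging are a deliberate simplification for illustration, not the paper's multi-stage (initial, variable-block, pixel-level) method.

```python
# Simplified intermediate-view synthesis between two rectified views (a stand-in
# for the paper's multi-stage interpolation): each middle-view pixel averages the
# left pixel shifted by half the disparity and the right pixel shifted the other way.
import numpy as np

def interpolate_middle_view(left, right, disparity):
    """left, right: (H, W) images; disparity: (H, W) left-to-right disparity map."""
    h, w = left.shape
    mid = np.zeros_like(left, dtype=float)
    for y in range(h):
        for x in range(w):
            d = int(round(disparity[y, x] / 2))
            xl = min(max(x + d, 0), w - 1)    # sample from the left view
            xr = min(max(x - d, 0), w - 1)    # sample from the right view
            mid[y, x] = 0.5 * (left[y, xl] + right[y, xr])
    return mid

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    right = rng.integers(0, 255, size=(16, 32)).astype(float)
    left = np.roll(right, 4, axis=1)                  # constant disparity of 4 pixels
    disparity = np.full_like(right, 4.0)
    middle = interpolate_middle_view(left, right, disparity)
    # Error against the ideal half-shifted view (small except at wrap-around edges).
    print(middle.shape, float(np.abs(middle - np.roll(right, 2, axis=1)).mean()))
```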

A Distributed Real-time 3D Pose Estimation Framework based on Asynchronous Multiviews

  • Hwang, Taemin;Kim, Jieun;Kim, Minjoon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.2 / pp.559-575 / 2023
  • 3D human pose estimation is widely applied in various fields, including action recognition, sports analysis, and human-computer interaction, and it has achieved significant progress with the introduction of convolutional neural networks (CNNs). Recently, several studies have proposed multi-view approaches to avoid the occlusions that affect single-view approaches. However, as the number of cameras increases, a 3D pose estimation system relying on a CNN may lack computational resources. In addition, when a single host system uses multiple cameras, the data transmission speed becomes inadequate owing to bandwidth limitations. To address this problem, we propose a distributed real-time 3D pose estimation framework based on asynchronous multiple cameras. The proposed framework comprises a central server and multiple edge devices. Each edge device estimates a 2D human pose from its view and sends it to the central server. Subsequently, the central server synchronizes the received 2D human pose data based on the timestamps. Finally, the central server reconstructs a 3D human pose using geometric triangulation. We demonstrate that the proposed framework increases the percentage of detected joints and successfully estimates 3D human poses in real time.
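
The server-side steps described above (matching per-view 2D poses by timestamp, then triangulating each joint) can be sketched as follows. The data layout, synchronization tolerance, and toy camera matrices are assumptions for illustration, not the framework's actual interfaces.

```python
# Sketch of the central-server side (illustrative assumptions: data layout, sync
# tolerance, camera matrices): group per-view 2D joints whose timestamps fall
# within a tolerance, then triangulate each joint by linear (DLT) least squares.
import numpy as np

def synchronize(streams, tol=0.02):
    """streams: list of lists of (timestamp, joints_2d) per edge device.
    Returns groups containing one sample from every stream within `tol` seconds."""
    groups = []
    for t_ref, joints_ref in streams[0]:
        group = [joints_ref]
        for other in streams[1:]:
            match = min(other, key=lambda s: abs(s[0] - t_ref))
            if abs(match[0] - t_ref) <= tol:
                group.append(match[1])
        if len(group) == len(streams):
            groups.append(group)
    return groups

def triangulate_point(projections, points_2d):
    """DLT triangulation of one joint from several views.
    projections: list of (3, 4) camera matrices; points_2d: list of (x, y)."""
    rows = []
    for P, (x, y) in zip(projections, points_2d):
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    X = Vt[-1]
    return X[:3] / X[3]

if __name__ == "__main__":
    # Two toy cameras: identity intrinsics, second camera offset along x.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
    point = np.array([0.3, -0.2, 4.0, 1.0])
    uv1 = (P1 @ point)[:2] / (P1 @ point)[2]
    uv2 = (P2 @ point)[:2] / (P2 @ point)[2]
    streams = [[(0.000, [uv1])], [(0.005, [uv2])]]   # one single-joint pose per device
    for group in synchronize(streams):
        joint = triangulate_point([P1, P2], [view[0] for view in group])
        print(np.round(joint, 3))                    # recovers (0.3, -0.2, 4.0)
```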

2D-3D Pose Estimation using Multi-view Object Co-segmentation (다시점 객체 공분할을 이용한 2D-3D 물체 자세 추정)

  • Kim, Seong-heum;Bok, Yunsu;Kweon, In So
    • The Journal of Korea Robotics Society / v.12 no.1 / pp.33-41 / 2017
  • We present a region-based approach for the accurate pose estimation of small mechanical components. Our algorithm consists of two key phases: multi-view object co-segmentation and pose estimation. In the first phase, we describe an automatic method to extract binary masks of a target object captured from multiple viewpoints. For initialization, we assume the target object is bounded by a convex volume of interest defined by a few user inputs. The co-segmented target object shares the same geometric representation in space and has color models distinct from those of the backgrounds. In the second phase, we retrieve a 3D model instance with the correct upright orientation and estimate the relative pose of the object observed in the images. Our energy function, combining region and boundary terms for the proposed measures, maximizes the overlap in regions and boundaries between the multi-view co-segmentations and the projected masks of the reference model. Based on high-quality co-segmentations consistent across all viewpoints, our final results are accurate model indices and pose parameters of the extracted object. We demonstrate the effectiveness of the proposed method using various examples.
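
The region term of such an energy can be pictured as a simple overlap score between the multi-view co-segmentation masks and the masks obtained by projecting the model at a candidate pose. The mean-IoU scoring, toy masks, and `render_fn` interface below are illustrative assumptions, not the paper's energy function.

```python
# Illustrative overlap score (a stand-in for the paper's region term): prefer the
# candidate pose whose projected model masks best overlap the multi-view
# co-segmentation masks, measured by mean IoU over all views.
import numpy as np

def mean_iou(coseg_masks, projected_masks):
    """Both arguments: lists of boolean (H, W) masks, one per view."""
    scores = []
    for seg, proj in zip(coseg_masks, projected_masks):
        inter = np.logical_and(seg, proj).sum()
        union = np.logical_or(seg, proj).sum()
        scores.append(inter / union if union else 0.0)
    return float(np.mean(scores))

def best_pose(coseg_masks, render_fn, candidate_poses):
    """render_fn(pose) -> list of projected model masks, one per view (assumed given)."""
    return max(candidate_poses, key=lambda p: mean_iou(coseg_masks, render_fn(p)))

if __name__ == "__main__":
    # Toy example: the "object" is a square; candidate poses shift it horizontally.
    def render(shift):
        m = np.zeros((20, 20), dtype=bool)
        m[5:15, 5 + shift:15 + shift] = True
        return [m]                                   # single toy view

    coseg = render(2)                                # pretend co-segmentation result
    print(best_pose(coseg, render, candidate_poses=[0, 1, 2, 3]))   # -> 2
```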