• Title/Summary/Keyword: 시점


On-Line Determination Steady State in Simulation Output (시뮬레이션 출력의 안정상태 온라인 결정에 관한 연구)

  • 이영해;정창식;경규형
    • Proceedings of the Korea Society for Simulation Conference
    • /
    • 1996.05a
    • /
    • pp.1-3
    • /
    • 1996
  • In the analysis of systems using simulation techniques, automation of the experiments is an area in which much research and development is currently in progress. In the simulation of computer and communication systems, for example, automated control of the experiments is required when simulations must be run for a large number of models. If the experimental procedure is not automated with respect to the number of replications, the run length, and the data-collection method, the time and human resources needed for the simulation experiments grow considerably, and analysis of the output data also becomes difficult. To automate the experimental procedure and analyze simulation output efficiently, the problem of removing the initial bias that arises in every simulation run must be solved first: only when the data used for output analysis are collected in the steady state, free of initial bias, can the real system be interpreted correctly. In practice, the most important and most difficult problem in simulation output analysis is to estimate the steady-state mean of the stochastic process formed by the output data, together with a confidence interval (c.i.) for that mean; the information contained in a confidence interval tells the decision maker how accurately the mean can be estimated. However, because the output data obtained from a single simulation run are generally nonstationary and autocorrelated, traditional statistical techniques cannot be applied directly, and simulation output-analysis techniques are used to deal with this problem. This paper presents two new techniques for finding the truncation point at which output data must be discarded to remove the initial bias: one based on the Euclidean distance (ED) and one based on the backpropagation neural network (BNN) algorithm widely used for pattern classification. Unlike most existing techniques, neither method requires a pilot run, and both can determine the truncation point during a single simulation run. Existing work on truncation points is as follows. Conway's rule selects the current observation as the truncation point if it is neither the maximum nor the minimum of the subsequent observations; by the structure of the algorithm, the truncation point cannot be determined on line. Whereas Conway's rule cannot operate on line, the Modified Conway Rule (MCR) can, because it selects the current observation as the truncation point if it is neither the maximum nor the minimum of the preceding observations. The Crossings-of-the-Mean Rule (CMR) uses the cumulative mean and decides on the basis of the number of times the observations cross that mean from above or below; the required number of crossings must be chosen in advance, and a chosen value is generally not applicable across different systems. The Cumulative-Mean Rule (CMR2) plots the grand cumulative mean of the output data obtained from several pilot runs and determines the steady-state point visually; because it uses the cumulative mean of the means of data from several runs, on-line determination is impossible and the analyst must decide arbitrarily from the graph. Welch's Method (WM) uses a Brownian bridge statistic, exploiting the property that, as n approaches infinity, the statistic converges to the Brownian bridge distribution; batches are formed from the output data and one batch is used as the sample. Its algorithm is complicated and certain values must be estimated. The Law-Kelton Method (LKM) is based on regression theory: after the simulation ends, a regression line is fitted to the cumulative-mean data, and the point at which the null hypothesis that the slope is zero is accepted is taken as the truncation point; because the data are processed, after the run ends, in the reverse of the order in which they were collected, on-line operation is impossible. Welch's Procedure (WP) determines the truncation point visually from moving averages of data collected over five or more simulation runs and uses an iterative deletion procedure, so on-line determination is impossible; in addition, the window size of the moving average must be chosen. As this review shows, existing methods are weak with respect to on-line determination of the truncation point during a single simulation run. Moreover, current commercial simulation software leaves the truncation point to the analyst's discretion, so it cannot be determined accurately and quantitatively for the system under study; an arbitrary choice not only makes it difficult to handle the initial-bias problem effectively but also increases the chance of discarding far more data than necessary, or too little to remove the initial bias. Finally, most existing methods require pilot runs to find the truncation point, that is, simulation runs whose only purpose is to locate the steady-state point; because these runs are not used for the output analysis, the loss of time is substantial.
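
As a concrete illustration of the on-line idea, the following is a minimal sketch of the Modified Conway Rule (MCR) as summarized in the abstract, assuming the rule takes as the truncation point the first observation that lies strictly between the running minimum and maximum of the preceding data; the function name and this exact stopping test are illustrative and are not the paper's ED or BNN procedure.

```python
def mcr_truncation_point(stream):
    """Sketch of the Modified Conway Rule: return the index of the first
    observation that is neither a new running maximum nor a new running
    minimum of the data seen so far (an on-line, single-run test)."""
    running_min = float("inf")
    running_max = float("-inf")
    for i, x in enumerate(stream):
        if running_min < x < running_max:
            return i                      # truncation point found on line
        running_min = min(running_min, x)
        running_max = max(running_max, x)
    return None                           # every observation was an extreme
```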


View-Invariant Body Pose Estimation based on Biased Manifold Learning (편향된 다양체 학습 기반 시점 변화에 강인한 인체 포즈 추정)

  • Hur, Dong-Cheol;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications
    • /
    • v.36 no.11
    • /
    • pp.960-966
    • /
    • 2009
  • A manifold represents the relationship between high-dimensional data samples in a low-dimensional space. In human pose estimation, a manifold is built in a low-dimensional space to process image data and 3D body-configuration data, and manifold learning is the process of building it. Manifold learning, however, is vulnerable to silhouette variations, which arise from view changes, person changes, distance changes, and noise, and representing all such variations in a single manifold is impossible. In this paper we focus on the silhouette variations caused by view changes. Previous manifold-learning-based approaches to view-invariant pose estimation follow two strategies: modeling a manifold for every viewpoint, or extracting view factors from the mapping functions. Because they rely on unsupervised learning, these methods do not provide a one-to-one mapping between silhouettes and the corresponding body configurations, and modeling the manifolds and extracting the view factors are very complex. We therefore propose a method based on three manifolds: a view manifold, a pose manifold, and a body-configuration manifold. The manifolds are built with biased manifold learning, and mapping functions are then learned among the spaces (the 2D image space, the pose manifold space, the view manifold space, the body-configuration manifold space, and the 3D body-configuration space). In our experiments, we could estimate various body poses from 24 viewpoints.
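
The abstract does not give the biasing rule, so the following is only a rough sketch of the general idea behind biased manifold learning, assuming a simple multiplicative distance bias and an MDS embedding; the function name, the weighting scheme, and the use of MDS are assumptions, not the authors' formulation.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.metrics import pairwise_distances

def biased_manifold_embedding(features, poses, n_components=3, alpha=1.0):
    """Toy sketch of biased manifold learning: inflate pairwise distances
    between silhouette features in proportion to how different the
    corresponding 3D body configurations are, then embed with MDS so that
    samples with similar poses stay close in the low-dimensional manifold."""
    d_feat = pairwise_distances(np.asarray(features))   # silhouette distances
    d_pose = pairwise_distances(np.asarray(poses))      # body-configuration distances
    d_pose = d_pose / (d_pose.max() + 1e-12)            # normalise the bias term
    biased = d_feat * (1.0 + alpha * d_pose)            # multiplicative bias (assumed)
    mds = MDS(n_components=n_components, dissimilarity="precomputed")
    return mds.fit_transform(biased)
```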

Digital Watermarking on Image for View-point Change and Malicious Attacks (영상의 시점변화와 악의적 공격에 대한 디지털 워터마킹)

  • Kim, Bo-Ra;Seo, Young-Ho;Kim, Dong-Wook
    • Journal of Broadcast Engineering
    • /
    • v.19 no.3
    • /
    • pp.342-354
    • /
    • 2014
  • This paper deals with digital watermarking methods for protecting the ownership of images, targeting ultra-high multi-view or free-view image services in which an arbitrary-viewpoint image is rendered at the user side. Its main purpose is not to propose a method superior to previous ones but to show how difficult it is to construct a watermarking scheme that survives a viewpoint-translation attack; we therefore target images subjected to various attacks, including viewpoint translation. The paper first shows how high the error rate of the extracted watermark becomes for viewpoint-translated images under two basic 2D-image schemes, one using the 2D discrete cosine transform (2D-DCT) and one using the 2D discrete wavelet transform (2D-DWT). Because the difficulty of watermarking a viewpoint-translated image stems from not knowing the translated viewpoint, we propose a scheme that finds the translated viewpoint using the image and the corresponding depth information at the original viewpoint. This scheme is used to construct the two proposed non-blind watermarking methods, and it shows that recovering the viewpoint greatly affects the error rate of the extracted watermark. By comparing the performance of the proposed methods with that of previous ones, we show that the proposed methods are better in invisibility and robustness, even though they are non-blind.
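
The abstract names 2D-DCT embedding only as a baseline, so the sketch below is merely a generic illustration of that kind of scheme, assuming one bit is embedded per 8x8 block in a mid-band coefficient; the block size, coefficient position, and strength are arbitrary choices, not the paper's parameters.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_watermark_dct(image, bits, strength=8.0, block=8):
    """Generic baseline sketch: embed one watermark bit per 8x8 block by
    nudging a mid-band 2D-DCT coefficient up (bit 1) or down (bit 0)."""
    out = image.astype(np.float64).copy()
    h, w = out.shape
    idx = 0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            if idx >= len(bits):
                return np.clip(out, 0, 255).astype(np.uint8)
            coeffs = dctn(out[y:y + block, x:x + block], norm="ortho")
            coeffs[3, 4] += strength if bits[idx] else -strength  # mid-band position (assumed)
            out[y:y + block, x:x + block] = idctn(coeffs, norm="ortho")
            idx += 1
    return np.clip(out, 0, 255).astype(np.uint8)
```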

Deep learning-based Multi-view Depth Estimation Methodology of Contents' Characteristics (다 시점 영상 콘텐츠 특성에 따른 딥러닝 기반 깊이 추정 방법론)

  • Son, Hosung;Shin, Minjung;Kim, Joonsoo;Yun, Kug-jin;Cheong, Won-sik;Lee, Hyun-woo;Kang, Suk-ju
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2022.06a
    • /
    • pp.4-7
    • /
    • 2022
  • Multi-view depth estimation methods that use deep learning networks for 3D scene reconstruction have recently attracted much attention. Multi-view video content varies widely in its camera composition, capture environment, and camera setting, and understanding these characteristics and applying the appropriate depth estimation method is essential for high-quality 3D reconstruction. The camera setting is characterized by the physical distance between camera viewpoints, called the baseline. The proposed methodology focuses on choosing an appropriate depth estimation approach according to the characteristics of the multi-view content. Our empirical results reveal limitations of existing multi-view depth estimation methods when they are applied to divergent or large-baseline datasets, and they confirm the need to use a proper number of source views and a source-view selection algorithm suited to each dataset's capturing environment. The results of this study can therefore serve as a guideline for choosing adaptive depth estimation methods when implementing a deep-learning-based depth estimation network for 3D scene reconstruction.
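
The abstract does not specify its selection rule, so the following is only a toy illustration of baseline-aware source-view selection, assuming camera optical centres are known and closer views are preferred; the function and its parameters are hypothetical.

```python
import numpy as np

def select_source_views(ref_center, candidate_centers, num_views=4):
    """Toy baseline-aware source-view selection: rank candidate cameras by
    the distance of their optical centres from the reference view (i.e. the
    baseline) and keep the closest ones as source views."""
    centers = np.asarray(candidate_centers, dtype=np.float64)
    baselines = np.linalg.norm(centers - np.asarray(ref_center, dtype=np.float64), axis=1)
    order = np.argsort(baselines)[:num_views]
    return order, baselines[order]
```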


Performance Analysis of 3D-HEVC Video Coding (3D-HEVC 비디오 부호화 성능 분석)

  • Park, Daemin;Choi, Haechul
    • Journal of Broadcast Engineering
    • /
    • v.19 no.5
    • /
    • pp.713-725
    • /
    • 2014
  • Multi-view and 3D video technologies for next-generation video services are being widely studied. By supporting various views, these technologies give users a realistic viewing experience. Because acquiring and transmitting a large number of views is costly, the main challenges for multi-view and 3D video are view synthesis, video coding, and depth coding. Recently, JCT-3V (Joint Collaborative Team on 3D Video Coding Extension Development) has been developing a new standard for multi-view and 3D video. In this paper, the major tools adopted in this standard are introduced and evaluated in terms of coding efficiency and complexity. This performance analysis should be helpful for developing a fast 3D video encoder as well as new 3D video coding algorithms.
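
The abstract does not state its measurement procedure, but coding-efficiency comparisons of this kind are commonly reported with the Bjontegaard delta rate; the sketch below is a generic BD-rate computation under that assumption, not a description of the paper's evaluation.

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta rate: fit cubic polynomials of log-rate as a
    function of PSNR for both codecs, integrate over the overlapping PSNR
    range, and report the average bitrate difference in percent."""
    log_a = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)
    log_t = np.polyfit(psnr_test, np.log(rate_test), 3)
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_a = np.polyint(log_a)
    int_t = np.polyint(log_t)
    avg_a = (np.polyval(int_a, hi) - np.polyval(int_a, lo)) / (hi - lo)
    avg_t = (np.polyval(int_t, hi) - np.polyval(int_t, lo)) / (hi - lo)
    return (np.exp(avg_t - avg_a) - 1.0) * 100.0   # negative means bitrate savings
```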

A Video Streaming Scheme for Minimizing Viewpoint Switching Delay in DASH-based Multi-view Video Services (DASH 기반의 다시점 비디오 서비스에서 시점전환 지연 최소화를 위한 비디오 전송 기법)

  • Kim, Sangwook;Yun, Dooyeol;Chung, Kwangsue
    • Journal of KIISE
    • /
    • v.43 no.5
    • /
    • pp.606-612
    • /
    • 2016
  • A DASH (Dynamic Adaptive Streaming over HTTP)-based multi-view video service switches among multiple video streams captured by multiple cameras according to the viewpoint or object selected by the user. The problem is that the conventional DASH-based multi-view service takes a long time to switch viewpoints, because it switches to the new video stream only after consuming all buffered segments of the previous stream. In this paper, we propose a video streaming scheme that minimizes the viewpoint switching delay in DASH-based multi-view video services. The proposed scheme configures the video streams by controlling the GoP (Group of Pictures) size and controls the client buffer based on bandwidth estimation and playback buffer occupancy. Experimental results show that the proposed scheme reduces the viewpoint switching delay.
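
The abstract only names bandwidth estimation and buffer occupancy as control inputs, so the following is a hypothetical sketch of that kind of client-side logic; the threshold, safety margin, and policy are assumptions, not the paper's scheme.

```python
def next_segment_bitrate(est_bandwidth_kbps, buffer_sec, ladder_kbps,
                         low_buffer_sec=4.0, safety=0.8):
    """Toy client-side control in the spirit of the abstract: right after a
    viewpoint switch the playback buffer is nearly empty, so request the
    lowest representation to refill quickly; otherwise pick the highest
    bitrate that fits within the estimated bandwidth budget."""
    if buffer_sec < low_buffer_sec:
        return min(ladder_kbps)                 # refill fast after a switch
    budget = est_bandwidth_kbps * safety        # headroom for estimation error
    feasible = [r for r in ladder_kbps if r <= budget]
    return max(feasible) if feasible else min(ladder_kbps)
```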

A Study on the Beginning Time of Flashing Green Signals for Pedestrians (보행신호등 녹색점멸신호의 시작시점에 관한 연구)

  • Shim, Kywan-Bho;Ko, Myoung-Soo;Kim, Jeong-Hyun
    • Journal of Korean Society of Transportation
    • /
    • v.26 no.5
    • /
    • pp.91-100
    • /
    • 2008
  • Pedestrians are exposed to accidents because they do not fully understand the meaning of a flashing green signal. This study was designed to relate changes in pedestrians' crossing behavior to the timing of the flashing green signal. A field survey was conducted on pedestrian preference and safety, and the findings were examined in a signal-operation experiment in which two new pedestrian signal timings were compared with the existing timing. The results indicated that the number of pedestrians who start to cross during the flashing green signal decreased significantly when the flashing green started at the 1/2 or 2/3 point of the crossing, whereas the number of pedestrians remaining in the crosswalk during the red signal increased significantly when it started at the 2/3 point. The study concludes that starting the flashing green signal at the 1/2 point of the crossing is the safest option, and implications and directions for practical application are discussed.

Boundary Noise Removal and Hole Filling Algorithm for Virtual Viewpoint Image Generation (가상시점 영상 생성을 위한 경계 잡음 제거와 홀 채움 기법)

  • Ko, Min-Soo;Yoo, Ji-Sang
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.8A
    • /
    • pp.679-688
    • /
    • 2012
  • In this paper, a performance-improved hole-filling algorithm with a boundary-noise-removal pre-process is proposed for synthesizing an arbitrary view from two given views. Boundary noise usually occurs because of boundary mismatch between the reference image and its depth map, and the common hole is defined as the occluded region. The boundary noise and common holes created while synthesizing a virtual view cause visible defects that are very difficult to recover completely using only the two given reference images. A spiral weighted average algorithm produces clear object boundaries by using depth information, while a gradient searching algorithm preserves details. We combine these two algorithms with a weighting factor α so that the strength of each is reflected effectively in the virtual view synthesis process. Experimental results show that the proposed algorithm performs much better than conventional algorithms.
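
The abstract states only that the two fills are combined with a weighting factor α; the sketch below assumes a simple pixel-wise linear blend inside the hole regions, which is an illustration of the idea rather than the paper's exact combination.

```python
import numpy as np

def blend_hole_fill(original, spiral_fill, gradient_fill, hole_mask, alpha=0.5):
    """Sketch of the weighted combination: inside the common-hole region the
    two candidate fills are mixed with a weighting factor alpha; outside the
    holes the original synthesized pixels are kept unchanged."""
    mixed = (alpha * spiral_fill.astype(np.float64)
             + (1.0 - alpha) * gradient_fill.astype(np.float64))
    out = np.where(hole_mask, mixed, original.astype(np.float64))
    return out.astype(original.dtype)
```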

Differences of Fun and Immersion according to Game User Interfaces in the Virtual Space (게임의 가상공간 환경에서 사용자 인터페이스 속성에 따른 재미와 몰입감 차이)

  • Kim, Ki-Yoon;Lee, Ju-Hwan
    • Journal of Digital Contents Society
    • /
    • v.18 no.8
    • /
    • pp.1489-1494
    • /
    • 2017
  • The purpose of this study is to examine, through previous research and case studies, the limitations of virtual-space game content that remains confined to the first-person viewpoint across various digital game genres, and to investigate differences in immersion, presence, and fun according to user-interface attributes such as viewpoint (first-person or third-person) and haptic feedback. The experimental results show that immersion, presence, and fun increase when the game is played in the first person with haptic feedback, but no interaction effect between the virtual space environment and the game viewpoint was found. It is therefore appropriate to use haptic feedback and a first-person viewpoint in virtual reality games; however, given that the game viewpoint and the virtual space environment are not strongly related and that third-person games are more popular than first-person games, the possibility of a third-person viewpoint game is also suggested.

The Design of Spatial-Temporal Prediction Filter for saving resources on the view navigation of a panoramic video service (파노라마 영상에서 효율적인 시점탐색을 위한 시공간 비디오 스트림 예측 필터 설계 방법에 관한 연구)

  • Seok, Joo-Myoung;Cho, Yong-Woo
    • Journal of Advanced Navigation Technology
    • /
    • v.17 no.6
    • /
    • pp.757-764
    • /
    • 2013
  • A panoramic video, which immerses viewers by covering a field of view (FOV) wider than the human visual angle, requires an interactive viewing method, such as selecting a target view point from among its many view points, because the whole panorama cannot be viewed at once under limited viewing environments and bandwidth. When a user navigates the view to select a view point, resources such as bandwidth are wasted on transmitting video data for unnecessary view points. This paper therefore proposes a spatial-temporal prediction filter (STPF), based on the direction and velocity of the view navigation, that transmits only the necessary video data. Simulation results show that, compared with conventional methods, STPF achieves bitrate saving rates of 6% to 37% in interactive panoramic video streaming services that require high bandwidth.
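
The abstract says the filter predicts from the direction and velocity of the view navigation; the sketch below assumes a simple linear extrapolation of the viewing direction and is an illustration of that prediction idea, not the STPF design itself.

```python
def predict_viewport(center_deg, velocity_deg_per_s, lookahead_s=0.5, fov_deg=90.0):
    """Toy spatial prediction: extrapolate the viewing direction from its
    current angular velocity and return the horizontal range (in degrees)
    of the panorama that should be transmitted for the predicted viewport."""
    predicted_center = (center_deg + velocity_deg_per_s * lookahead_s) % 360.0
    half = fov_deg / 2.0
    return (predicted_center - half) % 360.0, (predicted_center + half) % 360.0
```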