• Title/Summary/Keyword: 2D frames


A Parallel Hardware Architecture for H.264/AVC Deblocking Filter (H.264/AVC를 위한 블록현상 제거필터의 병렬 하드웨어 구조)

  • Jeong, Yong-Jin;Kim, Hyun-Jip
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.43 no.10 s.352
    • /
    • pp.45-53
    • /
    • 2006
  • In this paper, we propose a parallel hardware architecture for the deblocking filter in H.264/AVC. The deblocking filter is highly effective in H.264/AVC, but it also has high computational complexity. For real-time video processing, we chose an architecture with two parallel 1-D filters and reduced memory accesses by using dual-port SRAM. The proposed architecture has been described in Verilog-HDL and synthesized to a Hynix 0.25 μm CMOS cell library using Synopsys Design Compiler. The hardware size was about 27.3K logic gates (excluding on-chip memory) and the maximum operating frequency was 100 MHz. It takes 258 clock cycles to process one macroblock, which means it can process 47.8 HD 1080p (1920 x 1080 pixel) frames per second. This suggests it can be used for real-time H.264/AVC encoding and decoding in various multimedia applications.
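
A quick back-of-the-envelope check of the quoted throughput (a sketch, not from the paper; it assumes 16x16 macroblocks and a frame padded to 1088 lines):

```python
# Sanity check of the reported deblocking-filter throughput, assuming 16x16
# macroblocks and the clock figures quoted in the abstract.
clock_hz = 100e6          # maximum operating frequency
clocks_per_mb = 258       # clock cycles to filter one macroblock

# HD 1080p padded to a multiple of 16 lines (1920 x 1088) gives 120 x 68 = 8160 MBs.
mbs_per_frame = (1920 // 16) * (1088 // 16)

fps = (clock_hz / clocks_per_mb) / mbs_per_frame
print(f"{fps:.1f} frames per second")   # ~47.5, in line with the ~47.8 quoted above
```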

A Moving Camera Localization using Perspective Transform and Klt Tracking in Sequence Images (순차영상에서 투영변환과 KLT추적을 이용한 이동 카메라의 위치 및 방향 산출)

  • Jang, Hyo-Jong;Cha, Jeong-Hee;Kim, Gye-Young
    • The KIPS Transactions:PartB
    • /
    • v.14B no.3 s.113
    • /
    • pp.163-170
    • /
    • 2007
  • In autonomous navigation of a mobile vehicle or mobile robot, localization computed from recognizing the environment is the most important factor. In general, the position and pose of a camera-equipped mobile vehicle or mobile robot can be determined using INS and GPS, but in that case enough known ground landmarks are needed for accurate localization. In contrast to the homography method, which computes the position and pose of a camera using only the relation of two-dimensional feature points between two frames, this paper proposes a method that computes the position and pose of a camera from the relation between the locations predicted by perspective transform of 3D feature points, obtained by overlaying a 3D model onto the previous frame using GPS and INS input, and the locations of the corresponding feature points found in the current frame by KLT tracking. For the performance evaluation, we used a wireless-controlled vehicle mounted with a CCD camera, GPS, and INS, and computed the location and rotation angle of the camera from a video sequence captured at a 15 Hz frame rate.
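
For illustration, a minimal OpenCV sketch (not the authors' implementation) of the two ingredients named in the abstract: KLT tracking of the model feature points into the current frame, and camera pose computation from the resulting 3D-2D correspondences (here solvePnP stands in for the authors' own pose calculation):

```python
import cv2
import numpy as np

def track_and_localize(prev_gray, curr_gray, pts_3d, prev_pts_2d, K, dist=None):
    """pts_3d: Nx3 model points; prev_pts_2d: their Nx2 image locations in the previous frame."""
    prev_pts = prev_pts_2d.reshape(-1, 1, 2).astype(np.float32)

    # Pyramidal Lucas-Kanade (KLT) tracking into the current frame.
    curr_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, prev_pts, None)
    good = status.ravel() == 1

    # Camera position and orientation from the 3D-2D correspondences.
    ok, rvec, tvec = cv2.solvePnP(pts_3d[good].astype(np.float64),
                                  curr_pts[good].astype(np.float64), K, dist)
    return rvec, tvec   # rotation (Rodrigues vector) and translation of the camera
```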

The Kinematic Analysis of the Last Approach Stride and Take-off Phase of BKH Athlete in the High Jump (남자 높이뛰기 BKH 선수를 중심으로 한 도움닫기 마지막 1보와 발구름 국면의 운동학적 분석)

  • Yoon, Hee-Joong;Kim, Tae-Sam;Lee, Jin-Taek
    • Korean Journal of Applied Biomechanics
    • /
    • v.15 no.3
    • /
    • pp.105-115
    • /
    • 2005
  • This study investigated the kinematic factors of the last approach stride and the take-off motion in order to improve the skill of BKH, an elite male athlete. The subjects chosen for the study were BKH and KASZCZYK Emillian, male athletes who participated in the 2003 Dae-Gu Universiade Games. Three high-speed video cameras set to 60 frames/s were used to record from the last approach strides to the apex position. After digitizing the motion, the Direct Linear Transformation (DLT) technique was employed to obtain 3-D position coordinates, and the distance, velocity, and angle variables were calculated with Kwon3D 3.1. The following conclusions were drawn: 1. Compared with the successful trial, the approach phase showed a longer stride length as well as faster horizontal and lateral velocity. For a consistent approach rhythm, the subject should take a shorter stride to obtain braking force with a lowered COG during the approach phase. 2. The body lean angle was small because of a high COG during the take-off phase. To gain vertical displacement of the COG and enough space from the bar after take-off, the subject should increase the body lean angle. 3. To gain vertical force during the take-off phase, the subject should keep the knee joint as straight as possible, so that enough braking force can be obtained at the approach landing.
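
For reference, a minimal sketch of DLT-based 3-D point reconstruction (a generic 11-parameter formulation, not the authors' Kwon3D workflow), assuming each camera's DLT coefficients are already known from calibration:

```python
import numpy as np

def dlt_reconstruct(dlt_params, image_pts):
    """dlt_params: list of length-11 coefficient arrays, one per camera;
    image_pts: list of digitized (u, v) coordinates of the same point."""
    A, b = [], []
    for L, (u, v) in zip(dlt_params, image_pts):
        # Each view contributes two linear equations in the unknown (X, Y, Z).
        A.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]])
        A.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]])
        b.extend([u - L[3], v - L[7]])
    # Least-squares solution; two or more cameras give >= 4 equations in 3 unknowns.
    xyz, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return xyz
```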

A Study on the Automation Process of BIM Library Creation of Air Handling Unit - Development of Revit API module for efficiency and uniformity of library creation - (공기조화기의 BIM 라이브러리 생성 자동화 프로세스에 관한 연구 - 라이브러리 생성의 효율성과 통일성 확보를 위한 Revit API 모듈 개발 -)

  • Kim, Han-Joo;Choi, Myung-Hwan;Kim, Jay-Jung
    • Journal of the Architectural Institute of Korea Structure & Construction
    • /
    • v.34 no.4
    • /
    • pp.75-82
    • /
    • 2018
  • A BIM (Building Information Modeling)-based design process can carry a task through all phases, from early design to construction and maintenance. BIM also manages a building's energy efficiently by reflecting the 3D design and the construction life cycle. This paper proposes an efficient process to build a BIM library of air handling units (AHUs). The study analyzes an AHU model to develop a design module and builds a template model from the same 12 parts, including the shapes of ducts, doors, and frames. By considering the direction and the presence or absence of each detailed part shape when building the template model, all shapes of the AHU model can be expressed. Applying parametric modeling to the template model allows quick and precise modification and transformation, which enhances efficiency. A user selects an AHU model in a 2D model selection program and extracts its shape information. The final AHU shape is completed by loading the extracted shape information into the template model and automatically deleting unnecessary shapes. This enables the user to build the AHU BIM library efficiently, since the template model can be modified and transformed quickly and precisely and all AHU model shapes can be expressed.
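
Purely as an illustration of the idea (hypothetical data structures, not the Revit API used in the paper), a sketch of a template whose detailed part shapes are kept, oriented, and re-dimensioned, or deleted, according to the extracted shape information:

```python
# Hypothetical part list and shape-information layout, for illustration only.
TEMPLATE_PARTS = ["casing", "duct_inlet", "duct_outlet", "door_left", "door_right",
                  "frame", "coil", "fan", "filter", "damper", "drain_pan", "base"]

def build_from_template(extracted):
    """extracted: dict part -> {'exists': bool, 'direction': str, 'size': (w, h, d)}."""
    model = {}
    for part in TEMPLATE_PARTS:
        info = extracted.get(part, {"exists": False})
        if not info["exists"]:
            continue                                    # delete unnecessary shapes
        model[part] = {"direction": info["direction"],  # orient the detailed shape
                       "size": info["size"]}            # parametric resize
    return model
```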

Structural Shape Estimation Based on 3D LiDAR Scanning Method for On-site Safety Diagnostic of Plastic Greenhouse (비닐 온실의 현장 안전진단을 위한 3차원 LiDAR 스캔 기법 기반 구조 형상 추정)

  • Seo, Byung-hun;Lee, Sangik;Lee, Jonghyuk;Kim, Dongsu;Kim, Dongwoo;Jo, Yerim;Kim, Yuyong;Lee, Jeongmin;Choi, Won
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.66 no.5
    • /
    • pp.1-13
    • /
    • 2024
  • In this study, we applied an on-site diagnostic method for estimating the structural safety of a plastic greenhouse. A three-dimensional light detection and ranging (3D LiDAR) sensor was used to scan the greenhouse and extract point cloud data (PCD). Differential thresholds of a color index were applied to partitions of the raw PCD to separate the steel frames from the plastic films. Additionally, the K-means algorithm was used to convert the steel-frame PCD into the nodes of unit members. These nodes were subsequently transformed into structural shape data. To verify greenhouse shape reproducibility, the member lengths of the scan and blueprint models were compared with measurements along the X-, Y-, and Z-axes. The scan model was accurate, with errors of 2%-3%, whereas the error of the blueprint model was 5.4%. At a maximum snow depth of 0.5 m, the scan model revealed asymmetric horizontal deflection and extreme bending stress, which indicated that even minor shape irregularities can result in critical failures in extreme weather. The safety factor for bending stress in the scan model was 18.7% lower than that in the blueprint model, indicating that precise shape estimation is crucial for safety diagnostics. Future studies should focus on developing an automated process based on supervised learning to support the widespread adoption of greenhouse safety diagnostics.
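
For illustration, a minimal sketch (assumed data layout and a single global color index, rather than the paper's per-partition differential thresholds) of the two processing steps named above:

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_frame_nodes(points_xyz, colors_rgb, threshold, n_nodes):
    """points_xyz: Nx3 PCD coordinates; colors_rgb: Nx3 colors in [0, 1]."""
    # A simple excess-green-style color index; the threshold direction and value
    # depend on the chosen index and on the film/frame colors.
    r, g, b = colors_rgb[:, 0], colors_rgb[:, 1], colors_rgb[:, 2]
    index = 2 * g - r - b
    frame_pts = points_xyz[index < threshold]     # steel frame vs. plastic film

    # K-means collapses the frame point cloud into representative member nodes.
    km = KMeans(n_clusters=n_nodes, n_init=10, random_state=0).fit(frame_pts)
    return km.cluster_centers_                    # node coordinates for the shape model
```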

Projective Reconstruction Method for 3D modeling from Un-calibrated Image Sequence (비교정 영상 시퀀스로부터 3차원 모델링을 위한 프로젝티브 재구성 방법)

  • Hong Hyun-Ki;Jung Yoon-Yong;Hwang Yong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.42 no.2 s.302
    • /
    • pp.113-120
    • /
    • 2005
  • 3D reconstruction of a scene structure from un-calibrated image sequences has long been one of the central problems in computer vision. For 3D reconstruction in Euclidean space, projective reconstruction, which is classified into merging methods and factorization methods, is needed as a preceding step. By computing all camera projection matrices and the structure at the same time, the factorization method suffers less from error accumulation than merging. However, factorization has difficulty handling long sequences precisely, because it assumes that all correspondences remain visible in every view from the first frame to the last. This paper presents a new projective reconstruction method for recovering 3D structure over long sequences. We break a full sequence into sub-sequences based on a quantitative measure that considers the number of matching points between frames, the homography error, and the distribution of matching points over the frame. All of the projective reconstructions of the sub-sequences are then registered into the same coordinate frame for a complete description of the scene. The experimental results showed that the proposed method can recover more precise 3D structure than the merging method.
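
For illustration, a minimal sketch of the factorization step (a single Sturm-Triggs-style pass with all projective depths set to 1; the abstract does not specify the authors' variant):

```python
import numpy as np

def projective_factorization(points_2d):
    """points_2d: (n_views, n_points, 2) array of correspondences visible in all views."""
    n_views, n_points, _ = points_2d.shape
    # Homogeneous measurement matrix W (3*n_views x n_points) with depths lambda = 1.
    W = np.concatenate([np.vstack([points_2d[i].T, np.ones((1, n_points))])
                        for i in range(n_views)], axis=0)

    # Rank-4 factorization: W ~ P (3*n_views x 4) @ X (4 x n_points).
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    P = U[:, :4] * S[:4]          # stacked projective camera matrices
    X = Vt[:4, :]                 # projective structure (homogeneous points)
    return P.reshape(n_views, 3, 4), X
```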

Embedded SoC Design for H.264/AVC Decoder (H.264/AVC 디코더를 위한 Embedded SoC 설계)

  • Kim, Jin-Wook;Park, Tae-Geun
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.45 no.9
    • /
    • pp.71-78
    • /
    • 2008
  • In this paper, we implement an H.264/AVC baseline decoder through hardware-software partitioning on an FPGA-based target board with an ARM926EJ-S core running embedded Linux kernel 2.4.26. We design several IPs for the time-critical blocks, such as motion compensation, the deblocking filter, and YUV-to-RGB conversion, and they communicate with the host through the AMBA bus protocol. We also try to minimize the number of memory accesses between the IPs and the reference software (JM 11.0), which is ported to the embedded Linux. The proposed IPs and the system have been designed and verified in several stages. The proposed system decodes a QCIF sample video at 2 frames per second with a 24 MHz system clock, and we expect better performance if the proposed system is implemented as an ASIC.
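
For reference, a small software model of one of the offloaded blocks, YUV-to-RGB conversion, written in the integer fixed-point style such a hardware IP typically uses (the BT.601 coefficients here are an assumption; the abstract does not state them):

```python
import numpy as np

def yuv_to_rgb(y, u, v):
    """y, u, v: uint8 arrays of equal shape (chroma already upsampled to 4:4:4)."""
    y = y.astype(np.int32)
    u = u.astype(np.int32) - 128
    v = v.astype(np.int32) - 128
    # 8-bit fixed-point multipliers (coefficient * 256, rounded).
    r = y + ((359 * v) >> 8)                 # 1.402 * V
    g = y - ((88 * u + 183 * v) >> 8)        # 0.344 * U + 0.714 * V
    b = y + ((454 * u) >> 8)                 # 1.772 * U
    return [np.clip(c, 0, 255).astype(np.uint8) for c in (r, g, b)]
```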

Observational Studies on Evolved Stars Using KVN and KaVA/EAVN

  • Cho, Se-Hyung;Yun, Youngjoo;Imai, Hiroshi
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.44 no.2
    • /
    • pp.51.1-51.1
    • /
    • 2019
  • During the commissioning phase of the KVN from 2009 to 2013, single-dish survey and monitoring observations were performed toward about 1000 evolved stars and about 60 relatively strong SiO and H2O maser sources, respectively. Based on these single-dish results and VLBI feasibility test observations at the K/Q/W/D bands in 2014, the KVN Key Science Project (KSP) started in 2015 and will be completed in 2019 as KSP phase I. Here we present an overview of observational studies on evolved stars using the KVN. In KSP phase I, we have focused on nine KSP sources that show successful astrometrically registered maps of SiO and H2O masers obtained with the source frequency phase referencing method. We aim to investigate the spatial structure and the dynamical effects from the 43/42/86/129 GHz SiO to the 22 GHz H2O maser regions associated with stellar pulsation and the development of asymmetry in circumstellar envelopes. Using the combined network KaVA (KVN + the Japanese VLBI network VERA), the KaVA Large Program "Expanded Study on Stellar Masers: ESTEMA Phase I" was carried out from 2015 to 2016. Building on ESTEMA Phase I, the EAVN Large Program "EAVN Synthesis of Stellar Maser Animations: ESTEMA Phase II" has been carried out since 2018. The ESTEMA II project aims to publish composite animations of circumstellar H2O and SiO masers taken from up to 6 long-period variable stars with a variety of pulsation periods (333-1000 days). The animations will exhibit the three-dimensional kinematics of the maser gas clumps, with complexity caused by stellar pulsation-driven shock waves and anisotropy of clump ejections from the stellar surface. Adding three EAVN telescopes (the Tianma 65 m, Nanshan 26 m, and NRO 45 m telescopes) to KaVA ensures the high quality of the maser image frames throughout the monitoring program.


Real-time Depth Image Refinement using Hierarchical Joint Bilateral Filter (계층적 결합형 양방향 필터를 이용한 실시간 깊이 영상 보정 방법)

  • Shin, Dong-Won;Ho, Yo-Sung
    • Journal of Broadcast Engineering
    • /
    • v.19 no.2
    • /
    • pp.140-147
    • /
    • 2014
  • In this paper, we propose a method for real-time depth image refinement. To improve the quality of the depth map acquired from a Kinect camera, we employ constant memory and texture memory, which are well suited to 2D image processing on the graphics processing unit (GPU). In addition, we apply the joint bilateral filter (JBF) in parallel to accelerate the overall execution. To further enhance the quality of the depth image, we apply the JBF hierarchically using the compute unified device architecture (CUDA) and finally obtain the refined depth image. Experimental results showed that the proposed real-time depth image refinement algorithm improved the subjective quality of the depth image and ran at 260 frames per second.
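
For illustration, a minimal single-level CPU sketch of a joint bilateral filter (the paper's version is hierarchical and runs as CUDA kernels on the GPU):

```python
import numpy as np

def joint_bilateral_filter(depth, guide_gray, radius=4, sigma_s=3.0, sigma_r=10.0):
    """depth, guide_gray: 2D float arrays of equal shape; zero depth marks a hole."""
    h, w = depth.shape
    out = np.zeros_like(depth)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))        # spatial kernel

    pad_d = np.pad(depth, radius, mode="edge")
    pad_g = np.pad(guide_gray, radius, mode="edge")
    for y in range(h):
        for x in range(w):
            d_win = pad_d[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            g_win = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range kernel taken from the color (guide) image, not the noisy depth.
            rng = np.exp(-(g_win - guide_gray[y, x]) ** 2 / (2 * sigma_r**2))
            wgt = spatial * rng * (d_win > 0)                    # ignore depth holes
            out[y, x] = (wgt * d_win).sum() / max(wgt.sum(), 1e-8)
    return out
```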

Intelligent Diagnosis Assistant System of Capsule Endoscopy Video Through Analysis of Video Frames (영상 프레임 분석을 통한 대용량 캡슐내시경 영상의 지능형 판독보조 시스템)

  • Lee, H.G.;Choi, H.K.;Lee, D.H.;Lee, S.C.
    • Journal of Intelligence and Information Systems
    • /
    • v.15 no.2
    • /
    • pp.33-48
    • /
    • 2009
  • Capsule endoscopy is one of the most remarkable inventions of the last ten years. Because it causes less pain for patients, it is considered a more convenient method than a normal endoscope for diagnosing the entire digestive system. However, the diagnosis process typically requires a very long inspection time from clinical experts, because the uncontrollable movement of a capsule endoscope produces considerably many duplicate images of the same areas of the digestive system. In this paper, we propose a method for clinical diagnosticians to get highly valuable information from capsule-endoscopy video. Our software system builds three global maps in the temporal domain for the entire input video: a movement map, a characteristic map, and a brightness map. The movement map can be used to effectively remove duplicated adjacent images. The characteristic and brightness maps provide frame content analyses that can be quickly used for segmenting regions or locating features (such as blood) in the stream. Our experiments show the results for four patients with different health conditions. The resulting maps clearly capture the movements and characteristics of the image frames. Our method may help diagnosticians quickly locate lesions, bleeding, or other areas of interest.
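
For illustration, a minimal sketch of how such per-frame maps could be built (the movement and brightness measures here are assumptions, not the authors' exact definitions):

```python
import cv2
import numpy as np

def build_maps(video_path, dedup_threshold=2.0):
    cap = cv2.VideoCapture(video_path)
    movement, brightness, keep = [], [], []
    prev, idx = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        brightness.append(gray.mean())                                   # brightness map
        move = 0.0 if prev is None else float(np.mean(np.abs(gray - prev)))
        movement.append(move)                                            # movement map
        if prev is None or move > dedup_threshold:
            keep.append(idx)          # frames that barely move are treated as duplicates
        prev, idx = gray, idx + 1
    cap.release()
    return np.array(movement), np.array(brightness), keep
```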
