• Title/Summary/Keyword: Image Capture System


A study for DVD authoring with IEEE 1394 (IEEE 1394를 이용한 DVD Authoring에 관한 연구)

  • Lee Heun-Jung;Yoon Young-Doo
    • Proceedings of the Korea Contents Association Conference / 2005.05a / pp.165-171 / 2005
  • We define the authoring procedure as organizing, each under its own category, the MPEG-2 video stream, the AC-3 audio stream, and the subtitles, together with the region code and the copy-prevention menus. The process then assigns attributes, ordering, and operations, and produces the final disc image (DVD). Various authoring tools are available on the market, ranging from simple title creation to full video production and editing suites, so that users ('authors') can select the tool appropriate to their needs. In this paper, we compare and analyze the authoring process, in which image and sound are captured for DVD creation through an IEEE 1394 port, on a desktop PC running Windows and on a Macintosh running OS X.

  • PDF

A Study on Object Tracking for Autonomous Mobile Robot using Vision Information (비젼 정보를 이용한 이동 자율로봇의 물체 추적에 관한 연구)

  • Kang, Jin-Gu;Lee, Jang-Myung
    • Journal of the Korea Society of Computer and Information / v.13 no.2 / pp.235-242 / 2008
  • An autonomous mobile robot is very useful for performing various tasks in dangerous environments, because it outperforms a fixed-base manipulator in both operational workspace size and efficiency. A method is proposed for estimating the position of an object in the Cartesian coordinate system, based on the geometrical relationship between the real object and its image captured by a 2-DOF active camera mounted on the mobile robot. With this position estimate, an optimal path for the autonomous mobile robot, from its current position to the estimated object position, is determined using homogeneous matrices. Finally, the joint parameters corresponding to the desired displacement are calculated so that the robot can capture the object. The effectiveness of the proposed method is demonstrated by simulations and real experiments with the autonomous mobile robot.

  • PDF
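
The abstract's homogeneous-matrix formulation is not spelled out, but the general idea of chaining transforms from the world frame through the robot to a 2-DOF pan/tilt camera can be sketched as follows (the function names, the pan/tilt parameterization, and the camera-mount offset are illustrative assumptions, not the paper's actual model):

```python
import numpy as np

def rot_z(theta):
    """Homogeneous rotation about the z axis (pan / heading)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def rot_y(theta):
    """Homogeneous rotation about the y axis (tilt)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[ c, 0, s, 0],
                     [ 0, 1, 0, 0],
                     [-s, 0, c, 0],
                     [ 0, 0, 0, 1]])

def trans(x, y, z):
    """Homogeneous translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def object_in_world(robot_pose, pan, tilt, cam_offset, p_cam):
    """Chain homogeneous transforms world <- robot <- camera, then map a
    point measured in the camera frame into world coordinates."""
    x, y, heading = robot_pose
    T = (trans(x, y, 0) @ rot_z(heading)   # robot base in the world
         @ trans(*cam_offset)              # camera mount on the robot
         @ rot_z(pan) @ rot_y(tilt))       # 2-DOF active camera
    return (T @ np.append(p_cam, 1.0))[:3]
```

Given such a world-frame estimate, path planning and the inverse computation of joint parameters proceed from the same chain of matrices.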

Development Treatment Planning System Based on Monte-Carlo Simulation for Boron Neutron Capture Therapy

  • Kim, Moo-Sub;Kubo, Kazuki;Monzen, Hajime;Yoon, Do-Kun;Shin, Han-Back;Kim, Sunmi;Suh, Tae Suk
    • Progress in Medical Physics / v.27 no.4 / pp.232-235 / 2016
  • The purpose of this study is to develop a treatment planning system (TPS) based on Monte Carlo simulation for boron neutron capture therapy (BNCT). In this paper, we propose a method for dose estimation by Monte Carlo simulation using CT images, and we evaluate the accuracy of the dose estimation of this TPS. A complicated geometry such as the human body can be defined using the lattice function in MCNPX. Simulation results such as the flux or the energy deposition averaged over a cell can be obtained using the tally features provided by MCNPX. To assess the dose distribution and the therapeutic effect, the dose distribution is displayed on the CT image and a dose-volume histogram (DVH) is computed in our system. These tools allow the therapeutic effect to be evaluated efficiently. The developed TPS effectively performs voxel-model creation from the CT image, estimation of each dose component, and evaluation of the BNCT plan.
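
The dose-volume histogram mentioned above has a standard cumulative definition: for each dose level, the fraction of the structure's voxels receiving at least that dose. A minimal sketch over a voxelized dose grid might look like the following (the toy dose grid, the cubic structure mask, and the bin count are illustrative, not taken from the paper):

```python
import numpy as np

def cumulative_dvh(dose, mask, n_bins=100):
    """Cumulative DVH: for each dose level, the fraction of the
    structure's voxels receiving at least that dose."""
    d = dose[mask]                               # doses inside the structure
    levels = np.linspace(0.0, d.max(), n_bins)
    volume_frac = np.array([(d >= lv).mean() for lv in levels])
    return levels, volume_frac

# Toy example: an 8x8x8 dose grid with a cubic "target" mask.
dose = np.zeros((8, 8, 8))
dose[2:6, 2:6, 2:6] = 10.0                       # uniform 10 Gy inside the cube
mask = np.zeros_like(dose, dtype=bool)
mask[2:6, 2:6, 2:6] = True
levels, vf = cumulative_dvh(dose, mask)
```

For a uniformly irradiated target the curve stays at 1.0 up to the maximum dose, which is the ideal shape a BNCT plan aims to approach.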

Biometric identification of Black Bengal goat: unique iris pattern matching system vs deep learning approach

  • Menalsh Laishram;Satyendra Nath Mandal;Avijit Haldar;Shubhajyoti Das;Santanu Bera;Rajarshi Samanta
    • Animal Bioscience / v.36 no.6 / pp.980-989 / 2023
  • Objective: Iris pattern recognition is well developed and widely practiced in humans; however, there is little information on applying iris recognition to animals under field conditions, where the major challenge is capturing a high-quality iris image from a constantly moving, non-cooperative animal, even when it is properly restrained. The aim of the study was to validate biometric identification of Black Bengal goats to improve animal management in a traceability system. Methods: Forty-nine healthy, disease-free female Black Bengal goats, aged 3 months ± 6 days, were randomly selected at the farmer's field. Eye images were captured from the left eye of each goat at 3, 6, 9, and 12 months of age using a specialized camera made for human iris scanning. iGoat software was used to match the same individual goats across the four ages. The Resnet152V2 deep learning algorithm was further applied to the same image sets to predict matching percentages using only the captured eye images, without extracting iris features. Results: The matching threshold computed within and between goats was 55%. The template-matching accuracies at 3, 6, 9, and 12 months of age were 81.63%, 90.24%, 44.44%, and 16.66%, respectively. Because the accuracies at 9 and 12 months were below the minimum matching threshold, iris pattern matching was not acceptable at those ages. After training, the validation accuracies of the Resnet152V2 model were 82.49%, 92.68%, 77.17%, and 87.76% at 3, 6, 9, and 12 months of age, respectively. Conclusion: This study strongly supports that a deep learning method using eye images can serve as a signature for biometric identification of individual goats.
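
The 55% matching threshold above can be illustrated with a generic binary iris-code comparison (this is a textbook Hamming-similarity sketch, not the actual iGoat algorithm, whose internals the abstract does not describe):

```python
def match_percentage(code_a, code_b):
    """Similarity of two binary iris codes: the percentage of agreeing
    bits, i.e. (1 - normalized Hamming distance) * 100."""
    assert len(code_a) == len(code_b)
    agree = sum(a == b for a, b in zip(code_a, code_b))
    return 100.0 * agree / len(code_a)

def is_same_goat(code_a, code_b, threshold=55.0):
    """Accept the match if similarity meets the study's 55% threshold."""
    return match_percentage(code_a, code_b) >= threshold
```

Under this scheme, the drop in template-matching accuracy at 9 and 12 months corresponds to genuine pairs falling below the 55% line as the iris images change with age.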

Development of a Sensor Fusion System for Visible Ray and Infrared (적외선 및 가시광선의 센서 융합시스템의 개발)

  • Kim, Dae-Won;Kim, Mo-Gon;Nam, Dong-Hwan;Jung, Soon-Ki;Lim, Soon-Jae
    • Journal of Sensor Science and Technology / v.9 no.1 / pp.44-50 / 2000
  • Every object emits some energy from its surface. This emission forms a surface heat distribution that can be captured with an infrared thermal imager. The infrared thermal image may contain valuable information about subsurface anomalies of the object. Because a thermal image reflects both surface clutter and subsurface anomalies, it is difficult to extract information on the subsurface anomaly from thermal images taken at a single wavelength alone. We therefore use visible-wavelength images of the object surface to remove the exterior clutter. In this paper, we visualize the infrared image by overlaying it with a visible-wavelength image. First, we create an interpolated image from two ordinary images taken on either side of the infrared sensor. Next, we overlay this intermediate image with the infrared image taken by the infrared camera. The proposed technique can be used to analyze infrared images in non-destructive inspection for disaster prevention and safety.

  • PDF
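
The two-step fusion described above (synthesize an intermediate visible view, then overlay the infrared image) can be sketched with simple pixel-wise blending; real view interpolation would warp along disparities, and the blending weights here are illustrative assumptions:

```python
import numpy as np

def interpolate_views(left, right, alpha=0.5):
    """Crude intermediate view between the two visible images taken on
    either side of the IR sensor (a placeholder for true, disparity-based
    view interpolation)."""
    return (1.0 - alpha) * left + alpha * right

def overlay_ir(visible, ir, weight=0.4):
    """Overlay a normalized infrared intensity map on the intermediate
    visible image so subsurface anomalies can be read against the
    visible surface structure."""
    return np.clip((1.0 - weight) * visible + weight * ir, 0.0, 1.0)
```

With the IR image registered to the synthesized viewpoint, exterior clutter visible in the color image can be discounted when interpreting hot spots.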

Multiple Color and ToF Camera System for 3D Contents Generation

  • Ho, Yo-Sung
    • IEIE Transactions on Smart Processing and Computing / v.6 no.3 / pp.175-182 / 2017
  • In this paper, we present a multi-depth generation method using a time-of-flight (ToF) fusion camera system. Multi-view color cameras in a parallel configuration and ToF depth sensors are used for 3D scene capture. Although each ToF depth sensor can measure the depth of the scene in real time, it has several problems to overcome. Therefore, after capturing low-resolution depth images with the ToF sensors, we apply post-processing to address these problems. The depth information from the sensors is then warped to the color image positions and used as initial disparity values. In addition, the warped depth data are used to generate a depth-discontinuity map for efficient stereo matching. By applying stereo matching with belief propagation, guided by the depth-discontinuity map and the initial disparities, we obtain more accurate and stable multi-view disparity maps in reduced time.
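
For a parallel rig, the depth-to-disparity conversion used to seed the stereo matcher follows the standard relation d = f·B/Z, and a depth-discontinuity map can be as simple as thresholding neighboring-pixel depth differences. A minimal sketch (the threshold and the horizontal-neighbor definition are assumptions; the paper's actual discontinuity criterion is not given in the abstract):

```python
import numpy as np

def depth_to_disparity(depth, focal_px, baseline_m):
    """Convert ToF depth (meters) to stereo disparity (pixels) for a
    parallel camera rig: d = f * B / Z. Invalid (zero) depths map to 0."""
    disparity = np.zeros_like(depth)
    valid = depth > 0
    disparity[valid] = focal_px * baseline_m / depth[valid]
    return disparity

def depth_discontinuity_map(depth, threshold=0.1):
    """Mark pixels where depth jumps between horizontal neighbors, a
    simple stand-in for the paper's depth-discontinuity map."""
    edges = np.zeros(depth.shape, dtype=bool)
    edges[:, 1:] = np.abs(np.diff(depth, axis=1)) > threshold
    return edges
```

The initial disparities narrow the belief-propagation search range, and the discontinuity map tells the matcher where smoothness should not be enforced.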

Active omni-directional range sensor for mobile robot navigation (이동 로봇의 자율주행을 위한 전방향 능동거리 센서)

  • 정인수;조형석
    • Proceedings of the Institute of Control, Robotics and Systems Conference / 1996.10b / pp.824-827 / 1996
  • Most autonomous mobile robots perceive only what lies in front of them. As a result, they may collide with objects approaching from the side or behind. To overcome this problem, we have built an active omni-directional range sensor that obtains omnidirectional depth data using a laser conic plane and a conic mirror. During navigation of the mobile robot, the proposed sensor system forms the laser conic plane by rotating the laser point source at high speed and acquires a two-dimensional depth map, in real time, with each image capture. Experimental results show that the proposed sensor system has strong potential for navigation of the mobile robot in uncertain environments.

  • PDF
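
The principle of recovering range from a single omnidirectional image is structured-light triangulation: the laser plane sits at a known baseline from the camera, so the image radius of each laser return determines its range. The sketch below uses a deliberately simplified geometry (vertical baseline, camera looking along the cone axis, no conic-mirror distortion modeled); all parameters are illustrative:

```python
def range_from_pixel(rho_px, focal_px, baseline_m):
    """Triangulate horizontal range to a laser return. With the laser
    conic plane a baseline b below the camera center and the camera
    looking along the cone axis, a return at horizontal range R projects
    to image radius rho = f * R / b, so R = b * rho / f."""
    return baseline_m * rho_px / focal_px

def depth_map(ring_detections, focal_px, baseline_m):
    """One omnidirectional scan: (azimuth_deg, radius_px) detections
    from a single captured image become (azimuth_deg, range_m) pairs."""
    return [(az, range_from_pixel(r, focal_px, baseline_m))
            for az, r in ring_detections]
```

Because every azimuth is captured in one frame, the whole 360-degree depth map updates at the camera's frame rate, which is what enables the real-time behavior the abstract reports.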

Establishing PC-based Object-Oriented Urban Infrastructure Information System using GPS and TotalStation (GPS 및 Total Station을 이용한 PC용 개체지향 도시 기반 시설물 관리 시스템 구축)

  • 유상근;이규석
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.18 no.1 / pp.41-49 / 2000
  • This study was performed to establish a PC-based Urban Infrastructure Information System (UIIS) using GPS and TotalStation as an alternative to a UNIX-based UIIS. After carrying out the study, the following conclusions were drawn: a PC-based UIIS costs less time and money than a UNIX-based one. The coordinates of the site's control point were obtained using DGPS; based on this point, locational data were then collected in the field using RTK GPS and TotalStation with real-time data capture to enhance positional accuracy. Image data were also entered into the database together with the text data, making a multimedia database possible in UIIS.

  • PDF
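
The TotalStation workflow described above reduces each field observation to a standard polar-to-grid computation from the DGPS-derived control point: Easting and Northing offsets from the azimuth and horizontal distance. A minimal sketch (the function name and the azimuth-from-grid-north convention are assumptions):

```python
import math

def polar_to_grid(e0, n0, azimuth_deg, distance_m):
    """Convert a TotalStation observation (azimuth measured clockwise
    from grid north, horizontal distance) taken at a control point
    (e0, n0) into grid coordinates of the surveyed feature:
    E = E0 + d*sin(az), N = N0 + d*cos(az)."""
    az = math.radians(azimuth_deg)
    return (e0 + distance_m * math.sin(az),   # Easting
            n0 + distance_m * math.cos(az))   # Northing
```

Each computed coordinate pair, linked to its attribute and image records, becomes one feature in the multimedia UIIS database.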

CINEMAPIC : Generative AI-based movie concept photo booth system (시네마픽 : 생성형 AI기반 영화 컨셉 포토부스 시스템)

  • Seokhyun Jeong;Seungkyu Leem;Jungjin Lee
    • Journal of the Korea Computer Graphics Society / v.30 no.3 / pp.149-158 / 2024
  • Photo booths have traditionally provided a fun and easy way to capture and print photos to cherish memories. These booths allow individuals to capture their desired poses and props, sharing memories with friends and family. To enable more diverse expressions, generative AI-powered photo booths have emerged. However, existing AI photo booths face challenges such as difficulty in taking group photos, inability to accurately reflect users' poses, and the challenge of applying different concepts to individual subjects. To tackle these issues, we present CINEMAPIC, a photo booth system that allows users to freely choose poses, positions, and concepts for their photos. The system workflow includes three main steps: pre-processing, generation, and post-processing to apply individualized concepts. To produce high-quality group photos, the system generates a transparent image for each character and enhances the backdrop-composited image through a small number of denoising steps. The workflow is accelerated by applying an optimized diffusion model and GPU parallelization. The system was implemented as a prototype, and its effectiveness was validated through a user study and a large-scale pilot operation involving approximately 400 users. The results showed a significant preference for the proposed system over existing methods, confirming its potential for real-world photo booth applications. The proposed CINEMAPIC photo booth is expected to lead the way in a more creative and differentiated market, with potential for widespread application in various fields.
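
The group-photo assembly step, compositing per-character transparent images over a backdrop before the final denoising pass, amounts to standard "over" alpha compositing. A minimal sketch (the RGBA layer layout and premultiplication convention are assumptions; the paper's diffusion-based enhancement is not reproduced here):

```python
import numpy as np

def composite_over(backdrop, layers):
    """Alpha-composite per-character RGBA layers over an RGB backdrop
    using the "over" operator, as in assembling a group photo from
    individually generated transparent character images.
    backdrop: (H, W, 3) floats in [0, 1]; layers: list of (H, W, 4)."""
    out = backdrop.astype(float)
    for rgba in layers:
        rgb = rgba[..., :3].astype(float)
        a = rgba[..., 3:4].astype(float)      # broadcast over channels
        out = a * rgb + (1.0 - a) * out
    return out
```

Compositing characters independently is what lets each subject carry its own concept and pose; the short denoising pass then harmonizes lighting and seams in the assembled image.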

Cartoon Character Rendering based on Shading Capture of Concept Drawing (원화의 음영 캡쳐 기반 카툰 캐릭터 렌더링)

  • Byun, Hae-Won;Jung, Hye-Moon
    • Journal of Korea Multimedia Society / v.14 no.8 / pp.1082-1093 / 2011
  • Traditional rendering of cartoon characters cannot properly revive the feeling of the concept drawings. In this paper, we propose a capture technique that derives a toon shading model from concept drawings and, with it, a novel system for rendering 3D cartoon characters. The benefits of this system are that it cartoonizes the 3D character according to saliency, emphasizing the character's form, and that it supports a sketch-based user interface for artists to edit shading in post-production. For this, we generate the texture automatically with an RGB color-sorting algorithm that analyzes the color distribution and proportions of the selected region. In the cartoon rendering process, saliency is used as a measure of the visual importance of each area of the 3D mesh, and we present a novel cartoon rendering algorithm based on mesh saliency. For fine adjustment of shading style, we propose a user interface that allows artists to freely add and delete shading on a 3D model. Finally, this paper shows the usefulness of the proposed system through user evaluation.
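
At its core, a captured toon shading model is a small lookup: continuous Lambertian intensity is quantized into the discrete tone bands extracted from the concept drawing. A minimal sketch (the band thresholds and tone values are hypothetical stand-ins for what the capture step would extract):

```python
def toon_shade(intensity, bands):
    """Quantize a continuous shading intensity in [0, 1] into discrete
    tones. `bands` is a list of (threshold, tone) pairs sorted by
    descending threshold; the first threshold <= intensity wins."""
    for threshold, tone in bands:
        if intensity >= threshold:
            return tone
    return bands[-1][1]

# Hypothetical tones captured from the drawing's light/mid/shadow areas.
bands = [(0.8, 1.0), (0.4, 0.6), (0.0, 0.25)]
```

Saliency-based control then amounts to varying the band boundaries (or the number of bands) per mesh region, so visually important areas receive finer tonal separation.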