• Title/Summary/Keyword: camera model (카메라 모델)


DESIGN OF THE OPTICAL SYSTEM FOR A PROTOMODEL OF SPACE INFRARED CRYOGENIC SYSTEM (우주탑재용 적외선카메라 시험모델의 광학계 설계)

  • Lee, Dae-Hee;Pak, Soo-Jong;Yuk, In-Soo;Nam, Uk-Won;Jin, Ho;Lee, Sung-Ho;Han, Jeong-Yeol;Yang, Hyung-Suk;Kim, Dong-Lak;Kim, Geon-Hee;Park, Seong-Je;Kim, Byung-Hyuk;Jeong, Han
    • Journal of Astronomy and Space Sciences
    • /
    • v.22 no.4
    • /
    • pp.473-482
    • /
    • 2005
  • A large space infrared telescope, one of the major objectives of the Strategic Technology Road Map (STRM) of KASI (Korea Astronomy and Space Science Institute), poses many technical challenges. To address some of them, KASI and KBSI (Korea Basic Science Institute) have started a cooperative project with KIMM (Korea Institute of Machinery and Materials) and i3system Co. to develop a space infrared cryogenic system. In this paper, we derive the optical requirements for the Protomodel of Space Infrared Cryogenic System (PSICS) and design a single-lens optical system with a bandpass of $3.8{\sim}4.8{\mu}m$, a field of view of $15^{\circ}\times12^{\circ}$, and an angular resolution of $0.047^{\circ}$, as a step toward a more complex optical system.
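
A quick back-of-the-envelope check of the quoted numbers (my own sketch, not part of the paper): dividing the field of view by the angular resolution gives the detector format the design implies.

    # Detector format implied by the abstract's field of view and angular resolution
    # (a consistency check only; the detector itself is not specified in the abstract).
    fov_deg = (15.0, 12.0)      # field of view in degrees, from the abstract
    ang_res_deg = 0.047         # angular resolution per pixel in degrees, from the abstract

    pixels = tuple(round(f / ang_res_deg) for f in fov_deg)
    print(pixels)               # (319, 255): consistent with a ~320 x 256 IR array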

Development of Remote Measurement Method for Reinforcement Information in Construction Field Using 360 Degrees Camera (360도 카메라 기반 건설현장 철근 배근 정보 원격 계측 기법 개발)

  • Lee, Myung-Hun;Woo, Ukyong;Choi, Hajin;Kang, Su-min;Choi, Kyoung-Kyu
• Journal of the Korea Institute for Structural Maintenance and Inspection
    • /
    • v.26 no.6
    • /
    • pp.157-166
    • /
    • 2022
  • Structural supervision on construction sites has been performed by visual inspection, which is labor-intensive and subjective. In this study, a remote technique was developed to improve the efficiency of rebar-spacing measurements using a 360° camera and reconstructed 3D models. The proposed method was verified by measuring spacings in a reinforced concrete structure: twelve locations on the construction site (265 m²) were scanned within 20 seconds per location, for a total of 15 minutes. A SLAM pipeline consisting of SIFT, RANSAC, and general graph optimization algorithms produced an RGB-based 3D model and a 3D point cloud model. The minimum resolution of the 3D point cloud was 0.1 mm, while that of the RGB-based 3D model was 10 mm. Based on the results from both 3D models, the measurement error ranged from 0.3% to 10.8% for the 3D point cloud and from 3.1% to 28.4% for the RGB-based 3D model. The results demonstrate that the proposed method has great potential for remote structural supervision with respect to accuracy and objectivity.
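
As a rough illustration of the feature-matching front end named in the abstract (SIFT with RANSAC-based geometric verification), the sketch below uses OpenCV; this is an assumed stand-in for the authors' SLAM pipeline, and the frame file names are placeholders.

    # SIFT keypoints + ratio-test matching + RANSAC verification between two frames
    # of a scan, using OpenCV as a stand-in for the paper's SLAM front end.
    import cv2
    import numpy as np

    img1 = cv2.imread("frame_000.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder file names
    img2 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.7 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    print("geometrically consistent matches:", int(mask.sum()))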

Automatic Generation of 3D Face Model from Trinocular Images (Trinocular 영상을 이용한 3D 얼굴 모델 자동 생성)

  • Yi, Kwang-Do;Ahn, Sang-Chul;Kwon, Yong-Moo;Ko, Han-Seok;Kim, Hyoung-Gon
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.36S no.7
    • /
    • pp.104-115
    • /
    • 1999
  • This paper proposes an efficient method for 3D modeling of a human face from trinocular images by reconstructing the face surface from range data. By using a trinocular camera system, we mitigate the tradeoff between the occlusion problem and the limited range resolution that is the critical limitation of binocular camera systems. We also propose MPC_MBS (Matching Pixel Count Multiple Baseline Stereo), an area-based matching method, to reduce the boundary-overreach phenomenon and to improve both accuracy and precision in matching; its computing time is reduced significantly by removing redundancies. In model generation, sub-pixel-accurate surface data are obtained by 2D interpolation of disparity values and sampled into regular triangular meshes. The data size of the triangular mesh model can be controlled by merging vertices that lie on the same plane within a user-defined error threshold.

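A minimal sketch of the Matching Pixel Count idea summed over the two baselines of a trinocular rig, written only from the abstract's description; the window size, threshold, and inverse-depth search grid are my assumptions, and the redundancy-removal optimization mentioned in the abstract is omitted.

    # Matching Pixel Count (MPC) over multiple baselines: for each inverse-depth
    # candidate, count window pixels whose absolute intensity difference is below a
    # threshold in every target image and keep the candidate with the highest total.
    import numpy as np

    def mpc_score(ref, tgt, x, y, shift, win=5, thresh=10):
        h = win // 2
        if (y - h < 0 or y + h + 1 > ref.shape[0]
                or x - h - shift < 0 or x + h + 1 > ref.shape[1]):
            return 0                                   # window falls outside the image
        a = ref[y - h:y + h + 1, x - h:x + h + 1].astype(int)
        b = tgt[y - h:y + h + 1, x - h - shift:x + h + 1 - shift].astype(int)
        return int((np.abs(a - b) < thresh).sum())

    def mbs_inverse_depth(ref, targets, baselines, x, y, zeta_range=range(1, 40)):
        # zeta is disparity per unit baseline, so both baselines vote on a common axis
        scores = [sum(mpc_score(ref, t, x, y, int(round(z * b)))
                      for t, b in zip(targets, baselines)) for z in zeta_range]
        return list(zeta_range)[int(np.argmax(scores))]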

Deep Learning Model Selection Platform for Object Detection (사물인식을 위한 딥러닝 모델 선정 플랫폼)

  • Lee, Hansol;Kim, Younggwan;Hong, Jiman
    • Smart Media Journal
    • /
    • v.8 no.2
    • /
    • pp.66-73
    • /
    • 2019
  • Recently, object recognition based on computer vision has attracted attention as a replacement for sensor-based object recognition. Sensor-based object recognition is often difficult to commercialize because it requires expensive sensors, whereas computer-vision approaches can replace those sensors with inexpensive cameras. Moreover, real-time recognition has become viable thanks to advances in CNNs, which are being actively introduced into fields such as IoT and autonomous vehicles. However, applying an object recognition model demands expert knowledge of deep learning to select and train the model, which makes it challenging for non-experts. Therefore, in this paper, we analyze the structure of deep-learning-based object recognition models and propose a platform that automatically selects a deep-learning object recognition model according to the user's desired conditions. We also show, through experiments on different models, why a statistics-based selection of object recognition models is needed.
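
As an illustration of the statistics-based selection the abstract argues for, the sketch below picks the detection model whose profiled statistics satisfy a user's conditions; the candidate table and its numbers are placeholders, not measurements from the paper.

    # Illustrative statistics-based model selection: return the highest-accuracy
    # candidate whose profiled mAP and FPS meet the user's constraints.
    CANDIDATES = [
        {"name": "tiny-detector",  "map": 0.33, "fps": 45},   # hypothetical numbers
        {"name": "mid-detector",   "map": 0.41, "fps": 22},
        {"name": "large-detector", "map": 0.48, "fps": 8},
    ]

    def select_model(min_map=0.0, min_fps=0.0):
        feasible = [m for m in CANDIDATES if m["map"] >= min_map and m["fps"] >= min_fps]
        return max(feasible, key=lambda m: m["map"]) if feasible else None

    print(select_model(min_map=0.35, min_fps=15))   # -> mid-detector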

End to End Autonomous Driving System using Out-layer Removal (Out-layer를 제거한 End to End 자율주행 시스템)

  • Seung-Hyeok Jeong;Dong-Ho Yun;Sung-Hun Hong
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.1
    • /
    • pp.65-70
    • /
    • 2023
  • In this paper, we propose an autonomous driving system using an end-to-end model to reduce lane departures and misrecognition of traffic lights in a vision-sensor-based system. End-to-end learning can be extended to a variety of environmental conditions. Driving data are collected using a vision-sensor-based model car. From the collected data, two data sets are composed: the original data and the data with out-layers (outliers) removed. With camera image data as input and speed and steering data as output, the end-to-end model is trained and the reliability of the trained model is verified. The trained end-to-end model is then applied to the model car to predict the steering angle from image data. The driving results of the model car show that the model trained on the out-layer-removed data improves on the original model.
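
A minimal sketch of one plausible way to remove out-layer (outlier) samples from the collected driving data before training; the z-score rule on the steering signal is my assumption, since the abstract does not state the removal criterion.

    # Drop (image, steering, speed) samples whose steering value is a statistical
    # outlier, before feeding the data to end-to-end training.
    import numpy as np

    def remove_out_layers(images, steering, speed, z_max=3.0):
        steering = np.asarray(steering, dtype=float)
        z = np.abs(steering - steering.mean()) / (steering.std() + 1e-8)
        keep = z < z_max
        kept_images = [img for img, k in zip(images, keep) if k]
        return kept_images, steering[keep], np.asarray(speed, dtype=float)[keep]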

Automated Bar Placing Model Generation for Augmented Reality Using Recognition of Reinforced Concrete Details (부재 일람표 도면 인식을 활용한 증강현실 배근모델 자동 생성)

  • Park, U-Yeol;An, Sung-Hoon
    • Journal of the Korea Institute of Building Construction
    • /
    • v.20 no.3
    • /
    • pp.289-296
    • /
    • 2020
  • This study suggests a methodology for automatically extracting placing information from 2D reinforced concrete detail drawings and generating a 3D reinforcement placing model, in order to develop a mobile augmented reality tool for bar placing work. To make it easier for users to acquire placing information, users take pictures of the structural drawings with the camera built into a mobile device, and the placing information is extracted using vision recognition and an OCR (Optical Character Recognition) tool. In addition, an augmented reality app is implemented using a game engine, allowing users to automatically generate the 3D reinforcement placing model and review it by superimposing it on real images. The application of the proposed methodology with previously developed programming tools is described in detail, and the resulting reinforcement augmented reality models for typical members on construction sites are reviewed. The methodology is expected to be useful for learning bar placing work or for construction review.
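
A small sketch of the OCR step: reading a bar call-out from a photographed schedule cell and parsing it into diameter and spacing. pytesseract is assumed here as the OCR tool, and both the file name and the "HD13@200" call-out pattern are illustrative, not taken from the paper.

    # Read one member-schedule cell and parse a bar call-out like "HD13@200"
    # into (bar diameter in mm, spacing in mm).
    import re
    import pytesseract
    from PIL import Image

    def parse_bar_callout(text):
        m = re.search(r"H?D(\d+)\s*@\s*(\d+)", text)
        return (int(m.group(1)), int(m.group(2))) if m else None

    cell = Image.open("schedule_cell.png")          # hypothetical cropped drawing cell
    print(parse_bar_callout(pytesseract.image_to_string(cell)))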

Dynamic Bayesian Network based Two-Hand Gesture Recognition (동적 베이스망 기반의 양손 제스처 인식)

  • Suk, Heung-Il;Sin, Bong-Kee
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.4
    • /
    • pp.265-279
    • /
    • 2008
  • The idea of using hand gestures for human-computer interaction is not new and has been studied intensively during the last decade, with a significant amount of qualitative progress that, however, has fallen short of our expectations. This paper describes a dynamic Bayesian network (DBN) based approach to both two-hand and one-hand gestures. Unlike wired glove-based approaches, the success of camera-based methods depends greatly on the image processing and feature extraction results, so the proposed DBN-based inference is preceded by fail-safe steps of skin extraction and modeling, and motion tracking. A new gesture recognition model for a set of both one-hand and two-hand gestures is then proposed on the dynamic Bayesian network framework, which makes it easy to represent relationships among features and to incorporate new information into a model. In an experiment with ten isolated gestures, we obtained recognition rates upwards of 99.59% with cross validation. The proposed model and the related approach are believed to have strong potential for successful application to other related problems such as sign languages.
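
The abstract does not give the network structure, so as a generic stand-in the sketch below runs forward filtering for a discrete-state dynamic Bayesian network (the HMM special case) over per-frame hand-feature likelihoods; all matrices and values are illustrative.

    # One DBN time slice per frame: predict the hidden gesture state with the
    # transition model, then reweight by the observation likelihood and normalize.
    import numpy as np

    A = np.array([[0.9, 0.1],        # P(s_t = j | s_{t-1} = i), illustrative
                  [0.2, 0.8]])

    def filter_step(belief, A, likelihood):
        predicted = A.T @ belief
        posterior = predicted * likelihood
        return posterior / posterior.sum()

    belief = np.array([0.5, 0.5])
    for lik in ([0.7, 0.3], [0.6, 0.4], [0.1, 0.9]):   # per-frame feature likelihoods
        belief = filter_step(belief, A, np.array(lik))
    print(belief)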

An Implementation of Markerless Augmented Reality Using Efficient Reference Data Sets (효율적인 레퍼런스 데이터 그룹의 활용에 의한 마커리스 증강현실의 구현)

  • Koo, Ja-Myoung;Cho, Tai-Hoon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.13 no.11
    • /
    • pp.2335-2340
    • /
    • 2009
  • This paper presents how to implement markerless augmented reality and how to create and apply reference data sets. The implementation has three parts: camera setup, creation of reference data sets, and tracking. To create effective reference data sets, a 3D model such as a CAD model is needed, and reference data sets must be created from various viewpoints. We extract feature points from the model image and then obtain the 3D positions corresponding to those feature points by ray tracing. These 2D/3D correspondence point sets constitute a reference data set of the model, and reference data sets are constructed for various viewpoints of the model. Fast tracking can then be done using the reference data set most frequently matched with the feature points of the current frame, together with model data near that reference data set.
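
A hedged sketch of how a reference data set of 2D/3D correspondences can be used to estimate the camera pose of the current frame; OpenCV's RANSAC PnP solver is my assumed implementation choice, and the intrinsic matrix is a placeholder.

    # Pose estimation from an already-matched reference data set: 3D model points
    # paired with their 2D locations in the current frame.
    import cv2
    import numpy as np

    def estimate_pose(ref_pts3d, cur_pts2d, K):
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            np.asarray(ref_pts3d, dtype=np.float32),
            np.asarray(cur_pts2d, dtype=np.float32), K, None)
        return (rvec, tvec) if ok else (None, None)

    K = np.array([[800.0, 0.0, 320.0],       # placeholder camera intrinsics
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])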

Analysis and Application of the Yonsei-Yale Isochrones for the g, r, i Filters (g, r, i 필터에 대한 Yonsei-Yale Isochrones의 분석과 적용)

  • Im, Dong-Uk;Han, Sang-Il;Cheon, Sang-Hyeon;Jeong, Mi-Yeong;Jang, Cho-Rong;Han, Mi-Hwa;Kim, Myo-Jin;Son, Yeong-Jong
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.35 no.1
    • /
    • pp.53.1-53.1
    • /
    • 2010
  • The $Y^2$ (Yonsei-Yale) Isochrones are precise isochrones based on Yale stellar evolution calculations and are widely used in studies of globular clusters. In this study, we apply the $Y^2$ Isochrones to the ugriz filter system, widely known through the SDSS, via a color transformation based on the Kurucz model, and validate them by comparison with actual observations. First, a comparison with the BaSTI, DSEP, and Padova models, which provide isochrones in the ugriz filters, shows that the g-r value of the $Y^2$ Isochrones at the MSTO is larger than that of the other models by about 0.05. We also apply each set of isochrones to the g, r, i color-magnitude diagrams of five metal-poor globular clusters (M15, M30, M53, NGC 5053, and NGC 5466) obtained with MegaCam, the optical camera of the CFHT 3.6 m telescope, and analyze the cluster properties derived from each model. The analysis of the (g-r, r) CMDs with the $Y^2$ Isochrones yields distance moduli larger by 0.1~0.3 and cluster ages younger by about 1~3 Gyr compared with the BaSTI and DSEP models. In addition, a comparative analysis against color-magnitude diagrams of globular clusters obtained from SDSS observations confirms that the isochrones can also be used with other telescopes. Through this study, we examine the differences among the isochrones of each model and demonstrate the feasibility of applying the $Y^2$ Isochrones to the ugriz filter system, which is used at many telescopes today and will be used at future large telescopes.

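A loose sketch of the comparison the abstract describes: shifting a model isochrone in the (g-r, r) plane by a trial distance modulus and colour excess before overlaying it on a cluster CMD. The isochrone file and column layout are assumptions, and the 0.1~0.3 mag and 1~3 Gyr offsets quoted above come from the paper, not from this code.

    # Shift an isochrone from absolute to apparent magnitudes for overlay on a (g-r, r) CMD.
    import numpy as np

    def shift_isochrone(g_abs, r_abs, dist_mod, e_gr=0.0):
        colour = (g_abs - r_abs) + e_gr     # apply a colour excess E(g-r)
        r_app = r_abs + dist_mod            # apparent distance modulus (extinction folded in)
        return colour, r_app

    # hypothetical two-column isochrone table: absolute g and r magnitudes
    g_iso, r_iso = np.loadtxt("y2_isochrone_gri.dat", usecols=(0, 1), unpack=True)
    colour, r_app = shift_isochrone(g_iso, r_iso, dist_mod=15.1, e_gr=0.02)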

Comparative Study of Modeling of Hand Motion by Neural Network and Kernel Regression (손 동작을 모사하기 위한 신경회로망과 커널 회귀의 모델링 비교 연구)

  • Yang, Hac-Jin;Kim, Hyung-Tae;Kim, Seong-Kun
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.34 no.4
    • /
    • pp.399-405
    • /
    • 2010
  • The grasping motion of a human hand, with a simplified number of degrees of freedom, was modeled using motion photographed by a high-speed camera. A mathematical expression for the distal interphalangeal (DIP) motion was developed using relation models of the metacarpophalangeal (MCP) and proximal interphalangeal (PIP) motions, in order to reduce the degrees of freedom. The mathematical expressions for humanoid-hand operation obtained with a neural-network learning algorithm and with a kernel regression model were then compared. Based on the comparative data analysis, a feasible model of hand operation was obtained using the kernel regression model.
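
A hedged sketch of Nadaraya-Watson kernel regression predicting the DIP angle from the MCP and PIP angles, as a generic instance of the kernel regression model the abstract compares with a neural network; the bandwidth and the training values are placeholders.

    # Gaussian-kernel weighted average of training DIP angles, queried at a new
    # (MCP, PIP) angle pair.
    import numpy as np

    def kernel_regress(X_train, y_train, x_query, bandwidth=5.0):
        d2 = np.sum((X_train - x_query) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))
        return float(np.dot(w, y_train) / w.sum())

    # X: measured (MCP, PIP) angle pairs in degrees; y: DIP angles (illustrative values)
    X = np.array([[10.0, 15.0], [30.0, 40.0], [60.0, 75.0]])
    y = np.array([8.0, 25.0, 50.0])
    print(kernel_regress(X, y, np.array([35.0, 45.0])))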