• Title/Abstract/Keyword: Digital Coordinates

Search results: 311 items

Highly efficient white organic light-emitting diodes using hybrid-spacer or/and codoped blue emitting layers

  • Seo, Ji-Hoon;Kim, Gu-Young;Hyung, Gun-Woo;Lee, Kum-Hee;Kim, You-Hyun;Kim, Woo-Young;Yoon, Seung-Soo;Kim, Young-Kwan
    • 한국정보디스플레이학회: 2008 International Meeting on Information Display / pp.1219-1221 / 2008
  • The authors demonstrated highly efficient white organic light-emitting diodes (WOLEDs) using a hybrid spacer inserted between the emitting layers and/or a blue emitting layer codoped with different functional materials. The WOLEDs showed a maximum external quantum efficiency of 13.8%, a power efficiency of 33.66 lm/W, and Commission Internationale de l'Eclairage (CIE) coordinates of (x = 0.36, y = 0.37). (A brief sketch of the CIE chromaticity computation follows this entry.)

  • PDF
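The CIE coordinates quoted above are measured results; as background only, the chromaticity (x, y) follows from the CIE 1931 tristimulus values (X, Y, Z) as x = X/(X+Y+Z) and y = Y/(X+Y+Z). The sketch below is a minimal illustration with made-up tristimulus values, not data from the paper.

```python
def cie_xy(X, Y, Z):
    """Convert CIE 1931 tristimulus values to chromaticity coordinates (x, y)."""
    s = X + Y + Z
    return X / s, Y / s

# Hypothetical tristimulus values, chosen only to land near the reported white point.
x, y = cie_xy(36.0, 37.0, 27.0)
print(f"CIE (x, y) = ({x:.2f}, {y:.2f})")  # -> (0.36, 0.37)
```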

Multilayer White Organic Light-Emitting Diodes with Blue Fluorescent and Red Phosphorescent Materials

  • Seo, Ji-Hoon;Kim, Jun-Ho;Lee, Kum-Hee;Kim, You-Hyun;Kim, Woo-Young;Yoon, Seung-Soo;Kim, Young-Kwan
    • 한국정보디스플레이학회: 2006 6th International Meeting on Information Display / pp.1067-1070 / 2006
  • We have demonstrated a highly efficient WOLED with two separate emissive layers using a blue fluorescent dye and a red phosphorescent dye. The maximum luminous efficiency of the device was 11.2 cd/A at $20\;mA/cm^2$, and the $CIE_{x,y}$ coordinates varied from (x = 0.33, y = 0.37) at 6 V to (x = 0.25, y = 0.33) at 14 V.

  • PDF

근거리 사진측량을 위한 CCD 사진기 검정에 관한 연구 (A Study on the CCD Camera Calibration for Close-Range Photogrammetry)

  • 유복모;이석군;최송욱;김기홍
    • 대한공간정보학회지 / Vol. 5, No. 1 / pp.159-165 / 1997
  • CCD cameras are becoming increasingly useful because they provide digital images in real time without any separate processing. In this study, a model combining the direct linear transformation (DLT) with calibration terms for radial and decentering lens distortion was formulated for CCD camera calibration, and the calibration coefficients were estimated using a three-dimensional test object. Digital images from the CCD camera were matched using the correlation coefficient and the three-dimensional positions of target objects were determined, confirming the applicability of CCD cameras to close-range photogrammetry. (A minimal DLT sketch follows this entry.)

  • PDF
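The paper combines the DLT with radial and decentering distortion terms; those distortion terms are specific to its model and are omitted here. As a hedged sketch only, the basic 11-parameter DLT can be estimated by linear least squares from 3D control points and their image coordinates; the function and variable names below are illustrative, not the paper's.

```python
import numpy as np

def dlt_calibrate(obj_pts, img_pts):
    """Estimate the 11 DLT parameters L1..L11 from >= 6 control points.

    obj_pts: (N, 3) object-space coordinates (X, Y, Z)
    img_pts: (N, 2) image coordinates (x, y)
    """
    A, b = [], []
    for (X, Y, Z), (x, y) in zip(obj_pts, img_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z])
        b.extend([x, y])
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return L  # L1..L11

def dlt_project(L, X, Y, Z):
    """Project an object point into the image with the estimated DLT parameters."""
    den = L[8] * X + L[9] * Y + L[10] * Z + 1.0
    x = (L[0] * X + L[1] * Y + L[2] * Z + L[3]) / den
    y = (L[4] * X + L[5] * Y + L[6] * Z + L[7]) / den
    return x, y
```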

파라메트릭 표피 재 조직화 (Re-organization of Parametric epidermis)

  • 박정주
    • 한국실내디자인학회: Proceedings of the 2008 Spring Conference / pp.46-49 / 2008
  • This research aims to re-organize complex forms and interior epidermis (skin) cells and to discover objects governed by precise numerical concepts. In form generation the work applies grid re-organization, parameter variation of cell units (morph, tween), variation of symbols and patterns, and self-organizing cell-substitution rules. The main work is represented through 3D digital modelers (polygon and NURBS) and a spreadsheet program (x, y, z coordinates and networks of points). The investigator specified 30 U profiles and 20 V points, i.e. 600 individual parameters in the volume, and experimented with different circle radii for the lighting in the object and with various projection sizes. Cells were transposed to points, and the brightness of color was heightened using the pointillism technique of painting. The LED lighting cell object is expressed by being decoded into digital code. (A small parametric-grid sketch follows this entry.)

  • PDF
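The abstract refers to 30 U profiles and 20 V points, i.e. a 600-point parametric grid driving the cell units. The paper's modeler and parameter semantics are not given here, so the following is only a rough illustration of generating such a U x V parameter grid and varying a per-cell attribute.

```python
import numpy as np

# Hypothetical 30 x 20 parametric grid (600 cell parameters), as in the abstract.
U, V = 30, 20
u = np.linspace(0.0, 1.0, U)
v = np.linspace(0.0, 1.0, V)
uu, vv = np.meshgrid(u, v, indexing="ij")        # shape (30, 20)

# Illustrative per-cell attribute, e.g. a lighting radius that varies smoothly
# across the surface (a stand-in for the morph/tween variation of cell units).
radius = 0.5 + 0.4 * np.sin(np.pi * uu) * np.cos(2 * np.pi * vv)
print(radius.shape, radius.size)                  # (30, 20) 600
```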

Extraction of Ground Control Point (GCP) from SAR Image

  • Hong, S.H.;Lee, S.K.;Won, J.S.;Jung, H.S.
    • 대한원격탐사학회: Proceedings of ACRS 2003 ISRS / pp.1058-1060 / 2003
  • A ground control point (GCP) is a point on the surface of the Earth whose image coordinates and map coordinates can both be identified. GCPs are useful for the geometric correction of the systematic and unsystematic errors usually contained in remotely sensed data. Synthetic aperture radar (SAR) data in particular suffer serious geometric distortions caused by the inherent side-looking geometry. In addition, SAR images are usually severely corrupted by speckle noise, which makes it difficult to identify ground control points. We developed a ground control point extraction algorithm with improved capability. Radargrammetry was applied to the Daejeon area in Korea to acquire the geometric information. For the ground control point extraction algorithm, ERS SAR data with precise Delft orbit information and a rough digital elevation model (DEM) were used. The accuracy of the results from our algorithm was analyzed using a digital map and GPS survey data. (A least-squares GCP-fitting sketch follows this entry.)

  • PDF
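GCPs are commonly used to fit a mapping between image and map coordinates for geometric correction. The paper's radargrammetric processing is considerably more involved; as a minimal, generic sketch under assumed (hypothetical) GCP arrays, a least-squares affine fit from image to map coordinates looks like this:

```python
import numpy as np

def fit_affine(img_xy, map_xy):
    """Least-squares affine transform from image (col, row) to map (E, N) using GCPs."""
    A = np.column_stack([img_xy, np.ones(len(img_xy))])   # (N, 3) design matrix
    M, *_ = np.linalg.lstsq(A, map_xy, rcond=None)         # (3, 2) affine coefficients
    return M

def apply_affine(M, img_xy):
    A = np.column_stack([img_xy, np.ones(len(img_xy))])
    return A @ M

# Hypothetical GCPs: image coordinates and the corresponding map coordinates.
img = np.array([[10., 20.], [200., 35.], [50., 300.], [250., 280.]])
mp  = np.array([[1000., 5000.], [1950., 5010.], [1200., 3600.], [2200., 3650.]])
M = fit_affine(img, mp)
residual_rmse = np.sqrt(np.mean((apply_affine(M, img) - mp) ** 2))
```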

Stereoscopic 3D Modelling Approach with KOMPSAT-2 Satellite Data

  • Tserennadmid, T.;Kim, Tae-Jung
    • 대한원격탐사학회지 / Vol. 25, No. 3 / pp.205-214 / 2009
  • This paper investigates stereo 3D viewing for linear pushbroom satellite images using the orbit-attitude model proposed by Kim (2006) and the OpenGL graphics library in a digital photogrammetry workstation. 3D viewing is tested with KOMPSAT-2 stereo images, a large number of ground control points (GCPs) collected by GPS surveying, and the orbit-attitude sensor model as a rigorous sensor model. The comparison is carried out with two accuracy measures: the accuracy of orbit-attitude modeling with bundle adjustment and the analysis of errors in the x and y parallaxes. These results help in understanding the 3D geometry of high-resolution satellite images and enable accurate measurement of 3D object-space coordinates in a virtual or real 3D environment.
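The parallax analysis in the paper is tied to its pushbroom sensor model, but conceptually the x- and y-parallax of a tie point are simply the coordinate differences between the left and right (epipolar-resampled) images. A small sketch with made-up point lists, not the paper's data:

```python
import numpy as np

# Hypothetical matched tie points in the left/right epipolar images (pixels).
left  = np.array([[120.4, 300.2], [512.1, 88.7], [840.0, 610.5]])
right = np.array([[101.9, 300.4], [495.3, 88.5], [822.7, 611.1]])

x_parallax = left[:, 0] - right[:, 0]   # carries the height information
y_parallax = left[:, 1] - right[:, 1]   # should be near zero after good orientation
print(x_parallax, np.abs(y_parallax).max())
```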

Human and Robot Tracking Using Histogram of Oriented Gradient Feature

  • Lee, Jeong-eom;Yi, Chong-ho;Kim, Dong-won
    • Journal of Platform Technology / Vol. 6, No. 4 / pp.18-25 / 2018
  • This paper describes a real-time human and robot tracking method in an Intelligent Space with multi-camera networks. The proposed method detects candidates for humans and robots using the histogram of oriented gradients (HOG) feature in an image. To classify humans and robots from the candidates in real time, we apply a cascaded structure in which a strong classifier is built from weaker ones: a linear support vector machine (SVM) followed by a radial-basis-function (RBF) SVM. Using multiple-view geometry, the method estimates the 3D positions of humans and robots from their 2D coordinates in the image coordinate system and tracks their positions using a stochastic approach. To test the performance of the method, humans and robots were asked to move along given rectangular and circular paths. Experimental results show that the proposed method reduces the localization error and is suitable for practical human-centered services in the Intelligent Space.
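The paper recovers 3D positions from 2D detections seen by multiple calibrated cameras. One common way to do this (not necessarily the authors' exact formulation) is linear triangulation from two projection matrices, sketched below with generic inputs.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of a 3D point from two calibrated views.

    P1, P2: (3, 4) camera projection matrices
    x1, x2: (2,) pixel coordinates of the same point in each view
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # inhomogeneous 3D coordinates
```

The cascaded detector itself could be approximated with a fast linear SVM screening HOG windows, followed by an RBF-kernel SVM applied only to windows the linear stage accepts; that design keeps the expensive kernel evaluation off most of the image.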

3D 카메라 기반 디지털 좌표 인식 기술 제안 (Proposal of 3D Camera-Based Digital Coordinate Recognition Technology)

  • 고준영;이강희
    • 한국컴퓨터정보학회: Proceedings of the 66th Summer Conference 2022, Vol. 30, No. 2 / pp.229-230 / 2022
  • This paper proposes a 3D camera-based digital coordinate recognition technique combined with CNN object detection. The technique detects and classifies targets and determines their positions using Intel's Realsense D455 3D depth camera. Unlike the built-in distance measurement of conventional depth cameras, it recognizes coordinates and can therefore also compute the distance between coordinates. It also employs multithreading, sharing memory with the TensorFlow SSD architecture to reduce wasted system resources and increase speed. By computing distances between coordinates, the technique can be applied in a variety of settings such as sports, psychology, play, and industry. (A small depth-to-coordinate sketch follows this entry.)

  • PDF
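The Realsense SDK provides deprojection directly; the sketch below instead shows the generic pinhole-model arithmetic behind turning a pixel and its depth into a 3D camera-frame coordinate, and the Euclidean distance between two such coordinates. The intrinsics and pixel values are hypothetical, not taken from the paper.

```python
import numpy as np

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) with depth in metres to camera XYZ."""
    X = (u - cx) * depth_m / fx
    Y = (v - cy) * depth_m / fy
    return np.array([X, Y, depth_m])

# Hypothetical intrinsics and two detected pixels with their depths.
fx, fy, cx, cy = 640.0, 640.0, 640.0, 360.0
p1 = deproject(500, 400, 2.10, fx, fy, cx, cy)
p2 = deproject(900, 380, 2.45, fx, fy, cx, cy)
print("distance between coordinates [m]:", np.linalg.norm(p1 - p2))
```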

How to utilize vegetation survey using drone image and image analysis software

  • Han, Yong-Gu;Jung, Se-Hoon;Kwon, Ohseok
    • Journal of Ecology and Environment / Vol. 41, No. 4 / pp.114-119 / 2017
  • This study analyzed the error range and resolution of images from a rotary-wing drone by comparing them with field-measurement results, and examined stand patterns for actual vegetation mapping by comparing the drone images with aerial images provided by the National Geographic Information Institute of Korea. A total of 11 ground control points (GCPs) were selected in the area, and the coordinates of the points were determined. In the analysis of the aerial images taken by the drone, the error per pixel was 0.284 cm. A digital elevation model (DEM), a digital surface model (DSM), and an orthomosaic image were also extracted. When the drone images were compared with the GCP coordinates, the root mean square error (RMSE) was 2.36, 1.37, and 5.15 m in the X, Y, and Z directions, respectively. Because of this error, there were some differences in location between images edited after field measurement and images edited without field measurement. Drone images taken over the stream and the forest were also compared with the 51 and 25 cm resolution aerial images provided by the National Geographic Information Institute of Korea to identify stand patterns. Image analysis software (eCognition) was used to provide a standard for classifying polygons in each image. As a result, the drone images produced more precise polygons than the 51 and 25 cm resolution images provided by the National Geographic Information Institute of Korea. Therefore, if drones are used appropriately according to the characteristics of the subject, they offer advantages for vegetation-change surveys and general monitoring surveys, since they can acquire detailed information and take images continuously.
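The per-axis RMSE values quoted above follow the usual definition, RMSE = sqrt(mean((estimate - reference)^2)), computed separately for X, Y, and Z over the GCPs. A tiny sketch with placeholder arrays (the paper's 11 GCP coordinates are not reproduced here):

```python
import numpy as np

def rmse_per_axis(estimated, reference):
    """Per-axis RMSE between (N, 3) estimated and reference GCP coordinates."""
    return np.sqrt(np.mean((estimated - reference) ** 2, axis=0))

# Placeholder coordinates only, for illustration.
est = np.array([[100.2, 200.1, 50.3], [150.9, 250.4, 48.7], [120.5, 230.8, 55.1]])
ref = np.array([[100.0, 200.0, 50.0], [151.0, 250.0, 49.0], [120.0, 231.0, 54.0]])
print(rmse_per_axis(est, ref))   # -> RMSE in X, Y, Z
```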

DSM과 다시점 거리영상의 3차원 등록을 이용한 무인이동차량의 위치 추정: 가상환경에서의 적용 (Localization of Unmanned Ground Vehicle using 3D Registration of DSM and Multiview Range Images: Application in Virtual Environment)

  • 박순용;최성인;장재석;정순기;김준;채정숙
    • 제어로봇시스템학회논문지 / Vol. 15, No. 7 / pp.700-710 / 2009
  • A computer vision technique for estimating the location of an unmanned ground vehicle is proposed. Identifying the location of the unmanned vehicle is a very important task for automatic navigation of the vehicle. Conventional positioning sensors may fail to work properly in some real situations due to internal and external interferences. Given a DSM (Digital Surface Map), the location of the vehicle can be estimated by registering the DSM with multiview range images obtained at the vehicle. Registration of the DSM and range images yields the 3D transformation from the coordinates of the range sensor to the reference coordinates of the DSM. To estimate the vehicle position, we first register a range image to the DSM coarsely and then refine the result. For coarse registration, we employ a fast random sample matching method. After the initial position is estimated and refined, all subsequent range images are registered by applying a pair-wise registration technique between range images. To reduce the accumulated error of pair-wise registration, we periodically refine the registration between the range images and the DSM. A virtual environment is established to perform several experiments using a virtual vehicle. Range images are created from the DSM by modeling a real 3D sensor. The vehicle moves along three different paths while acquiring range images. Experimental results show that the registration error is under about 1.3 m on average.
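The coarse-to-fine pipeline above is not reproduced here; as a generic building block only, the rigid transform that best aligns already-matched 3D points (the core computation inside one pairwise-refinement step) can be obtained with the SVD-based Kabsch method, sketched below.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src (N, 3) onto dst (N, 3)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# In an ICP-style loop, correspondences would be re-estimated (e.g. nearest
# neighbours between a range scan and DSM points) before each call to rigid_align.
```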