• Title/Summary/Keyword: 좌표 인식 (coordinate recognition)


Study for applying the augmented reality onto postage stamps (우표의 증강현실 적용에 관한 연구)

  • Lee, Ki Ho
    • Cartoon and Animation Studies
    • /
    • s.33
    • /
    • pp.503-529
    • /
    • 2013
  • The commemorative AR postage stamps first presented at the YEOSU EXPO 2012 carry the meaning of connecting the present with the future through the convergence of the stamp, one of the most analog media still in use, with augmented reality, a cutting-edge digital technology. AR was applied to 10 of the 33 commemorative stamps. Their significance lies as much in their artistic value as in being a world first. The applied AR images are not merely realistic 3D renderings; through a visualized creative process they artistically represent and give meaning to each stamp image, building a 'new art space', a new concept situated between the real (analog) and the virtual (digital). This study analyzes the meaning of the images and establishes a concept for the AR content design. The design process reflects the meaning of the architecture and environment and the regional character of Yeosu under a surrealistic graphic concept. The 10 derived images were rendered as visual art after AR coding. The study realized markerless 3D image-tracking AR stamps, and the research results are as follows. First, it identified how to realize AR on stamps for mobile devices through the process of registering reference images, transforming coordinates, and compositing the AR content. Second, it showed the possibility of a new kind of virtual exhibition space. Third, it showed that visual formativeness can support immersion and that informativity can support usability.
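
A minimal sketch of the markerless image-tracking step the abstract describes, assuming OpenCV ORB features and homography estimation; the reference file name, match count, and RANSAC threshold are illustrative assumptions, not values from the paper.

```python
# Sketch: register a reference stamp image once, then locate it in camera frames
# by feature matching and homography estimation (markerless image tracking).
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

ref = cv2.imread("stamp_reference.png", cv2.IMREAD_GRAYSCALE)  # assumed file
ref_kp, ref_desc = orb.detectAndCompute(ref, None)

def track_stamp(frame_gray):
    """Return the 3x3 homography mapping the reference stamp into the frame,
    or None if the stamp is not reliably found."""
    kp, desc = orb.detectAndCompute(frame_gray, None)
    if desc is None:
        return None
    matches = matcher.match(ref_desc, desc)
    if len(matches) < 15:  # assumed minimum number of matches
        return None
    src = np.float32([ref_kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H  # H anchors the virtual 3D content onto the stamp in this frame
```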

The Obstacle Avoidance Algorithm of Mobile Robot using Line Histogram Intensity (Line Histogram Intensity를 이용한 이동로봇의 장애물 회피 알고리즘)

  • 류한성;최중경;구본민;박무열;방만식
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.6 no.8
    • /
    • pp.1365-1373
    • /
    • 2002
  • In this paper, we present two vision algorithms for obstacle avoidance by a mobile robot equipped with a CCD camera. The approach is a simple algorithm that compares grey levels in the input images. The mobile robot relies on image processing and movement commands from a host PC. We have studied a self-controlled mobile robot system with a CCD camera, consisting of a digital signal processor, step motors, an RF module, and the camera. A wireless RF module transmits movement commands between the robot and the host PC. The robot moves straight ahead until it recognizes an obstacle in the input image, which is preprocessed by edge detection, conversion, and thresholding, and it avoids the obstacle once it is recognized by line histogram intensity. The host PC measures the waveform of the line histogram sampled every 20 pixels, where the histogram is taken over the (x, y) pixel values; for example, the first line histogram intensity waveform runs from (0, 0) to (0, 197) and the last from (280, 0) to (280, 197). We then separate uniform-wave regions from non-uniform-wave regions; the extent of the uniform wave corresponds to the obstacle region. We find this algorithm very useful for obstacle avoidance by a moving robot.
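
A minimal sketch of the scanline idea described above, under the assumption that a "line histogram" is the intensity profile of the preprocessed (edge-detected, thresholded) image along rows sampled every 20 pixels, with low-variance ("uniform") profiles flagged as candidate obstacle rows; the Canny and variance thresholds are illustrative.

```python
# Sketch: flag image rows whose line-intensity profile is uniform after
# edge detection and thresholding, as candidate obstacle regions.
import cv2
import numpy as np

def obstacle_rows(gray, step=20, var_thresh=500.0):
    edges = cv2.Canny(gray, 50, 150)                    # edge detection
    _, binary = cv2.threshold(edges, 127, 255, cv2.THRESH_BINARY)
    flagged = []
    for y in range(0, binary.shape[0], step):
        profile = binary[y, :].astype(np.float32)       # line histogram for row y
        if profile.var() < var_thresh:                  # "uniform wave" region
            flagged.append(y)
    return flagged
```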

The Development and Application of the New Model of Moon Phases (새로운 달 위상 모형의 개발과 그 적용)

  • Chae, Dong-Hyun
    • Journal of Korean Elementary Science Education
    • /
    • v.27 no.4
    • /
    • pp.385-398
    • /
    • 2008
  • The purpose of this study is to understand the effect of the New Model of Phases of the Moon on conceptual change in preservice teachers. With their consent to participate, the researcher interviewed two preservice teachers just before teaching a class that used the New Model of Phases of the Moon. A post-interview with the same content as the pre-interview was performed one month later. The main interview prompts were as follows: 'Explain the shape of the Moon by drawing it.', 'Explain the relative positions of the Sun, Earth, and Moon for each phase of the Moon by drawing them.', 'What do you think causes the phases of the Moon?', and 'Draw a picture to explain why we always see only one side of the Moon.' The results are as follows. First, the class with the New Model of Phases of the Moon enabled the participants to perceive the relationship of the Sun, Earth, and Moon in three dimensions rather than two, and it helped change the misconception that the Moon's shape is caused by a shadow cast on the Moon. Second, the class with the New Model of Phases of the Moon helped the preservice teachers better understand the different positional relationships among the Sun, Earth, and Moon for each of the Moon's shapes. Third, the class adopting the New Model of Phases of the Moon helped them form scientific conceptions about the causes of the Moon's phase changes. Fourth, the class with the New Model of Phases of the Moon was not appropriate for explaining why only one face of the Moon is seen. Based on these results, the researcher noted the limitations of the model and suggested that it would help learners understand the Moon's phase changes and improve their spatial perception.


A Hybrid Approach of Efficient Facial Feature Detection and Tracking for Real-time Face Direction Estimation (실시간 얼굴 방향성 추정을 위한 효율적인 얼굴 특성 검출과 추적의 결합방법)

  • Kim, Woonggi;Chun, Junchul
    • Journal of Internet Computing and Services
    • /
    • v.14 no.6
    • /
    • pp.117-124
    • /
    • 2013
  • In this paper, we present a new method that efficiently estimates face direction from a sequence of input video images in real time. The proposed method first detects the facial region and the major facial features (both eyes, the nose, and the mouth) within it using Haar-like features, which are relatively insensitive to lighting variation. The feature points are then tracked in every frame using optical flow, and the direction of the face is determined from the tracked points. Further, to avoid recognizing false positions of the facial features when their coordinates are lost during optical-flow tracking, the method validates the feature locations in real time by template matching against the detected facial features. Depending on the correlation obtained when re-checking the features by template matching, the process either re-detects the facial features or keeps tracking them while estimating the face direction. In the feature detection phase, the template-matching stage stores the locations of four facial features (the left eye, right eye, nose tip, and mouth), and it re-evaluates this information by detecting new facial features from the input image whenever the similarity between the stored information and the features traced by optical flow crosses a certain threshold. The proposed approach automatically alternates between the feature-detection and feature-tracking phases and estimates face pose stably in real time. The experiments show that the proposed method estimates face direction efficiently.
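
A minimal sketch of the detect-then-track loop described above, using OpenCV: Haar cascade detection, Lucas-Kanade optical flow between frames, and a template-matching check to decide when tracking has drifted and detection must be re-run. The cascade file, the use of generic corner points instead of the paper's four named features, and the correlation threshold are illustrative assumptions.

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face_points(gray):
    """Detect the face and seed trackable points inside the face box."""
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    pts = cv2.goodFeaturesToTrack(gray[y:y+h, x:x+w], 20, 0.01, 5)
    if pts is None:
        return None
    return pts.reshape(-1, 2).astype(np.float32) + np.array([x, y], np.float32)

def track(prev_gray, gray, pts):
    """Lucas-Kanade optical flow from the previous frame to the current one."""
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(
        prev_gray, gray, pts.reshape(-1, 1, 2), None)
    return new_pts.reshape(-1, 2)[status.ravel() == 1]

def drifted(gray, template, top_left, corr_thresh=0.7):
    """Template-matching validity check: re-detect if correlation drops."""
    x, y = top_left
    h, w = template.shape
    patch = gray[y:y+h, x:x+w]
    if patch.shape != template.shape:
        return True
    corr = cv2.matchTemplate(patch, template, cv2.TM_CCOEFF_NORMED)[0, 0]
    return corr < corr_thresh
```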

Practical Reading of Gilles Deleuze on Frame from Filmmaking Perspective (들뢰즈의 프레임: 영화제작 관점에서 읽기)

  • Kim, Jung-Ho;Kim, Jae Sung
    • The Journal of the Korea Contents Association
    • /
    • v.19 no.11
    • /
    • pp.527-548
    • /
    • 2019
  • For Deleuze, the frame is a closed system containing numerous subsets of information. The frame can be defined through mathematics and physics: it is a geometric system of equilibrium and harmony with variables or coordinates. As in painting, linear perspective represents three-dimensional depth on a two-dimensional plane through vanishing points and horizon lines within the frame. Linear perspective makes it possible to assume an infinity toward the vanishing point and an infinity toward the outside of the frame, opposite the vanishing point. Not only the figures and lines on the drawing paper but also the space between them came to be recognized; that space is the third dimension. With the centripetal and centrifugal forces of the frame, the frame follows the physical rules of force and movement. Deframing works against the dominant linear perspective and the central tendency of the frame. Film contains four-dimensional time while reproducing three-dimensional space in two dimensions. The outside of the frame, or what lies beyond the field of view, may contain thought, the fifth dimension.

Producing True Orthophoto Using Multi-Dimensional Spatial Information (다차원공간정보를 이용한 실감정사영상 제작 방안)

  • Lee, Hyun-Jik
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.26 no.3
    • /
    • pp.241-253
    • /
    • 2008
  • Recently, a new paradigm of urban planning has emerged in which ubiquitous concepts such as the u-City and u-Eco City are introduced, and the need for high-quality three-dimensional geo-spatial information of urban areas is rising. An orthophoto can be produced with less cost and time than a digital map, using a personal computer and without highly specialized technicians. Because the positional relationships between man-made and natural features are preserved, information such as distance, angle, horizontal and vertical position coordinates of the terrain, and area can be obtained directly from the orthophoto. In addition, because the orthophoto is expressed as imagery, its visual effect is good and detailed parts of the terrain are easy to interpret, so it is produced and used in various fields; it conveys information more effectively than a digital map. Therefore, this study presents a way of generating a detailed DSM for producing a true orthophoto of an urban area, and a way to produce an optimum true orthophoto by experimentally investigating the optimum parameters for the geometric and radiometric correction of the orthophoto. The study also examined the potential of the approach by building a 3-dimensional city model of the study area using the proposed generation method.
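
A coarse sketch, under strong simplifying assumptions, of the occlusion test at the core of true-orthophoto generation: each DSM cell is projected toward the exposure station and a z-buffer keeps only the surface point nearest the camera, so cells hidden by buildings can be flagged and filled from other images later. The camera position, DSM layout, and angular binning are hypothetical stand-ins for a full sensor model, not the paper's method.

```python
import numpy as np

def visible_mask(dsm, cell_size, cam_xyz):
    """Return a boolean mask of DSM cells visible from the camera position."""
    rows, cols = dsm.shape
    ys, xs = np.mgrid[0:rows, 0:cols]
    X, Y, Z = xs * cell_size, ys * cell_size, dsm
    dx, dy, dz = X - cam_xyz[0], Y - cam_xyz[1], Z - cam_xyz[2]
    dist = np.sqrt(dx**2 + dy**2 + dz**2)
    # Coarse angular bins stand in for projecting each ray into the image plane.
    az = np.round(np.degrees(np.arctan2(dy, dx)) * 10).astype(int)
    el = np.round(np.degrees(np.arctan2(-dz, np.hypot(dx, dy))) * 10).astype(int)
    zbuf = {}
    visible = np.zeros_like(dsm, dtype=bool)
    for idx in np.argsort(dist, axis=None):         # nearest surface point first
        key = (az.flat[idx], el.flat[idx])
        if key not in zbuf:                          # first hit in this ray bin wins
            zbuf[key] = dist.flat[idx]
            visible.flat[idx] = True
    return visible
```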

Process Development for Optimizing Sensor Placement Using 3D Information by LiDAR (LiDAR자료의 3차원 정보를 이용한 최적 Sensor 위치 선정방법론 개발)

  • Yu, Han-Seo;Lee, Woo-Kyun;Choi, Sung-Ho;Kwak, Han-Bin;Kwak, Doo-Ahn
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.18 no.2
    • /
    • pp.3-12
    • /
    • 2010
  • In previous studies, digital measurement systems and analysis algorithms were developed using related techniques such as aerial photograph detection and high-resolution satellite image processing. However, those studies were limited to 2-dimensional geo-processing. It is therefore necessary to apply 3-dimensional spatial information and coordinate systems for higher accuracy in recognizing and locating geo-features. The objective of this study was to develop a stochastic algorithm for optimal sensor placement using 3-dimensional spatial analysis. The 3-dimensional information from LiDAR was applied in a sensor-field algorithm based on 2- and/or 3-dimensional gridded points. The study was conducted as three case studies of the optimal sensor placement algorithm: the first on 2-dimensional space without obstacles (2D, no obstacles), the second on 2-dimensional space with obstacles (2D, obstacles), and the third on 3-dimensional space with obstacles (3D, obstacles). Finally, this study suggests a methodology for optimal sensor placement, especially for ground-mounted sensors, using LiDAR data, and shows the applicability of the algorithm to information collection with sensors.
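
A minimal sketch of the kind of grid-based sensor placement the abstract describes: candidate positions on a gridded height surface (e.g. LiDAR-derived), a simple line-of-sight check against obstacles, and a greedy selection of the sensor that covers the most still-uncovered cells. The grid, sensor range, eye height, and greedy strategy are illustrative assumptions, not the paper's stochastic algorithm.

```python
import numpy as np
from itertools import product

def line_of_sight(height, a, b, eye=1.5):
    """Sample the segment a->b on the grid; fail if terrain rises above the ray."""
    (r0, c0), (r1, c1) = a, b
    n = int(max(abs(r1 - r0), abs(c1 - c0))) + 1
    z0, z1 = height[r0, c0] + eye, height[r1, c1]
    for t in np.linspace(0.0, 1.0, n):
        r = int(round(r0 + t * (r1 - r0)))
        c = int(round(c0 + t * (c1 - c0)))
        if 0.0 < t < 1.0 and height[r, c] > z0 + t * (z1 - z0) + 1e-6:
            return False
    return True

def greedy_placement(height, n_sensors, sensor_range):
    """Pick sensor cells one at a time, each maximizing newly covered cells."""
    rows, cols = height.shape
    cells = list(product(range(rows), range(cols)))
    covered, chosen = set(), []
    for _ in range(n_sensors):
        best, best_gain = None, set()
        for cand in cells:
            gain = {c for c in cells
                    if c not in covered
                    and np.hypot(c[0] - cand[0], c[1] - cand[1]) <= sensor_range
                    and line_of_sight(height, cand, c)}
            if len(gain) > len(best_gain):
                best, best_gain = cand, gain
        chosen.append(best)
        covered |= best_gain
    return chosen
```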

Automation of Bio-Industrial Process Via Tele-Task Command(I) -identification and 3D coordinate extraction of object- (원격작업 지시를 이용한 생물산업공정의 생력화 (I) -대상체 인식 및 3차원 좌표 추출-)

  • Kim, S. C.;Choi, D. Y.;Hwang, H.
    • Journal of Biosystems Engineering
    • /
    • v.26 no.1
    • /
    • pp.21-28
    • /
    • 2001
  • Major deficiencies of current automation schemes, including various robots for bioproduction, are the lack of task adaptability and real-time processing, low performance on diverse tasks, the lack of robustness of task results, high system cost, and loss of the operator's trust. This paper proposes a scheme that could overcome the current limitations of the task abilities of conventional computer-controlled automatic systems. The proposed scheme is man-machine hybrid automation via tele-operation, which can handle various bioproduction processes, and it was divided into two categories: efficient task sharing between the operator and the computer-controlled machine (CCM), and an efficient interface between the operator and the CCM. To realize the proposed concept, the task of identifying an object and extracting its 3D coordinates was selected. 3D coordinate information was obtained from camera calibration, using the camera as a measurement device. Two stereo images were obtained by moving the camera a fixed distance horizontally, normal to the focal axis, and acquiring an image at each of the two locations. The transformation matrix for camera calibration was obtained via a least-squares approach using six known pairs of points in the 2D image and the 3D world space. The 3D world coordinate was then obtained from the two sets of image pixel coordinates using the calibrated transformation matrices. As an interface between the operator and the CCM, a touch screen mounted on the monitor and a remotely captured imaging system were used. The operator indicated the object by touching the captured image on the touch screen; a local image-processing area of a fixed size was then specified around the touch, and image processing was performed on that local area to extract the desired features of the object. MS Windows-based interface software was developed using Visual C++ 6.0, with four modules: remote image acquisition, task command, local image processing, and 3D coordinate extraction. The proposed scheme showed the feasibility of real-time processing, robust and precise object identification, and adaptability to various jobs and environments in the selected sample tasks.
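
A minimal sketch of the two calibration steps described above: estimating a 3x4 projection matrix by least squares (a DLT-style formulation) from known 3D-2D point pairs, and triangulating a world point from its pixel coordinates in the two calibrated views. At least six correspondences per camera are assumed, as in the abstract; the exact formulation used in the paper may differ, and the point data themselves are placeholders.

```python
import numpy as np

def calibrate_dlt(world_pts, image_pts):
    """world_pts: (N, 3), image_pts: (N, 2), N >= 6. Returns the 3x4 matrix P."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 4)      # least-squares solution, defined up to scale

def triangulate(P1, P2, uv1, uv2):
    """Linear triangulation of one point seen at pixel uv1 (camera 1) and uv2 (camera 2)."""
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.stack([u1 * P1[2] - P1[0], v1 * P1[2] - P1[1],
                  u2 * P2[2] - P2[0], v2 * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]              # homogeneous -> Euclidean 3D world coordinate
```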


Vehicle Headlight and Taillight Recognition in Nighttime using Low-Exposure Camera and Wavelet-based Random Forest (저노출 카메라와 웨이블릿 기반 랜덤 포레스트를 이용한 야간 자동차 전조등 및 후미등 인식)

  • Heo, Duyoung;Kim, Sang Jun;Kwak, Choong Sub;Nam, Jae-Yeal;Ko, Byoung Chul
    • Journal of Broadcast Engineering
    • /
    • v.22 no.3
    • /
    • pp.282-294
    • /
    • 2017
  • In this paper, we propose a novel intelligent headlight control (IHC) system that is robust to various road lights and to camera movement caused by vehicle driving. To detect candidate light blobs, the region of interest (ROI) is divided into a front ROI (FROI) and a back ROI (BROI) by considering the camera geometry based on a perspective range-estimation model. Light blobs such as vehicle headlights and taillights, reflected light, and the surrounding road lighting are then segmented using two different adaptive thresholds. From the segmented blobs, taillights are first detected using a redness check and a random-forest classifier based on Haar-like features. For headlight and taillight classification we use a random forest instead of the popular support vector machine or convolutional neural networks, to support fast learning and testing in real-life applications. Pairing is performed using predefined geometric rules, such as vertical-coordinate similarity and association checks between blobs. The proposed algorithm was successfully applied to various night-time driving sequences, and the results show that it performs better than recent related works.
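
A minimal sketch of the classification stage described above: each candidate light blob is cropped, a Haar wavelet decomposition supplies the feature vector, and a random forest separates headlights from taillights after a simple redness check. The patch size, wavelet level, forest parameters, and redness ratio are illustrative assumptions; the paper's exact feature set and training data are not reproduced here.

```python
import numpy as np
import pywt
import cv2
from sklearn.ensemble import RandomForestClassifier

def wavelet_features(patch, size=(24, 24), level=2):
    """Grayscale blob patch -> flattened Haar wavelet coefficient vector."""
    patch = cv2.resize(patch, size).astype(np.float32)
    coeffs = pywt.wavedec2(patch, "haar", level=level)
    arr, _ = pywt.coeffs_to_array(coeffs)
    return arr.ravel()

def is_reddish(bgr_patch, ratio=1.3):
    """Cheap redness check used before the taillight classifier."""
    b, g, r = cv2.split(bgr_patch.astype(np.float32))
    return r.mean() > ratio * max(g.mean(), b.mean())

# Training on pre-labelled blob patches (labels: 0 = headlight, 1 = taillight).
# train_patches / train_labels are assumed to exist; the forest size is a typical choice.
clf = RandomForestClassifier(n_estimators=100, max_depth=10, random_state=0)
# clf.fit(np.stack([wavelet_features(p) for p in train_patches]), train_labels)
# label = clf.predict(wavelet_features(blob_patch).reshape(1, -1))[0]
```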

An Object Detection and Tracking System using Fuzzy C-means and CONDENSATION (Fuzzy C-means와 CONDENSATION을 이용한 객체 검출 및 추적 시스템)

  • Kim, Jong-Ho;Kim, Sang-Kyoon;Hang, Goo-Seun;Ahn, Sang-Ho;Kang, Byoung-Doo
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.16 no.4
    • /
    • pp.87-98
    • /
    • 2011
  • Detecting a moving object in video and tracking it are basic and necessary preprocessing steps in many video systems, such as object recognition, context awareness, and intelligent visual surveillance. In this paper, we propose a method that detects a moving object quickly and accurately under real-time changes of background and lighting. Furthermore, our system detects the object robustly even when the target object is partially occluded by other objects. For effective detection, an eigen-space and FCM are combined, and a CONDENSATION algorithm is used to track the detected object robustly. First, training data collected from background images are linearly transformed using Principal Component Analysis (PCA). Second, an eigen-background is constructed from the principal components that best discriminate the object from the background. Next, the object is detected with FCM applied to the convolution of the eigen-vectors from the previous steps with the input image. Finally, the object is tracked by using the coordinates of the detected object as the input of the CONDENSATION algorithm. Images containing various moving objects were collected at the same time and used as training data, so that the system adapts to changes of lighting and background with a fixed camera. The test results show that the proposed method detects the object robustly under changes of lighting and background and under partial movement of the object.
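
A minimal sketch of the detection stage described above: an eigen-background is learned with PCA from background frames, each new frame is reconstructed from it, and the per-pixel reconstruction error is split into object and background by a small fuzzy C-means step. CONDENSATION tracking is only indicated by a comment; the number of components, iteration counts, and the use of reconstruction error as the FCM input are illustrative assumptions.

```python
import numpy as np

def eigen_background(bg_frames, k=5):
    """bg_frames: (N, H*W) flattened grayscale frames -> mean and top-k eigenvectors."""
    mean = bg_frames.mean(axis=0)
    _, _, Vt = np.linalg.svd(bg_frames - mean, full_matrices=False)
    return mean, Vt[:k]

def fcm_1d(x, c=2, m=2.0, iters=30):
    """Tiny fuzzy C-means on a 1-D sample vector; returns memberships (c, len(x)) and centers."""
    centers = np.linspace(x.min(), x.max(), c)
    u = np.full((c, x.size), 1.0 / c)
    for _ in range(iters):
        d = np.abs(x[None, :] - centers[:, None]) + 1e-9
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
        centers = (u ** m @ x) / (u ** m).sum(axis=1)
    return u, centers

def detect_object_mask(frame_flat, mean, eigvecs):
    """Per-pixel foreground mask from eigen-background reconstruction error + FCM."""
    proj = (frame_flat - mean) @ eigvecs.T
    recon = mean + proj @ eigvecs
    err = np.abs(frame_flat - recon)
    u, centers = fcm_1d(err)
    fg_cluster = int(np.argmax(centers))   # cluster with the larger error = object
    # The coordinates of this mask would seed the CONDENSATION particle filter.
    return u[fg_cluster] > 0.5
```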