• Title/Abstract/Keyword: Captured Image

Search results: 988 (processing time: 0.028 seconds)

Recognition of Individual Holstein Cattle by Imaging Body Patterns

  • Kim, Hyeon T.;Choi, Hong L.;Lee, Dae W.;Yoon, Yong C.
    • Asian-Australasian Journal of Animal Sciences
    • /
    • v.18 no.8
    • /
    • pp.1194-1198
    • /
    • 2005
  • A computer vision system was designed and validated to recognize individual Holstein cattle by processing images of their body patterns. The system comprises image capture, image pre-processing, an algorithmic processing stage, and an artificial-neural-network recognition algorithm. Optimal management of individual animals is one of the most important factors in keeping cattle healthy and productive. In this study, an image-processing system was used to recognize individual Holstein cattle from body-pattern images captured by a charge-coupled device (CCD). A recognition system was developed and applied to acquire images of 49 cattle. The pixel values of the body images were transformed into binary input signals for the neural network. Images of the 49 cattle were analyzed to train the input-layer elements, and ten cattle were used to verify the output-layer elements of the neural network using an individual-recognition program. The system proved reliable for the individual recognition of cattle in natural light.
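The pipeline described above (binarize body-pattern pixels into an input vector, feed it to a network with one output unit per individual) can be sketched as follows. The network shape, threshold, and training loop are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def binarize_pattern(image, threshold=128):
    """Turn a grayscale body-pattern image into a binary input vector."""
    return (np.asarray(image) > threshold).astype(float).ravel()

def train_recognizer(patterns, labels, n_classes, lr=0.5, epochs=300):
    """Single-layer softmax network: one output unit per individual animal."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=(n_classes, patterns.shape[1]))
    b = np.zeros(n_classes)
    targets = np.eye(n_classes)[labels]
    for _ in range(epochs):
        scores = patterns @ w.T + b
        probs = np.exp(scores - scores.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        grad = probs - targets                       # d(loss)/d(scores)
        w -= lr * grad.T @ patterns / len(patterns)
        b -= lr * grad.mean(axis=0)
    return w, b

def recognize(w, b, pattern):
    """Return the index of the individual with the highest output activation."""
    return int(np.argmax(pattern @ w.T + b))
```

For distinct body patterns, the argmax over output activations serves as the individual-recognition decision.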

Registration of Dental Range Images from an Intraoral Scanner (Intraoral Scanner로 촬영된 치아 이미지의 정렬)

  • Ko, Min Soo;Park, Sang Chul
    • Korean Journal of Computational Design and Engineering
    • /
    • v.21 no.3
    • /
    • pp.296-305
    • /
    • 2016
  • This paper proposes a framework for automatically aligning dental range images captured by depth sensors such as the Microsoft Kinect. Aligning dental images from intraoral scanning is difficult for applications that require an accurate model of dental-scan datasets with efficient computation, and the accuracy of the dental prosthesis is the most important requirement of a dental scanning system. Previous approaches to intraoral scanning use a Z-buffer ICP algorithm for fast registration, but it is relatively inaccurate and may accumulate errors. This paper proposes an additional alignment step that refines the rough result produced by the initial intraoral-scanning alignment. The method requires that each depth image in the set share some overlap with at least one other depth image. We implement an automatic refinement system that aligns all depth images into a complete model by computing a network of pairwise registrations; the order of the individual transformations is derived from a global network and from AABB bounding-box overlap detection.
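The overlap-detection step of the global network, deciding which depth-image pairs are worth registering pairwise by testing whether their axis-aligned bounding boxes (AABBs) intersect, can be sketched as below. The point-cloud representation and the tolerance-free overlap test are simplifying assumptions.

```python
import numpy as np

def aabb(points):
    """Axis-aligned bounding box of a point cloud: (min corner, max corner)."""
    pts = np.asarray(points, dtype=float)
    return pts.min(axis=0), pts.max(axis=0)

def aabb_overlap(box_a, box_b):
    """True if the two boxes intersect along every axis."""
    (amin, amax), (bmin, bmax) = box_a, box_b
    return bool(np.all(amax >= bmin) and np.all(bmax >= amin))

def candidate_pairs(scans):
    """Pairs of scans worth attempting pairwise registration on."""
    boxes = [aabb(s) for s in scans]
    return [(i, j)
            for i in range(len(scans))
            for j in range(i + 1, len(scans))
            if aabb_overlap(boxes[i], boxes[j])]
```

Restricting pairwise ICP to overlapping pairs keeps the registration network sparse and the computation tractable.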

The Crowd Activity Analysis based on Perspective Effect in Network Camera (네트워크 카메라 영상에서 원근감 효과를 고려한 군집 움직임 분석)

  • Lee, Sang-Geol;Park, Hyun-Jun;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2008.10a
    • /
    • pp.415-418
    • /
    • 2008
  • This paper presents a method for detecting moving objects in images captured by a network camera, analyzing their motion, and expressing the amount of movement as a numerical value. The method first applies several pre-processing steps to remove noise: obtaining a background image, computing a difference image, binarization, and so on. To account for the perspective effect, we propose a modified ART2 algorithm. Finally, the result of the ART2 clustering is expressed as a numerical value. The method is robust to changes in object size caused by the perspective effect.
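A minimal version of the pre-processing and perspective compensation, frame differencing, binarization, then weighting moving pixels more heavily the closer they are to the top (far side) of the frame, might look like this. The linear row weighting and the threshold are assumptions, and the modified ART2 clustering itself is not reproduced.

```python
import numpy as np

def motion_mask(prev, curr, threshold=25):
    """Binarized difference image: 1 where the scene changed."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return (diff > threshold).astype(float)

def perspective_weighted_activity(mask, far_weight=3.0):
    """Sum moving pixels, weighting distant (upper) rows more to offset
    the perspective effect that shrinks far-away objects."""
    h = mask.shape[0]
    row_w = np.linspace(far_weight, 1.0, h)[:, None]  # top row weighted most
    return float((mask * row_w).sum())
```

The same amount of pixel motion thus counts for more when it occurs far from the camera, where objects appear smaller.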

HAND GESTURE INTERFACE FOR WEARABLE PC

  • Nishihara, Isao;Nakano, Shizuo
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.664-667
    • /
    • 2009
  • There is strong demand for wearable PC systems that can support the user outdoors. When we are outdoors, our movement makes it impossible to use traditional input devices such as keyboards and mice. We propose a hand gesture interface based on image processing to operate wearable PCs. A semi-transparent PC screen is displayed on the head-mounted display (HMD), and the user makes hand gestures to select icons on the screen. The user's hand is extracted from the images captured by a color camera mounted above the HMD. Since skin color can vary widely with outdoor lighting, a key problem is accurately discriminating the hand from the background. The proposed method does not assume any fixed skin-color space. First, the image is divided into blocks, and blocks with similar average color are linked. Contiguous regions are then subjected to hand recognition. Blocks on the edges of the hand region are subdivided for more accurate finger discrimination. A change in hand shape is recognized as hand movement. Our current input interface associates a hand grasp with a mouse click. Tests on a prototype system confirm that the proposed method recognizes hand gestures accurately at high speed. We intend to develop a wider range of recognizable gestures.
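The block-linking idea, divide the frame into blocks, average each block's color, and link 4-neighboring blocks whose averages are close, can be sketched as follows. The block size, distance threshold, and flood-fill labelling are illustrative assumptions; the edge-block subdivision and hand recognition stages are not reproduced.

```python
import numpy as np

def block_means(img, block):
    """Average color of each block-by-block tile of an H x W x 3 image."""
    h, w, _ = img.shape
    return np.array([[img[r:r + block, c:c + block].reshape(-1, 3).mean(axis=0)
                      for c in range(0, w, block)]
                     for r in range(0, h, block)])

def link_blocks(means, tol=30.0):
    """Flood-fill labelling: neighboring blocks with similar means share a label."""
    rows, cols = means.shape[:2]
    labels = -np.ones((rows, cols), dtype=int)
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r, c] != -1:
                continue
            labels[r, c] = next_label
            stack = [(r, c)]
            while stack:
                y, x = stack.pop()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and labels[ny, nx] == -1
                            and np.linalg.norm(means[ny, nx] - means[y, x]) < tol):
                        labels[ny, nx] = next_label
                        stack.append((ny, nx))
            next_label += 1
    return labels
```

Because only local color similarity is used, no fixed skin-color space is assumed, matching the abstract's stated design goal.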

Fake Face Detection and Falsification Detection System Based on Face Recognition (얼굴 인식 기반 위변장 감지 시스템)

  • Kim, Jun Young;Cho, Seongwon
    • Smart Media Journal
    • /
    • v.4 no.4
    • /
    • pp.9-17
    • /
    • 2015
  • Recently, the need for advanced security technologies has been increasing as intelligent crime grows rapidly. Previous liveness-detection and fake-face-detection methods require improved accuracy before they can be put to practical use. In this paper, we propose a new liveness-detection method using pupil reflection and a new fake-image detection method using an AdaBoost detector. The proposed system first detects the eyes using multi-scale Gabor feature vectors, with template matching determining the admissible eye region. The image reflected in the pupil is then used to decide whether the captured image is live. Experimental results indicate that the proposed method is superior to previous methods in fake-image detection accuracy.
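The pupil-reflection cue can be illustrated with a simple heuristic on the detected eye region: a live eye typically shows a small specular highlight from ambient light, while a flat print shows either none or a uniformly bright patch. The thresholds below are illustrative assumptions, and the Gabor and AdaBoost stages are not reproduced.

```python
import numpy as np

def has_pupil_reflection(eye_roi, bright_thresh=200, min_px=2, max_frac=0.05):
    """Heuristic liveness check: a small, but not overwhelming, cluster of
    near-saturated pixels inside the pupil region suggests a real reflection."""
    roi = np.asarray(eye_roi)
    n_bright = int((roi >= bright_thresh).sum())
    return min_px <= n_bright <= int(max_frac * roi.size)
```

The upper bound rejects glare-covered prints, and the lower bound rejects matte prints with no highlight at all.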

Development of Peripheral Devices on the Endoscopic Surgery System (내시경 수술시스템의 주변장치 개발)

  • Lee, Young-Mook;Song, Chul-Gyu;Lee, Sang-Min;Kim, Won-Ky
    • Proceedings of the KOSOMBE Conference
    • /
    • v.1995 no.05
    • /
    • pp.164-166
    • /
    • 1995
  • The objective of this study is to develop peripheral devices for an endoscopic surgery system. The system consists of the following units: a high-resolution color monitor, a light source, a computer system, an endoscopic camera with a C-mount head, an irrigator, a color video printer, a Super VHS recorder, and a system rack. The color monitor is an NTSC monitor for displaying the image of the surgical site. The light source irradiates the interior of the body via an optical fiber, and the light projector adapts its brightness to the changing distance from the object. A miniature camera using a color CCD chip, together with the computer system, is used to capture and control images of the surgical site [1]. The video printer is a 300-dpi thermal-sublimation printer developed by Samsung Electronics Co., Ltd. The endoscopic data-management system comprises storage of captured images and a pathological database of patients [2-4].

Position Improvement of a Human-Following Mobile Robot Using Image Information of Walking Human (보행자의 영상정보를 이용한 인간추종 이동로봇의 위치 개선)

  • Jin Tae-Seok;Lee Dong-Heui;Lee Jang-Myung
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.11 no.5
    • /
    • pp.398-405
    • /
    • 2005
  • The intelligent robots needed in the near future are human-friendly robots able to coexist with humans and support them effectively. To realize this, robots need to recognize their position and posture in both known and unknown environments, and their localization should occur naturally. Estimating the robot's position under uncertainty is one of the most important problems in mobile-robot navigation. In this paper, we describe a method for localizing a mobile robot using image information of a moving object. The method combines the position observed by dead-reckoning sensors with the position estimated from images captured by a fixed camera. Using the a priori known path of a moving object in world coordinates and a perspective camera model, we derive geometric constraint equations that relate the image-frame coordinates of the moving object to the estimated robot position. A control method is also proposed to estimate the position and direction between the walking human and the mobile robot, and a Kalman filter scheme is used for the estimation of the mobile robot's localization. The performance is verified by computer simulation and experiment.
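The Kalman-filter fusion of a dead-reckoning prediction with a camera-derived position can be sketched with an identity measurement model. The covariances and the reduction to a single update step are simplifying assumptions, not the paper's full filter.

```python
import numpy as np

def kalman_fuse(x_pred, P_pred, z_cam, R_cam):
    """Correct a dead-reckoning prediction (x_pred, P_pred) with a camera
    position measurement z_cam of covariance R_cam, assuming H = I."""
    K = P_pred @ np.linalg.inv(P_pred + R_cam)   # Kalman gain
    x = x_pred + K @ (z_cam - x_pred)            # fused position estimate
    P = (np.eye(len(x_pred)) - K) @ P_pred       # reduced uncertainty
    return x, P
```

With equal trust in both sources, the fused estimate lands midway between them and the covariance halves, which is the intuition behind combining odometry with external vision.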

Design and Implementation of a 160×192 Pixel Array Capacitive-Type Fingerprint Sensor

  • Nam Jin-Moon;Jung Seung-Min;Lee Moon-Key
    • Proceedings of the IEEK Conference
    • /
    • summer
    • /
    • pp.82-85
    • /
    • 2004
  • This paper proposes an advanced signal-processing circuit for a capacitive-type fingerprint sensor, together with an effective isolation structure that minimizes the influence of electrostatic discharge (ESD) and removes signal-coupling noise between sensor pixels. The proposed detection circuit increases the voltage difference between ridge and valley by about 80% over the previous circuit. The test chip consists of a 160 × 192 array of sensing cells (9,913 × 11,666 µm²). The sensor plate area is 58 × 58 µm², the pitch is 60 µm, and the image resolution is 423 dpi. The chip was fabricated in a standard 0.35 µm CMOS process. It successfully captured a high-quality fingerprint image and performed registration and identification. The sensing and authentication time is 1 s, with an average power consumption of 10 mW at 3.0 V. The revealed ESD tolerance is 4.5 kV.

Hole-Filling Methods Using Depth and Color Information for Generating Multiview Images

  • Nam, Seung-Woo;Jang, Kyung-Ho;Ban, Yun-Ji;Kim, Hye-Sun;Chien, Sung-Il
    • ETRI Journal
    • /
    • v.38 no.5
    • /
    • pp.996-1007
    • /
    • 2016
  • This paper presents new hole-filling methods for generating multiview images using depth-image-based rendering (DIBR). Holes appear in depth images captured by 3D sensors and in multiview images rendered by DIBR. The holes are often found around background regions, because the background is prone to occlusion by foreground objects. Background-oriented priority and gradient-oriented priority are introduced to determine the hole-filling order after the DIBR process. In addition, to obtain a sample with which to fill a hole region, we propose fusing depth and color information to form a weighted sum of two patches for the depth (or rendered-depth) images, and a new distance measure to find the best-matched patch for the rendered color images. The conventional method produces jagged edges and blurring in the final results, whereas the proposed method minimizes both, which is important for high fidelity in stereo imaging. The experimental results show that, by reducing these errors, the proposed methods significantly improve the hole-filling quality of the generated multiview images.
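The background-oriented priority can be illustrated on a single image row: each hole pixel is filled from whichever adjacent known pixel lies deeper (i.e., on the background side), since disocclusion holes should be completed with background rather than foreground. The 1-D setting and nearest-neighbor copy are drastic simplifications of the paper's patch-based method.

```python
import numpy as np

def fill_row_background_first(color, depth, holes):
    """Fill hole pixels in one row, copying from the deeper (background) side."""
    color = color.astype(float).copy()
    depth = depth.astype(float).copy()
    n = len(color)
    for i in np.where(holes)[0]:
        left = i - 1
        while left >= 0 and holes[left]:
            left -= 1
        right = i + 1
        while right < n and holes[right]:
            right += 1
        candidates = [j for j in (left, right) if 0 <= j < n]
        src = max(candidates, key=lambda j: depth[j])  # prefer larger depth
        color[i] = color[src]
        depth[i] = depth[src]
    return color, depth
```

Filling from the background side avoids smearing foreground colors into regions that were occluded by the foreground object.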

Global Localization of Mobile Robots Using Omni-directional Images (전방위 영상을 이용한 이동 로봇의 전역 위치 인식)

  • Han, Woo-Sup;Min, Seung-Ki;Roh, Kyung-Shik;Yoon, Suk-June
    • Transactions of the Korean Society of Mechanical Engineers A
    • /
    • v.31 no.4
    • /
    • pp.517-524
    • /
    • 2007
  • This paper presents a global localization method using the circular correlation of an omni-directional image. Localization of a mobile robot, especially indoors, is a key component in the development of useful service robots. Although stereo vision is widely used for localization, its performance is limited by computational complexity and a narrow view angle. To compensate for these shortcomings, we use a single omni-directional camera that can capture instantaneous 360° panoramic images around the robot. Nodes around the robot are extracted using the correlation coefficients of the CHL (Circular Horizontal Line) between a landmark and the currently captured image. After finding possible nearby nodes, the robot moves to the nearest node based on the correlation values and the positions of these nodes. To accelerate computation, the correlation values are calculated using Fast Fourier Transforms. Experimental results and performance in a real home environment have shown the feasibility of the method.
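The FFT-accelerated circular correlation of two CHLs can be sketched as below; treating each CHL as a single 1-D intensity ring is an assumption about the data layout. Because a rotation of the robot circularly shifts the CHL, the peak of the correlation recovers the relative heading.

```python
import numpy as np

def circular_correlation(a, b):
    """Correlation of two equal-length rings at all circular shifts, via FFT."""
    A, B = np.fft.fft(a), np.fft.fft(b)
    return np.fft.ifft(A * np.conj(B)).real

def best_rotation(landmark_chl, current_chl):
    """Shift (in samples) that rotates the current CHL onto the landmark CHL."""
    return int(np.argmax(circular_correlation(landmark_chl, current_chl)))
```

The FFT reduces the cost of evaluating all N shifts from O(N²) to O(N log N), which is the acceleration the abstract refers to.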