• Title/Summary/Keyword: Computer Vision system


Development of Computer Vision System for Individual Recognition and Feature Information of Cow (II) - Analysis of body parameters using stereo image - (젖소의 개체인식 및 형상 정보화를 위한 컴퓨터 시각 시스템 개발(II) - 스테레오 영상을 이용한 체위 분석 -)

  • 이종환
    • Journal of Biosystems Engineering
    • /
    • v.28 no.1
    • /
    • pp.65-76
    • /
    • 2003
  • The analysis of cow body parameters is important to provide useful information for cow management and evaluation. Present methods place considerable stress on cows because they are invasive and constrain cow postures during measurement of body parameters. This study was conducted to develop a stereo vision system for non-invasive analysis of cow body features. Body feature parameters of 16 heads at two farms (A, B) were measured using scales, and nineteen stereo images of them in walking postures were captured under outdoor illumination. In this study, the camera calibration and inverse perspective transformation technique was established for the stereo vision system. Two calibration results were presented, for farm A and farm B respectively, because the setup distances from camera to cow were 510 cm at farm A and 630 cm at farm B. Calibration error values for the stereo vision system were within 2 cm for farm A and less than 4.9 cm for farm B. Eleven feature points of the cow body were extracted on the stereo images interactively, and five assistant points were determined by computer program. 3D world coordinates for these 15 points were calculated by computer program and used for calculation of cow body parameters such as withers height, pelvic arch height, body length, slope body length, chest depth and chest width. Measured errors for body parameters were less than 10% for most cows. For a few cows, measured errors for slope body length and chest width were more than 10% due to searching errors for their feature points at inside-body positions. An equation for chest girth estimated from chest depth and chest width was presented. The maximum estimation error for chest girth was within 10% of real values, and the mean estimation error was 8.2 cm. The analysis of cow body parameters using the stereo vision system was successful, although body shape on the binocular stereo image was distorted due to cow movements.
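The 3D-coordinate computation the abstract describes rests on standard binocular triangulation from a calibrated, rectified stereo pair. A minimal sketch of that idea (not the authors' calibration pipeline; the focal length and baseline below are illustrative values, not the paper's):

```python
def triangulate(x_left, x_right, y, focal_px, baseline_cm):
    """Recover a 3D point (in cm) from a rectified stereo pair.

    x_left, x_right : horizontal pixel coordinates of the same feature
    y               : vertical pixel coordinate (shared after rectification)
    focal_px        : focal length in pixels
    baseline_cm     : distance between the two camera centers in cm
    Principal point is taken as the origin for simplicity.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("feature must have positive disparity")
    z = focal_px * baseline_cm / disparity   # depth from disparity
    x = x_left * z / focal_px                # lateral position
    y3 = y * z / focal_px                    # vertical position
    return (x, y3, z)

# Example: f = 800 px, baseline 20 cm, disparity 32 px -> depth 500 cm
point = triangulate(160.0, 128.0, 40.0, 800.0, 20.0)
```

Body parameters such as withers height or body length then follow as distances between pairs of such reconstructed feature points.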

3D Facial Landmark Tracking and Facial Expression Recognition

  • Medioni, Gerard;Choi, Jongmoo;Labeau, Matthieu;Leksut, Jatuporn Toy;Meng, Lingchao
    • Journal of information and communication convergence engineering
    • /
    • v.11 no.3
    • /
    • pp.207-215
    • /
    • 2013
  • In this paper, we address the challenging computer vision problem of obtaining a reliable facial expression analysis from a naturally interacting person. We propose a system that combines a 3D generic face model, 3D head tracking, and a 2D tracker to track facial landmarks and recognize expressions. First, we extract facial landmarks from a neutral frontal face, and then we deform a 3D generic face to fit the input face. Next, we use our real-time 3D head tracking module to track a person's head in 3D and predict facial landmark positions in 2D using the projection from the updated 3D face model. Finally, we use the tracked 2D landmarks to update the 3D landmarks. This integrated tracking loop enables efficient tracking of the non-rigid parts of a face in the presence of large 3D head motion. We conducted experiments for facial expression recognition using both frame-based and sequence-based approaches. Our method provides a 75.9% recognition rate on 8 subjects with 7 key expressions. Our approach is a considerable step toward new applications including human-computer interaction, behavioral science, robotics, and games.
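The prediction step in the loop above, projecting the updated 3D face model to expected 2D landmark positions, can be sketched with a pinhole camera model. The focal length and principal point here are assumed example values, not the paper's:

```python
import numpy as np

def project_landmarks(points_3d, R, t, f, cx, cy):
    """Project 3D face-model landmarks into the image given a head pose.

    points_3d : (N, 3) model points
    R, t      : head rotation (3x3) and translation from the tracker
    f         : focal length in pixels; (cx, cy) : principal point
    """
    cam = points_3d @ R.T + t                 # model -> camera coordinates
    uv = np.empty((len(cam), 2))
    uv[:, 0] = f * cam[:, 0] / cam[:, 2] + cx # perspective division
    uv[:, 1] = f * cam[:, 1] / cam[:, 2] + cy
    return uv

# Identity head pose, landmarks 10 units in front of the camera
pts = np.array([[0.0, 0.0, 10.0], [1.0, -1.0, 10.0]])
uv = project_landmarks(pts, np.eye(3), np.zeros(3), 500.0, 320.0, 240.0)
```

The 2D tracker then refines these predicted positions, and the refinements flow back into the 3D landmarks, closing the loop.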

A study on the development of gas measurement system in shoes mold and automatic gas-vent exchange machine with computer vision (신발금형의 가스 배출량 측정 장치와 영상정보를 이용한 가스벤트 자동 교환 시스템의 개발)

  • Kwon, Jang-Woo;Hong, Jun-Eui;Yoon, Dong-Eop;Choi, Heung-Ho;Kil, Gyung-Suk
    • Journal of Sensor Science and Technology
    • /
    • v.15 no.1
    • /
    • pp.20-27
    • /
    • 2006
  • This paper presents a gas measurement system for deciding hole positions on a PU middle-sole mold from the computed gas amount. The optimal number of holes and their positions on the shoe mold are decided from statistical experimental results to overcome the problem of excessive expense in gas vent exchange. This paper also describes a gas vent exchange mechanism using a computer vision system. The gas hole detection process is based on a computer vision algorithm using simple pattern matching. The experimental results showed that the system was useful for calculating the number of holes and their positions on the shoe mold.
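The "simple pattern matching" used for gas hole detection can be illustrated with a sliding-window template match; a minimal sum-of-squared-differences sketch (the paper does not specify its matching criterion, so SSD here is an assumption):

```python
import numpy as np

def match_template_ssd(image, template):
    """Locate a template (e.g. a gas-vent hole pattern) in a grayscale
    image by sliding-window sum of squared differences; returns the
    (row, col) of the best match's top-left corner."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = np.sum((image[r:r+th, c:c+tw] - template) ** 2)
            if ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

# Plant the template at (5, 7) in a random image and recover its position
rng = np.random.default_rng(0)
img = rng.random((20, 20))
tpl = img[5:9, 7:11].copy()
found = match_template_ssd(img, tpl)
```

Production code would typically use a normalized correlation measure instead of raw SSD to tolerate lighting changes.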

Loosely-Coupled Vision/INS Integrated Navigation System

  • Kim, Youngsun;Hwang, Dong-Hwan
    • Journal of Positioning, Navigation, and Timing
    • /
    • v.6 no.2
    • /
    • pp.59-70
    • /
    • 2017
  • Since GPS signals are vulnerable to interference and obstruction, many alternative aiding systems have been proposed for integration with an inertial navigation system. Among these, vision-aided methods have become more attractive due to their benefits in weight, cost, and power consumption. This paper proposes a loosely-coupled vision/INS integrated navigation method that can work in GPS-denied environments. The proposed method improves navigation accuracy by correcting INS navigation and sensor errors using the position and attitude outputs of a landmark-based vision navigation system. Furthermore, it has the advantage of providing a redundant navigation output independent of the INS output. Computer simulations and van tests were carried out to show the validity of the proposed method. The results show that the proposed method works well and gives reliable navigation outputs with better performance.
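In a loosely-coupled integration, the vision system's position/attitude fix enters the filter as a direct measurement of the INS state. A minimal sketch of that correction step as a standard Kalman measurement update (position only, identity measurement matrix; the numbers are illustrative, not the paper's):

```python
import numpy as np

def vision_update(x_ins, P, z_vision, R_meas):
    """Loosely-coupled correction: the vision position fix z_vision
    corrects the INS position estimate x_ins via a Kalman update
    with measurement matrix H = I."""
    H = np.eye(len(x_ins))
    S = H @ P @ H.T + R_meas                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x_new = x_ins + K @ (z_vision - H @ x_ins)
    P_new = (np.eye(len(x_ins)) - K @ H) @ P
    return x_new, P_new

# INS drifted to (10.5, 20.4); vision reports (10.0, 20.0) with lower noise
x, P = vision_update(np.array([10.5, 20.4]),
                     np.eye(2) * 4.0,        # INS error covariance
                     np.array([10.0, 20.0]),
                     np.eye(2) * 1.0)        # vision measurement noise
```

Because the vision measurement is trusted more here (smaller covariance), the corrected estimate moves most of the way toward the vision fix and the covariance shrinks accordingly.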

Vision-based Authentication and Registration of Facial Identity in Hospital Information System

  • Bae, Seok-Chan;Lee, Yon-Sik;Choi, Sun-Woong
    • Journal of the Korea Society of Computer and Information
    • /
    • v.24 no.12
    • /
    • pp.59-65
    • /
    • 2019
  • A hospital information system covers a wide range of information in the medical profession, from the hospital's overall administrative work to the medical work of doctors. In this paper, we proposed vision-based authentication and registration of facial identity in a hospital information system using OpenCV. Using the proposed facial-identity authentication and registration security module, the hospital information system is designed to enhance security by registering the faces of hospital personnel and to process receipt, treatment, and prescription without secondary leakage of personal information. The implemented security module eliminates the need to print, expose, and recognize the existing sticker-paper tags and wristband-type personal information that nurses check in the hospital information system. Instead, an ID and password are entered, improving both privacy and recognition rate.

Volume Control using Gesture Recognition System

  • Shreyansh Gupta;Samyak Barnwal
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.6
    • /
    • pp.161-170
    • /
    • 2024
  • With technological advances, humans have made great progress in ease of living, now incorporating sight, motion, sound, and speech into various applications and software controls. In this paper, we explore a project in which gestures play a central role: controlling computer settings with hand gestures using computer vision, a topic that has been researched extensively and continues to evolve. We create a module that acts as a volume-control program, using hand gestures to control the computer's system volume, implemented with OpenCV. The module uses the computer's web camera to record images or video, processes them to extract the needed information, and then, based on that input, acts on the volume settings of the computer. The program can increase and decrease the computer's volume; the only setup required is a web camera to capture the user's input. The program performs gesture recognition with OpenCV, Python, and its libraries, identifies the specified hand gestures, and uses them to carry out changes in the device settings. The objective is to adjust the volume of a computer without physical interaction through a mouse or keyboard. OpenCV, a widely used tool for image processing and computer vision applications, enjoys extensive popularity: its community consists of over 47,000 individuals and, as of a 2020 survey, its estimated number of downloads exceeds 18 million.
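The core mapping in such a controller, from the detected distance between two fingertips to a volume level, can be sketched without any camera code. The calibration range below is an assumed example, not the paper's values:

```python
def distance_to_volume(thumb, index, d_min=20.0, d_max=200.0):
    """Map the pixel distance between thumb and index fingertips to a
    0-100 volume level, clamped to an assumed calibrated range
    [d_min, d_max] of comfortable pinch distances."""
    dx = thumb[0] - index[0]
    dy = thumb[1] - index[1]
    d = (dx * dx + dy * dy) ** 0.5        # Euclidean fingertip distance
    d = max(d_min, min(d_max, d))         # clamp to the calibrated range
    return round((d - d_min) / (d_max - d_min) * 100)

# Fingertips 110 px apart -> mid-range volume
vol = distance_to_volume((100, 100), (210, 100))
```

In the full pipeline, the fingertip coordinates would come from a hand-landmark detector on each webcam frame, and the returned level would be passed to the operating system's volume API.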

Servo control of mobile robot using vision system (비젼시스템을 이용한 이동로봇의 서보제어)

  • 백승민;국태용
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 1997.10a
    • /
    • pp.540-543
    • /
    • 1997
  • In this paper, a precise trajectory tracking method for a mobile robot using a vision system is presented. To solve the problem of precise trajectory tracking, a hierarchical control structure is used, composed of a path planner, a vision system, and a dynamic controller. In designing the dynamic controller, non-ideal conditions such as parameter variation, frictional force, and external disturbance are considered. The proposed controller can learn a bounded control input for repetitive or periodic dynamics compensation, which provides robust and adaptive learning capability. Moreover, the vision system lets the mobile robot compensate for the cumulative location error that arises when a relative sensor such as an encoder is used to locate the robot's position. The effectiveness of the proposed control scheme is shown through computer simulation.
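The tracking problem can be illustrated with a much simpler baseline than the paper's learning controller: a proportional point tracker on a unicycle model, with the vision system assumed to supply the absolute pose that an encoder alone would drift away from. Gains and time step below are assumptions for the sketch:

```python
import math

def track_step(pose, goal, k_lin=0.5, k_ang=2.0, dt=0.1):
    """One control step of a proportional tracker for a unicycle robot:
    turn toward the goal point while driving at a speed proportional
    to the remaining distance. pose = (x, y, heading)."""
    x, y, th = pose
    gx, gy = goal
    rho = math.hypot(gx - x, gy - y)            # distance error
    alpha = math.atan2(gy - y, gx - x) - th     # heading error
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to [-pi, pi]
    v = k_lin * rho                             # forward speed command
    w = k_ang * alpha                           # turn-rate command
    return (x + v * math.cos(th) * dt,          # integrate unicycle model
            y + v * math.sin(th) * dt,
            th + w * dt)

# Drive from the origin toward (1, 1); the pose converges to the goal
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = track_step(pose, (1.0, 1.0))
```

The paper's contribution sits on top of such a loop: a learning term that compensates repetitive dynamics (friction, disturbances) the simple proportional law cannot reject.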


A Three-Degree-of-Freedom Anthropomorphic Oculomotor Simulator

  • Bang Young-Bong;Paik Jamie K.;Shin Bu-Hyun;Lee Choong-Kil
    • International Journal of Control, Automation, and Systems
    • /
    • v.4 no.2
    • /
    • pp.227-235
    • /
    • 2006
  • For a sophisticated humanoid that explores and learns its environment and interacts with humans, anthropomorphic physical behavior is much desired. The human vision system orients each eye with three degrees of freedom (3-DOF) about the horizontal, vertical, and torsional axes. Thus, to accurately replicate the human vision system, a simulator with a 3-DOF end-effector is imperative. We present a 3-DOF anthropomorphic oculomotor system that reproduces realistic human eye movements for human-sized humanoid applications. The parallel-link architecture of the oculomotor system is sized and designed to match the performance capabilities of human vision. In this paper, a biologically inspired mechanical design and the structural kinematics of the prototype are described in detail. The motility of the prototype about each axis of rotation was replicated through computer simulation, and performance tests comparable to human eye movements were recorded.
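The three rotational degrees of freedom can be written as a composition of elementary rotations about the horizontal, vertical, and torsional axes. A small sketch (axis conventions and the composition order are assumptions for illustration, not the paper's kinematics):

```python
import math

def rot_x(a):  # vertical (pitch) rotation
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):  # horizontal (yaw) rotation
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):  # torsional (roll) rotation about the line of sight
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# Eye orientation after 30 deg horizontal, 10 deg vertical, 5 deg torsion.
R = matmul(matmul(rot_y(math.radians(30)), rot_x(math.radians(10))),
           rot_z(math.radians(5)))
gaze = [row[2] for row in R]   # image of the optical axis (0, 0, 1)
```

Note that torsion, applied about the optical axis itself, rotates the eye's image content but leaves the gaze direction unchanged, which is why only the first two angles move the gaze vector.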

A Decision Tree based Real-time Hand Gesture Recognition Method using Kinect

  • Chang, Guochao;Park, Jaewan;Oh, Chimin;Lee, Chilwoo
    • Journal of Korea Multimedia Society
    • /
    • v.16 no.12
    • /
    • pp.1393-1402
    • /
    • 2013
  • Hand gesture is one of the most popular communication methods in everyday life. In human-computer interaction applications, hand gesture recognition provides a natural way of communication between humans and computers. There are two main methods of hand gesture recognition: glove-based and vision-based. In this paper, we propose a vision-based hand gesture recognition method using Kinect. Using depth information makes the hand detection process efficient and robust. Finger labeling lets the system classify poses according to finger names and the relationships between fingers, making the classification more effective and accurate. Two kinds of gesture sets can be recognized by our system. According to the experiments, the average accuracy on the American Sign Language (ASL) number gesture set is 94.33%, and that on the general gesture set is 95.01%. Since our system runs in real time and has a high recognition rate, it can be embedded into various applications.
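Once fingers are labeled, a decision tree over per-finger extended/folded states can separate the gestures. A toy hand-written tree in that spirit (the branching order and gesture names here are illustrative assumptions, not the paper's learned tree):

```python
def classify_count(fingers):
    """Classify a counting gesture from per-finger extended (1) /
    folded (0) states, ordered (thumb, index, middle, ring, pinky).
    Each test descends one level of a hand-crafted decision tree."""
    thumb, index, middle, ring, pinky = fingers
    if not index:
        return "fist"
    if not middle:
        return "one"
    if not ring:
        return "two"
    if not pinky:
        return "three"
    return "five" if thumb else "four"

# Index and middle extended -> "two"
gesture = classify_count((0, 1, 1, 0, 0))
```

A learned tree (e.g. trained on labeled depth frames) generalizes this idea by choosing the split order and thresholds from data rather than by hand.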

A Study on the Web Building Assistant System Using GUI Object Detection and Large Language Model (웹 구축 보조 시스템에 대한 GUI 객체 감지 및 대규모 언어 모델 활용 연구)

  • Hyun-Cheol Jang;Hyungkuk Jang
    • Annual Conference of KIPS
    • /
    • 2024.05a
    • /
    • pp.830-833
    • /
    • 2024
  • As Large Language Models (LLMs) like OpenAI's ChatGPT[1] continue to grow in popularity, new applications and services are expected to emerge. This paper introduces an experimental study on a smart web-builder application assistance system that combines computer vision for GUI object recognition with ChatGPT (an LLM). The research strategy employed computer vision technology in conjunction with the design strategy of Microsoft's "ChatGPT for Robotics: Design Principles and Model Abilities"[2]. Additionally, this research explores the capabilities of large language models like ChatGPT in various application design tasks, specifically in assisting with web-builder tasks. The study examines the ability of ChatGPT to synthesize code through both directed prompts and free-form conversation strategies. The researchers also explored ChatGPT's ability to perform various tasks within the builder domain, including functions, closed-loop inferences, and basic logical and mathematical reasoning. Overall, this research proposes an efficient way to perform various application-system tasks by combining natural-language commands with computer vision technology and an LLM (ChatGPT). This approach allows user interaction through natural-language commands while building applications.