• Title/Summary/Keyword: Camera-based Recognition

An Image Processing System for the Harvesting Robot (포도수확용 로봇 개발을 위한 영상처리시스템)

  • Lee, Dae-Weon;Kim, Dong-Woo;Kim, Hyun-Tae;Lee, Yong-Kuk;Si-Heung
    • Journal of Bio-Environment Control / v.10 no.3 / pp.172-180 / 2001
  • Harvesting grapes in Korea currently requires a great deal of labor, since the fruit is cut and handled by hand. In other countries, especially France, grape harvesters have been developed for wine processing rather than for fresh table grapes; a harvester for fresh table grapes has not yet been developed. Therefore, this study designed and constructed an image processing system for a fresh-grape harvester. Its development involved integrating a vision system with a personal computer and two cameras. Grape recognition, which must find the exact three-dimensional cutting position for the end-effector, requires separating the fruit from the background using the two different camera images. Based on the results of this research, the following conclusions were made. The model grape was located and measured within 1,100 mm of the camera center, the midpoint between the two cameras. The calculated distance had an error within 5 mm using model images in the laboratory, so the image processing system proved reliable for measuring the distance between the camera center and the grape fruit. The difference between actual and calculated distance was also within 5 mm using the stereo vision system in the field. Therefore, the image processing system could be mounted on a grape harvester to locate grape fruit. (A minimal stereo-triangulation sketch follows this entry.)

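The abstract reports stereo ranging between two cameras but does not give the formulation, so below is a minimal sketch of standard parallel-stereo triangulation; the focal length, baseline, and disparity values are assumed for illustration, not taken from the paper.

```python
import numpy as np

def stereo_depth(x_left, x_right, focal_px, baseline_mm):
    """Depth from horizontal disparity for a parallel stereo rig.

    x_left, x_right : x-coordinate (pixels) of the same grape point
                      in the left and right images.
    focal_px        : focal length expressed in pixels (assumed).
    baseline_mm     : distance between the two camera centers (assumed).
    """
    disparity = x_left - x_right              # pixels; larger when closer
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    return focal_px * baseline_mm / disparity  # depth in mm

# Hypothetical numbers: an ~810 px focal length, 120 mm baseline, and
# 90 px disparity give roughly 1,080 mm, i.e. inside the 1,100 mm
# working range the abstract reports.
print(stereo_depth(x_left=460.0, x_right=370.0, focal_px=810.0, baseline_mm=120.0))
```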

YOLO-based Traffic Signal Detection for Identifying the Violation of Motorbike Riders (YOLO 기반의 교통 신호등 인식을 통한 오토바이 운전자의 신호 위반 여부 확인)

  • Wahyutama, Aria Bisma;Hwang, Mintae
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.141-143 / 2022
  • This paper presents a new technology to identify traffic violations by motorbike riders by detecting traffic signals using You Only Look Once (YOLO) object detection. The hardware module, mounted on the front of the motorbike, consists of a Raspberry Pi with a camera to run YOLO object detection, a GPS module to acquire the motorcycle's coordinates, and a LoRa communication module to send the data to a cloud DB. The main goal of the software is to determine whether a motorbike has violated a traffic signal. This paper proposes a function that recognizes the red traffic signal color and tracks its movement within the camera frame, judging that a violation has occurred if the signal moves to the right in the frame (the rider turned left) or toward the top of the frame (the rider went straight). Furthermore, if a rider violates the signal, the rider's personal information (name, mobile phone number, etc.), a snapshot of the violation, the rider's location, and the date/time are sent to a cloud DB. The violation information is delivered to the rider's smartphone as a push notification and to the local police station for issuing violation tickets, which is expected to deter motorbike riders from violating traffic signals. (A sketch of the direction test appears after this entry.)

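The violation test described above depends only on how the detected red signal drifts across the frame, so it can be sketched independently of the YOLO detector. The function below is a hypothetical reconstruction: `centers` would come from YOLO detections of the red light across frames, and the pixel threshold is an assumed value.

```python
def check_signal_violation(centers, min_shift_px=40):
    """Classify rider movement from the tracked red-signal center.

    centers : list of (x, y) pixel positions of the red traffic light,
              one per frame, with the image origin at the top-left.
    Returns the inferred violation type, or None.
    """
    if len(centers) < 2:
        return None
    dx = centers[-1][0] - centers[0][0]      # + : signal drifted right
    dy = centers[-1][1] - centers[0][1]      # - : signal drifted upward
    if dx > min_shift_px and abs(dx) >= abs(dy):
        return "left-turn violation"         # signal moved right in frame
    if -dy > min_shift_px and abs(dy) > abs(dx):
        return "straight-through violation"  # signal moved toward top
    return None
```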

A Study on the Measurement of Respiratory Rate Using Image Alignment and Statistical Pattern Classification (영상 정합 및 통계학적 패턴 분류를 이용한 호흡률 측정에 관한 연구)

  • Moon, Sujin;Lee, Eui Chul
    • Asia-pacific Journal of Multimedia Services Convergent with Art, Humanities, and Sociology / v.8 no.10 / pp.63-70 / 2018
  • Technology for measuring biomedical signals from images has advanced, and research on measuring the respiration signal, a vital sign, has been carried out continuously. Existing approaches measure respiratory signals with a thermal imaging camera that senses the heat emitted from a person's body; other work measures the respiration rate by analyzing chest movement in real time. However, image processing that relies on infrared thermal images alone can fail to detect the respiratory region under external environmental factors (temperature changes, noise, etc.), lowering the accuracy of respiration-rate measurement. In this study, images were acquired with both a visible-light camera and an infrared thermal camera to enhance the respiratory region. Based on the two images, features of the respiratory region were extracted through processes such as face recognition and image alignment. The respiratory signal pattern was then classified with the k-nearest neighbor classifier, a statistical classification method, and the respiration rate was calculated from the characteristics of the classified patterns. The feasibility of the approach was verified by comparing the measured respiration rate with the actual respiration rate. (A minimal k-NN classification sketch follows this entry.)
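The abstract names the k-nearest neighbor classifier for the pattern step; a minimal scikit-learn sketch follows. The two-value feature vectors and class labels are fabricated stand-ins, since the paper's actual features come from the aligned visible/thermal images.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy feature vectors summarizing a breathing-signal window; each row
# is (mean amplitude, dominant frequency in Hz) -- assumed features.
X_train = np.array([[0.80, 0.20], [0.70, 0.22], [0.90, 0.45], [0.85, 0.50]])
y_train = ["normal", "normal", "rapid", "rapid"]

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)

window = np.array([[0.75, 0.24]])     # features of one analysis window
pattern = clf.predict(window)[0]      # -> "normal"
rate_bpm = window[0, 1] * 60          # dominant frequency -> breaths/min
print(pattern, rate_bpm)
```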

The User Identification System using the ubiFloor (유비플로어를 이용한 사용자 인증 시스템)

  • Lee Seunghun;Yun Jaeseok;Ryu Jeha;Woo Woontack
    • Journal of KIISE: Software and Applications / v.32 no.4 / pp.258-267 / 2005
  • We propose the ubiFloor system to track and recognize users in ubiquitous computing environments such as ubiHome. Conventional user identification systems require users to carry tags, or rely on camera-based sensors that are very susceptible to environmental noise. Floor-type systems can relieve these problems, but the high cost of load cells and DAQ boards makes such systems expensive. We propose ubiFloor, a transparent user identification system that exploits the user's walking pattern, recognizing the user with a set of simple ON/OFF switch sensors. Experimental results show that the proposed system recognizes the 10 enrolled users at a correct recognition rate of 90% without the users being aware of the system. (A minimal template-matching sketch follows this entry.)
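The paper does not specify its matcher beyond comparing walking patterns from ON/OFF switches, so here is one plausible minimal sketch: score a new walk against each enrolled user's stored switch sequence and pick the best match. The similarity measure and all names are assumptions.

```python
import numpy as np

def gait_similarity(sample, template):
    """Fraction of matching ON/OFF readings between two walks.

    sample, template : 2-D binary arrays, one row per time step and
    one column per floor switch, compared over their common length.
    """
    n = min(len(sample), len(template))
    return np.mean(sample[:n] == template[:n])

def identify(sample, enrolled):
    """Return the enrolled user whose stored walk best matches `sample`.

    enrolled : dict mapping user name -> stored binary switch sequence.
    """
    scores = {user: gait_similarity(sample, tpl)
              for user, tpl in enrolled.items()}
    return max(scores, key=scores.get)
```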

On Motion Planning for Human-Following of Mobile Robot in a Predictable Intelligent Space

  • Jin, Tae-Seok;Hashimoto, Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / v.4 no.1 / pp.101-110 / 2004
  • The robots needed in the near future are human-friendly robots that can coexist with humans and support them effectively. To realize this, humans and robots need to be in close proximity as much as possible, and their interactions should occur naturally. Human following is desirable as one such human-affinitive movement. A human-following robot requires several techniques: recognition of moving objects, feature extraction and visual tracking, and trajectory generation for following a human stably. In this research, a predictable intelligent space is used to achieve these goals. An intelligent space is a 3-D environment in which many sensors and intelligent devices are distributed; mobile robots exist in this space as physical agents providing services to humans. A mobile robot is controlled to follow a walking human as stably and precisely as possible using the distributed intelligent sensors. The moving object is assumed to be a point object and is projected onto an image plane to form a geometrical constraint equation that provides position data of the object based on the kinematics of the intelligent space. Uncertainties in the position estimate caused by the point-object assumption are compensated for using a Kalman filter. To generate the shortest-time trajectory for following the walking human, the linear and angular velocities are estimated and utilized. Computer simulation and experimental results of estimating and following a walking human with the mobile robot are presented. (A minimal Kalman-filter sketch follows this entry.)
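The abstract states that point-object position uncertainty is compensated with a Kalman filter. Below is a minimal constant-velocity Kalman filter over (x, y) position measurements; the state model, time step, and noise covariances are assumed values, not the paper's.

```python
import numpy as np

dt = 0.1                                   # sensor period (s), assumed
F = np.array([[1, 0, dt, 0],               # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0],                # only position is observed
              [0, 1, 0, 0]], float)
Q = np.eye(4) * 0.01                       # process noise (assumed)
R = np.eye(2) * 0.05                       # measurement noise (assumed)

x = np.zeros(4)                            # initial state estimate
P = np.eye(4)                              # initial covariance

def kf_step(z):
    """One predict/update cycle; z is the measured (x, y) position."""
    global x, P
    x = F @ x                              # predict state
    P = F @ P @ F.T + Q                    # predict covariance
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (np.asarray(z) - H @ x)    # correct with measurement
    P = (np.eye(4) - K @ H) @ P
    return x[:2], x[2:]                    # filtered position, velocity
```

The filtered velocity estimate is exactly what the trajectory generator described in the abstract would consume to plan the following motion.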

Pallet Measurement Method for Automatic Pallet Engaging in Real-Time (자동 화물처리를 위한 실시간 팔레트 측정 방법)

  • Byun, Sung-Min;Kim, Min-Hwan
    • Journal of Korea Multimedia Society / v.14 no.2 / pp.171-181 / 2011
  • A vision-based method for determining the position and orientation of pallets is presented in this paper, which guides autonomous forklifts to engage pallets automatically. The method uses a single camera mounted on the fork carriage instead of the two cameras of the stereo vision conventionally used for positioning objects in 3-D space. An image back-projection technique for determining the orientation of a pallet without any fiducial marks is suggested in this paper, which projects two feature lines on the front plane of the pallet backward onto a virtual plane that can be rotated around a given axis in 3-D space. We show that the rotation angle of the virtual plane at which the back-projected feature lines become parallel describes the orientation of the pallet's front plane. The position of the pallet is determined from the ratio of the distance between the back-projected feature lines to their real distance on the pallet front plane. Through a test on real pallet images, we found the proposed method practically applicable in real environments in real time. (A minimal back-projection sketch follows this entry.)
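The back-projection idea can be sketched directly: rotate a virtual plane, intersect the camera rays through the two feature lines with it, and keep the angle at which the back-projected lines become parallel. The pinhole model, focal length, and plane placement below are assumptions for illustration, not the paper's calibration.

```python
import numpy as np

def backproject(pt, theta, f=800.0, depth=2000.0):
    """Intersect the camera ray through image point `pt` with a virtual
    plane rotated by `theta` (radians) about the vertical axis.
    f (pixel focal length) and depth (mm) are assumed values."""
    ray = np.array([pt[0] / f, pt[1] / f, 1.0])        # pinhole ray
    n = np.array([np.sin(theta), 0.0, np.cos(theta)])  # rotated normal
    t = depth / ray.dot(n)                             # ray-plane hit
    return t * ray

def pallet_yaw(line_a, line_b, f=800.0):
    """Search the virtual-plane angle at which the two back-projected
    pallet feature lines (each given as two image points) are parallel;
    that angle describes the pallet front plane's orientation."""
    best, best_err = 0.0, np.inf
    for theta in np.linspace(-np.pi / 3, np.pi / 3, 721):
        da = backproject(line_a[1], theta, f) - backproject(line_a[0], theta, f)
        db = backproject(line_b[1], theta, f) - backproject(line_b[0], theta, f)
        # cross product norm is 0 for parallel direction vectors
        err = (np.linalg.norm(np.cross(da, db))
               / (np.linalg.norm(da) * np.linalg.norm(db)))
        if err < best_err:
            best, best_err = theta, err
    return best
```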

Detection of Facial Direction using Facial Features (얼굴 특징 정보를 이용한 얼굴 방향성 검출)

  • Park Ji-Sook;Dong Ji-Youn
    • Journal of Internet Computing and Services / v.4 no.6 / pp.57-67 / 2003
  • The recent rapid development of multimedia and optical technologies has brought great attention to application systems that process facial image features. Previous research in facial image processing has focused mainly on recognizing human faces and analyzing facial expressions using frontal face images; not much research has been carried out on image-based detection of face direction. Moreover, existing approaches to detecting face direction, which normally use sequential images captured by a single camera, have the limitation that a frontal image must be given before any other images. In this paper, we propose a method to detect face direction using facial features such as the facial trapezoid, defined by the two eyes and the lower lip. Specifically, the proposed method forms a facial direction formula, defined with statistical data about the ratio of the right and left areas of the facial trapezoid, to identify whether the face is directed toward the right or the left. The proposed method can be used effectively in automatic photo arrangement systems, which often need to set a different left or right margin for a photo according to the direction the pictured person is facing. (An area-ratio sketch follows this entry.)

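A minimal sketch of the left/right area comparison is below. The landmark coordinates, the split through the eye midpoint, the ratio threshold, and the mapping from the larger half to a turn direction are all assumptions of this sketch; the paper derives its formula from statistical data.

```python
def tri_area(p1, p2, p3):
    """Unsigned area of a triangle from three (x, y) points (shoelace)."""
    return abs((p2[0] - p1[0]) * (p3[1] - p1[1])
               - (p3[0] - p1[0]) * (p2[1] - p1[1])) / 2.0

def face_direction(left_eye, right_eye, lip, thresh=1.15):
    """Compare the left and right halves of the eye-eye-lip region,
    split at the midpoint between the eyes; `thresh` is assumed."""
    mid = ((left_eye[0] + right_eye[0]) / 2.0,
           (left_eye[1] + right_eye[1]) / 2.0)
    left_area = tri_area(left_eye, mid, lip)
    right_area = tri_area(mid, right_eye, lip)
    # Which half shrinks under which turn is an assumption here: the
    # side turned away from the camera is foreshortened and smaller.
    if left_area > right_area * thresh:
        return "right"
    if right_area > left_area * thresh:
        return "left"
    return "front"
```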

Depth Image based Chinese Learning Machine System Using Adjusted Chain Code (깊이 영상 기반 적응적 체인 코드를 이용한 한자 학습 시스템)

  • Kim, Kisang;Choi, Hyung-Il
    • The Journal of the Korea Contents Association / v.14 no.12 / pp.545-554 / 2014
  • In this paper, we propose an online Chinese character learning system with a depth camera, in which the system presents a Chinese character on a screen and the user draws the presented character with hand gestures. We develop a hand-tracking method and propose an adjusted chain code to represent the constituent strokes of a Chinese character. For hand tracking, a fingertip is detected and verified. The adjusted chain code is designed to contain the order and relative length of each constituent stroke as well as the directional variation of the sample points. Such information is very efficient for real-time matching and for checking incorrectly drawn parts of a stroke. (A chain-code sketch follows this entry.)
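A generic 8-direction chain code over stroke sample points can be sketched as follows; the paper's adjusted variant additionally stores stroke order and relative length, which this sketch only approximates by returning the stroke length alongside the direction codes.

```python
import math

def chain_code(points):
    """Encode a drawn stroke as 8-direction chain codes (0 = right,
    2 = up, 4 = left, 6 = down), one code per pair of sample points.
    The image y-axis is assumed to point downward."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y0 - y1, x1 - x0)       # flip y to math coords
        codes.append(round(angle / (math.pi / 4)) % 8)
    length = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    return codes, length

# Example: a roughly horizontal stroke drawn left to right.
codes, length = chain_code([(0, 0), (10, 0), (20, 1), (30, 0)])
print(codes, length)   # mostly 0s (rightward movement)
```

Comparing the learner's code sequence against a stored reference stroke-by-stroke is what makes checking incorrectly drawn stroke parts cheap at runtime.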

Comparative Performance Evaluations of Eye Detection Algorithms (눈 검출 알고리즘에 대한 성능 비교 연구)

  • Gwon, Su-Yeong;Cho, Chul-Woo;Lee, Won-Oh;Lee, Hyeon-Chang;Park, Kang-Ryoung;Lee, Hee-Kyung;Cha, Ji-Hun
    • Journal of Korea Multimedia Society / v.15 no.6 / pp.722-730 / 2012
  • Recently, eye image information has been widely used for iris recognition and gaze detection in biometrics and human-computer interaction. As long-distance camera-based systems become more common for user convenience, the captured images increasingly include noise such as eyebrow, forehead, and skin regions, which can degrade eye-detection accuracy. Fast processing speed is also required of such systems, in addition to high detection accuracy. We therefore compared the most widely used eye-detection algorithms: the AdaBoost eye detector, adaptive template matching + AdaBoost, CAMShift + AdaBoost, and a rapid eye detection method. These methods were compared, in terms of accuracy and processing speed, on images with lighting changes, bare eyes, and cases of wearing contact lenses or eyeglasses. (A baseline detector sketch follows this entry.)
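Of the compared methods, the AdaBoost detector corresponds to the Haar-cascade classifier shipped with OpenCV, so a baseline accuracy/speed measurement can be sketched as below; the input file name is a placeholder, not from the paper.

```python
import time
import cv2

# OpenCV's Haar cascade is an AdaBoost-trained eye detector, the
# baseline method in the comparison above.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

img = cv2.imread("face.jpg")                  # placeholder test image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

start = time.perf_counter()
eyes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"{len(eyes)} eye(s) detected in {elapsed_ms:.1f} ms")
for (x, y, w, h) in eyes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
```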

Obstacle Recognition by 3D Feature Extraction for Mobile Robot Navigation in an Indoor Environment (복도환경에서의 이동로봇 주행을 위한 3차원 특징추출을 통한 장애물 인식)

  • Jin, Tae-Seok
    • Journal of the Korea Institute of Information and Communication Engineering / v.14 no.9 / pp.1987-1992 / 2010
  • This paper deals with a method that uses three-dimensional feature information to classify the environment ahead of a traveling mobile robot, using images captured by a CCD camera mounted on the robot. From the three-dimensional feature information, the scene is divided into obstacles, corners, and doorways in a corridor. In planning the traveling path of a mobile robot, these three situations provide important information for obstacle avoidance and optimal path computation. This paper therefore proposes a method for deciding the traveling direction of a mobile robot from preprocessed input images based on the suggested algorithm, and verifies the validity of the image regions detected as obstacles through neural-network analysis. (A minimal classifier sketch follows this entry.)
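The abstract verifies obstacle regions through a neural network without giving its structure, so below is a toy stand-in using a small scikit-learn MLP; the three-value feature vectors and labels are fabricated examples of 3-D features for the three corridor classes.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Fabricated 3-D features of a detected region, e.g.
# (height, width, depth discontinuity) in meters -- assumed encoding.
X = np.array([[0.5, 0.4, 0.2],   # low, narrow, shallow  -> obstacle
              [0.6, 0.3, 0.1],
              [2.0, 0.1, 1.5],   # tall, thin, deep edge -> corner
              [2.1, 0.2, 1.4],
              [2.0, 0.9, 2.5],   # tall, wide opening    -> doorway
              [1.9, 1.0, 2.6]])
y = ["obstacle", "obstacle", "corner", "corner", "doorway", "doorway"]

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)
print(net.predict([[0.55, 0.35, 0.15]]))   # -> ['obstacle']
```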