• Title/Summary/Keyword: Camera-based Recognition

Neuro-Net Based Automatic Sorting And Grading of A Mushroom (Lentinus Edodes L)

  • Hwang, H.; Lee, C.H.; Han, J.H.
    • Proceedings of the Korean Society for Agricultural Machinery Conference / 1993.10a / pp.1243-1253 / 1993
  • Visual features of a mushroom (Lentinus Edodes L.) are critical in sorting and grading, as they are for most agricultural products. Because of their complex and varied visual features, mushrooms have been graded and sorted manually by human experts. Though the actions involved in human grading look simple, the decision making underneath them comes from complex neural processing of the visual image, and the processing details involved in the visual recognition of the human brain have not been fully investigated yet. Recently, however, artificial neural networks have drawn great attention because of their functional capability as a partial substitute for the human brain. Since most agricultural products are not uniquely defined in their physical properties and do not have a well-defined job structure, research on neuro-net based, human-like information processing for agricultural products is wide open and promising. In this paper, a neuro-net based grading and sorting system was developed for mushrooms. A computer vision system was utilized for extracting and quantifying the qualitative visual features of sampled mushrooms. The extracted visual features and their corresponding grades were used as input/output pairs for training the neural network, and the trained results of the network are presented. The computer vision system is composed of an IBM PC compatible 386DX, an ITEX PFG frame grabber, a B/W CCD camera, a VGA color graphic monitor, and an image-output RGB monitor.
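
The training setup described above pairs vision-derived feature vectors with expert grades. Below is a minimal sketch of that idea: a small feed-forward network trained by gradient descent in NumPy. The four features, the two-grade labels, and all sizes are assumptions for illustration, not the paper's configuration.

```python
# Minimal sketch: feed-forward net mapping quantified visual features
# (e.g. cap diameter, roundness) to a grade. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)

X = rng.random((200, 4))                                      # 4 hypothetical visual features
y = (X @ np.array([2.0, -1.0, 1.5, 0.5]) > 1.5).astype(int)   # toy 2-grade labels

W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for epoch in range(2000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2).ravel()
    # Backward pass (cross-entropy gradient), plain gradient descent
    dp = (p - y)[:, None] / len(X)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * h * (1 - h)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

accuracy = ((p > 0.5).astype(int) == y).mean()
print(f"training accuracy: {accuracy:.2%}")
```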

An Automatic Corona-discharge Detection System for Railways Based on Solar-blind Ultraviolet Detection

  • Li, Jiaqi; Zhou, Yue; Yi, Xiangyu; Zhang, Mingchao; Chen, Xue; Cui, Muhan; Yan, Feng
    • Current Optics and Photonics / v.1 no.3 / pp.196-202 / 2017
  • Corona discharge is a common sign of failure processes in high-voltage electrical apparatus, including that used in electric railway systems. Solar-blind ultraviolet (UV) cameras are effective tools for corona inspection. In this work, we present an automatic railway corona-discharge detection system based on solar-blind ultraviolet detection. The UV camera, mounted on top of a train, inspects the electrical apparatus along the railway, including transmission lines and insulators, while the train cruises at speed. An algorithm based on the Hough transform is proposed for distinguishing emitting objects (corona discharge) from noise. The detection system can report suspected corona discharge in real time during fast cruises. An experiment was carried out during a routine inspection of railway apparatus in Xinjiang Province, China. Several corona-discharge points were found along the railway, and the false-alarm rate was kept below one alarm per hour during this inspection.
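
A rough sketch of the Hough-based screening step might look like the following, assuming the corona shows up as a compact bright blob in the UV frame while sensor noise does not; the threshold, radii, and Hough parameters are illustrative guesses, not the paper's values.

```python
# Hough-based screening of bright, compact emitters in a solar-blind UV frame.
import cv2
import numpy as np

def detect_corona_candidates(uv_frame: np.ndarray):
    """Return (x, y, r) circles for bright, compact emitters in a UV frame."""
    blurred = cv2.GaussianBlur(uv_frame, (5, 5), 0)      # suppress shot noise
    _, mask = cv2.threshold(blurred, 200, 255, cv2.THRESH_BINARY)
    circles = cv2.HoughCircles(
        mask, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
        param1=50, param2=10, minRadius=2, maxRadius=30,
    )
    return [] if circles is None else circles[0].tolist()

# Usage idea: feed consecutive frames and report only spots that persist
# across frames -- one simple way to hold down the false-alarm rate.
```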

Real-Time Eye Detection and Tracking Under Various Light Conditions (다양한 조명하에서 실시간 눈 검출 및 추적)

  • 박호식; 박동희; 남기환; 한준희; 나상동; 배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2003.10a / pp.227-232 / 2003
  • Non-intrusive methods based on active remote IR illumination for eye tracking are important for many applications of vision-based man-machine interaction. One problem that has plagued those methods is their sensitivity to changes in lighting conditions, which tends to significantly limit their scope of application. In this paper, we present a new real-time eye detection and tracking methodology that works under variable and realistic lighting conditions. Based on combining the bright-pupil effect resulting from IR light with conventional appearance-based object recognition techniques, our method can robustly track eyes even when the pupils are not very bright due to significant external illumination interference. The appearance model is incorporated in both eye detection and tracking via the use of a support vector machine and mean-shift tracking. Additional improvement is achieved by modifying the image acquisition apparatus, including the illuminator and the camera.
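
The mean-shift stage of such a tracker can be sketched with OpenCV as below. This is a generic mean-shift tracker over a color-histogram back-projection, standing in for the paper's combined SVM/mean-shift pipeline; the camera index and initial eye window are placeholders, and the bright-pupil IR detection and SVM verification steps are omitted.

```python
# Generic mean-shift tracking of an image region with OpenCV.
import cv2

cap = cv2.VideoCapture(0)                  # assumed camera index
ok, frame = cap.read()
track_window = (300, 200, 60, 40)          # hypothetical initial eye region (x, y, w, h)

# Hue histogram of the tracked region serves as a simple appearance model.
x, y, w, h = track_window
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    _, track_window = cv2.meanShift(back_proj, track_window, term_crit)
    x, y, w, h = track_window
    cv2.rectangle(frame, (x, y), (x + w, y + h), 255, 2)
    cv2.imshow("eye track", frame)
    if cv2.waitKey(1) == 27:               # Esc to quit
        break
cap.release()
```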

Extracting the Point of Impact from Simulated Shooting Target based on Image Processing (영상처리 기반 모의 사격 표적지 탄착점 추출)

  • Lee, Tae-Guk; Lim, Chang-Gyoon; Kim, Kang-Chul; Kim, Young-Min
    • Journal of Internet Computing and Services / v.11 no.1 / pp.117-128 / 2010
  • There have been many studies on simulated shooting training systems for replacing real military and police shooting training. In this paper, we propose a method for extracting the point of impact from a simulated shooting target based on image processing, instead of using a sensor-based approach. The point of impact is extracted by analyzing the image captured by the camera on the muzzle of a gun. The final shooting result is calculated by mapping the target to the coordinates of the point of impact. The recognition system is divided into recognizing the projection zone, extracting the point of impact within the projection zone, and calculating the shooting result from the point of impact. We find the vertices of the projection zone after converting the captured image to a binary image, and then extract the point of impact within it. We present the extraction process step by step and provide experiments to validate the results. The experiments show that the exact vertices of the projection area and the point of impact are found, and a conversion result for the final score is shown on the interface.
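
The binarize, find-vertices, map pipeline can be sketched with OpenCV as follows: Otsu thresholding, contour extraction for the projection zone, polygon approximation to four vertices, and a perspective transform that maps an impact pixel into target coordinates. The target grid size and the largest-blob assumption are illustrative, not the paper's choices.

```python
# Map an impact pixel into target coordinates via the projection zone's corners.
import cv2
import numpy as np

def impact_in_target_coords(gray: np.ndarray, impact_px: tuple[float, float]):
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    zone = max(contours, key=cv2.contourArea)        # assume zone is the largest blob
    quad = cv2.approxPolyDP(zone, 0.02 * cv2.arcLength(zone, True), True)
    if len(quad) != 4:
        raise ValueError("projection zone did not resolve to four vertices")

    # Note: real code must order the four vertices consistently (e.g. clockwise
    # from top-left) before pairing them with the destination corners.
    src = quad.reshape(4, 2).astype(np.float32)
    dst = np.float32([[0, 0], [1000, 0], [1000, 1000], [0, 1000]])  # 1000x1000 target grid
    H = cv2.getPerspectiveTransform(src, dst)

    pt = np.float32([[impact_px]])                   # shape (1, 1, 2)
    return cv2.perspectiveTransform(pt, H)[0, 0]     # (x, y) on the target
```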

Unauthorized person tracking system in video using CNN-LSTM based location positioning

  • Park, Chan; Kim, Hyungju; Moon, Nammee
    • Journal of the Korea Society of Computer and Information / v.26 no.12 / pp.77-84 / 2021
  • In this paper, we propose a system that uses image data and beacon data to classify authorized and unauthorized persons entering a group facility. The image data collected through an IP camera is processed with YOLOv4 to extract person objects, and beacon signal data (UUID, RSSI) is collected through an application to compose a fingerprinting-based radio map. User location data is extracted from the beacon data after CNN-LSTM-based learning, in order to improve location accuracy by compensating for signal instability. The proposed system showed an accuracy of 93.47%. In the future, it can be expected to be fused with access authentication processes such as the QR codes adopted during COVID-19, to track people who have not passed through the authentication process.
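
A hypothetical sketch of a CNN-LSTM locator for beacon fingerprints is shown below in PyTorch: a 1D convolution smooths each fingerprint across beacons, and an LSTM integrates the sequence before regressing an (x, y) position. Layer sizes, window length, and the eight-beacon setup are assumptions for illustration, not the paper's model.

```python
# CNN-LSTM regressor: a window of RSSI fingerprints -> (x, y) position.
import torch
import torch.nn as nn

class CnnLstmLocator(nn.Module):
    def __init__(self, n_beacons: int = 8, hidden: int = 64):
        super().__init__()
        # 1D convolution over the beacon axis smooths unstable RSSI readings.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_beacons, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)               # (x, y) coordinates

    def forward(self, rssi_seq: torch.Tensor) -> torch.Tensor:
        # rssi_seq: (batch, time, n_beacons)
        z = self.cnn(rssi_seq.transpose(1, 2)).transpose(1, 2)  # (batch, time, 32)
        out, _ = self.lstm(z)
        return self.head(out[:, -1])                   # position at the last timestep

model = CnnLstmLocator()
dummy = torch.randn(4, 20, 8)                          # 4 samples, 20 timesteps, 8 beacons
print(model(dummy).shape)                              # torch.Size([4, 2])
```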

The Individual Discrimination Location Tracking Technology for Multimodal Interaction at the Exhibition (전시 공간에서 다중 인터랙션을 위한 개인식별 위치 측위 기술 연구)

  • Jung, Hyun-Chul; Kim, Nam-Jin; Choi, Lee-Kwon
    • Journal of Intelligence and Information Systems / v.18 no.2 / pp.19-28 / 2012
  • After the internet era, we are moving toward a ubiquitous society. Nowadays, people are interested in multimodal interaction technology, which enables an audience to interact naturally with the computing environment at exhibitions such as galleries, museums, and parks. There are also attempts to provide additional services based on the location information of the audience, or to improve and deploy interaction between exhibits and audience by analyzing people's usage patterns. In order to provide multimodal interaction services to the audience at an exhibition, it is important to distinguish individuals and trace their locations and routes. For outdoor location tracking, GPS is widely used nowadays; it obtains the real-time location of fast-moving subjects, so it is one of the key technologies in fields requiring location tracking. However, as GPS tracks location via satellites, it cannot be used indoors, because it cannot receive the satellite signal. For this reason, studies on indoor location tracking are ongoing, using very short range communication technologies such as ZigBee, UWB, and RFID, as well as mobile communication networks and wireless LAN. These technologies have the shortcoming that the audience needs to carry an additional sensor device, and they become difficult and expensive as the density of the target area gets higher. In addition, the usual exhibition environment has many obstacles for the network, which degrade system performance. Above all, the biggest problem is that interaction methods using devices based on these older technologies cannot provide a natural service to users. Moreover, since these systems use sensor-based recognition, every user must be equipped with a device, so there is a limit to the number of users that can use the system simultaneously. In order to make up for these shortcomings, in this study we suggest a technology that obtains exact user location information through location mapping using the Wi-Fi signals of smartphones and 3D cameras. We applied the signal amplitude of wireless LAN access points to develop a lower-cost indoor location tracking system: APs are cheaper than the devices used in other tracking techniques, and by installing software on the user's mobile device, the phone can be used directly as the tracking device. We used the Microsoft Kinect sensor as the 3D camera. Kinect is equipped with functions that discriminate depth and human information within the shooting area, so it is appropriate for extracting users' body, vector, and acceleration information at low cost. We confirm the location of the audience using the cell ID obtained from the Wi-Fi signal. By using smartphones as the basic device for the location service, we remove the need for an additional tagging device and provide an environment in which multiple users can receive the interaction service simultaneously. The 3D cameras located in each cell area obtain exact location and status information of the users. They are connected to the Camera Client, which calculates the mapping information aligned to each cell, obtains the exact information of the users, and collects the status and pattern information of the audience.
The location mapping technique of the Camera Client decreases the error rate of the indoor location service, increases the accuracy of individual discrimination in the area through body-information-based discrimination, and establishes the foundation of multimodal interaction technology at exhibitions. The calculated data and information enable users to receive the appropriate interaction service through the main server.
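
The cell-ID step can be illustrated with a minimal strongest-AP lookup, under the assumption (not spelled out in the abstract) that each cell is keyed to one access point; the AP names and cell layout are invented for the example.

```python
# Assign a user to the cell of the strongest access point; the per-cell
# 3D camera (Kinect) then refines the position within that cell.
from typing import Dict

AP_TO_CELL: Dict[str, str] = {
    "ap-entrance": "cell-A",
    "ap-gallery-1": "cell-B",
    "ap-gallery-2": "cell-C",
}

def locate_cell(rssi_readings: Dict[str, float]) -> str:
    """rssi_readings maps AP id -> RSSI in dBm (closer to 0 is stronger)."""
    strongest_ap = max(rssi_readings, key=rssi_readings.get)
    return AP_TO_CELL[strongest_ap]

print(locate_cell({"ap-entrance": -71.0, "ap-gallery-1": -48.5, "ap-gallery-2": -60.2}))
# -> cell-B; the Camera Client for cell-B then resolves the user's exact position.
```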

A Real-time Hand Pose Recognition Method with Hidden Finger Prediction (은닉된 손가락 예측이 가능한 실시간 손 포즈 인식 방법)

  • Na, Min-Young; Choi, Jae-In; Kim, Tae-Young
    • Journal of Korea Game Society / v.12 no.5 / pp.79-88 / 2012
  • In this paper, we present a real-time hand pose recognition method to provide an intuitive user interface through hand poses or movements, without a keyboard or mouse. For this, the areas of the right and left hands are segmented from the depth camera image, and noise removal is performed. Then, the rotation angle and the centroid point of each hand area are calculated. Subsequently, a circle is expanded at regular intervals from the centroid point of the hand to detect joint points and end points of the fingers, by obtaining the midway points where the circle crosses the hand boundary. Lastly, matching between the hand information calculated in the current frame and the hand model of the previous frame is performed, and the recognized hand model is updated for the next frame. This method enables prediction of hidden fingers through the hand model information of the previous frame, using temporal coherence across consecutive frames. In experiments on various hand poses with hidden fingers using both hands, the accuracy was over 95% and the performance was over 32 fps. The proposed method can be used as a contactless input interface in presentation, advertisement, education, and game applications.
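
The expanding-circle probe can be sketched as below: sample points on a circle around the hand centroid, test them against the segmented hand mask, and take the midpoint of each in-hand run as a finger crossing. The sampling density is illustrative, and the wrap-around case at angle 0 is ignored for brevity.

```python
# Midpoints of the circle arcs that lie inside the hand mask.
import numpy as np

def finger_midpoints(mask: np.ndarray, center: tuple[int, int],
                     radius: int, samples: int = 360):
    cy, cx = center
    angles = np.linspace(0, 2 * np.pi, samples, endpoint=False)
    ys = np.clip((cy + radius * np.sin(angles)).astype(int), 0, mask.shape[0] - 1)
    xs = np.clip((cx + radius * np.cos(angles)).astype(int), 0, mask.shape[1] - 1)
    on_hand = mask[ys, xs] > 0

    midpoints, start = [], None
    for i, inside in enumerate(on_hand):
        if inside and start is None:
            start = i                          # circle enters the hand region
        elif not inside and start is not None:
            mid = (start + i - 1) // 2         # middle sample of the run
            midpoints.append((ys[mid], xs[mid]))
            start = None
    if start is not None:                      # run still open at sweep end
        mid = (start + samples - 1) // 2
        midpoints.append((ys[mid], xs[mid]))
    return midpoints

# Usage: call with increasing radius (say every 5 px); midpoints at small
# radii are joint candidates, those at the largest radii fingertip ends.
```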

Gesture Spotting by Web-Camera in Arbitrary Two Positions and Fuzzy Garbage Model (임의 두 지점의 웹 카메라와 퍼지 가비지 모델을 이용한 사용자의 의미 있는 동작 검출)

  • Yang, Seung-Eun
    • KIPS Transactions on Software and Data Engineering / v.1 no.2 / pp.127-136 / 2012
  • Much research on vision-based hand gesture recognition has been conducted to enable users to operate various electronic devices more easily. To recognize hand gestures accurately, 3D position calculation and classification of meaningful gestures from similar ones must be performed. This paper describes a simple and cost-effective method for 3D position calculation and gesture spotting (the task of recognizing a meaningful gesture among similar meaningless gestures). The 3D position is obtained by calculating the relative position of two cameras through a pan/tilt module and a marker, regardless of where the cameras are placed. A fuzzy garbage model is proposed to provide a variable reference value for deciding whether a user gesture is a command gesture or not. The reference is obtained from a fuzzy command gesture model and a fuzzy garbage model, which return scores indicating the degree of membership in the command gesture and garbage gesture classes, respectively. Two-stage user adaptation is proposed to enhance performance: off-line (batch) adaptation for inter-personal differences and on-line (incremental) adaptation for intra-personal differences. Experiments were conducted with five different users. The recognition rate of commands is more than 95% when only one command-like meaningless gesture exists, and more than 85% when the command is mixed with many other similar gestures.
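
The spotting decision can be illustrated with a toy version of the two-model scheme: score a gesture feature against a command model and a garbage model, and accept only when the command score wins by a margin. Gaussian membership functions stand in for the paper's fuzzy models, and all parameters are invented.

```python
# Toy gesture spotting: command membership vs. garbage membership.
import math

def gaussian_membership(x: float, mean: float, sigma: float) -> float:
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2)

def spot_gesture(feature: float, margin: float = 0.1) -> bool:
    command_score = gaussian_membership(feature, mean=1.0, sigma=0.3)
    garbage_score = gaussian_membership(feature, mean=0.0, sigma=0.8)
    # Accept as a command only if its membership exceeds the garbage
    # membership by the margin; the garbage model thus acts as a
    # gesture-dependent (variable) rejection threshold.
    return command_score - garbage_score > margin

print(spot_gesture(0.95))   # True: close to the command prototype
print(spot_gesture(0.30))   # False: better explained as garbage
```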

Active Water-Level and Distance Measurement Algorithm using Light Beam Pattern (광패턴을 이용한 능동형 수위 및 거리 측정 기법)

  • Kim, Nac-Woo; Son, Seung-Chul; Lee, Mun-Seob; Min, Gi-Hyeon; Lee, Byung-Tak
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.4 / pp.156-163 / 2015
  • In this paper, we propose an active water-level and distance measurement algorithm using a light beam pattern. In place of conventional water-level gauges of the pressure, float-well, ultrasonic, radar, and other types, research on video-analysis-based water-level measurement has gradually increased as the importance of accurate measurement, monitoring convenience, and more has been emphasized. By actively projecting a reference light beam pattern onto a bridge or embankment, we suggest a new approach that analyzes and processes the projected light-beam-pattern image obtained from a camera and automatically measures the water level and the distance between the camera and the bridge or levee. In contrast to conventional methods, which passively analyze captured video to recognize a watermark or specific marker attached to a bridge, we actively use a reference light beam pattern suited to the installed bridge environment, so our method offers robust water-level measurement. The reasons are as follows. First, our algorithm is effective against an unfavorable visual field, pollution or damage to the watermark, and so on; second, it can monitor the local situation in real time, day and night, on a portable basis. Furthermore, our method does not need an additional floodlight. Tests were conducted under indoor conditions, with distance measurement over 0.4-1.4 m and height measurement over 13.5-32.5 cm.
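
The distance step can be illustrated under a simple structured-light triangulation assumption, which is one common way to turn an observed pattern shift into range; the geometry and numbers below are illustrative, not the paper's calibration.

```python
# Triangulation sketch: with a projector-camera baseline b and a calibrated
# focal length f (in pixels), the pixel shift (disparity) d of the projected
# pattern gives range as z = f * b / d.
def distance_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    if disparity_px <= 0:
        raise ValueError("pattern not found or behind the reference plane")
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 0.10 m projector-camera baseline,
# pattern observed 64 px from its reference position.
print(f"{distance_from_disparity(800.0, 0.10, 64.0):.2f} m")  # 1.25 m
```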

Algorithm on Detection and Measurement for Proximity Object based on the LiDAR Sensor (LiDAR 센서기반 근접물체 탐지계측 알고리즘)

  • Jeong, Jong-teak; Choi, Jo-cheon
    • Journal of Advanced Navigation Technology / v.24 no.3 / pp.192-197 / 2020
  • Recently, technologies related to autonomous driving have been studied with the goal of safe operation and accident prevention for vehicles. Radar and camera technologies have been used to detect obstacles in such autonomous-vehicle research. Nowadays, methods using LiDAR sensors are being considered for detecting nearby objects and accurately measuring the separation distance in autonomous navigation. LiDAR calculates distance from the time difference of the reflected beams, which allows precise distance measurements, but it has the disadvantage that the object recognition rate can be reduced in adverse atmospheric conditions. In this paper, point cloud data processed with trigonometric functions and a linear regression model are used to implement a measurement algorithm that improves real-time object detection and reduces the error in measured separation distances, based on improved reliability of the raw data from the LiDAR sensor. Using the Python Imaging Library, it was verified that the range of object-detection errors can be improved.
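
The two steps named in the abstract, trigonometric conversion of (angle, range) returns to Cartesian points and a linear regression fit over an object face, can be sketched as follows on synthetic data; the scan geometry and noise level are invented for the example.

```python
# Polar-to-Cartesian conversion plus a least-squares line fit over an
# object face to smooth LiDAR ranging noise.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scan of a flat wall 5 m ahead, angles -30..30 degrees, noisy ranges.
angles = np.deg2rad(np.linspace(-30, 30, 61))
true_range = 5.0 / np.cos(angles)                  # wall perpendicular to the x-axis
ranges = true_range + rng.normal(0, 0.05, angles.size)

# Polar -> Cartesian with trigonometric functions
xs = ranges * np.cos(angles)
ys = ranges * np.sin(angles)

# Least-squares line fit x = a*y + b over the candidate object points;
# b is then the regressed perpendicular distance to the wall.
a, b = np.polyfit(ys, xs, 1)
print(f"raw nearest return: {ranges.min():.3f} m, regressed distance: {b:.3f} m")
```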