• Title/Summary/Keyword: vision recognition

Information Fusion of Cameras and Laser Radars for Perception Systems of Autonomous Vehicles (영상 및 레이저레이더 정보융합을 통한 자율주행자동차의 주행환경인식 및 추적방법)

  • Lee, Minchae;Han, Jaehyun;Jang, Chulhoon;Sunwoo, Myoungho
    • Journal of the Korean Institute of Intelligent Systems / v.23 no.1 / pp.35-45 / 2013
  • An autonomous vehicle requires more robust perception systems than the conventional perception systems of intelligent vehicles. Single-sensor perception systems have been widely studied using cameras and laser radar sensors, the most representative perception sensors, which provide object information such as distance and object features. The distance information from the laser radar is used to perceive road structures, vehicles, and pedestrians, while the camera image is used for visual recognition of lanes, crosswalks, and traffic signs. However, single-sensor perception systems suffer from false positives and false negatives caused by sensor limitations and road environments. Accordingly, information fusion is essential to ensure the robustness and stability of perception in harsh environments. This paper describes a perception system for autonomous vehicles that fuses information to recognize road environments. In particular, vision and laser radar sensors are fused to detect lanes, crosswalks, and obstacles. The proposed perception system was validated with an autonomous vehicle on various roads and under various environmental conditions.
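
A minimal sketch of the kind of camera-lidar fusion described above: project 3D laser radar points into the image with a pinhole camera model and keep only the vision detections that are corroborated by nearby range returns. The projection matrices, the detection boxes, and the distance gate are hypothetical placeholders, not parameters from the paper.

```python
import numpy as np

def project_lidar_to_image(points_xyz, K, R, t):
    """Project 3D lidar points (N x 3) into pixel coordinates with a
    pinhole model u ~ K(RX + t); points behind the camera are dropped."""
    cam = points_xyz @ R.T + t            # lidar frame -> camera frame
    cam = cam[cam[:, 2] > 0]              # keep points in front of the camera
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]           # perspective divide
    return uv, cam[:, 2]                  # pixel coordinates and depths

def fuse_detections(boxes, uv, depths, min_hits=5):
    """Keep a vision detection only if enough lidar returns fall inside its
    bounding box, and attach the median depth of those returns as its range."""
    fused = []
    for (x1, y1, x2, y2) in boxes:
        inside = ((uv[:, 0] >= x1) & (uv[:, 0] <= x2) &
                  (uv[:, 1] >= y1) & (uv[:, 1] <= y2))
        if inside.sum() >= min_hits:
            fused.append(((x1, y1, x2, y2), float(np.median(depths[inside]))))
    return fused
```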

An Extraction Method of Number Plates for Various Vehicles Using Digital Signal Analysis Processing Techniques (디지털 신호 분석 기법을 이용한 다양한 번호판 추출 방법)

  • Yang, Sun-Ok;Jun, Young-Min;Jung, Ji-Sang;Ryu, Sang-Hwan
    • Journal of the Institute of Electronics Engineers of Korea SC / v.45 no.3 / pp.12-19 / 2008
  • Number plate detection consists of three stages: locating the plate region, extracting each character from the plate, and recognizing the characters. Of these, locating the plate region is the most important and also the most time-consuming stage. This paper proposes an effective plate-region extraction method for the varied images obtained from unmanned enforcement systems for illegal parking, which must cope with diverse road surroundings. The proposed method detects each region by analyzing the changes in brightness and intensity between the background and the characters, together with character-level properties such as size, height, width, and the spacing between adjacent characters. The method also classifies the plate into its plate type. This work resolves plate-region detection failures caused by damaged plate edges, not only for Korean domestic number plates but also for the new European-style number plates. Because the detection runs in real time, processing time is reduced and the method can be used as a practical solution.
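
The row-wise brightness-transition idea in the abstract can be sketched roughly as follows; the gradient threshold and the minimum transition count are illustrative assumptions, not the values used by the authors.

```python
import numpy as np

def find_plate_band(gray, diff_thresh=40, min_transitions=20):
    """Locate a candidate plate band by counting strong horizontal brightness
    transitions per row: dark characters on a bright plate produce many large
    gradients, unlike most of the surrounding scene."""
    diffs = np.abs(np.diff(gray.astype(np.int16), axis=1))  # horizontal gradients
    transitions = (diffs > diff_thresh).sum(axis=1)         # strong edges per row
    rows = np.where(transitions > min_transitions)[0]
    if rows.size == 0:
        return None                                         # no plate-like band found
    return int(rows.min()), int(rows.max())                 # top and bottom of the band
```

Character-level cues such as width, height, and spacing, which the paper also uses, would then be applied inside this band to separate the individual characters.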

A study on the needs of dental hygiene students in a region for the credit bank system for a bachelor's degree (일부지역 치위생과 학생들의 학사학위 취득을 위한 학점은행제 요구도 조사)

  • Kim, Mi-Jeong;Lee, Hye-Kyung
    • Journal of Korean society of Dental Hygiene / v.9 no.2 / pp.179-191 / 2009
  • The purpose of this study was to examine the needs of dental hygiene students at a lifelong education center of a three-year college with regard to the credit bank system. The subjects were 200 dental hygiene students at a college located in J, which offered credit bank system courses. A survey was conducted from May 19 to 23, 2008, to gather data on the acquisition of a bachelor's degree and the credit bank system, and answer sheets were collected from 184 respondents (92%). The collected data were analyzed with the SPSS/WIN 12.0 program, and the following findings were obtained: 1. Regarding the intention to pursue a bachelor's degree, the largest share of the students surveyed (74.5%) intended to do so if given the chance; 55.6% wanted the degree for their own personal development. 2. Concerning how to obtain a bachelor's degree, the largest group (63.0%) preferred the credit bank systems of college lifelong education centers, and 41.8% first became interested in the credit bank system because acquaintances told them about it, which suggests that colleges offering the credit bank system should reinforce their publicity activities. 3. The quality and cost of educational programs influenced the choice of an educational institution for earning a bachelor's degree through the credit bank system; therefore, excellent programs should be provided, and the government should give learners economic support and fund educational institutions. 4. Comparing a regular college with the credit bank system as a way to earn a bachelor's degree, the credit bank system was considered helpful for finding a job (mean 3.39) and for developing sociability (3.22). It was also regarded as useful for improving practical job performance, gaining public recognition, and becoming well-cultivated, though fewer students held those opinions, and views differed by academic year (p<.05). 5. Regarding the expected effects of earning a degree through the credit bank system, the largest group expected it to promote their personal development (3.85), and the next largest expected it to boost job efficacy (3.30); students in higher academic years held more favorable opinions. 6. As for future directions for the credit bank system, the largest group emphasized improving social perception through intensive public relations and enhancing the qualifications of professors and lecturers (4.02). These opinions were stressed more by the juniors than by the sophomores and seniors, and academic year made a significant difference to their views (p<.05).

Identification of Japanese Black Cattle by the Faces for Precision Livestock Farming (흑소의 얼굴을 이용한 개체인식)

  • 김현태;지전선랑;서률귀구;이인복
    • Journal of Biosystems Engineering / v.29 no.4 / pp.341-346 / 2004
  • Livestock producers today are concerned not only with increasing production but also with the quality of the animal-breeding environment. So far, optimization of the breeding and air environment has focused on increasing production; in the near future, the emphasis will shift to environments for animal welfare and health. Cattle farming in particular demands precision livestock farming, with special attention to the management of feeding, animal health, and fertility. Managing individual animals is the first step toward precision livestock farming and animal welfare, and recognizing each individual is essential for that. Although electronic identification of cattle such as RFID (Radio Frequency Identification) has many advantages, practical RFID implementations involve several problems, such as reading speed and distance. In that sense, computer vision may be more effective than RFID for identifying individual animals. Previous research on cattle identification via image processing was mostly performed on cows with the black-and-white patterns of the Holstein, but native Korean and Japanese cattle have no definite pattern on the body. The purpose of this research is to identify Japanese black cattle, which have no body pattern, using computer vision and a neural network algorithm. Twelve head of Japanese black cattle were tested to verify the proposed scheme. Input parameter values were specified and computed from the face images of the cattle. The face images were trained with an associative neural network algorithm, and the algorithm was verified with face images transformed by brightness, distortion, and noise factors. The results differed with the transform ratio of brightness, distortion, and noise: the proposed algorithm identified 100% of the images within -3 to +3 degrees of brightness change, -2 to +4 degrees of distortion, and 0% to 60% noise. We conclude that the system cannot be applied to real-time recognition of moving cattle, but it can be used for cattle standing still.
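
The paper trains an associative neural network on cattle face images; as a loose stand-in, the sketch below stores one mean-face prototype per animal, identifies a query face by nearest Euclidean distance, and applies the brightness and noise perturbations used for verification (the distortion transform is omitted). All function names and defaults are assumptions for illustration.

```python
import numpy as np

def train_prototypes(face_images, labels):
    """Store one mean-image prototype per animal (a simple stand-in for the
    associative network used in the paper)."""
    prototypes = {}
    for label in set(labels):
        imgs = [img.astype(np.float32) for img, l in zip(face_images, labels) if l == label]
        prototypes[label] = np.mean(imgs, axis=0)
    return prototypes

def identify(face, prototypes):
    """Return the label whose prototype is closest in Euclidean distance."""
    face = face.astype(np.float32)
    return min(prototypes, key=lambda l: np.linalg.norm(face - prototypes[l]))

def perturb(face, brightness=0.0, noise_std=0.0, rng=None):
    """Apply a brightness shift and additive noise to stress-test the
    recognizer, roughly mirroring the paper's verification protocol."""
    rng = rng or np.random.default_rng(0)
    noisy = face.astype(np.float32) + brightness + rng.normal(0, noise_std, face.shape)
    return np.clip(noisy, 0, 255)
```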

A Study on Optical Condition and preprocessing for Input Image Improvement of Dented and Raised Characters of Rubber Tires (고무타이어 문자열 입력영상 개선을 위한 전처리와 광학조건에 관한 연구)

  • 류한성;최중경;권정혁;구본민;박무열
    • Journal of the Korea Institute of Information and Communication Engineering / v.6 no.1 / pp.124-132 / 2002
  • In this paper, we present a vision algorithm and preprocessing method for improving input images of the dented and raised characters on the sidewall of tires, and we define the optical condition relating the reflection coefficient and reflectance through a physical vector calculation. The goal is to recognize the engraved characters using computer vision techniques. Tire images have nearly the same gray levels in the characters and the background, and little light is reflected from the tire surface, so it is very difficult to segment the characters from the background. Moreover, one side of the character string is raised and the other is dented, so the captured images vary with the angle of the camera and the illumination. For optimal input images, the angle between the camera and the illumination was found to be within 90°. In addition, we used complex filtering with low-pass and high-pass band filters to obtain clearer input images. Finally, we define the equation relating the reflection coefficient and reflectance. In this way, we obtained tire images suitable for pattern recognition.
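
The combination of low-pass and high-pass filtering mentioned above can be sketched with OpenCV: a large Gaussian estimates the slowly varying shading of the sidewall, and a Laplacian emphasizes the embossed character edges. The kernel sizes are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def enhance_tire_characters(gray, blur_ksize=31, lap_ksize=3):
    """Suppress the slowly varying shading of the sidewall with a large
    Gaussian (low-pass), then emphasize the embossed edges with a
    Laplacian (high-pass)."""
    background = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)
    flattened = cv2.subtract(gray, background)             # remove shading
    edges = cv2.Laplacian(flattened, cv2.CV_16S, ksize=lap_ksize)
    edges = cv2.convertScaleAbs(edges)                     # back to uint8
    return cv2.normalize(edges, None, 0, 255, cv2.NORM_MINMAX)
```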

A computer vision-based approach for behavior recognition of gestating sows fed different fiber levels during high ambient temperature

  • Kasani, Payam Hosseinzadeh;Oh, Seung Min;Choi, Yo Han;Ha, Sang Hun;Jun, Hyungmin;Park, Kyu hyun;Ko, Han Seo;Kim, Jo Eun;Choi, Jung Woo;Cho, Eun Seok;Kim, Jin Soo
    • Journal of Animal Science and Technology / v.63 no.2 / pp.367-379 / 2021
  • The objectives of this study were to evaluate convolutional neural network models and computer vision techniques for classifying swine posture with high accuracy, and to use the results to investigate the effect of dietary fiber level on the behavioral characteristics of pregnant sows under low and high ambient temperatures during the last stage of gestation. A total of 27 crossbred sows (Yorkshire × Landrace; average body weight, 192.2 ± 4.8 kg) were assigned to three treatments in a randomized complete block design during the last stage of gestation (days 90 to 114). The sows in group 1 were fed a 3% fiber diet under neutral ambient temperature; the sows in group 2 were fed a 3% fiber diet under high ambient temperature (HT); the sows in group 3 were fed a 6% fiber diet under HT. Eight popular deep learning-based feature extraction frameworks (DenseNet121, DenseNet201, InceptionResNetV2, InceptionV3, MobileNet, VGG16, VGG19, and Xception) for automatic swine posture classification were selected and compared on a swine posture image dataset constructed under real swine farm conditions. The neural network models showed excellent performance on previously unseen data (ability to generalize). The DenseNet121 feature extractor achieved the best performance with 99.83% accuracy, and both DenseNet201 and MobileNet showed an accuracy of 99.77% on the image dataset. The behavior of sows classified by the DenseNet121 feature extractor showed that HT reduced (p < 0.05) standing behavior and tended to increase (p = 0.082) lying behavior. The high dietary fiber treatment tended to increase (p = 0.064) lying and decreased (p < 0.05) standing behavior of sows, but there was no change in sitting under HT conditions.
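
A minimal sketch of a DenseNet121-based posture classifier in the spirit of the study, using Keras transfer learning: ImageNet weights as a frozen feature extractor plus a small softmax head for the standing/sitting/lying classes. The input size, optimizer, and frozen-backbone choice are assumptions, not details confirmed by the paper.

```python
import tensorflow as tf

NUM_CLASSES = 3  # standing / sitting / lying -- assumed label set

def build_posture_classifier(input_shape=(224, 224, 3)):
    """DenseNet121 pretrained on ImageNet as a frozen feature extractor,
    with a small classification head for sow posture."""
    base = tf.keras.applications.DenseNet121(
        weights="imagenet", include_top=False, pooling="avg",
        input_shape=input_shape)
    base.trainable = False                                   # keep ImageNet features
    outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
    model = tf.keras.Model(base.input, outputs)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```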

Person Identification based on Clothing Feature (의상 특징 기반의 동일인 식별)

  • Choi, Yoo-Joo;Park, Sun-Mi;Cho, We-Duke;Kim, Ku-Jin
    • Journal of the Korea Computer Graphics Society / v.16 no.1 / pp.1-7 / 2010
  • With the widespread use of vision-based surveillance systems, the capability to identify people has become an essential component. However, the CCTV cameras used in surveillance systems tend to produce relatively low-resolution images, making it difficult to apply face recognition techniques to person identification. Therefore, an algorithm is proposed for identifying people in CCTV camera images based on their clothing. Whenever a person is authenticated at the main entrance of a building, that person's clothing feature is extracted and added to the database. For a given image, the clothing area is detected using background subtraction and skin color detection. The clothing feature vector is then composed of textural and color features of the clothing region, where the textural feature is extracted from a local edge histogram and the color feature is extracted using octree-based quantization of a color map. Given a query image, the person is identified by finding the most similar clothing feature in the database, with the Euclidean distance as the similarity measure. Experimental results show an 80% identification success rate with the proposed algorithm, compared with only 43% when using face recognition.
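
A rough sketch of the clothing-feature matching described above: an edge-orientation histogram stands in for the local edge histogram, and a coarse RGB histogram stands in for the octree-based color quantization; identification is by minimum Euclidean distance, as in the paper. Bin counts and normalization are illustrative choices.

```python
import cv2
import numpy as np

def clothing_feature(bgr_region, color_bins=4, edge_bins=8):
    """Concatenate a coarse color histogram (a stand-in for the paper's
    octree quantization) and an edge-orientation histogram of the
    detected clothing region."""
    color_hist = cv2.calcHist([bgr_region], [0, 1, 2], None,
                              [color_bins] * 3, [0, 256] * 3).flatten()
    gray = cv2.cvtColor(bgr_region, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    angles = np.arctan2(gy, gx).ravel()
    edge_hist, _ = np.histogram(angles, bins=edge_bins, range=(-np.pi, np.pi))
    feat = np.concatenate([color_hist, edge_hist]).astype(np.float32)
    return feat / (np.linalg.norm(feat) + 1e-8)              # normalize

def identify_person(query_feat, database):
    """Return the identity whose stored feature is nearest in Euclidean distance."""
    return min(database, key=lambda pid: np.linalg.norm(query_feat - database[pid]))
```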

Rice Yield Estimation of South Korea from Year 2003-2016 Using Stacked Sparse AutoEncoder (SSAE 알고리즘을 통한 2003-2016년 남한 전역 쌀 생산량 추정)

  • Ma, Jong Won;Lee, Kyungdo;Choi, Ki-Young;Heo, Joon
    • Korean Journal of Remote Sensing / v.33 no.5_2 / pp.631-640 / 2017
  • Rice yield estimation affects the income of farmers as well as fields related to agriculture, and it strongly influences government policy making, including supply-demand control and price estimation. It is therefore necessary to build crop yield estimation models, and many past studies have done so with empirical statistical models or artificial neural network algorithms using climatic and satellite data. Recently, deep learning algorithms have achieved successful results in pattern recognition, computer vision, speech recognition, and other fields. Among deep learning algorithms, the SSAE (Stacked Sparse AutoEncoder) algorithm has been shown to be applicable to forecasting with time series data, and in this study SSAE was used to estimate rice yield in South Korea. Climatic and satellite data were used as input variables, and different types of input data were constructed according to the rice growth period in South Korea. The combination of satellite data from May to September and climatic data using 16-day average values showed the best performance, with an average annual %RMSE (percent Root Mean Square Error) of 7.43% and a regional %RMSE of 7.16%, demonstrating the applicability of the SSAE algorithm to rice yield estimation.
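
A minimal Keras sketch in the spirit of the SSAE-based estimator: stacked dense encoder layers with an L1 activity penalty (a common way to encourage sparse activations) feeding a single regression output for yield. Layer sizes, the penalty weight, and end-to-end training (rather than the greedy layer-wise pretraining a true SSAE uses) are simplifying assumptions.

```python
import tensorflow as tf

def build_ssae_regressor(input_dim, hidden_dims=(64, 32), sparsity_l1=1e-4):
    """Stacked sparse autoencoder-style network for yield regression:
    encoder layers with an L1 activity penalty, then one linear output."""
    inputs = tf.keras.Input(shape=(input_dim,))
    x = inputs
    for units in hidden_dims:
        x = tf.keras.layers.Dense(
            units, activation="sigmoid",
            activity_regularizer=tf.keras.regularizers.l1(sparsity_l1))(x)
    yield_output = tf.keras.layers.Dense(1)(x)               # estimated rice yield
    model = tf.keras.Model(inputs, yield_output)
    model.compile(optimizer="adam", loss="mse")
    return model
```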

Real-Time Hand Pose Tracking and Finger Action Recognition Based on 3D Hand Modeling (3차원 손 모델링 기반의 실시간 손 포즈 추적 및 손가락 동작 인식)

  • Suk, Heung-Il;Lee, Ji-Hong;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications / v.35 no.12 / pp.780-788 / 2008
  • Modeling hand poses and tracking their movement is one of the challenging problems in computer vision. There are two typical approaches to reconstructing hand poses in 3D, depending on the number of cameras used: capturing images from multiple cameras or a stereo camera, or capturing images from a single camera. The former approach is relatively limited because of the environmental constraints of setting up multiple cameras. In this paper we propose a method for reconstructing 3D hand poses from a 2D image sequence captured by a single camera by means of belief propagation in a graphical model, and for recognizing a finger clicking motion using a hidden Markov model. We define a graphical model with hidden nodes representing the joints of a hand and observable nodes carrying features extracted from the 2D input image sequence. To track hand poses in 3D, we use a belief propagation algorithm, which provides a robust and unified framework for inference in a graphical model. From the estimated 3D hand pose we extract the motion information for each finger, which is then fed into a hidden Markov model. To recognize natural finger actions, we consider the movements of all the fingers when recognizing a single finger's action. We applied the proposed method to a virtual keypad system, and the result showed a high recognition rate of 94.66% on 300 test samples.
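
The HMM-based finger-action recognition stage can be sketched with hmmlearn: one Gaussian HMM per action is trained on sequences of per-frame finger features (e.g., joint angles from the tracked 3D pose), and a new sequence is classified by maximum log-likelihood. The feature choice and the number of hidden states are assumptions; the belief-propagation pose tracker itself is not shown.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_action_models(sequences_by_action, n_states=3):
    """Train one Gaussian HMM per finger action; each sequence is a 2D array
    of per-frame finger features (frames x feature_dim)."""
    models = {}
    for action, seqs in sequences_by_action.items():
        X = np.vstack(seqs)                       # stack all frames
        lengths = [len(s) for s in seqs]          # per-sequence lengths
        models[action] = GaussianHMM(n_components=n_states,
                                     covariance_type="diag",
                                     n_iter=100).fit(X, lengths)
    return models

def recognize_action(models, sequence):
    """Classify a new feature sequence (frames x feature_dim) by maximum log-likelihood."""
    return max(models, key=lambda a: models[a].score(sequence))
```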

Fast and Efficient Implementation of Neural Networks using CUDA and OpenMP (CUDA와 OPenMP를 이용한 빠르고 효율적인 신경망 구현)

  • Park, An-Jin;Jang, Hong-Hoon;Jung, Kee-Chul
    • Journal of KIISE:Software and Applications / v.36 no.4 / pp.253-260 / 2009
  • Many algorithms for computer vision and pattern recognition have recently been implemented on the GPU (graphics processing unit) for faster computation. However, such implementations have two problems. First, the programmer must master the fundamentals of graphics shading languages, which requires prior knowledge of computer graphics. Second, in jobs that need close cooperation between the CPU and GPU, which is common in image processing and pattern recognition unlike in graphics, the CPU should generate as much raw feature data for GPU processing as possible to utilize GPU performance effectively. This paper proposes a faster and more efficient implementation of neural networks on both the GPU and a multi-core CPU. To address the first problem, we use CUDA (Compute Unified Device Architecture), which can be programmed easily thanks to its simple C-like style. Moreover, OpenMP (Open Multi-Processing) is used to process multiple data concurrently with a single instruction on the multi-core CPU, which makes effective use of GPU memory. In the experiments, we implemented a neural network-based text extraction system using the proposed architecture, and the computation was about 15 times faster than an implementation on the GPU alone without OpenMP.
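
The division of labor the paper describes, CPU threads preparing feature data while the device runs the network, can be illustrated with a rough Python analogue: a multiprocessing pool plays the role of the OpenMP threads, and a batched matrix-product forward pass stands in for the CUDA kernels. This is only an analogue of the idea, not the authors' CUDA/OpenMP implementation; all sizes and weights are placeholders.

```python
import numpy as np
from multiprocessing import Pool

def extract_features(image):
    """CPU-side work (the role OpenMP plays in the paper): turn a raw image
    window into a flat, normalized feature vector."""
    return (image.astype(np.float32) / 255.0).ravel()

def forward(batch, W1, b1, W2, b2):
    """Device-side work (the role CUDA plays in the paper), written here as
    plain matrix products: one sigmoid hidden layer, then a linear output layer."""
    h = 1.0 / (1.0 + np.exp(-(batch @ W1 + b1)))
    return h @ W2 + b2

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    images = [rng.integers(0, 256, size=(16, 16)) for _ in range(64)]   # dummy windows
    W1, b1 = rng.normal(size=(256, 32)), np.zeros(32)                   # placeholder weights
    W2, b2 = rng.normal(size=(32, 2)), np.zeros(2)
    with Pool(processes=4) as pool:                  # parallel feature extraction
        feats = np.stack(pool.map(extract_features, images))
    scores = forward(feats, W1, b1, W2, b2)          # batched "device" computation
    print(scores.shape)
```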