• Title/Summary/Keyword: camera image


A Beverage Can Recognition System Based on Deep Learning for the Visually Impaired (시각장애인을 위한 딥러닝 기반 음료수 캔 인식 시스템)

  • Lee Chanbee;Sim Suhyun;Kim Sunhee
    • Journal of Korea Society of Digital Industry and Information Management / v.19 no.1 / pp.119-127 / 2023
  • Recently, deep learning has been used in the development of various assistive devices and services to help visually impaired people in their daily lives. This is because there are few products and facility guides written in braille, and fewer than 10% of the visually impaired can read braille. In this paper, we propose a system that recognizes beverage cans in real time and announces the beverage can name by sound for the convenience of the visually impaired. Five commercially available beverage cans were selected, and a CNN model and a YOLO model were designed to recognize them. After augmenting the image data, model training was performed. The accuracy of the proposed CNN model and YOLO model is 91.2% and 90.8%, respectively. For practical verification, a system was built by attaching a camera and speaker to a Raspberry Pi, and the YOLO model was applied in the system. It was confirmed that beverage cans were recognized and announced by sound in real time in various environments.
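
The paper does not include code, but the described pipeline (camera capture, YOLO inference, spoken output through a speaker) can be sketched briefly. The weights file name `cans.pt`, the confidence threshold, and the use of the ultralytics and pyttsx3 packages are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch, assuming an Ultralytics-style YOLO model trained on the five can classes
# and pyttsx3 for offline text-to-speech on the Raspberry Pi (both are assumptions).
import cv2
import pyttsx3
from ultralytics import YOLO

model = YOLO("cans.pt")        # hypothetical weights for the 5 beverage cans
tts = pyttsx3.init()           # drives the speaker attached to the Raspberry Pi
cap = cv2.VideoCapture(0)      # Pi camera or USB webcam

last_spoken = None
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, verbose=False)
    for box in results[0].boxes:
        name = model.names[int(box.cls[0])]
        if float(box.conf[0]) > 0.6 and name != last_spoken:
            tts.say(name)      # announce the recognized can once per change
            tts.runAndWait()
            last_spoken = name
cap.release()
```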

Mineral Image Analysis Technique (광물이미지 분석 기법)

  • Shin, Kwang-seong;Shin, Seong-yoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.353-354 / 2021
  • In this study, to overcome the limitations of particle size analysis methods that use a scanner, a microscope, or a laser, and to reduce cost, high-quality images of micro minerals are acquired using an ultra-high-resolution DSLR camera with a macro lens. Digital photographs of standard mineral particles are then analyzed to distinguish the size and shape of mineral particles at the scale of sand grains (a few mm to 0.063 mm). In addition, various photographing techniques for producing three-dimensional images of mineral particles were explored, and an attempt was made to produce learning materials and images for mineral classification.
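
As a rough illustration of how grain size and shape can be read from such macro photographs, the sketch below thresholds an image, finds particle contours with OpenCV, and converts each particle's equivalent diameter to millimetres. The calibration constant and the assumption of dark grains on a light background are placeholders, not values from the study.

```python
# Illustrative sketch only: particle sizing from a macro photograph.
# MM_PER_PIXEL must be calibrated from the DSLR/macro setup; 0.01 is a placeholder.
import cv2
import numpy as np

MM_PER_PIXEL = 0.01

img = cv2.imread("minerals.jpg", cv2.IMREAD_GRAYSCALE)
# Otsu threshold, assuming dark mineral grains on a lighter background
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    area_px = cv2.contourArea(c)
    if area_px < 20:                          # skip noise specks
        continue
    diameter_mm = 2.0 * np.sqrt(area_px / np.pi) * MM_PER_PIXEL   # equivalent circular diameter
    x, y, w, h = cv2.boundingRect(c)
    print(f"particle: {diameter_mm:.3f} mm, aspect ratio {w / h:.2f}")
```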


Real-time Abnormal Behavior Analysis System Based on Pedestrian Detection and Tracking (보행자의 검출 및 추적을 기반으로 한 실시간 이상행위 분석 시스템)

  • Kim, Dohun;Park, Sanghyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.25-27 / 2021
  • With the recent development of deep learning technology, computer vision-based AI techniques have been studied to analyze the abnormal behavior of objects in video acquired from CCTV cameras. Surveillance cameras are often installed in dangerous or security-sensitive areas for crime prevention and monitoring, so companies are conducting studies to detect major situations such as intrusion, loitering, falls, and assault in the surveillance camera environment. In this paper, we propose a real-time abnormal behavior analysis algorithm based on object detection and tracking.
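
The abstract names the two building blocks (pedestrian detection and tracking) without giving the rule set, so the following is only a schematic sketch: a pretrained person detector with built-in tracking, plus a single assumed rule that flags loitering when a track ID stays in view longer than a threshold. The model file, video source, and threshold are placeholders.

```python
# Schematic sketch (assumed rule, not the authors' algorithm): detect and track pedestrians,
# then flag a track as possible loitering when it persists beyond LOITER_SECONDS.
import time
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")            # pretrained detector; COCO class 0 is "person"
first_seen = {}                       # track id -> time first observed
LOITER_SECONDS = 60                   # placeholder threshold

cap = cv2.VideoCapture("cctv_stream.mp4")   # placeholder video source
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model.track(frame, classes=[0], persist=True, verbose=False)
    boxes = results[0].boxes
    if boxes.id is None:
        continue
    now = time.time()
    for tid in boxes.id.int().tolist():
        first_seen.setdefault(tid, now)
        if now - first_seen[tid] > LOITER_SECONDS:
            print(f"possible loitering: track {tid}")
cap.release()
```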


Prediction of the Vase Life of Cut Lily Flowers Using Thermography

  • Lee, Ja Hee;Choi, So Young;Park, Hye Min;Oh, Sang Im;Lee, Ae Kyung
    • Journal of People, Plants, and Environment / v.22 no.3 / pp.233-239 / 2019
  • This study was conducted to predict the vase life of cut lily 'Woori Tower' flowers using a non-destructive thermal imaging technique. The temperature of the cut lily flowers was maintained at about 20℃, slightly lower than the air temperature, until they bloomed. On the 11th day, when the flowers bloomed, the temperature of the leaves and flowers was measured as 18.75±0.38℃ and 19.23±0.32℃, respectively, and the difference from the ambient temperature was over 3℃. The flower temperature increased slightly when the vase life of the cut flowers ended, and the difference between the air and leaf temperature (1.77℃) and between the air and flower temperature (1.39℃) became smaller. No visible aging symptom was observed, but the temperature was found to have risen due to water loss and reduced stomatal function. The vase life of cut lily flowers can therefore be predicted from changes in temperature, and it should also be possible to predict the potential quality and vase life of cut flowers before harvesting them in greenhouses.
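
The prediction logic implied by the abstract is a simple threshold on the air-to-flower temperature difference: a gap of about 3℃ indicates active transpirational cooling, while a gap shrinking toward roughly 1.4℃ coincided with the end of vase life. The function below only restates that reasoning; the 2℃ cutoff and the example temperatures are illustrative assumptions, not a validated model.

```python
# Sketch of the reasoning only; the threshold is an assumption inferred from the abstract.
def vase_life_ended(air_temp_c: float, flower_temp_c: float,
                    min_difference_c: float = 2.0) -> bool:
    """Return True when the air-to-flower temperature gap has collapsed."""
    return (air_temp_c - flower_temp_c) < min_difference_c

print(vase_life_ended(22.3, 19.2))   # False: gap ~3.1 C, flowers still cooling themselves
print(vase_life_ended(20.6, 19.2))   # True: gap ~1.4 C, vase life likely over
```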

Object Detection Based on Virtual Humans Learning (가상 휴먼 학습 기반 영상 객체 검출 기법)

  • Lee, JongMin;Jo, Dongsik
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.10a / pp.376-378 / 2022
  • Artificial intelligence technology is widely used in fields such as AI speakers, AI chatbots, and autonomous vehicles. Among these application fields, image processing shows various uses, such as detecting or recognizing objects using artificial intelligence. In this paper, data synthesized using virtual humans is used to analyze images taken in a specific space.


Dog Activities Recognition System using Dog-centered Cropped Images (반려견에 초점을 맞춰 추출하는 영상 기반의 행동 탐지 시스템)

  • Othmane Atif;Jonguk Lee;Daihee Park;Yongwha Chung
    • Annual Conference of KIPS / 2023.05a / pp.615-617 / 2023
  • In recent years, the benefits dogs bring their owners have made dogs increasingly popular and contributed to a rise in the number of dogs being raised. It is the owners' responsibility to ensure their dogs' health and safety, but it is challenging for them to continuously monitor their dogs' activities, which is important for understanding and guaranteeing their wellbeing. In this work, we introduce a camera-based monitoring system to help owners automatically monitor their dogs' activities. The system receives sequences of RGB images and uses YOLOv7 to detect the dog's presence, then applies post-processing to perform dog-centered image cropping on each input sequence. Optical flow is extracted from each sequence, and both the RGB and flow sequences are input to a two-stream EfficientNet to extract their respective features. Finally, the features are concatenated, and a bi-directional LSTM is used to capture temporal features and recognize the activity. The experiments show that our system achieves good performance, with the F-1 score exceeding 0.90 for all activities and reaching 0.963 on average.
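
The recognition network is described as two EfficientNet streams (RGB and optical flow) whose features are concatenated and passed to a bidirectional LSTM. The PyTorch sketch below mirrors that structure under stated assumptions: EfficientNet-B0 backbones, flow encoded as 3-channel images so the same backbone can be reused, and an arbitrary number of activity classes.

```python
# Architecture sketch (an interpretation of the described pipeline, not the authors' code).
import torch
import torch.nn as nn
from torchvision.models import efficientnet_b0

class TwoStreamBiLSTM(nn.Module):
    def __init__(self, num_activities: int, hidden: int = 256):
        super().__init__()
        self.rgb_net = efficientnet_b0(weights=None)
        self.flow_net = efficientnet_b0(weights=None)
        self.rgb_net.classifier = nn.Identity()      # 1280-dim feature per frame
        self.flow_net.classifier = nn.Identity()
        self.lstm = nn.LSTM(input_size=2 * 1280, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_activities)

    def forward(self, rgb, flow):
        # rgb, flow: (batch, time, 3, H, W) dog-centered crops
        b, t = rgb.shape[:2]
        rgb_feat = self.rgb_net(rgb.flatten(0, 1)).view(b, t, -1)
        flow_feat = self.flow_net(flow.flatten(0, 1)).view(b, t, -1)
        seq = torch.cat([rgb_feat, flow_feat], dim=-1)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])                 # classify from the last time step

model = TwoStreamBiLSTM(num_activities=5)            # class count is an assumption
scores = model(torch.randn(2, 8, 3, 224, 224), torch.randn(2, 8, 3, 224, 224))
print(scores.shape)                                  # torch.Size([2, 5])
```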

Preliminary Study for Image-Based Measurement Model in a Construction Site (이미지 기반 건설현장 수치 측정 모델 기초연구)

  • Yoon, Sebeen;Kang, Mingyun;Kim, Chang-Won;Lim, Hyunsu;Yoo, Wi Sung;Kim, Taehoon
    • Proceedings of the Korean Institute of Building Construction Conference / 2023.05a / pp.287-288 / 2023
  • Inspection work at construction sites is an important supervisory task that involves verifying that a building is being constructed in accordance with the dimensions specified in the design drawings. The conventional measuring method for inspection has site personnel use tools such as rulers directly, and the result is usually confirmed visually. Therefore, this study proposes a model to measure dimensions from images of the construction site. Through a case study measuring the installation interval of jack supports, the effectiveness and validity of the proposed algorithm were verified. The results suggest that the model can support inspection work even from the office, covering items that may have been overlooked by on-site inspectors, and contribute to the digitization of inspection work at construction sites.
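
The abstract does not spell out the measurement model, but the basic idea of reading a real-world length from a site photograph can be illustrated with a reference object of known size lying in the same plane as the target. The coordinates and the 1,000 mm reference length below are made-up example values.

```python
# Minimal sketch of image-based length measurement using a known reference (not the paper's model).
import math

def pixel_distance(p1, p2):
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def measure_mm(target_p1, target_p2, ref_p1, ref_p2, ref_length_mm):
    """Scale a target pixel distance by a reference object of known length (mm)."""
    mm_per_pixel = ref_length_mm / pixel_distance(ref_p1, ref_p2)
    return pixel_distance(target_p1, target_p2) * mm_per_pixel

# Example: a 1,000 mm reference bar spans 500 px; two jack supports are 450 px apart.
interval = measure_mm((100, 400), (550, 400), (100, 600), (600, 600), 1000.0)
print(f"estimated jack support interval: {interval:.0f} mm")   # ~900 mm
```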


The Bullet Launcher with A Pneumatic System to Detect Objects by Unique Markers

  • Jasmine Aulia;Zahrah Radila;Zaenal Afif Azhary;Aulia M. T. Nasution;Detak Yan Pratama;Katherin Indriawati;Iyon Titok Sugiarto;Wildan Panji Tresna
    • Journal of Information and Communication Convergence Engineering / v.21 no.3 / pp.252-260 / 2023
  • A bullet launcher can be developed as a smart instrument, especially for use in the military sector, that can track, identify, detect, mark, lock onto, and shoot a target by implementing an image-processing system. In this research, an object recognition system, laser encoding as a unique marker, two-dimensional movement, and a pneumatic shooter were studied intensively. The results showed that the object recognition system could detect various colors, patterns, sizes, and laser blinking. The average error in the object distance measured with the camera was ±4%, ±5%, and ±6% for the circle, square, and triangle forms, respectively. Meanwhile, the average shooting accuracy on objects was 95.24% indoors and 85.71% outdoors. The average prototype response time was 1.11 s, and the highest shooting accuracy, 98.32%, was obtained at a distance of 50 cm.
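
The detection stage (colors, patterns, sizes) can be approximated with classical OpenCV operations: HSV color thresholding followed by contour-based shape classification into circle, square, or triangle. The HSV range and area threshold below are placeholders, not the values used in the paper.

```python
# Illustrative sketch only: color thresholding plus contour-based shape classification.
import cv2

frame = cv2.imread("target.jpg")
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, (35, 80, 80), (85, 255, 255))   # placeholder range for a green target
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    if cv2.contourArea(c) < 200:                        # ignore small blobs
        continue
    approx = cv2.approxPolyDP(c, 0.04 * cv2.arcLength(c, True), True)
    if len(approx) == 3:
        shape = "triangle"
    elif len(approx) == 4:
        shape = "square"
    else:
        shape = "circle"
    x, y, w, h = cv2.boundingRect(c)
    print(f"{shape} target centred at ({x + w // 2}, {y + h // 2})")
```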

Classification of Objects using CNN-Based Vision and Lidar Fusion in Autonomous Vehicle Environment

  • G. Komali;A. Sri Nagesh
    • International Journal of Computer Science & Network Security / v.23 no.11 / pp.67-72 / 2023
  • In the past decade, Autonomous Vehicle Systems (AVS) have advanced at an exponential rate, particularly due to improvements in artificial intelligence, which have had a significant impact on society, road safety, and the future of transportation systems. The real-time fusion of light detection and ranging (LiDAR) and camera data is known to be a crucial process in many applications, such as autonomous driving, industrial automation, and robotics. Especially in the case of autonomous vehicles, the efficient fusion of data from these two types of sensors is important for estimating the depth of objects as well as classifying objects at short and long distances. This paper presents object classification using CNN-based vision and LiDAR fusion in an autonomous vehicle environment. The method is based on a convolutional neural network (CNN) and image upsampling. The LiDAR point cloud is upsampled and converted into pixel-level depth information, which is combined with the red-green-blue (RGB) data and fed into a deep CNN. The proposed method obtains an informative feature representation for object classification in the autonomous vehicle environment using the integrated vision and LiDAR data, and is adopted to guarantee both object classification accuracy and minimal loss. Experimental results show the effectiveness and efficiency of the presented approach for object classification.
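
The fusion step described above, upsampling the sparse LiDAR depth to image resolution and concatenating it with the RGB channels before a CNN, can be sketched as follows. The layer sizes, input resolutions, and class count are assumptions; the paper's exact network is not reproduced here.

```python
# Simplified early-fusion sketch (assumed shapes and layers, not the paper's exact network).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RGBDepthClassifier(nn.Module):
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, num_classes)

    def forward(self, rgb, sparse_depth):
        # rgb: (B, 3, H, W); sparse_depth: (B, 1, h, w) projected from the LiDAR point cloud
        depth = F.interpolate(sparse_depth, size=rgb.shape[-2:], mode="bilinear",
                              align_corners=False)        # upsample to pixel-level depth
        x = torch.cat([rgb, depth], dim=1)                # concatenate depth with RGB channels
        return self.fc(self.features(x).flatten(1))

model = RGBDepthClassifier()
logits = model(torch.rand(1, 3, 224, 224), torch.rand(1, 1, 56, 56))
print(logits.shape)   # torch.Size([1, 4])
```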

CNN-LSTM based Autonomous Driving Technology (CNN-LSTM 기반의 자율주행 기술)

  • Ga-Eun Park;Chi Un Hwang;Lim Se Ryung;Han Seung Jang
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.18 no.6 / pp.1259-1268 / 2023
  • This study proposes a throttle and steering control technique that uses visual sensors with deep learning's convolutional and recurrent neural networks. It collects camera images and control values while driving a training track in clockwise and counterclockwise directions, and generates a model that predicts throttle and steering after data sampling and preprocessing for efficient learning. Afterward, the model was validated on a test track in a different environment not used for training, in order to find the optimal model and compare it with a CNN (Convolutional Neural Network). As a result, we found that the proposed deep learning model shows excellent performance.
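
The CNN-LSTM controller described above can be sketched as a per-frame convolutional encoder followed by an LSTM over the frame sequence and a two-value regression head for throttle and steering. Layer sizes, sequence length, and input resolution below are assumptions for illustration, not the authors' published configuration.

```python
# Architecture sketch (assumed sizes, not the authors' model): CNN per frame -> LSTM -> [throttle, steering].
import torch
import torch.nn as nn

class CNNLSTMDriver(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)      # outputs: [throttle, steering]

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) camera images
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])          # predict from the last time step

model = CNNLSTMDriver()
controls = model(torch.rand(2, 10, 3, 120, 160))
print(controls.shape)   # torch.Size([2, 2])
```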