• Title/Summary/Keyword: camera image


Incident response system through emergency recognition using heart rate and real-time image sharing (심박수를 이용한 위급상황 인식 및 실시간 영상공유를 통한 사고대처 시스템)

  • Lee, In-kwon;Park, Jung-hoon;Jin, Sorin;Han, Kyung-dong;Hwang, Hoyoung
    • Journal of IKEEE / v.23 no.2 / pp.358-363 / 2019
  • In this paper, we implemented a welfare system that provides fast incident response in emergency situations for the elderly living alone, the disabled, or infants. The proposed system quickly recognizes emergency situations using heart rate sensors and real-time image sharing. Sensors attached to a wrist band monitor the client's heart rate along with related bio-signals and send alarms to guardians when an emergency occurs. At the same time, real-time images are captured using OpenCV and sent to the guardians to give them accurate information for a fast and appropriate response. In the proposed system, the camera works only in emergency situations, preserving the client's privacy in everyday life.
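As a rough illustration of the monitoring loop described in this abstract, the sketch below opens the camera with OpenCV only when the measured heart rate leaves a normal range. The thresholds, the simulated read_heart_rate(), and send_alarm() are illustrative assumptions, not parts of the published system.

```python
# Minimal sketch of a heart-rate-triggered emergency loop (assumed details).
import random
import time

import cv2

HR_LOW, HR_HIGH = 40, 150  # assumed bradycardia/tachycardia thresholds (bpm)

def read_heart_rate() -> int:
    # Hypothetical stand-in for the wristband sensor; simulated here.
    return random.randint(35, 160)

def send_alarm(message: str, frame=None) -> None:
    # Hypothetical stand-in for the guardian notification channel.
    print("ALERT:", message, "(frame attached)" if frame is not None else "")

def monitor(duration_s: int = 10) -> None:
    cap = None  # camera stays off in normal operation to preserve privacy
    for _ in range(duration_s):
        bpm = read_heart_rate()
        if bpm < HR_LOW or bpm > HR_HIGH:      # emergency condition
            if cap is None:
                cap = cv2.VideoCapture(0)      # open the camera only now
            ok, frame = cap.read()
            send_alarm(f"abnormal heart rate {bpm} bpm", frame if ok else None)
        elif cap is not None:
            cap.release()                      # back to normal: camera off again
            cap = None
        time.sleep(1)

if __name__ == "__main__":
    monitor()
```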

A Study on Improving the Quality of DIBR Intermediate Images Using Meshes (메쉬를 활용한 DIBR 기반 중간 영상 화질 향상 방법 연구)

  • Kim, Jiseong;Kim, Minyoung;Cho, Yongjoo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.10a / pp.822-823 / 2014
  • The usual method of generating an image for a multiview display system requires acquiring a color image and depth information from a reference camera. Intermediate images generated with the DIBR method are then rendered at a number of different viewpoints and composed to construct a multiview image. When such intermediate views are generated, several holes appear because previously hidden parts become visible when the scene is rendered from a different angle. Previous research tried to solve this problem by creating new hole-filling algorithms or by enhancing the depth information. This paper describes a new method of enhancing the intermediate view images by applying the Ball Pivoting algorithm, which constructs meshes from a point cloud. When the new method is applied to Microsoft's "Ballet" and "Break Dancer" data sets, PSNR comparisons show an increase of about 0.18~1.19. This paper explains the new algorithm, the experimental method, and the results.
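For readers unfamiliar with ball pivoting, the sketch below shows how a reference view's color and depth could be turned into a point cloud and meshed with Open3D's ball-pivoting implementation; the file names, camera intrinsics, and pivoting radii are assumptions, and the paper's own DIBR pipeline is not reproduced.

```python
# Sketch: reference color + depth -> point cloud -> ball-pivoting mesh (Open3D).
import numpy as np
import open3d as o3d

# Reference-view color and depth (e.g. one view of the "Ballet" sequence; assumed files).
color = o3d.io.read_image("reference_color.png")
depth = o3d.io.read_image("reference_depth.png")
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, convert_rgb_to_intensity=False)

# Assumed intrinsics; the real sequence provides calibrated camera parameters.
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)
pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)

# Ball pivoting needs oriented normals.
pcd.estimate_normals()

# Pivoting-ball radii chosen relative to the average point spacing (assumed heuristic).
avg_dist = np.mean(pcd.compute_nearest_neighbor_distance())
radii = [avg_dist * 1.5, avg_dist * 3.0]
mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(
    pcd, o3d.utility.DoubleVector(radii))

# The mesh can then be rendered from a virtual viewpoint to reduce DIBR holes.
o3d.io.write_triangle_mesh("reference_mesh.ply", mesh)
```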


Vision-based Food Shape Recognition and Its Positioning for Automated Production of Custom Cakes (주문형 케이크 제작 자동화를 위한 영상 기반 식품 모양 인식 및 측위)

  • Oh, Jang-Sub;Lee, Jaesung
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.10 / pp.1280-1287 / 2020
  • This paper proposes a vision-based food recognition method for the automated production of custom cakes. A small camera module mounted on a food art printer recognizes the shapes of objects and estimates their center points through image processing. Through a perspective transformation, a top-view image is obtained from the original image taken at an oblique position. Line and circular Hough transforms are applied to recognize square and circular shapes, respectively. In addition, the center of gravity of each figure is accurately detected in units of pixels. The test results show that the shape recognition rate is more than 98.75% under 180 ~ 250 lux of light and the positioning error rate is less than 0.87% under 50 ~ 120 lux. These values sufficiently meet the needs of the corresponding market. In addition, the processing delay is less than 0.5 seconds per frame, so the proposed algorithm is suitable for commercial purposes.
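A minimal OpenCV sketch of the pipeline outlined in this abstract (perspective warp to a top view, Hough transforms for circles and lines, and a pixel-level centroid); the bed corner coordinates, image names, and Hough parameters are illustrative assumptions.

```python
# Sketch: oblique photo -> top view -> shape recognition -> center point (pixels).
import cv2
import numpy as np

img = cv2.imread("cake_bed.jpg")

# 1) Perspective transformation: map the obliquely viewed print bed to a top view.
src = np.float32([[120, 80], [520, 95], [600, 420], [60, 400]])   # assumed bed corners
dst = np.float32([[0, 0], [480, 0], [480, 480], [0, 480]])        # square top-view canvas
H = cv2.getPerspectiveTransform(src, dst)
top = cv2.warpPerspective(img, H, (480, 480))

gray = cv2.cvtColor(top, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# 2a) Circular Hough transform for round cakes.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=100,
                           param1=100, param2=40, minRadius=40, maxRadius=220)
# 2b) Line Hough transform for square cakes.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=60, maxLineGap=10)
print("circles found:", 0 if circles is None else len(circles[0]))
print("line segments found:", 0 if lines is None else len(lines))

# 3) Center point: centroid of the largest contour, in pixel units.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    m = cv2.moments(max(contours, key=cv2.contourArea))
    if m["m00"]:
        cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
        print(f"estimated center: ({cx:.1f}, {cy:.1f}) px")
```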

A Study on the Automated Payment System for Artificial Intelligence-Based Product Recognition in the Age of Contactless Services

  • Kim, Heeyoung;Hong, Hotak;Ryu, Gihwan;Kim, Dongmin
    • International Journal of Advanced Culture Technology / v.9 no.2 / pp.100-105 / 2021
  • Contactless service is rapidly emerging as a new growth strategy because consumers are reluctant to engage in face-to-face situations during the global pandemic of coronavirus disease 2019 (COVID-19), and various technologies are being developed to support the fast-growing contactless service market. In particular, the restaurant industry is one of the fields most urgently requiring technologies for contactless service, and a representative example is the kiosk, which reduces labor costs for restaurant owners and provides psychological relaxation and satisfaction to customers. In this paper, we propose a solution for restaurant store operation through an unmanned kiosk using state-of-the-art artificial intelligence (AI) image recognition technology. The proposed system is especially useful for products without barcodes, such as those in bakeries, fresh foods (fruits, vegetables, etc.), and autonomous restaurants on highways, which otherwise cause increased labor costs and many hassles. The proposed system recognizes products without barcodes using image-based AI algorithms and makes automatic payments. To test the feasibility of the proposed system, we established an AI vision system using a commercial camera and conducted an image recognition test by training object detection AI models on donut images. The proposed system includes a self-learning mechanism that uses mismatched information collected during operation, which allows the recognition performance to be upgraded continuously. We proposed a fully automated payment system with AI vision technology and showed system feasibility through the performance test. The system realizes contactless self-checkout service in the restaurant business area and improves cost savings in managing human resources.
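The checkout flow could look roughly like the sketch below, which stands in a generic Ultralytics YOLO detector for the paper's unspecified detection model; the weight file "bakery_products.pt", the price table, and the charge_card() stub are assumptions.

```python
# Sketch: detect barcode-less products in a tray image and total the price.
import cv2
from ultralytics import YOLO

PRICES = {"plain_donut": 1.50, "glazed_donut": 1.80, "croissant": 2.20}  # assumed price table

def charge_card(amount: float) -> None:
    # Placeholder for the payment-gateway integration.
    print(f"charging card: {amount:.2f}")

model = YOLO("bakery_products.pt")       # detector fine-tuned on product images (assumed)
frame = cv2.imread("tray_snapshot.jpg")  # kiosk camera snapshot of the customer's tray

result = model(frame)[0]
items = [model.names[int(c)] for c in result.boxes.cls]

total = sum(PRICES.get(name, 0.0) for name in items)
print("detected items:", items)
charge_card(total)

# Mismatches confirmed by staff could be logged here and fed back into training,
# which corresponds to the "self-learning" loop mentioned in the abstract.
```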

Detection of Number and Character Area of License Plate Using Deep Learning and Semantic Image Segmentation (딥러닝과 의미론적 영상분할을 이용한 자동차 번호판의 숫자 및 문자영역 검출)

  • Lee, Jeong-Hwan
    • Journal of the Korea Convergence Society / v.12 no.1 / pp.29-35 / 2021
  • License plate recognition plays a key role in intelligent transportation systems, so efficiently detecting the number and character areas is a very important process. In this paper, we propose a method to effectively detect license plate number and character areas by applying deep learning and a semantic image segmentation algorithm. The proposed method detects number and character areas directly from the license plate without preprocessing such as pixel projection. The license plate images were acquired from a fixed camera installed on the road and were taken in various real situations reflecting both weather and lighting changes. The input images were normalized to reduce color variation, and the deep learning neural networks used in the experiment were Vgg16, Vgg19, ResNet18, and ResNet50. To examine the performance of the proposed method, we experimented with 500 license plate images; 300 were used for training and 200 for testing. As a result of computer simulation, ResNet50 performed best, achieving 95.77% accuracy.
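The paper does not publish its network code, but a comparable setup can be sketched with a ResNet50-backbone segmentation model from torchvision, assuming three classes (background, number area, character area) and standard ImageNet normalization; this is an illustrative stand-in, not the paper's exact architecture.

```python
# Sketch: ResNet50-backbone semantic segmentation of plate number/character areas.
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 3  # background / number region / character region (assumed split)

model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES)
model.eval()

# Normalize the input image to reduce the effect of color/lighting changes.
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

plate = Image.open("plate_0001.jpg").convert("RGB")   # assumed file name
x = preprocess(plate).unsqueeze(0)                    # shape: (1, 3, H, W)

with torch.no_grad():
    logits = model(x)["out"]                          # shape: (1, NUM_CLASSES, H, W)
    mask = logits.argmax(dim=1)                       # per-pixel class labels

print("pixels labelled as character area:", int((mask == 2).sum()))
```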

Contact Detection based on Relative Distance Prediction using Deep Learning-based Object Detection (딥러닝 기반의 객체 검출을 이용한 상대적 거리 예측 및 접촉 감지)

  • Hong, Seok-Mi;Sun, Kyunghee;Yoo, Hyun
    • Journal of Convergence for Information Technology / v.12 no.1 / pp.39-44 / 2022
  • The purpose of this study is to extract the type, location, and absolute size of an object in an image using a deep learning algorithm, predict the relative distance between objects, and use this to detect contact between objects. To analyze the size ratio of objects, YOLO, a CNN-based object detection algorithm, is used. Through the YOLO algorithm, the absolute size and position of an object are extracted in the form of coordinates. The extracted size is then compared with a pre-stored standard object-size list having the same object name to obtain the ratio between the size in the image and the actual size, and this ratio is used to predict the relative distance between the camera and the object in the image. Based on the predicted values, the system detects whether the objects are in contact.
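The relative-distance idea reduces to the pinhole relation distance = real_size × focal_length / pixel_size; the sketch below applies it with an assumed focal length and standard-size table, and a simple depth-difference test stands in for the paper's contact criterion.

```python
# Sketch: distance from the camera estimated from a detection's pixel height.
FOCAL_LENGTH_PX = 1000.0                                          # assumed focal length (pixels)
STANDARD_SIZES_M = {"person": 1.7, "chair": 0.9, "bottle": 0.25}  # assumed known heights (m)

def estimate_distance(label: str, box_height_px: float) -> float:
    """Distance (m) from the camera, from the ratio of real size to image size."""
    real_h = STANDARD_SIZES_M[label]
    return real_h * FOCAL_LENGTH_PX / box_height_px

def in_contact(d1: float, d2: float, tol: float = 0.3) -> bool:
    """Treat two objects as touching if their predicted depths are close (assumed rule)."""
    return abs(d1 - d2) < tol

# e.g. a YOLO detection gave a 340 px person box and a 120 px chair box
person_d = estimate_distance("person", 340)
chair_d = estimate_distance("chair", 120)
print(person_d, chair_d, in_contact(person_d, chair_d))
```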

Distortion-guided Module for Image Deblurring (왜곡 정보 모듈을 이용한 이미지 디블러 방법)

  • Kim, Jeonghwan;Kim, Wonjun
    • Journal of Broadcast Engineering / v.27 no.3 / pp.351-360 / 2022
  • Image blurring is a phenomenon that occurs due to factors such as movement of the subject and shaking of the camera. Recently, research on image deblurring has been actively conducted based on convolutional neural networks. In particular, methods that guide the restoration process via the difference between blurred and sharp images have shown promising performance. This paper proposes a novel method for improving deblurring performance based on distortion information. To this end, a transformer-based neural network module is designed to guide the restoration process. The proposed method efficiently reflects the distorted region, predicted through global inference, during the deblurring process. We demonstrate the efficiency and robustness of the proposed module with experimental results on various deblurring architectures and benchmark datasets.
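As a toy illustration of distortion-guided restoration (not the paper's transformer module), the PyTorch sketch below predicts a per-pixel distortion map from intermediate features and uses it to re-weight them; supervising the map with |blur − sharp| would mirror the guidance signal mentioned in the abstract.

```python
# Toy distortion-guidance block: predict a distortion map and emphasize those regions.
import torch
import torch.nn as nn

class DistortionGuide(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Predict a per-pixel distortion map from intermediate features.
        self.predict = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, feat: torch.Tensor):
        dist_map = self.predict(feat)        # (B, 1, H, W), values in [0, 1]
        guided = feat * (1.0 + dist_map)     # emphasize heavily distorted regions
        return guided, dist_map

# During training the map could be supervised with |blur - sharp|, i.e. the
# actual distortion between the degraded input and the ground-truth image.
feat = torch.randn(2, 64, 32, 32)
guided, dist_map = DistortionGuide()(feat)
print(guided.shape, dist_map.shape)
```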

Face Detection Method based Fusion RetinaNet using RGB-D Image (RGB-D 영상을 이용한 Fusion RetinaNet 기반 얼굴 검출 방법)

  • Nam, Eun-Jeong;Nam, Chung-Hyeon;Jang, Kyung-Sik
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.4 / pp.519-525 / 2022
  • Face detection, the task of locating a person's face in an image, is used as a preprocessing step or core process in various image-processing-based applications. Neural network models, which have recently performed well with the development of deep learning, depend on 2D images, so if noise occurs in the image, such as poor camera quality or poor focus on the face, the face may not be detected properly. In this paper, we propose a face detection method that additionally uses depth information to reduce the dependence on 2D images. The proposed model was trained after generating and preprocessing depth information for a face detection dataset in advance; as a result, the FRN model achieved 89.16%, about 1.2 percentage points better than the RetinaNet model, which achieved 87.95%.
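One simple way to combine RGB and depth, sketched below with an assumed two-branch stem fused by addition, gives the flavor of RGB-D fusion; the actual Fusion RetinaNet (FRN) architecture is not reproduced here.

```python
# Toy RGB-D fusion stem: separate RGB and depth branches, fused element-wise.
import torch
import torch.nn as nn

class RGBDFusionStem(nn.Module):
    def __init__(self, out_channels: int = 64):
        super().__init__()
        self.rgb_stem = nn.Conv2d(3, out_channels, 7, stride=2, padding=3)
        self.depth_stem = nn.Conv2d(1, out_channels, 7, stride=2, padding=3)

    def forward(self, rgb: torch.Tensor, depth: torch.Tensor) -> torch.Tensor:
        # Element-wise fusion lets depth compensate when the RGB image is noisy.
        return torch.relu(self.rgb_stem(rgb) + self.depth_stem(depth))

rgb = torch.randn(1, 3, 480, 640)     # color image
depth = torch.randn(1, 1, 480, 640)   # aligned depth map
feat = RGBDFusionStem()(rgb, depth)
print(feat.shape)                     # (1, 64, 240, 320)
```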

Estimation of two-dimensional position of soybean crop for developing weeding robot (제초로봇 개발을 위한 2차원 콩 작물 위치 자동검출)

  • SooHyun Cho;ChungYeol Lee;HeeJong Jeong;SeungWoo Kang;DaeHyun Lee
    • Journal of Drive and Control / v.20 no.2 / pp.15-23 / 2023
  • In this study, the two-dimensional locations of crops were detected using deep learning for automatic weeding. To construct a dataset for soybean detection, an image-capturing system was developed using a mono camera and a single-board computer, and the system was mounted on a weeding robot to collect soybean images. The dataset was constructed by extracting RoIs (regions of interest) from the raw images, and each sample was labeled as soybean or background for classification learning. The deep learning model consisted of four convolutional layers and was trained with a weakly supervised learning method that provides object localization using only image-level labels. Localization of the soybean area can be visualized via CAM, and the two-dimensional position of the soybean was estimated by clustering the pixels associated with the soybean area and transforming the pixel coordinates to world coordinates. The estimates were evaluated against actual positions determined manually as pixel coordinates in the image; in world coordinates, the errors were 6.6 (X-axis) and 5.1 (Y-axis) for MSE and 1.2 (X-axis) and 2.2 (Y-axis) for RMSE, respectively. From these results, we confirmed that the center position of the soybean area derived through deep learning was sufficiently accurate for use in automatic weeding systems.
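The localization step could be condensed as below: threshold the CAM, cluster the activated pixels, and map each cluster center to world coordinates with a calibration homography. The CAM file, DBSCAN parameters, and homography matrix are assumptions, not values from the paper.

```python
# Sketch: CAM threshold -> pixel clustering -> pixel-to-world coordinate transform.
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

cam = np.load("soybean_cam.npy")             # (H, W) activation map in [0, 1] (assumed file)
H = np.load("pixel_to_world_homography.npy") # 3x3 calibration matrix (assumed file)

ys, xs = np.where(cam > 0.5)                 # pixels strongly associated with soybean
points = np.stack([xs, ys], axis=1).astype(np.float32)

labels = DBSCAN(eps=10, min_samples=20).fit_predict(points)
for k in set(labels) - {-1}:                 # -1 is DBSCAN noise
    centre_px = points[labels == k].mean(axis=0)
    world = cv2.perspectiveTransform(
        centre_px.reshape(1, 1, 2).astype(np.float32), H)[0, 0]
    print(f"soybean {k}: pixel {centre_px}, world (X, Y) = {world}")
```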

Design of Face with Mask Detection System in Thermal Images Using Deep Learning (딥러닝을 이용한 열영상 기반 마스크 검출 시스템 설계)

  • Yong Joong Kim;Byung Sang Choi;Ki Seop Lee;Kyung Kwon Jung
    • Convergence Security Journal / v.22 no.2 / pp.21-26 / 2022
  • Wearing face masks is an effective measure to prevent COVID-19 infection. Infrared thermal-image-based temperature measurement and identity recognition systems have been widely used in many large enterprises and universities in China, so research on face mask detection in thermal infrared imaging is clearly necessary. The recently introduced MTCNN (Multi-task Cascaded Convolutional Networks) presents a conceptually simple, flexible, and general framework for instance segmentation of objects. In this paper, we propose an algorithm that efficiently searches for objects in images while creating a segmentation of the heat-generating part of each instance, i.e., a heating element, in a thermal image acquired from an infrared camera. This method, called mask MTCNN, extends MTCNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding-box recognition, and this R-CNN-style approach is easy to generalize to other tasks. We propose an infrared image detection algorithm based on R-CNN and detect heating elements that cannot be distinguished in RGB images.
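A hedged sketch of the detection front end on a thermal frame, using the open-source mtcnn package as a stand-in (a visible-light face detector may not transfer directly to thermal imagery); the lower-face intensity check is only a placeholder heuristic, not the paper's mask segmentation branch.

```python
# Sketch: MTCNN face localization on a thermal frame plus a toy mask heuristic.
import cv2
import numpy as np
from mtcnn import MTCNN

thermal = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)   # assumed file
rgb_like = cv2.cvtColor(thermal, cv2.COLOR_GRAY2RGB)              # MTCNN expects 3 channels

detector = MTCNN()
for face in detector.detect_faces(rgb_like):
    x, y, w, h = face["box"]
    roi = thermal[y:y + h, x:x + w]
    # Placeholder heat check: a covered mouth/nose region lowers apparent
    # intensity in the lower half of the face ROI (assumed heuristic).
    lower_half = roi[h // 2:, :]
    print("face at", (x, y, w, h),
          "lower-face mean intensity:", float(np.mean(lower_half)))
```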