• Title/Summary/Keyword: camera vision


Development of a Meal Support System for the Visually Impaired Using the YOLO Algorithm (YOLO알고리즘을 활용한 시각장애인용 식사보조 시스템 개발)

  • Lee, Gun-Ho; Moon, Mi-Kyeong
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.16 no.5 / pp.1001-1010 / 2021
  • Sighted people are rarely aware of how much they depend on vision while eating. Because visually impaired people cannot tell what food is on the table, an assistant next to them typically guides the spoon and describes the position of each dish (clockwise, front/back, left/right). In this paper, we describe the development of a meal assistance system that recognizes each food item and announces its name by voice when a visually impaired person points a smartphone camera at the table. The system uses a YOLO model trained on images of food and tableware (a spoon) to identify the food the spoon is placed on and announces it by voice. This system is expected to let visually impaired people eat without a meal assistant, increasing their independence and satisfaction.
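
As a rough illustration of the pipeline described in this abstract, the sketch below pairs a YOLO detector with a text-to-speech call: it finds the spoon box, looks up the food box that contains the spoon's centre, and speaks that label. The weights file, class names, and the ultralytics/pyttsx3 choices are assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: announce the food the spoon is resting on.
# Assumes a custom-trained YOLO model ("food_spoon.pt") whose classes
# include "spoon" plus food categories; names here are illustrative.
import cv2
import pyttsx3
from ultralytics import YOLO

model = YOLO("food_spoon.pt")       # hypothetical weights file
tts = pyttsx3.init()

def announce_food(frame):
    result = model(frame)[0]
    boxes = result.boxes.xyxy.tolist()
    labels = [result.names[int(c)] for c in result.boxes.cls]

    # Locate the spoon, then the food box containing the spoon's centre.
    spoons = [b for b, l in zip(boxes, labels) if l == "spoon"]
    if not spoons:
        return
    sx = (spoons[0][0] + spoons[0][2]) / 2
    sy = (spoons[0][1] + spoons[0][3]) / 2
    for (x1, y1, x2, y2), label in zip(boxes, labels):
        if label != "spoon" and x1 <= sx <= x2 and y1 <= sy <= y2:
            tts.say(label)           # announce the food name by voice
            tts.runAndWait()
            break

cap = cv2.VideoCapture(0)            # stand-in for the smartphone camera
ok, frame = cap.read()
if ok:
    announce_food(frame)
cap.release()
```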

Extraction of Workers and Heavy Equipment and Multi-Object Tracking Using a Surveillance System at Construction Sites (건설 현장 CCTV 영상을 이용한 작업자와 중장비 추출 및 다중 객체 추적)

  • Cho, Young-Woon; Kang, Kyung-Su; Son, Bo-Sik; Ryu, Han-Guk
    • Journal of the Korea Institute of Building Construction / v.21 no.5 / pp.397-408 / 2021
  • The construction industry has the highest rate of occupational accidents and injuries and the most fatalities of any industry. The Korean government has installed surveillance camera systems at construction sites to reduce accident rates, and construction safety managers monitor potential hazards through them. Monitoring these systems with the naked eye, however, has critical limitations: long monitoring sessions cause high physical fatigue, and not every incident can be grasped in real time. This study therefore builds a deep learning-based safety monitoring system that recognizes, locates, and identifies workers and heavy equipment at construction sites by applying multiple object tracking with instance segmentation. System performance was evaluated with the Microsoft Common Objects in Context (COCO) metrics and the Multiple Object Tracking (MOT) challenge metrics. The results show that the system is well suited to efficiently automating surveillance monitoring at construction sites.
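
The abstract does not detail the association step, but a minimal greedy IoU tracker of the kind commonly layered on top of per-frame instance-segmentation detections might look like the sketch below; the detection source and the 0.3 IoU threshold are assumptions.

```python
# Minimal greedy IoU tracker sketch over per-frame detections
# (worker / heavy-equipment boxes from any instance-segmentation model).
# Only the association step is shown; the detector is assumed.
from itertools import count

def iou(a, b):
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

_ids = count(1)

def update_tracks(tracks, detections, thr=0.3):
    """tracks: {track_id: box}; detections: boxes for the current frame."""
    new_tracks = {}
    for det in detections:
        # Assign each detection to the best-overlapping unused track,
        # or start a new track if nothing overlaps enough.
        best_id, best_iou = None, thr
        for tid, box in tracks.items():
            o = iou(det, box)
            if o > best_iou and tid not in new_tracks:
                best_id, best_iou = tid, o
        new_tracks[best_id if best_id is not None else next(_ids)] = det
    return new_tracks
```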

Crowd Behavior Detection using Convolutional Neural Network (컨볼루션 뉴럴 네트워크를 이용한 군중 행동 감지)

  • Ullah, Waseem; Ullah, Fath U Min; Baik, Sung Wook; Lee, Mi Young
    • The Journal of Korean Institute of Next Generation Computing / v.15 no.6 / pp.7-14 / 2019
  • Automatic monitoring and detection of crowd behavior in surveillance videos has received significant attention in computer vision because of its many applications in security, safety, and asset protection, and crowd analysis is a growing research area. This requires detecting and analyzing crowd behavior. In this paper, we propose a deep learning-based method that detects abnormal activities in surveillance cameras installed in a smart city. A fine-tuned VGG-16 model is trained on a publicly available benchmark crowd dataset and tested on a real-time stream. The CCTV camera captures the video stream; when abnormal activity is detected, an alert is generated and sent to the nearest police station so that action can be taken before further loss. Experiments show that the proposed method outperforms existing state-of-the-art techniques.
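
A minimal sketch of fine-tuning VGG-16 for a normal/abnormal crowd classifier is shown below (PyTorch); the class count, frozen layers, and optimizer settings are placeholders rather than the authors' configuration.

```python
# Sketch of fine-tuning VGG-16 for normal/abnormal crowd frames (PyTorch).
# Dataset handling is omitted; the exact training setup is assumed.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 2                      # e.g. normal vs. abnormal activity
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)

# Freeze the convolutional features; retrain only the classifier head.
for p in model.features.parameters():
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, num_classes)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step on a batch of crowd frames."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```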

Design of an Around-View System Using Android-Based Image Processing (안드로이드 기반 영상처리를 이용한 Around-View 시스템 설계)

  • Kim, Gyu-Hyun; Jang, Jong-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.05a / pp.421-424 / 2014
  • Image-processing products such as car black boxes (dashcams) and CCTV are now widespread and convenient for users. In particular, a black box helps identify the cause of an accident that occurs while driving. However, a black box captures only the front or rear of the vehicle, so scenes outside the driver's field of view or the black box's angle of view cannot be checked. To overcome this limitation, AVM (Around-View Monitoring) systems have been developed as a more advanced form of the black box. An AVM system combines front, rear, left, and right images into a top-view image, providing a 360° view around the vehicle. Conventional AVM systems, however, must be installed in the vehicle together with desktop-class hardware to acquire the images. In this paper, we design an AVM system that obtains a 360° image of the vehicle using an Android-based tablet.
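
A minimal sketch of the core AVM operation, warping a single camera view to a top-down (bird's-eye) image with a planar homography and pasting it into a composite canvas, is given below; the calibration points and the naive blending of views are illustrative only, and a real AVM system needs per-camera calibration for the front, rear, left, and right views.

```python
# Around-view sketch: warp one camera image to a top-down view with a
# planar homography, then paste it into a composite canvas.
import cv2
import numpy as np

def to_top_view(img, src_pts, dst_pts, out_size=(800, 800)):
    H = cv2.getPerspectiveTransform(np.float32(src_pts), np.float32(dst_pts))
    return cv2.warpPerspective(img, H, out_size)

front = cv2.imread("front.jpg")                       # placeholder input image
src = [(100, 300), (540, 300), (640, 480), (0, 480)]  # ground plane in the image
dst = [(300, 0), (500, 0), (500, 200), (300, 200)]    # bird's-eye canvas region
canvas = np.zeros((800, 800, 3), dtype=np.uint8)
if front is not None:
    warped = to_top_view(front, src, dst)
    canvas = cv2.max(canvas, warped)                  # naive blend of the views
cv2.imwrite("around_view.jpg", canvas)
```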


Tracking Method of Dynamic Smoke based on U-net (U-net기반 동적 연기 탐지 기법)

  • Gwak, Kyung-Min; Rho, Young J.
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.21 no.4 / pp.81-87 / 2021
  • Artificial intelligence technology is advancing with the fourth industrial revolution, and vision-based models using CNNs are being actively studied. U-net is one such model and has shown strong performance for semantic segmentation. Although various U-net studies have been conducted, research on tracking objects with unclear outlines, such as gases and smoke, remains insufficient. We conducted a U-net study to address this limitation. This paper describes how 3D cameras were used to collect data, how the data were organized into training and test sets, how U-net was applied, and how the results were validated.
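
For illustration, a compact U-net-style segmenter that outputs a one-channel smoke mask is sketched below in PyTorch; the depth and channel counts are illustrative and are not the authors' architecture.

```python
# Compact U-net-style segmenter sketch (PyTorch) producing a smoke mask.
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(3, 16), block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)                # skip connection + upsampled path
        self.head = nn.Conv2d(16, 1, 1)         # 1-channel smoke mask

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))
        return torch.sigmoid(self.head(d))

mask = TinyUNet()(torch.randn(1, 3, 128, 128))  # -> shape (1, 1, 128, 128)
```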

Remote Control of a 6-d.o.f. Robot Arm Based on a 2D Vision Sensor (2D 영상센서 기반 6축 로봇 팔 원격제어)

  • Hyun, Woong-Keun
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.17 no.5 / pp.933-940 / 2022
  • In this paper, an algorithm was developed to estimate the 3D position of a hand from a 2D image sensor, and a system was implemented to remotely control a 6-d.o.f. robot arm using it. The system consists of a camera that acquires the 2D hand position and a computer that controls the robot arm based on the recognized position. The image sensor detects the specific color of a glove worn by the operator and outputs the detected region as a bounding rectangle. The velocity vector of the end effector is derived from the position and size of the detected rectangle and used to control the robot arm. Several experiments with the developed 6-axis robot confirmed that remote control of the 6-d.o.f. robot arm was performed successfully.
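
A rough sketch of the color-glove detection step described above follows: threshold the glove color in HSV, take the bounding rectangle, and derive a velocity estimate from the change in the rectangle's centre between frames. The HSV bounds and the use of box area as a depth cue are assumptions, not the paper's calibrated values.

```python
# Sketch: detect a colored glove, track its bounding rectangle, and derive
# an end-effector velocity command from frame-to-frame motion.
import cv2
import numpy as np

LOWER = np.array([40, 80, 80])       # hypothetical green-glove HSV bounds
UPPER = np.array([80, 255, 255])

def glove_rect(frame):
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)
    cnts, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not cnts:
        return None
    x, y, w, h = cv2.boundingRect(max(cnts, key=cv2.contourArea))
    return x + w / 2, y + h / 2, w * h   # centre and area (rough depth cue)

prev = None
cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cur = glove_rect(frame)
    if prev and cur:
        vx, vy = cur[0] - prev[0], cur[1] - prev[1]   # end-effector velocity
        # send (vx, vy, change in area) to the robot-arm controller here
    prev = cur
cap.release()
```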

Non-contact mobile inspection system for tunnels: a review (터널의 비접촉 이동식 상태점검 장비: 리뷰)

  • Chulhee Lee; Donggyou Kim
    • Journal of Korean Tunnelling and Underground Space Association / v.25 no.3 / pp.245-259 / 2023
  • The purpose of this paper is to examine the most recent tunnel scanning systems and obtain insights for the development of a non-contact mobile inspection system. Tunnel scanning systems are mostly built around two main technologies: laser scanning and image scanning. Laser scanning has the advantage of accurately recreating the geometric characteristics of tunnel linings from a point cloud. Image scanning, on the other hand, employs computer vision to readily identify damage such as fine cracks and leaks on the tunnel lining surface. The analysis suggests that image scanning is more suitable for detecting damage on tunnel linings. A camera-based tunnel scanning system under development should include components such as lighting, data storage, a power supply, and an image-capturing controller synchronized with vehicle speed.

Estimation of liquid limit of cohesive soil using video-based vibration measurement

  • Matthew Sands; Evan Hayes; Soonkie Nam; Jinki Kim
    • Geomechanics and Engineering / v.33 no.2 / pp.175-182 / 2023
  • In general, the design of structures and their construction processes depend fundamentally on the foundation and supporting ground, so it is imperative to understand the behavior of the soil under given stress and drainage conditions. Because certain characteristics and behaviors of soils containing fines are highly dependent on water content, it is critical to accurately measure and identify the state of such soils in terms of water content, and the liquid limit is one of the important soil index properties defining these characteristics. However, liquid limit measurement can be affected by the proficiency of the operator. Dynamic properties of soils are also needed in many applications, and current testing methods often require special laboratory equipment that is expensive and sensitive to test conditions. To address these concerns and advance the state of the art, this study explores a novel method to determine the liquid limit of cohesive soil using video-based vibration analysis. The modal characteristics of cohesive soil columns are extracted from video using phase-based motion estimation: by analyzing the optical flow in every pixel of a series of frames, which effectively represents the motion of corresponding points on the soil specimen, the vibration characteristics of the entire specimen can be assessed in a non-contact, non-destructive manner. Experimental results compared with the liquid limit determined by the standard method verify that the proposed method reliably and straightforwardly identifies the liquid limit of clay. The approach could be applied to measuring the liquid limit of soil in the field, since it requires only a digital camera or even a smartphone and no special equipment whose results depend on the proficiency of the operator.
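
As a simplified stand-in for the phase-based motion estimation used in the paper, the sketch below extracts a vibration signal from video with dense Farneback optical flow and estimates the dominant frequency with an FFT; the video file name and the whole-frame averaging are placeholders.

```python
# Sketch: per-frame vertical motion from dense optical flow, then an FFT
# to estimate the dominant vibration frequency of the specimen.
import cv2
import numpy as np

cap = cv2.VideoCapture("soil_column.mp4")        # placeholder video file
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

signal = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    signal.append(flow[..., 1].mean())           # mean vertical displacement
    prev_gray = gray
cap.release()

sig = np.asarray(signal) - np.mean(signal)       # remove the DC offset
spectrum = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
print("dominant frequency [Hz]:", freqs[np.argmax(spectrum[1:]) + 1])
```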

Correction of Lens Distortion Based on Image Division Using an Artificial Neural Network (영상분할 방법 기반의 인공신경망을 적용한 카메라의 렌즈왜곡 보정)

  • Shin, Ki-Young; Bae, Jang-Han; Mun, Joung-H.
    • Journal of the Korea Society of Computer and Information / v.14 no.4 / pp.31-38 / 2009
  • Lens distortion is an inevitable phenomenon in machine vision systems, and it becomes more pronounced when lenses are chosen to minimize cost and system size, which makes distortion correction a critical issue. Previous correction methods based on camera models suffer from nonlinearity and complicated computation, and recent neural-network-based methods also have accuracy and efficiency problems. In this study, a new algorithm for lens distortion correction is proposed: the distorted image is divided according to the amount of distortion using k-means, and each divided region is corrected with its own neural network. As a result, the proposed algorithm achieves better accuracy than previous methods that do not use image division.
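
A minimal sketch of the two-stage idea, k-means division by distortion magnitude followed by one correction network per region, is given below using scikit-learn; the calibration pairs are synthetic placeholders, not real calibration data, and the cluster count and network size are assumptions.

```python
# Sketch: cluster points by distortion magnitude, then fit a small neural
# network per cluster that maps distorted pixel coordinates to ideal ones.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

# Placeholder calibration data: distorted points and their ideal positions
# (in practice these come from a calibration target).
rng = np.random.default_rng(0)
distorted = rng.uniform(0, 640, size=(500, 2))
ideal = distorted + rng.normal(0, 2, size=(500, 2))   # stand-in ground truth

# Stage 1: divide the image by distortion quantity (displacement norm).
magnitude = np.linalg.norm(ideal - distorted, axis=1, keepdims=True)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(magnitude)

# Stage 2: one correction network per region.
models = {}
for k in range(3):
    idx = labels == k
    models[k] = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                             random_state=0).fit(distorted[idx], ideal[idx])
```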

Joint Reasoning of Real-time Visual Risk Zone Identification and Numeric Checking for Construction Safety Management

  • Ali, Ahmed Khairadeen; Khan, Numan; Lee, Do Yeop; Park, Chansik
    • International Conference on Construction Engineering and Project Management / 2020.12a / pp.313-322 / 2020
  • Recognizing hazards is a vital step in effectively preventing accidents on a construction site. Advances in computer vision and the availability of large visual databases related to construction sites make it possible to take quick action when human error or disaster situations occur during supervision. It is therefore necessary to analyze the risk factors to be managed at the construction site and to review appropriate, effective technical methods for each. This research analyzes Occupational Safety and Health Administration (OSHA) risk zone identification rules that can be adopted by image recognition technology, classifies their risk factors by the applicable technical method, and on this basis develops a pattern-oriented classification of OSHA rules that can support large-scale safety hazard recognition. The approach uses joint reasoning of risk zone identification and numeric checking with a stereo camera, an image detection algorithm (YOLOv3), and the Pyramid Stereo Matching Network (PSMNet). The resulting system identifies risk zones and raises an alarm if a target object enters a zone; it also measures numerical information about a target, such as its length, spacing, and angle. Combining detection with such joint reasoning may improve the speed and accuracy of hazard detection, since more than one factor is merged to prevent accidents on the job site.
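
A rough sketch of the joint-reasoning step is shown below: a detected bounding box is combined with a stereo disparity map (such as PSMNet would produce) to obtain metric depth, and an alert is raised if the box centre falls inside a predefined risk zone. The focal length, baseline, zone coordinates, and detection box are placeholder values, not the paper's calibration.

```python
# Sketch: combine a detection box with a disparity map to get depth and
# check whether the detected object has entered a predefined risk zone.
import numpy as np

FOCAL_PX = 1000.0                    # focal length in pixels (assumed)
BASELINE_M = 0.12                    # stereo baseline in metres (assumed)
RISK_ZONE = (200, 100, 500, 400)     # x1, y1, x2, y2 in image coordinates

def box_depth(disparity, box):
    """Median metric depth inside a detection box (depth = f * B / d)."""
    x1, y1, x2, y2 = box
    d = disparity[y1:y2, x1:x2]
    d = d[d > 0]
    return FOCAL_PX * BASELINE_M / np.median(d) if d.size else None

def in_risk_zone(box, zone=RISK_ZONE):
    cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
    return zone[0] <= cx <= zone[2] and zone[1] <= cy <= zone[3]

disparity = np.full((720, 1280), 40.0)   # placeholder disparity map
worker_box = (320, 180, 380, 330)        # placeholder YOLO detection box
if in_risk_zone(worker_box):
    depth = box_depth(disparity, worker_box)
    print(f"ALERT: target in risk zone, depth ~ {depth:.1f} m")
```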
