• Title/Summary/Keyword: vision-based technology


A Trend of Social TV System for Research and Implementation (소셜 TV 시스템의 연구 개발 동향)

  • Kim, Yu-Doo;Moon, Il-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2010.05a
    • /
    • pp.73-75
    • /
    • 2010
  • Current communication and broadcasting services are converging into an integrated service, so IPTV (Internet Protocol based TeleVision) services are growing rapidly. Traditional TV provided only a simplex (one-way) service, but IPTV provides a duplex service that allows interaction between the TV and the audience. Therefore, current research is focusing on the Social TV service. In this paper, we survey the research and implementation trends of Social TV systems.

Object Recognition and Pose Estimation Based on Deep Learning for Visual Servoing (비주얼 서보잉을 위한 딥러닝 기반 물체 인식 및 자세 추정)

  • Cho, Jaemin;Kang, Sang Seung;Kim, Kye Kyung
    • The Journal of Korea Robotics Society
    • /
    • v.14 no.1
    • /
    • pp.1-7
    • /
    • 2019
  • Recently, smart factories have attracted much attention as a result of the 4th Industrial Revolution. Existing factory automation technologies are generally designed for simple repetition without vision sensors, and even small object assemblies still depend on manual work. To replace existing systems with new technologies such as bin picking and visual servoing, precision and real-time performance are essential. Therefore, in this work we focus on these core elements, using a deep learning algorithm to detect and classify the target object in real time and to analyze its features. Although there are many strong deep learning algorithms such as Mask R-CNN and Fast R-CNN, we chose the YOLO CNN, which runs in real time and combines the two tasks mentioned above. Then, from the line and interior features extracted from the target object, we obtain the final outline and estimate the object's pose.
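YOLO-style detectors produce many overlapping candidate boxes per object; a standard post-processing step (generic to such detectors, not specific to this paper) is non-maximum suppression. A minimal sketch in plain Python, with boxes as (x1, y1, x2, y2) tuples and all values hypothetical:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box, drop boxes that
    overlap an already-kept box by more than iou_thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep
```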

Implementation of Moving Object Recognition based on Deep Learning (딥러닝을 통한 움직이는 객체 검출 알고리즘 구현)

  • Lee, YuKyong;Lee, Yong-Hwan
    • Journal of the Semiconductor & Display Technology
    • /
    • v.17 no.2
    • /
    • pp.67-70
    • /
    • 2018
  • Object detection and tracking is an exciting and interesting research area in the field of computer vision, and its technologies have been widely used in application systems such as surveillance, military, and augmented reality. This paper proposes and implements a novel, more robust object recognition and tracking system that localizes and tracks multiple objects in input images, estimating the target state from the likelihoods obtained from multiple CNNs. Experimental results show that the proposed algorithm effectively handles multi-modal target appearances and other exceptions.
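One common way to estimate a target state from "the likelihoods obtained from multiple CNNs" is to fuse the per-model likelihoods multiplicatively (summing log-likelihoods) and pick the best candidate; whether this paper uses exactly this rule is an assumption. A minimal sketch with made-up scores:

```python
import math

def fuse_likelihoods(candidate_scores):
    """Fuse per-model likelihoods for each candidate state by summing
    log-likelihoods (equivalent to a product of likelihoods) and
    return the index of the best candidate.

    candidate_scores[i][m] = likelihood of candidate i under CNN m.
    """
    fused = [sum(math.log(s) for s in scores) for scores in candidate_scores]
    return max(range(len(fused)), key=fused.__getitem__)
```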

A study on Detecting the Safety helmet wearing using YOLOv5-S model and transfer learning

  • Kwak, NaeJoung;Kim, DongJu
    • International Journal of Advanced Culture Technology
    • /
    • v.10 no.1
    • /
    • pp.302-309
    • /
    • 2022
  • Occupational safety accidents are caused by various factors, it is difficult to predict when and why they occur, and they directly affect workers' lives, so interest in safety accidents is increasing every year. Therefore, to reduce safety accidents at industrial sites, workers are required to wear personal protective equipment. In this paper, we propose a method to automatically check whether workers in the industrial field are wearing safety helmets. It detects whether a helmet is worn using YOLOv5, a computer vision-based deep learning object detection algorithm. We transfer-learn the S model among the YOLOv5 models with different learning rates and numbers of epochs, evaluate the performance, and select the optimal model. The selected model achieved a performance of 0.959 mAP.
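The reported 0.959 mAP rests on IoU-based matching of detections to ground truth. As an illustration of that matching at a single IoU threshold (hypothetical boxes and scores; not the authors' evaluation code):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_recall(preds, gts, iou_thresh=0.5):
    """Greedily match score-sorted predictions to ground-truth boxes;
    a prediction is a true positive if it overlaps a still-unmatched
    ground truth with IoU >= iou_thresh. preds are (box, score) pairs."""
    preds = sorted(preds, key=lambda p: p[1], reverse=True)
    matched, tp = set(), 0
    for box, _ in preds:
        best, best_iou = None, iou_thresh
        for j, gt in enumerate(gts):
            if j not in matched and iou(box, gt) >= best_iou:
                best, best_iou = j, iou(box, gt)
        if best is not None:
            matched.add(best)
            tp += 1
    fp = len(preds) - tp
    return tp / (tp + fp), tp / len(gts)
```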

Generation of wind turbine blade surface defect dataset based on StyleGAN3 and PBGMs

  • W.R. Li;W.H. Zhao;T.T. Wang;Y.F. Du
    • Smart Structures and Systems
    • /
    • v.34 no.2
    • /
    • pp.129-143
    • /
    • 2024
  • In recent years, with the vigorous development of visual algorithms, a large amount of research has been conducted on blade surface defect detection methods represented by deep learning. Detection methods based on deep learning models must rely on a large and rich dataset. However, the geographical location and working environment of wind turbines make it difficult to effectively capture images of blade surface defects, which inevitably hinders visual detection. In response to the challenge of collecting a dataset of surface defects that are difficult to obtain, a multi-class blade surface defect generation method based on the StyleGAN3 (Style Generative Adversarial Networks 3) deep learning model and PBGMs (Physics-Based Graphics Models) is proposed. First, a small number of real blade surface defect images are used to train the StyleGAN3 adversarial network, which then generates a large number of high-resolution blade surface defect images. Second, the generated images are processed with matting and resize operations to create defect foreground images, which are randomly fused with blade background images produced using PBGM technology, yielding a diverse, high-resolution blade surface defect dataset with multiple types of backgrounds. Finally, experimental validation shows that this method generates high-resolution images with defect characteristics at a proportion of over 98.5%. Additionally, the EISeg annotation method reduces annotation time to just 1/7 of that required by traditional methods. These generated images and annotations of blade surface defects provide robust support for the detection of blade surface defects.
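The random fusion of defect foregrounds with PBGM backgrounds amounts to alpha compositing at a random location. A minimal sketch with tiny grayscale images as nested lists (all values and the `seed` parameter hypothetical):

```python
import random

def composite(background, foreground, alpha, seed=None):
    """Paste a defect foreground patch onto a background image at a
    random location using per-pixel alpha blending. Images are nested
    lists of grayscale values; alpha has the foreground's shape."""
    rng = random.Random(seed)
    bh, bw = len(background), len(background[0])
    fh, fw = len(foreground), len(foreground[0])
    y0 = rng.randrange(bh - fh + 1)          # random paste position
    x0 = rng.randrange(bw - fw + 1)
    out = [row[:] for row in background]     # leave the input untouched
    for y in range(fh):
        for x in range(fw):
            a = alpha[y][x]
            out[y0 + y][x0 + x] = a * foreground[y][x] + (1 - a) * out[y0 + y][x0 + x]
    return out
```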

Development of a Localization System Based on VLC Technique for an Indoor Environment

  • Yi, Keon Young;Kim, Dae Young;Yi, Kwang Moo
    • Journal of Electrical Engineering and Technology
    • /
    • v.10 no.1
    • /
    • pp.436-442
    • /
    • 2015
  • In this paper, we develop an indoor localization device that embeds localization information into indoor light-emitting-diode (LED) lighting systems. The key idea of our device is the newly proposed "bit stuffing method". Through the use of stuff bits, our device is able to measure signal strengths even in transient states, which prevents interference between lighting signals. The stuff bits also scatter the intervals during which the LED is turned on, thus providing quality indoor lighting. Additionally, to make the indoor localization system based on RSSI and TDM practical, we propose methods for controlling the LED lamps and compensating the received signals. The effectiveness of the proposed scheme is validated through experiments with a low-cost implementation, including an indoor navigation task.
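The paper proposes its own bit-stuffing variant; as a generic illustration of the idea (an HDLC-style rule, with a hypothetical run length, that inserts a complementary bit after every run of identical bits, bounding the LED's on/off periods so signal strength stays measurable):

```python
def stuff(bits, run=5):
    """Insert a complementary 'stuff bit' after every run of `run`
    identical bits, so the encoded stream never stays constant long."""
    out, count, prev = [], 0, None
    for b in bits:
        out.append(b)
        count = count + 1 if b == prev else 1
        prev = b
        if count == run:
            out.append(1 - b)          # break the run
            prev, count = 1 - b, 1
    return out

def unstuff(bits, run=5):
    """Inverse of stuff(): drop the bit that follows each full run."""
    out, count, prev, i = [], 0, None, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        count = count + 1 if b == prev else 1
        prev = b
        i += 1
        if count == run and i < len(bits):
            prev, count = bits[i], 1   # skip the stuff bit
            i += 1
    return out
```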

Line feature extraction in a noisy image

  • Lee, Joon-Woong;Oh, Hak-Seo;Kweon, In-So
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1996.10a
    • /
    • pp.137-140
    • /
    • 1996
  • Finding line segments in an intensity image has been one of the most fundamental issues in computer vision. In complex scenes it is hard to detect the locations of point features; line features are more robust and provide greater positional accuracy. In this paper we present a robust line-feature extraction algorithm that extracts line features in a single pass without using any assumptions or constraints. Our algorithm consists of five steps: (1) edge scanning, (2) edge normalization, (3) line-blob extraction, (4) line-feature computation, and (5) line linking. Edge scanning drastically reduces the computational complexity caused by an excessive number of edge pixels. Edge normalization improves the local quantization error induced by the gradient-space partitioning and minimizes perturbations of the edge orientation. We also analyze the effects of edge processing, and of the least-squares-based and principal-axis-based methods, on the computation of line orientation. We demonstrate the algorithm's efficiency on real images.
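Of the two line-orientation estimators the paper compares, the principal-axis method can be sketched as the angle of the dominant eigenvector of the edge pixels' covariance; unlike y-on-x least squares, it handles near-vertical lines. A minimal sketch (edge coordinates hypothetical):

```python
import math

def line_orientation(points):
    """Principal-axis estimate of line orientation from 2D edge
    pixels: the angle of the major axis of the points' covariance,
    via the closed form 0.5 * atan2(2*Sxy, Sxx - Syy)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    return 0.5 * math.atan2(2 * sxy, sxx - syy)
```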

Motion Detection Model Based on PCNN

  • Yoshida, Minoru;Tanaka, Masaru;Kurita, Takio
    • Proceedings of the IEEK Conference
    • /
    • 2002.07a
    • /
    • pp.273-276
    • /
    • 2002
  • The Pulse-Coupled Neural Network (PCNN), which can explain the synchronous bursting of neurons in the cat visual cortex, is a fundamental model for biomimetic vision. The PCNN is a kind of pulse-coded neural network model. To gain a deep understanding of visual information processing, it is important to simulate the visual system with such a biologically plausible neural network model. In this paper, we construct a motion detection model based on the PCNN using receptive-field models of neurons in the lateral geniculate nucleus and the primary visual cortex. We then show that this motion detection model can effectively detect movements and the direction of motion.
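The core PCNN mechanism is a dynamic threshold that jumps after each pulse and then decays; a stronger stimulus fires sooner, which is what makes pulse timing usable for detecting change. A single-neuron sketch with the linking term omitted and all constants hypothetical:

```python
import math

def pcnn_neuron(stimulus, steps=50, alpha_t=0.2, v_t=5.0):
    """Single simplified PCNN neuron (linking term omitted): the
    neuron fires when its activity exceeds a dynamic threshold;
    firing makes the threshold jump by v_t, after which it decays
    exponentially at rate alpha_t. Returns the firing times."""
    theta = 1.0
    fires = []
    for t in range(steps):
        if stimulus > theta:             # fire when activity beats threshold
            fires.append(t)
            theta += v_t                 # threshold jumps after a pulse
        theta *= math.exp(-alpha_t)      # exponential threshold decay
    return fires
```

Because a stronger input pulses earlier and more often, comparing pulse timings across pixels exposes where and how the scene changed.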

Adaptive Bayesian Object Tracking with Histograms of Dense Local Image Descriptors

  • Kim, Minyoung
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.16 no.2
    • /
    • pp.104-110
    • /
    • 2016
  • Dense local image descriptors such as SIFT are fruitful for capturing salient information about an image and have proven successful in various image-related tasks when formed into bag-of-words representations (i.e., histograms). In this paper we utilize these dense local descriptors for the object tracking problem. A notable aspect of our tracker is that instead of adopting a point estimate for the target model, we account for uncertainty due to data noise and model incompleteness by maintaining a distribution over plausible candidate models within the Bayesian framework. The target model is also updated adaptively by principled Bayesian posterior inference, which admits a closed form under our Dirichlet prior. In empirical evaluations on several video datasets, the proposed method yields more accurate tracking than baseline histogram-based trackers with the same types of features, and is often superior to appearance-based (visual) trackers.
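The closed-form update under a Dirichlet prior is the standard conjugate one: observed histogram counts are added bin-wise to the prior parameters, and the posterior-mean histogram is the normalized result. A minimal sketch (bin counts hypothetical):

```python
def dirichlet_update(prior, observed_hist):
    """Conjugate Bayesian update: with a Dirichlet prior over the
    target's bag-of-words histogram and multinomial observations,
    the posterior is Dirichlet with counts added bin-wise."""
    return [a + n for a, n in zip(prior, observed_hist)]

def posterior_mean(alpha):
    """Posterior-mean histogram: normalized Dirichlet parameters."""
    s = sum(alpha)
    return [a / s for a in alpha]
```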

Object Recognition Using 3D RFID System (3D RFID 시스템을 이용한 사물 인식)

  • Roh Se-gon;Lee Young Hoon;Choi Hyouk Ryeol
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.11 no.12
    • /
    • pp.1027-1038
    • /
    • 2005
  • Object recognition in the field of robotics has generally depended on computer vision systems. Recently, RFID (Radio Frequency IDentification) has been suggested as a technology that supports object recognition. This paper introduces advanced RFID-based recognition using a novel tag, named the 3D tag, designed to facilitate object recognition. The proposed RFID system not only detects the existence of an object but also estimates its orientation and position. These characteristics allow a robot to considerably reduce its dependence on other sensors for object recognition. In this paper, we analyze the characteristics of the 3D-tag-based RFID system and discuss methods for estimating position and orientation with it.
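The abstract does not spell out its estimation method; as a generic illustration of position estimation from distributed tag readings, here is a signal-strength-weighted centroid (a common simple scheme, explicitly not the paper's estimator; positions and strengths hypothetical):

```python
def weighted_centroid(tag_positions, signal_strengths):
    """Estimate an object's 2D position as the centroid of the
    responding tags' known positions, weighted by signal strength."""
    total = sum(signal_strengths)
    x = sum(p[0] * w for p, w in zip(tag_positions, signal_strengths)) / total
    y = sum(p[1] * w for p, w in zip(tag_positions, signal_strengths)) / total
    return (x, y)
```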