• Title/Summary/Keyword: vision-based technology


Trends on Visual Object Tracking Using Siamese Network (Siamese 네트워크 기반 영상 객체 추적 기술 동향)

  • Oh, J.;Lee, J.
    • Electronics and Telecommunications Trends
    • /
    • v.37 no.1
    • /
    • pp.73-83
    • /
    • 2022
  • Visual object tracking can be utilized in various applications and has attracted considerable attention in the field of computer vision. Visual object tracking technology is classified in various ways, for example by the number of tracked objects and by the methodology employed in the tracking algorithm. This report briefly introduces the visual object tracking challenge that has contributed to the development of single-object tracking technology. Furthermore, we review ten Siamese network-based algorithms that have attracted attention owing to their high tracking speed despite the use of neural networks. Finally, we discuss the prospects of Siamese network-based object tracking algorithms.
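
As a rough illustration of the common core of these Siamese trackers, the sketch below shows the SiamFC-style operation in which a shared backbone embeds the template and the search region and the two feature maps are cross-correlated to produce a response map. The backbone layout and input sizes are illustrative assumptions, not taken from any of the ten reviewed algorithms.

```python
# Minimal sketch of the SiamFC-style core operation shared by many Siamese trackers:
# template features are cross-correlated with search-region features to produce a
# response map whose peak locates the target. Sizes here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySiamese(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared backbone applied to both inputs (the weight sharing is what
        # makes the network "Siamese").
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3), nn.ReLU(),
        )

    def forward(self, template, search):
        z = self.backbone(template)   # (B, C, Hz, Wz) template features
        x = self.backbone(search)     # (B, C, Hx, Wx) search-region features
        # Cross-correlation: use each template feature map as a convolution
        # kernel over the corresponding search feature map.
        responses = [F.conv2d(x[i:i + 1], z[i:i + 1]) for i in range(x.size(0))]
        return torch.cat(responses, dim=0)  # (B, 1, Ho, Wo) response map

tracker = TinySiamese()
score = tracker(torch.randn(1, 3, 127, 127), torch.randn(1, 3, 255, 255))
print(score.shape)  # the peak of this map indicates the target location
```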

Image Processing and Deep Learning-based Defect Detection Theory for Sapphire Epi-Wafer in Green LED Manufacturing

  • Suk Ju Ko;Ji Woo Kim;Ji Su Woo;Sang Jeen Hong;Garam Kim
    • Journal of the Semiconductor & Display Technology
    • /
    • v.22 no.2
    • /
    • pp.81-86
    • /
    • 2023
  • Recently, demand for light-emitting diodes (LEDs) has increased due to the growing emphasis on environmental protection. However, growing GaN on sapphire substrates in LED manufacturing generates defects, such as dislocations caused by lattice mismatch, which ultimately reduce the luminous efficiency of LEDs. Moreover, most inspections of LED semiconductors evaluate luminous efficiency only after packaging. To address these challenges, this paper aims to detect defects at the wafer stage, which could improve the manufacturing process and reduce costs. To achieve this, image processing-based and deep learning-based defect detection techniques for sapphire epi-wafers used in green LED manufacturing were developed and compared. Performance evaluation of each algorithm showed that the deep learning approach outperformed the image processing approach in terms of detection accuracy and efficiency.
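
For context on what the image-processing branch of such a comparison can look like, the following sketch flags bright or dark blemishes on a wafer image by subtracting an estimated background and thresholding the residual. The kernel size and area threshold are illustrative assumptions, not parameters from the paper.

```python
# A minimal sketch of a classical image-processing defect detector:
# flatten illumination, threshold the residual, and report blobs as defect candidates.
import cv2
import numpy as np

def detect_defect_candidates(gray_wafer_image, min_area=20):
    # Estimate the smooth background and subtract it to remove illumination gradients.
    background = cv2.GaussianBlur(gray_wafer_image, (51, 51), 0)
    residual = cv2.absdiff(gray_wafer_image, background)
    # Threshold the residual: unusually bright or dark spots become defect candidates.
    _, mask = cv2.threshold(residual, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

# Usage on a synthetic image with one bright blemish.
img = np.full((512, 512), 120, np.uint8)
cv2.circle(img, (200, 300), 6, 200, -1)
print(detect_defect_candidates(img))  # -> one bounding box around the blemish
```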


Nozzle Swing Angle Measurement Involving Weighted Uncertainty of Feature Points Based on Rotation Parameters

  • Liang Wei;Ju Huo;Chen Cai
    • Current Optics and Photonics
    • /
    • v.8 no.3
    • /
    • pp.300-306
    • /
    • 2024
  • To solve the problem of non-contact measurement of the nozzle swing angle, we present a nozzle pose estimation algorithm that incorporates the weighted measurement uncertainty of feature points based on rotation parameters. First, the instantaneous axis of the rocket nozzle is constructed and used to model the pivot point and the nozzle coordinate system. Then, the rotation matrix and translation vector are parameterized by Cayley-Gibbs-Rodrigues parameters, and a novel object-space collinearity error equation involving the weighted measurement uncertainty of feature points is constructed. The nozzle pose is obtained at this step by the Gröbner basis method. Finally, the swing angle is calculated from the conversion relationship between the nozzle static coordinate system and the nozzle dynamic coordinate system. Experimental results demonstrate the high accuracy and robustness of the proposed method: in a working volume of 1.5 m × 1.5 m × 1.5 m, the maximum nozzle swing angle error is 0.103°.
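
The Cayley-Gibbs-Rodrigues parameterization mentioned in the abstract has a standard closed form: a rotation is represented by the 3-vector s = tan(θ/2)·axis, and the rotation matrix is a rational function of s, which is what makes polynomial (Gröbner basis) solvers applicable. A minimal sketch of that textbook formula, not the paper's code, is shown below.

```python
# Rotation matrix from the Cayley-Gibbs-Rodrigues (Gibbs) vector:
# R = ((1 - s.s) I + 2 s s^T + 2 [s]_x) / (1 + s.s)
import numpy as np

def rotation_from_cgr(s):
    s = np.asarray(s, dtype=float)
    sx = np.array([[0, -s[2], s[1]],
                   [s[2], 0, -s[0]],
                   [-s[1], s[0], 0]])  # skew-symmetric cross-product matrix
    return ((1 - s @ s) * np.eye(3) + 2 * np.outer(s, s) + 2 * sx) / (1 + s @ s)

# 90-degree rotation about the z-axis: s = tan(45 deg) * [0, 0, 1] = [0, 0, 1]
R = rotation_from_cgr([0.0, 0.0, 1.0])
print(np.round(R, 6))                     # rotation by 90 degrees about z
print(np.allclose(R @ R.T, np.eye(3)))    # True: R is orthonormal
```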

Real-Time Instance Segmentation Method Based on Location Attention

  • Li Liu;Yuqi Kong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.9
    • /
    • pp.2483-2494
    • /
    • 2024
  • Instance segmentation is a challenging research topic in the field of computer vision; it combines the prediction results of object detection and semantic segmentation to provide richer image feature information. Focusing on instance segmentation in street scenes, a real-time instance segmentation method based on SOLOv2 is proposed in this paper. First, a cross-stage fusion backbone network based on position attention is designed to increase model accuracy and reduce computational effort. Then, the loss of shallow location information is reduced by integrating a two-way feature pyramid network. Meanwhile, cross-stage mask feature fusion is designed to address missed segmentation of small objects. Finally, an adaptive minimum-loss matching method is proposed to reduce the loss of segmentation accuracy caused by object occlusion in the image. Compared with other mainstream methods, our method meets real-time segmentation requirements and achieves competitive segmentation accuracy.
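
As a rough sketch of what a position (location) attention block can look like in such a backbone, the code below pools features separately along the height and width axes so that the resulting attention weights retain positional information, in the spirit of coordinate attention. The paper's exact module is not specified in the abstract, so this is an assumption for illustration.

```python
# Minimal position-attention block: height-wise and width-wise pooled descriptors
# produce separate attention maps that reweight the feature map per location.
import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        mid = max(channels // reduction, 8)
        self.reduce = nn.Sequential(nn.Conv2d(channels, mid, 1), nn.ReLU())
        self.attn_h = nn.Conv2d(mid, channels, 1)  # attention along the height axis
        self.attn_w = nn.Conv2d(mid, channels, 1)  # attention along the width axis

    def forward(self, x):
        b, c, h, w = x.shape
        pooled_h = x.mean(dim=3, keepdim=True)                   # (B, C, H, 1)
        pooled_w = x.mean(dim=2, keepdim=True).transpose(2, 3)   # (B, C, W, 1)
        y = self.reduce(torch.cat([pooled_h, pooled_w], dim=2))  # shared 1x1 conv
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.attn_h(y_h))                    # (B, C, H, 1)
        a_w = torch.sigmoid(self.attn_w(y_w.transpose(2, 3)))    # (B, C, 1, W)
        return x * a_h * a_w                                     # position-aware reweighting

x = torch.randn(2, 64, 32, 48)
print(PositionAttention(64)(x).shape)  # torch.Size([2, 64, 32, 48])
```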

Design of Smart Device Assistive Emergency WayFinder Using Vision Based Emergency Exit Sign Detection

  • Lee, Minwoo;Mariappan, Vinayagam;Mfitumukiza, Joseph;Lee, Junghoon;Cho, Juphil;Cha, Jaesang
    • Journal of Satellite, Information and Communications
    • /
    • v.12 no.1
    • /
    • pp.101-106
    • /
    • 2017
  • Emergency exit signs are installed in buildings such as shopping malls, hospitals, industrial sites, and government complexes to mark escape routes and help people escape easily during emergencies. In emergency conditions such as smoke, fire, poor lighting, or crowded stampedes, it is difficult for people to recognize the emergency exit signs and emergency doors needed to leave the building. This paper proposes automatic emergency exit sign recognition that finds the exit direction using a smart device. The proposed approach develops a computer vision-based smartphone application that detects emergency exit signs with the smart device camera and provides the escape direction in both visible and audible output formats. In this research, a CAMShift object tracking approach is used to detect the emergency exit sign, and the direction information is extracted using a template matching method. The direction information of the exit sign is stored as text, which is then synthesized into an audible acoustic signal using text-to-speech. The synthesized signal is rendered on the smart device speaker as escape guidance for the user. The results are analyzed and discussed from the viewpoints of visual element selection, exit sign appearance design, and exit sign placement in the building, and can serve as a common reference for wayfinder systems.
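
A minimal sketch of the described pipeline is given below: template matching locates the exit sign, a hue back-projection plus CAMShift tracks it across frames, and a print call stands in for the text-to-speech guidance. File names, the matching threshold, and histogram settings are placeholders, not values from the paper.

```python
# Locate an exit sign by template matching, then track it with CAMShift.
import cv2

def find_sign(frame_gray, template_gray, threshold=0.7):
    # Normalized cross-correlation; the best match above the threshold is the sign.
    result = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val < threshold:
        return None
    h, w = template_gray.shape
    return (max_loc[0], max_loc[1], w, h)  # (x, y, w, h) initial search window

def track_sign(video_path, template_path):
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    window = find_sign(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), template)
    if window is None:
        return
    # Hue histogram of the detected region drives the CAMShift back-projection.
    x, y, w, h = window
    hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        _, window = cv2.CamShift(backproj, window, criteria)
        print("EXIT sign at", window)  # stands in for the text-to-speech guidance
    cap.release()
```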

Development of Mobile Camera Vision System and Construction of Wired/Wireless Integrated ERP System (모바일 카메라 비전 시스템 개발과 유.무선 통합 ERP 시스템 구축)

  • Lee, Hyae-Jung;Shin, Hyun-Cheol;Joung, Suck-Tae
    • Convergence Security Journal
    • /
    • v.7 no.2
    • /
    • pp.81-89
    • /
    • 2007
  • A mobile computing environment that gives employees access to internal enterprise information at any time improves business productivity and increases efficiency. In this paper, we design a model that can process ERP information in real time, easily and conveniently, by utilizing a wireless network, a PDA, and a mobile camera based on the mobile vision concept. By making practical use of mobile-based Enterprise Resource Planning, real-time brand images and product information are supplied, providing transaction information between seller and customer. Technical development and commercialization that exploit the advantages of mobile communication, such as mobility and portability, are required. In this paper, mobility is secured for the precious metals and jewelry field by using portable terminal equipment that takes advantage of mobile technology, and a mobile vision system is constructed that supports both photography and barcode scanning with the mobile camera.
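
As an illustration of the vision step mentioned at the end of the abstract, the sketch below captures a camera frame and decodes any barcode in it so the result could be matched against ERP item records. The pyzbar library and the webcam capture are assumptions used for illustration; the original system ran on a PDA camera.

```python
# Capture one frame and decode any barcodes found in it.
import cv2
from pyzbar import pyzbar

def scan_barcodes(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    results = pyzbar.decode(gray)              # all barcodes detected in the frame
    return [r.data.decode("utf-8") for r in results]

def capture_and_scan(camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        return []
    # In the described system these codes would be looked up against ERP item records.
    return scan_barcodes(frame)

print(capture_and_scan())
```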


An Estimation Methodology of Empirical Flow-density Diagram Using Vision Sensor-based Probe Vehicles' Time Headway Data (개별 차량의 비전 센서 기반 차두 시간 데이터를 활용한 경험적 교통류 모형 추정 방법론)

  • Kim, Dong Min;Shim, Jisup
    • The Journal of The Korea Institute of Intelligent Transport Systems
    • /
    • v.21 no.2
    • /
    • pp.17-32
    • /
    • 2022
  • This study explored an approach to estimating a flow-density diagram (FD) for a highway link by utilizing probe vehicles' time headway records. To study the empirical flow-density diagram (EFD), probe vehicles with vision sensors were recruited to collect driving records over nine months, and vision sensor data pre-processing and GIS-based map matching were implemented. We then examined the new EFDs to evaluate their validity against reference diagrams derived from loop detector traffic data. The probability distributions of time headway and distance headway, as well as the standard deviations of flow and density, were used in the examination. As a result, the main sources of estimation error turned out to be the limited number of probe vehicles and the bias of traffic flow states. We finally suggest a method to improve the accuracy of the EFD model.
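
The basic relation behind building a flow-density point from per-vehicle headway records is that flow is the reciprocal of the mean time headway and density is the reciprocal of the mean distance headway (time headway × speed). The sketch below applies that relation to a few invented sample values; it is not the study's estimation code.

```python
# Flow and density from per-vehicle headway samples.
import numpy as np

def flow_density_from_headways(time_headways_s, speeds_mps):
    time_headways_s = np.asarray(time_headways_s, dtype=float)
    speeds_mps = np.asarray(speeds_mps, dtype=float)
    distance_headways_m = time_headways_s * speeds_mps        # spacing to the lead vehicle
    flow_veh_per_h = 3600.0 / time_headways_s.mean()          # q = 1 / mean time headway
    density_veh_per_km = 1000.0 / distance_headways_m.mean()  # k = 1 / mean distance headway
    return flow_veh_per_h, density_veh_per_km

q, k = flow_density_from_headways([2.1, 1.8, 2.4, 2.0], [27.0, 25.5, 26.2, 28.1])
print(round(q), round(k, 1))  # e.g. ~1735 veh/h at ~18 veh/km
```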

End to End Autonomous Driving System using Out-layer Removal (Out-layer를 제거한 End to End 자율주행 시스템)

  • Seung-Hyeok Jeong;Dong-Ho Yun;Sung-Hun Hong
    • Journal of Internet of Things and Convergence
    • /
    • v.9 no.1
    • /
    • pp.65-70
    • /
    • 2023
  • In this paper, we propose an autonomous driving system using an end-to-end model to reduce lane departure and traffic light misrecognition in a vision sensor-based system. End-to-end learning can be extended to a variety of environmental conditions. Driving data were collected using a vision sensor-based model car. From the collected data, two datasets were composed: the original data and the data with outliers removed. With camera image data as input and speed and steering data as output, training was performed using an end-to-end model, and the reliability of the trained model was verified. The trained end-to-end model was then applied to the model car to predict the steering angle from image data. The driving results of the model car show that the model trained on the outlier-removed data performs better than the model trained on the original data.
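
A minimal sketch, under stated assumptions, of the two steps described above is given below: a simple z-score rule removes outlier samples from the driving log, and a small CNN maps a camera image to steering and speed. The network layout and the outlier rule are illustrative; the abstract does not specify them.

```python
# Outlier removal on the driving log plus one training step of an end-to-end model.
import torch
import torch.nn as nn

def remove_outliers(images, steering, speed, z_thresh=3.0):
    # Keep only samples whose steering value lies within z_thresh standard deviations.
    z = (steering - steering.mean()) / (steering.std() + 1e-8)
    keep = z.abs() < z_thresh
    return images[keep], steering[keep], speed[keep]

class EndToEndDriver(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(48, 2)  # outputs: [steering, speed]

    def forward(self, image):
        return self.head(self.features(image))

images = torch.randn(64, 3, 66, 200)  # camera frames
steering = torch.randn(64)
speed = torch.rand(64)
images, steering, speed = remove_outliers(images, steering, speed)
model = EndToEndDriver()
loss = nn.functional.mse_loss(model(images), torch.stack([steering, speed], dim=1))
loss.backward()  # one training step of the end-to-end model
```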

CV-Based Mobile Application to Enhance Real-time Safety Monitoring of Ladder Activities

  • Muhammad Sibtain Abbas;Nasrullah Khan;Syed Farhan Alam Zaidi;Rahat Hussain;Aqsa Sabir;Doyeop Lee;Chansik Park
    • International conference on construction engineering and project management
    • /
    • 2024.07a
    • /
    • pp.1057-1064
    • /
    • 2024
  • The construction industry has witnessed a concerning rise in ladder-related accidents, necessitating stricter safety measures. Recent statistics highlight the substantial number of accidents that occur while using ladders, emphasizing the need for preventative measures. While prior research has explored computer vision-based automatic monitoring for specific aspects such as ladder stability with and without outriggers, worker height, and helmet usage, this study extends existing frameworks by introducing a rule set for co-workers. The research methodology involves training a YOLOv5 model on a comprehensive dataset to detect both the worker on the ladder and the presence of co-workers in real time. The aim is to enable smooth integration of the detector into a mobile application, serving as a portable real-time monitoring tool for safety managers. This mobile application functions as a general safety tool, considering not only conventional risk factors but also ensuring the presence of a co-worker when a worker reaches a specific height. The application offers users an intuitive interface, utilizing the device's camera to identify and verify the presence of co-workers during ladder activities. By combining computer vision technology with mobile applications, this study presents an innovative approach to ladder safety that prioritizes real-time, on-site co-worker verification, thereby significantly reducing the risk of accidents in construction environments. With an overall mean average precision (mAP) of 97.5 percent, the trained model demonstrates its effectiveness in detecting unsafe worker behavior within a construction environment.
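
The rule-checking step described above can be sketched as follows: run a YOLOv5 detector on a frame and raise an alert when a worker on a ladder is detected without a co-worker. The class names and the use of the public yolov5s weights are assumptions for illustration; the study trains its own model on a custom dataset.

```python
# Run a YOLOv5 detector and apply the co-worker rule to its detections.
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)

def check_ladder_rule(frame, worker_class="worker_on_ladder", coworker_class="co_worker"):
    detections = model(frame).pandas().xyxy[0]  # one row per detected object
    labels = set(detections["name"])
    worker_present = worker_class in labels
    coworker_present = coworker_class in labels
    if worker_present and not coworker_present:
        return "UNSAFE: worker on ladder without a co-worker"
    return "SAFE" if worker_present else "NO LADDER ACTIVITY"

# Usage: pass an image path, URL, or numpy frame from the phone camera.
print(check_ladder_rule("site_frame.jpg"))
```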

New Medical Image Fusion Approach with Coding Based on SCD in Wireless Sensor Network

  • Zhang, De-gan;Wang, Xiang;Song, Xiao-dong
    • Journal of Electrical Engineering and Technology
    • /
    • v.10 no.6
    • /
    • pp.2384-2392
    • /
    • 2015
  • The technical development and practical application of big data for health is a hot topic under the banner of big data, and big-data medical image fusion is one of its key problems. A new fusion approach with coding based on the Spherical Coordinate Domain (SCD), designed for big-data medical images in a Wireless Sensor Network (WSN), is proposed in this paper. In this approach, the three high-frequency coefficient sub-bands in the wavelet domain of a medical image are pre-processed; this pre-processing strategy reduces the redundancy of big-data medical images. First, the high-frequency coefficients are transformed to the spherical coordinate domain to reduce the correlation within the same scale. Then, a multi-scale model product (MSMP) is used to control the shrinkage function so that small wavelet coefficients and some noise are removed. The high-frequency parts in the spherical coordinate domain are coded by an improved SPIHT algorithm. Finally, based on the multi-scale edges of the medical images, the fused image is reconstructed. Experimental results indicate that the novel approach is effective and very useful for the transmission of big-data medical images, especially in wireless environments.
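
As an illustration of the first step described above, the sketch below takes the three high-frequency wavelet sub-bands of an image and maps each coefficient triplet (horizontal, vertical, diagonal) into spherical coordinates (r, θ, φ), which is the decorrelation idea the abstract refers to. The wavelet choice is an assumption, and the shrinkage and SPIHT coding stages are omitted.

```python
# Map the three high-frequency wavelet sub-bands into spherical coordinates.
import numpy as np
import pywt

def highfreq_to_spherical(image):
    _, (cH, cV, cD) = pywt.dwt2(image.astype(float), "db2")
    r = np.sqrt(cH**2 + cV**2 + cD**2)  # magnitude of each coefficient triplet
    theta = np.arccos(np.divide(cD, r, out=np.zeros_like(r), where=r > 0))
    phi = np.arctan2(cV, cH)            # angle in the (horizontal, vertical) plane
    return r, theta, phi

image = np.random.rand(128, 128) * 255  # stand-in for a medical slice
r, theta, phi = highfreq_to_spherical(image)
print(r.shape, theta.min() >= 0, phi.shape)  # all three arrays share one shape
```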