• Title/Summary/Keyword: vision-based technology


Scientometrics-based R&D Topography Analysis to Identify Research Trends Related to Image Segmentation (이미지 분할(image segmentation) 관련 연구 동향 파악을 위한 과학계량학 기반 연구개발지형도 분석)

  • Young-Chan Kim;Byoung-Sam Jin;Young-Chul Bae
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.27 no.3
    • /
    • pp.563-572
    • /
    • 2024
  • Image processing and computer vision technologies are becoming increasingly important in a variety of application fields that require sophisticated techniques and tools for image analysis. In particular, image segmentation plays an important role in image analysis. In this study, to identify recent research trends in image segmentation techniques, we used the Web of Science (WoS) database to analyze the R&D topography based on the network structure of the author-keyword co-occurrence matrix. The analysis of research articles on image segmentation published from 2015 to 2023 shows that R&D in this field is largely concentrated in four areas: (1) research on collecting and preprocessing image data to build higher-performance image segmentation models, (2) research on image segmentation using statistics-based models or machine learning algorithms, (3) research on image segmentation for medical image analysis, and (4) deep learning-based image segmentation R&D. The scientometrics-based analysis performed in this study not only maps the trajectory of R&D related to image segmentation but can also serve as a marker for future exploration in this dynamic field.
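The core data structure in this kind of scientometric analysis, an author-keyword co-occurrence matrix, can be sketched as follows. This is a minimal illustration only, not the authors' actual pipeline; the keyword lists are hypothetical:

```python
from collections import Counter
from itertools import combinations

def cooccurrence(keyword_lists):
    """Count how often each pair of author keywords appears in the same article."""
    pairs = Counter()
    for kws in keyword_lists:
        # sort so (a, b) and (b, a) are counted as the same edge
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical author-keyword lists from three articles
articles = [
    ["image segmentation", "deep learning", "medical imaging"],
    ["image segmentation", "deep learning"],
    ["image segmentation", "machine learning"],
]
matrix = cooccurrence(articles)
```

The resulting edge weights (e.g. `matrix[("deep learning", "image segmentation")] == 2`) form the network whose structure a topography analysis would then cluster.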

Research on Deep Learning-Based Methods for Determining Negligence through Traffic Accident Video Analysis (교통사고 영상 분석을 통한 과실 판단을 위한 딥러닝 기반 방법 연구)

  • Seo-Young Lee;Yeon-Hwi You;Hyo-Gyeong Park;Byeong-Ju Park;Il-Young Moon
    • Journal of Advanced Navigation Technology
    • /
    • v.28 no.4
    • /
    • pp.559-565
    • /
    • 2024
  • Research on autonomous vehicles is being actively conducted. As autonomous vehicles emerge, there will be a transitional period in which traditional and autonomous vehicles coexist, potentially leading to a higher accident rate. Currently, when a traffic accident occurs, the fault ratio is determined according to the criteria set by the General Insurance Association of Korea. However, the time required to investigate the type of accident is substantial. Additionally, fault ratio disputes are increasing, with requests for reconsideration even after the fault ratio has been determined. To reduce these temporal and material costs, we propose a deep learning model that automatically determines fault ratios. In this study, we aimed to determine fault ratios from accident video through an image classification model based on ResNet-18 and video action recognition using TSN. If this model were commercialized, it could significantly reduce the time required to measure fault ratios. Moreover, it provides an objective metric for fault ratios that can be offered to the parties involved, potentially alleviating fault ratio disputes.
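The TSN-style video recognition mentioned above rests on a simple idea: sample one snippet from each of K temporal segments of the video and average the per-snippet class scores into a video-level prediction. A minimal sketch of that segmental consensus, with hypothetical per-frame scores standing in for a trained network's outputs:

```python
def tsn_consensus(frame_scores, num_segments=3):
    """TSN-style segmental consensus: split the frame sequence into equal
    segments, take one representative score vector per segment (here: the
    middle frame), and average them into a video-level score."""
    n = len(frame_scores)
    picked = []
    for k in range(num_segments):
        start = k * n // num_segments
        end = (k + 1) * n // num_segments
        picked.append(frame_scores[(start + end) // 2])
    num_classes = len(picked[0])
    return [sum(s[c] for s in picked) / num_segments for c in range(num_classes)]

# Hypothetical per-frame scores for two accident classes
scores = [[0.9, 0.1], [0.8, 0.2], [0.7, 0.3], [0.2, 0.8], [0.1, 0.9], [0.3, 0.7]]
video_score = tsn_consensus(scores, num_segments=3)
```

Sampling across segments rather than consecutive frames is what lets the model see the whole accident sequence at low cost.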

Color Image Query Using Hierarchical Search by Region of Interest with Color Indexing

  • Sombutkaew, Rattikorn;Chitsobhuk, Orachat
    • Proceedings of the Institute of Control, Robotics and Systems Conference
    • /
    • 2004.08a
    • /
    • pp.810-813
    • /
    • 2004
  • Indexing and retrieving images from large and varied collections using image content as a key is a challenging and important problem in computer vision applications. In this paper, a color Content-Based Image Retrieval (CBIR) system using hierarchical Region of Interest (ROI) query and indexing is presented. During the indexing process, the ROIs of every image in the image database are first extracted using a region-based image segmentation technique; the JSEG approach is selected for this task in order to create color-texture regions. Color features in the form of histograms and correlograms are then extracted from each segmented region. Finally, the features are stored in the database as the key to retrieve relevant images. In the retrieval system, users are allowed to select an ROI directly over a sample or submitted image, and the query process then focuses on the content of the selected ROI in order to find images containing similar regions in the database. A hierarchical region-of-interest query with a two-level search is performed to retrieve similar images. In the first level, the most important regions, usually the large regions at the center of the user's query, are used to retrieve images having similar regions using a static search; this ensures that all images containing the most important regions are retrieved. In the second level, all remaining regions in the user's query are used to search among the images retrieved in the first level. Experimental results using the indexing technique show good retrieval performance over a variety of image collections, as well as a great reduction in searching time.
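The per-region color features described above can be illustrated with a toy comparison. This is a sketch only: the paper pairs JSEG segmentation with histograms and correlograms, while this snippet shows just a quantized histogram and the histogram-intersection similarity between two hypothetical regions:

```python
def color_histogram(pixels, bins=4):
    """Quantize 8-bit intensity values into `bins` buckets and normalize."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    total = len(pixels)
    return [h / total for h in hist]

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: 1.0 means identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Two hypothetical regions with nearly the same color distribution
region_a = color_histogram([10, 20, 200, 210, 220, 230])
region_b = color_histogram([15, 25, 205, 215, 225, 235])
similarity = histogram_intersection(region_a, region_b)
```

Ranking database regions by this similarity against the user-selected ROI is the essence of the first-level search.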


Deep Learning-based Action Recognition using Skeleton Joints Mapping (스켈레톤 조인트 매핑을 이용한 딥 러닝 기반 행동 인식)

  • Tasnim, Nusrat;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology
    • /
    • v.24 no.2
    • /
    • pp.155-162
    • /
    • 2020
  • Recently, with the development of computer vision and deep learning technology, research on human action recognition has been actively conducted for video analysis, video surveillance, interactive multimedia, and human-machine interaction applications. Diverse techniques have been introduced by many researchers for human action understanding and classification using RGB images, depth images, skeleton data, and inertial data. However, skeleton-based action discrimination is still a challenging research topic for human-machine interaction. In this paper, we propose an end-to-end mapping of skeleton joints for an action to generate a spatio-temporal image, the so-called dynamic image. Then, an efficient deep convolutional neural network is devised to perform classification among the action classes. We use the publicly accessible UTD-MHAD skeleton dataset to evaluate the performance of the proposed method. As a result of the experiment, the proposed system shows better performance than existing methods, with a high accuracy of 97.45%.
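One common way to turn a skeleton sequence into a single CNN-ready image, in the spirit of the dynamic-image idea above, is to stack each frame's flattened joint coordinates as one image row. The sketch below assumes this simple encoding; the paper's exact mapping may differ:

```python
import numpy as np

def joints_to_dynamic_image(frames):
    """Map a skeleton sequence into one 2D 'image': row t holds the
    flattened (x, y) joint coordinates of frame t, min-max normalized
    to [0, 255] so it can be fed to a CNN."""
    arr = np.asarray(frames, dtype=float)    # shape (T, J, 2)
    flat = arr.reshape(arr.shape[0], -1)     # shape (T, 2J)
    lo, hi = flat.min(), flat.max()
    img = np.round((flat - lo) / ((hi - lo) or 1.0) * 255.0)
    return img.astype(np.uint8)

# Hypothetical 4-frame sequence of 3 joints
seq = [[[0, 0], [1, 1], [2, 2]],
       [[0, 1], [1, 2], [2, 3]],
       [[0, 2], [1, 3], [2, 4]],
       [[0, 3], [1, 4], [2, 5]]]
dyn = joints_to_dynamic_image(seq)   # one (T, 2J) image per action clip
```

Because both the spatial layout of joints and their temporal evolution land in one array, an ordinary 2D convolutional network can classify the action.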

Mobile Robot Control using Hand Shape Recognition (손 모양 인식을 이용한 모바일 로봇제어)

  • Kim, Young-Rae;Kim, Eun-Yi;Chang, Jae-Sik;Park, Se-Hyun
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.45 no.4
    • /
    • pp.34-40
    • /
    • 2008
  • This paper presents a vision-based walking robot control system using hand shape recognition. To recognize hand shapes, the hand boundary needs to be tracked accurately in images obtained from a moving camera. For this, we use an active contour model-based tracking approach with mean shift, which reduces the dependency of the active contour model on the location of the initial curve. The proposed system is composed of four modules: a hand detector, a hand tracker, a hand shape recognizer, and a robot controller. The hand detector detects a skin-color region with a specific shape as the hand in an image. Hand tracking is then performed using an active contour model with mean shift, after which hand shape recognition is performed using Hu moments. To assess the validity of the proposed system, we applied it to a walking robot, RCB-1. The experimental results show the effectiveness of the proposed system.
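The shape descriptor used in the recognizer module (Hu's invariant moments) is computed from central image moments; the sketch below derives the first two invariants for a binary hand mask and shows that they do not change when the region is translated in the frame. This is an illustrative implementation, not the paper's code:

```python
import numpy as np

def hu_first_two(mask):
    """First two Hu invariants of a binary mask (translation-invariant)."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)                       # zeroth moment = area
    cx, cy = xs.mean(), ys.mean()       # centroid
    # second-order central moments
    mu20 = ((xs - cx) ** 2).sum()
    mu02 = ((ys - cy) ** 2).sum()
    mu11 = ((xs - cx) * (ys - cy)).sum()
    # scale-normalized moments: eta_pq = mu_pq / m00^(1 + (p+q)/2)
    n20, n02, n11 = mu20 / m00**2, mu02 / m00**2, mu11 / m00**2
    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    return h1, h2

shape = np.zeros((20, 20), dtype=int)
shape[2:6, 3:10] = 1                           # a small rectangular "hand" blob
shifted = np.roll(np.roll(shape, 8, 0), 5, 1)  # same blob, translated
h_orig = hu_first_two(shape)
h_shift = hu_first_two(shifted)                # identical despite translation
```

This invariance is what lets the recognizer compare hand shapes regardless of where the tracked contour sits in the image.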

A Method for Eliminating Aiming Error of Unguided Anti-Tank Rocket Using Improved Target Tracking (향상된 표적 추적 기법을 이용한 무유도 대전차 로켓의 조준 오차 제거 방법)

  • Song, Jin-Mo;Kim, Tae-Wan;Park, Tai-Sun;Do, Joo-Cheol;Bae, Jong-sue
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.21 no.1
    • /
    • pp.47-60
    • /
    • 2018
  • In this paper, we propose a method for eliminating the aiming error of an unguided anti-tank rocket using improved target tracking. Since predictive fire is necessary to hit moving targets with unguided rockets, a method was previously proposed to estimate the position and velocity of the target using a fire control system. However, such a method has the problem that the hit rate may be lowered by the shooter's aiming error. To solve this problem, we use an image-based target tracking method to correct the error caused by the shooter. We also propose a robust tracking method based on TLD (Tracking-Learning-Detection) that considers the characteristics of the FCS (Fire Control System) devices. To verify the performance of the proposed algorithm, we measured the target velocity using GPS and compared it with our estimate. The results show that our method is robust to the shooter's aiming error.
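Image-based correction of this kind can be illustrated with a toy tracker: relocate the target patch in the next frame and report its pixel position, whose offset from the aim point is the correction to apply. This sketch uses brute-force sum-of-squared-differences matching on synthetic data; the paper's actual tracker is TLD-based:

```python
import numpy as np

def match_offset(frame, template):
    """Slide `template` over `frame` and return the (row, col) of the
    best match by sum of squared differences (SSD)."""
    th, tw = template.shape
    fh, fw = frame.shape
    best, best_pos = None, (0, 0)
    for r in range(fh - th + 1):
        for c in range(fw - tw + 1):
            ssd = ((frame[r:r+th, c:c+tw] - template) ** 2).sum()
            if best is None or ssd < best:
                best, best_pos = ssd, (r, c)
    return best_pos

rng = np.random.default_rng(0)
frame = rng.random((30, 30))            # synthetic next frame
template = frame[12:17, 8:14].copy()    # "target" patch located at (12, 8)
found = match_offset(frame, template)
```

A TLD tracker replaces this exhaustive search with a learned detector plus a short-term tracker, which is what makes it robust over long sequences.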

Active Facial Tracking for Fatigue Detection (피로 검출을 위한 능동적 얼굴 추적)

  • 박호식;정연숙;손동주;나상동;배철수
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2004.05b
    • /
    • pp.603-607
    • /
    • 2004
  • Vision-based driver fatigue detection is one of the most promising commercial applications of facial expression recognition technology, and facial feature tracking is its primary technical issue. Current facial tracking technology faces three challenges: (1) detection failure of some or all features due to a variety of lighting conditions and head motions; (2) tracking of multiple non-rigid objects; and (3) feature occlusion when the head is at oblique angles. In this paper, we propose a new active approach. First, an active IR sensor is used to robustly detect pupils under variable lighting conditions. The detected pupils are then used to predict the head motion. Furthermore, face movement is assumed to be locally smooth so that each facial feature can be tracked with a Kalman filter. The simultaneous use of the pupil constraint and Kalman filtering greatly increases the prediction accuracy for each feature position. Feature detection is accomplished in the Gabor space in the vicinity of the predicted location. Local graphs consisting of identified features are extracted and used to capture the spatial relationships among detected features. Finally, a graph-based reliability propagation is proposed to tackle the occlusion problem and verify the tracking results. The experimental results show the validity of our active approach to real-life facial tracking under variable lighting conditions, head orientations, and facial expressions.
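Under the locally-smooth motion assumption above, the Kalman prediction step amounts to a constant-velocity state model. A minimal 1D sketch of tracking a noisy feature coordinate (the noise parameters are illustrative, not from the paper):

```python
import numpy as np

def kalman_track(measurements, q=1e-3, r=0.25):
    """1D constant-velocity Kalman filter over noisy feature positions.
    State: [position, velocity]; returns filtered position estimates."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (dt = 1)
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)                        # process noise covariance
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    out = []
    for z in measurements:
        # predict: propagate state and covariance one step
        x = F @ x
        P = F @ P @ F.T + Q
        # update: blend prediction with the new measurement
        S = H @ P @ H.T + r
        K = (P @ H.T) / S
        x = x + (K * (z - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return out

# Feature moving at a constant 2 px/frame with small alternating noise
truth = [2.0 * t for t in range(30)]
noisy = [p + 0.3 * ((-1) ** t) for t, p in enumerate(truth)]
est = kalman_track(noisy)
```

In the paper's setting, the predicted position from this step seeds the Gabor-space search, which only has to examine a small neighborhood instead of the whole frame.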


Convergence-Information Strategy between Big Data and Wearable Computing (빅데이터와 웨어러블 컴퓨팅의 융합정보화 전략)

  • Lee, Tae-Gyu;Shin, Seong-Yoon;Lee, Hyun-Chang
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2014.05a
    • /
    • pp.218-220
    • /
    • 2014
  • The data economy era, in which big data plays the pivotal role of creating new value and solving various problems, is rapidly approaching. This paper aims at designing Korea's new strategic direction for informatization in the big data age. For this purpose, the paradigm shift in our society and the new role of IT, together with a discussion of open platforms and big data focused on their potential and new possibilities, are analyzed, leading to the conclusion that big data will be a main engine for creating new value. Based on the results of the analysis, three strategic directions are designed. The first concerns the national vision, for which a 'data analysis-based creative nation' is suggested. The second concerns catalysts, for which a 'smart government utilizing the power of big data' is proposed in detail. The third concerns a sustainable leading mechanism, for which 'collaborative governance between stakeholders' is suggested.


Mechanistic insight into the progressive retinal atrophy disease in dogs via pathway-based genome-wide association analysis

  • Sheet, Sunirmal;Krishnamoorthy, Srikanth;Park, Woncheoul;Lim, Dajeong;Park, Jong-Eun;Ko, Minjeong;Choi, Bong-Hwan
    • Journal of Animal Science and Technology
    • /
    • v.62 no.6
    • /
    • pp.765-776
    • /
    • 2020
  • The retinal degenerative disease progressive retinal atrophy (PRA) is a major cause of vision impairment in the canine population. Canine PRA represents an inherently heterogeneous category of retinal dystrophies with strong resemblance to human retinitis pigmentosa. Even though much is known about the biology of PRA, the intricate connections among the genetic loci, genes, and pathways associated with this disease in dogs remain largely unknown. Therefore, we performed a genome-wide association study (GWAS) to identify susceptibility single nucleotide polymorphisms (SNPs) for PRA. The GWAS used a case-control association analysis on a PRA dataset of 129 dogs and 135,553 markers. Gene-set and pathway analyses were then conducted. A total of 1,114 markers associated with the PRA trait at p < 0.01 were extracted and mapped to 640 unique genes, from which 35 significantly (p < 0.05) enriched gene ontology (GO) terms and 5 Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways containing these genes were selected. In particular, GO terms for the apoptotic process, homophilic cell adhesion, calcium ion binding, and the endoplasmic reticulum, as well as pathways related to focal adhesion, cyclic guanosine monophosphate (cGMP)-protein kinase G signaling, and axon guidance, were most likely associated with PRA in dogs. These data could provide new insight for further research on the identification of potential genes and causative pathways for PRA in dogs.
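The per-marker association test at the heart of such a case-control GWAS can be sketched as a chi-square statistic on a 2x2 allele-count table. The counts below are hypothetical; real pipelines add quality control, multiple-testing correction, and population-structure adjustment:

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic (1 degree of freedom) for a 2x2 table:
              allele A   allele a
    cases        a          b
    controls     c          d
    """
    n = a + b + c + d
    denom = (a + b) * (c + d) * (a + c) * (b + d)
    if denom == 0:
        return 0.0
    return n * (a * d - b * c) ** 2 / denom

# Hypothetical allele counts at one SNP: cases enriched for allele A
stat = chi_square_2x2(60, 40, 30, 70)
# Identical allele frequencies in cases and controls -> no association
balanced = chi_square_2x2(50, 50, 50, 50)
```

Markers whose statistic exceeds the chosen significance threshold (3.84 corresponds to p = 0.05 at 1 df) are the ones carried forward into the gene-set and pathway enrichment steps.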

Object Detection Based on Deep Learning Model for Two Stage Tracking with Pest Behavior Patterns in Soybean (Glycine max (L.) Merr.)

  • Yu-Hyeon Park;Junyong Song;Sang-Gyu Kim;Tae-Hwan Jun
    • Proceedings of the Korean Society of Crop Science Conference
    • /
    • 2022.10a
    • /
    • pp.89-89
    • /
    • 2022
  • Soybean (Glycine max (L.) Merr.) is a representative food resource. To preserve soybean production, it is necessary to protect soybean yield and seed quality from various pests and diseases. Riptortus pedestris is a well-known insect pest that causes the greatest loss of soybean yield in South Korea. This pest not only directly reduces yields but also causes disorders and diseases in plant growth. Unfortunately, no resistant soybean resources have been reported. Therefore, it is necessary to identify the distribution and movement of Riptortus pedestris at an early stage to reduce the damage it causes. Conventionally, the diagnosis of agronomic traits related to pest outbreaks has been performed by the human eye; however, because human vision is subjective and inconsistent, this approach is time-consuming, labor-intensive, and requires the assistance of specialists. Therefore, the responses and behavior patterns of Riptortus pedestris to the scent of mixture R were visualized with a 3D model from the perspective of artificial intelligence. The movement patterns of Riptortus pedestris were analyzed using time-series image data, and classification was performed through visual analysis based on a deep learning model. In object tracking implemented with the YOLO series of models, the movement paths of the pests show a negative reaction to mixture R in the video scenes. As a result of 3D modeling using the x, y, and z axes of the tracked objects, 80% of the subjects showed behavioral patterns consistent with the mixture R treatment. These studies are also being conducted in soybean fields, and it will be possible to preserve soybean yield by applying a pest control platform at an early stage of soybean growth.
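The second-stage behavioral classification described above can be reduced to a simple trajectory rule: a tracked subject counts as avoiding the treatment if its net 3D displacement points away from the scent source. The coordinates below are hypothetical; in the study they would come from YOLO-tracked positions:

```python
import math

def moves_away(track, source):
    """True if the subject ends farther from `source` than it started
    (net avoidance over the tracked trajectory)."""
    return math.dist(track[-1], source) > math.dist(track[0], source)

source = (0.0, 0.0, 0.0)   # hypothetical location of the mixture R scent
tracks = [
    [(1, 1, 0), (2, 2, 0), (4, 3, 1)],   # moves away
    [(5, 5, 1), (6, 6, 1), (8, 7, 2)],   # moves away
    [(4, 4, 0), (3, 3, 0), (1, 1, 0)],   # approaches
]
avoid_fraction = sum(moves_away(t, source) for t in tracks) / len(tracks)
```

Aggregating this per-subject decision over all tracked individuals yields a population-level figure analogous to the 80% consistency reported in the abstract.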
