• Title/Summary/Keyword: 이동객체인식 (moving object recognition)


GIS Information Generation for Electric Mobility Aids Based on Object Recognition Model (객체 인식 모델 기반 전동 이동 보조기용 GIS 정보 생성)

  • Je-Seung Woo;Sun-Gi Hong;Dong-Seok Park;Jun-Mo Park
    • Journal of the Institute of Convergence Signal Processing / v.23 no.4 / pp.200-208 / 2022
  • In this study, an automatic information collection system and a geographic information construction algorithm for the transportation disadvantaged using electric mobility aids are implemented with an object recognition model. The system recognizes objects that the user encounters while moving, acquires their coordinate information, and provides a route selection map improved over the existing geographic information for the disabled. Data collection consists of four layers, including the HW layer: image and location information are collected, transmitted to the server, recognized, and classified to extract the data needed to generate geographic information. A driving experiment was conducted in an actual barrier-free zone to confirm how efficiently the algorithm collects real data and generates geographic information. Geographic information processing throughput was 70.92 EA/s in the first run, 70.69 EA/s in the second, and 70.98 EA/s in the third, for an average of 70.86 EA/s over the three experiments, and it took about 4 seconds for new data to be reflected in the actual geographic information. The results confirm that mobility-impaired users of electric mobility aids can travel safely using geographic information that is provided faster than with existing services.
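As a rough sketch of the final step of such a pipeline (an assumption, not the authors' implementation), the snippet below turns one recognized object plus the GPS fix at capture time into a GeoJSON-style point record; the class names, coordinates, and record layout are illustrative.

```python
# Minimal sketch: one recognized obstacle + GPS position -> GIS point record.
import json
import time

def make_gis_record(obj_class: str, confidence: float, lat: float, lon: float) -> dict:
    """Build one GeoJSON-style point feature for a recognized obstacle."""
    return {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {
            "class": obj_class,       # e.g. "crosswalk", "braille_block" (assumed labels)
            "confidence": confidence,
            "timestamp": time.time(),
        },
    }

if __name__ == "__main__":
    # One detection paired with the GPS position at capture time (illustrative values).
    record = make_gis_record("crosswalk", 0.91, 35.1796, 129.0756)
    print(json.dumps(record, indent=2))
```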

Research on Object Detection Library Utilizing Spatial Mapping Function Between Stream Data In 3D Data-Based Area (3D 데이터 기반 영역의 stream data간 공간 mapping 기능 활용 객체 검출 라이브러리에 대한 연구)

  • Gyeong-Hyu Seok;So-Haeng Lee
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.3 / pp.551-562 / 2024
  • This study concerns a method and device for extracting and tracking moving objects. In particular, objects are extracted from difference images between adjacent frames, and the location information of each extracted object is transmitted continuously, providing accurate location information for at least one moving object as it is tracked. People tracking, which began as a means of interaction between people and computers, is used in many application fields such as robot learning, object counting, and surveillance systems. In the field of security systems in particular, cameras are used to recognize and track people in order to detect illegal activities automatically, and the importance of developing surveillance systems capable of such detection is increasing day by day.
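A minimal sketch of the frame-differencing idea described above, assuming OpenCV and a hypothetical input file; the threshold and minimum area are illustrative, and the authors' actual extraction and tracking logic may differ.

```python
# Extract moving regions from the difference between adjacent frames.
import cv2

def moving_object_centroids(prev_gray, curr_gray, thresh=25, min_area=500):
    diff = cv2.absdiff(prev_gray, curr_gray)                  # difference image
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.dilate(mask, None, iterations=2)               # close small gaps
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) < min_area:
            continue
        x, y, w, h = cv2.boundingRect(c)
        centroids.append((x + w // 2, y + h // 2))
    return centroids

cap = cv2.VideoCapture("input.mp4")        # hypothetical input file
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    print(moving_object_centroids(prev_gray, gray))           # stream object positions
    prev_gray = gray
```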

Real-Time Object Recognition Using Local Features (지역 특징을 사용한 실시간 객체인식)

  • Kim, Dae-Hoon;Hwang, Een-Jun
    • Journal of IKEEE / v.14 no.3 / pp.224-231 / 2010
  • Automatic detection of objects in images has been one of the core challenges in areas such as computer vision and pattern analysis. With the recent spread of personal mobile devices such as smartphones, this technology is now expected to run on them as well. Smartphones are equipped with devices such as a camera, GPS, and gyroscope and provide various services through user-friendly interfaces, but they struggle to deliver high recognition performance because of their limited system resources. In this paper, we propose a new scheme that improves object recognition performance based on pre-computation and simple local features. In the pre-processing stage, we find several representative parts from objects of similar type and classify them; we then extract features from each classified part and train them using regression functions. For a given query image, we first find candidate representative parts and compare them with the trained information to recognize objects. Experiments show that the proposed scheme achieves reasonable performance.
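The scheme relies on pre-computed descriptors and lightweight local features. The sketch below is a simplified stand-in: it pre-computes ORB descriptors for a reference object offline and matches them against a query image. The file name and match thresholds are assumptions, and the paper's representative-part and regression machinery is not reproduced here.

```python
# Lightweight local-feature recognition: offline descriptor pre-computation + matching.
import cv2

orb = cv2.ORB_create(nfeatures=500)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Pre-computation stage: describe each reference object once, offline.
ref = cv2.imread("reference_object.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical file
ref_kp, ref_desc = orb.detectAndCompute(ref, None)

def recognize(query_gray, min_good_matches=30):
    """Return True if the reference object appears in the query image."""
    kp, desc = orb.detectAndCompute(query_gray, None)
    if desc is None:
        return False
    matches = bf.match(ref_desc, desc)
    good = [m for m in matches if m.distance < 40]   # crude distance cut-off (assumed)
    return len(good) >= min_good_matches
```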

A Study on the Vehicle Black Box with Accident Prevention (사고예방이 가능한 차량용 블랙박스 시스템에 관한 연구)

  • Kim, Kang Hyo;Moon, Hae Min;Shin, Ju Hyun;Pan, Sung Bum
    • Smart Media Journal / v.4 no.1 / pp.39-43 / 2015
  • A vehicle black box helps investigate the cause of an accident by recording the time, video, and shock information at the moment of the accident. Recently, intelligent black boxes that add accident prevention to these existing functions have been studied. This paper proposes an algorithm for vehicle black boxes that helps prevent incidents likely to occur while a car is parked, such as robbery, theft, or hit-and-run. The proposed algorithm provides object recognition, face detection, and an alarm as an object approaches the car. Tests show that it can recognize an approaching object, identify it, and raise an alarm when needed according to the assessed risk level.
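A hedged sketch of a parking-mode monitoring loop in this spirit: the size of the moving region in a difference image is used as a stand-in for how close an approaching object is and is mapped to escalating risk levels. The thresholds and level names are assumptions; the paper's actual recognition, face detection, and risk criteria are not reproduced here.

```python
# Parking-mode loop: difference-image area ratio -> assumed risk level -> alarm.
import cv2

LEVELS = [(0.02, "notice"), (0.08, "warning"), (0.20, "alarm")]   # area ratios (assumed)

def risk_level(mask):
    ratio = cv2.countNonZero(mask) / float(mask.size)
    level = None
    for threshold, name in LEVELS:
        if ratio >= threshold:
            level = name
    return level

cap = cv2.VideoCapture(0)                       # black-box camera
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(prev_gray, gray)
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    level = risk_level(mask)
    if level:
        print(f"approaching object detected: {level}")   # trigger recording / alarm here
    prev_gray = gray
```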

Implementation of AI-based Object Recognition Model for Improving Driving Safety of Electric Mobility Aids (전동 이동 보조기기 주행 안전성 향상을 위한 AI기반 객체 인식 모델의 구현)

  • Je-Seung Woo;Sun-Gi Hong;Jun-Mo Park
    • Journal of the Institute of Convergence Signal Processing / v.23 no.3 / pp.166-172 / 2022
  • In this study, we photograph driving obstacles such as crosswalks, side spheres, manholes, braille blocks, partial ramps, temporary safety barriers, stairs, and inclined curbs that hinder or inconvenience the movement of vulnerable users of electric mobility aids. We develop an optimal AI model that classifies and automatically recognizes the photographed objects, and implement an algorithm that can efficiently determine obstacles in front of an electric mobility aid. To enable the AI model to learn object detection with high accuracy, objects were labeled as polygons when the dataset was built, and the model was developed with Mask R-CNN in the Detectron2 framework, which can detect objects labeled in polygon form. Image acquisition was carried out by two groups, the general public and the transportation disadvantaged, and image data from two areas of the test bed were secured. Regarding the training parameters of Mask R-CNN, the model trained with IMAGES_PER_BATCH: 2, BASE_LEARNING_RATE: 0.001, and MAX_ITERATION: 10,000 showed the highest performance at 68.532, allowing the user to recognize driving risks and obstacles quickly and accurately.
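The reported training setup maps naturally onto Detectron2's solver options (IMAGES_PER_BATCH, BASE_LEARNING_RATE, and MAX_ITERATION correspond to cfg.SOLVER.IMS_PER_BATCH, BASE_LR, and MAX_ITER). Below is a hedged configuration sketch, not the authors' code; the dataset name and class count are placeholders.

```python
# Mask R-CNN training configuration in Detectron2 with the reported hyperparameters.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.DATASETS.TRAIN = ("mobility_obstacles_train",)   # placeholder: registered polygon dataset
cfg.DATASETS.TEST = ()
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
cfg.SOLVER.IMS_PER_BATCH = 2          # IMAGES_PER_BATCH: 2
cfg.SOLVER.BASE_LR = 0.001            # BASE_LEARNING_RATE: 0.001
cfg.SOLVER.MAX_ITER = 10000           # MAX_ITERATION: 10,000
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 8   # assumed: one class per obstacle type listed above

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```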

Implementation of a Multi-Object Position Correction Algorithm in Images Using an Object Recognition Model and Ground Projection Technique (객체 인식 모델과 지면 투영기법을 활용한 영상 내 다중 객체의 위치 보정 알고리즘 구현)

  • Dong-Seok Park;Sun-Gi Hong;Jun-Mo Park
    • Journal of the Institute of Convergence Signal Processing / v.24 no.2 / pp.119-125 / 2023
  • In this study, we photograph driving obstacles such as crosswalks, side spheres, manholes, braille blocks, partial ramps, temporary safety barriers, stairs, and inclined curbs that hinder or inconvenience the movement of vulnerable users of electric mobility aids. We develop an optimal AI model that classifies and automatically recognizes the photographed objects, and implement an algorithm that can efficiently determine obstacles in front of an electric mobility aid. To enable the AI model to learn object detection with high accuracy, objects were labeled as polygons when the dataset was built, and the model was developed with Mask R-CNN in the Detectron2 framework, which can detect objects labeled in polygon form. Image acquisition was carried out by two groups, the general public and the transportation disadvantaged, and image data from two areas of the test bed were secured. Regarding the training parameters of Mask R-CNN, the model trained with IMAGES_PER_BATCH: 2, BASE_LEARNING_RATE: 0.001, and MAX_ITERATION: 10,000 showed the highest performance at 68.532, allowing the user to recognize driving risks and obstacles quickly and accurately.
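The Korean title names a ground projection technique for correcting object positions in images, which the abstract does not detail; the following is therefore only an assumed sketch of one common approach, where the bottom-center pixel of each detected box is mapped to ground-plane coordinates through a homography estimated from four known reference points. All coordinates and the helper name are illustrative.

```python
# Assumed ground-projection sketch: detection box -> ground-plane position via homography.
import cv2
import numpy as np

# Four image points (px) and their known ground-plane positions (m) -- illustrative values.
img_pts = np.float32([[420, 700], [860, 700], [980, 450], [300, 450]])
gnd_pts = np.float32([[-1.0, 2.0], [1.0, 2.0], [1.5, 6.0], [-1.5, 6.0]])
H, _ = cv2.findHomography(img_pts, gnd_pts)

def project_to_ground(box):
    """Map a detection box (x1, y1, x2, y2) to ground coordinates in metres."""
    x1, y1, x2, y2 = box
    foot = np.float32([[[(x1 + x2) / 2.0, y2]]])       # bottom-center pixel of the box
    gx, gy = cv2.perspectiveTransform(foot, H)[0, 0]
    return float(gx), float(gy)

print(project_to_ground((500, 300, 620, 690)))
```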

Practical Use of Assistive Technology for Blind People (시각장애인을 위한 보조기술의 활용)

  • Jang, Dai-Hyun
    • Proceedings of the Korean Society of Computer Information Conference / 2012.01a / pp.63-64 / 2012
  • This paper studies synesthesia-like perception, in which more flexible and abstract data is delivered to users so that they can perceive and feel objects on their own, and applies the results of that study. Using the open-source ReacTable, characteristic images that act as a kind of tag are encoded into specification codes, which are then applied to the recognition of actual numbers, characters, or objects, so that objects can be recognized and used in everyday life without visual information. In addition, images are transmitted to a server or portable terminal through ZigBee wireless video communication, which is affordable and takes both mobility and portability into account.
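The paper encodes characteristic tag-like images for recognition with ReacTable. As a rough stand-in, the sketch below detects OpenCV ArUco markers and reports their IDs, which could then be mapped to object names for non-visual feedback; the dictionary choice and input file are assumptions, not the paper's fiducial scheme.

```python
# Detect tag-like markers in a camera frame and report their IDs.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary)

frame = cv2.imread("camera_frame.jpg")                # hypothetical input frame
corners, ids, _ = detector.detectMarkers(frame)
if ids is not None:
    for marker_id in ids.flatten():
        print(f"recognized object tag {marker_id}")   # e.g. map ID -> spoken object name
```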


Supporting Intelligent Context-Awareness in Ubiquitous Sensor Networks with RFID (RFID 기반 유비쿼터스 센서 네트워크에서의 지능적 상황인지 지원)

  • Ko Kyone-Chul;Lee Dong-Wook;Ko Young-Bae
    • Proceedings of the Korean Information Science Society Conference / 2005.07a / pp.262-264 / 2005
  • A Radio Frequency Identification (RFID) system can identify objects accurately by reading the RFID tags attached to them with a reader, and can therefore provide important information for context awareness. However, because communication between tags and readers is single-hop, the system cannot cover the entire target area and so cannot provide the comprehensive information needed for context awareness. In this paper, we combine RFID with the recently proposed wireless sensor networks so that the RFID system can cover the entire target area, and we use this information together with the sensing data of the sensor network to propose an intelligent context-awareness support system for RFID-based wireless sensor networks. The proposed system not only exploits the object recognition capability of RFID but also adds networking functionality to the RFID system, enabling intelligent context awareness.
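As a rough illustration of the kind of fusion such a system aims at (an assumption, not the paper's design), the sketch below combines RFID tag reads reported by sensor nodes with each node's sensing data to derive a simple context decision; the SensorReport fields and the rule are hypothetical.

```python
# Toy fusion of RFID tag reads with sensor readings for a simple context decision.
from dataclasses import dataclass

@dataclass
class SensorReport:
    node_id: int
    tag_ids: list        # RFID tags seen by the reader attached to this node
    temperature: float   # one example of locally sensed data

def infer_context(reports):
    """Very small rule base: which tagged object is near which node, and is it abnormal."""
    context = []
    for r in reports:
        for tag in r.tag_ids:
            status = "overheated area" if r.temperature > 40.0 else "normal"
            context.append((tag, r.node_id, status))
    return context

reports = [SensorReport(1, ["TAG-0012"], 23.5), SensorReport(2, ["TAG-0044"], 45.1)]
print(infer_context(reports))
```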


Development of System for Real-Time Object Recognition and Matching using Deep Learning at Simulated Lunar Surface Environment (딥러닝 기반 달 표면 모사 환경 실시간 객체 인식 및 매칭 시스템 개발)

  • Jong-Ho Na;Jun-Ho Gong;Su-Deuk Lee;Hyu-Soung Shin
    • Tunnel and Underground Space / v.33 no.4 / pp.281-298 / 2023
  • Continuous research efforts are being devoted to unmanned mobile platforms for lunar exploration, and there is an ongoing demand for real-time information processing to accurately determine positioning and to map areas of interest on the lunar surface. To apply deep learning processing and analysis techniques to practical rovers, research on software integration and optimization is essential. In this study, a foundational investigation was conducted on real-time analysis of images of a virtual lunar base construction site, aimed at automatically quantifying spatial information of key objects. The work transitioned from an existing region-based object recognition algorithm to a bounding-box-based algorithm, improving object recognition accuracy and inference speed. To enable large-scale data-based training for object matching, the Batch Hard Triplet Mining technique was introduced, and both training and inference processes were optimized. Furthermore, an improved software system for object recognition and identical-object matching was integrated, along with visualization software that automatically matches identical objects within input images. Using video captured in the simulated environment for training and video of moving objects for inference, training and inference for identical-object matching were carried out successfully. The outcomes suggest the feasibility of building 3D spatial information from continuously captured video of mobile platforms and using it to position objects within regions of interest, and these findings are expected to contribute to an automated on-site system for video-based construction monitoring and control of key target objects at future lunar base construction sites.
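Batch Hard Triplet Mining, named above, selects for each anchor the hardest positive and hardest negative inside the current batch. The PyTorch sketch below is an assumed illustration of that technique, not the authors' implementation; the embedding size, margin, and labels are placeholders.

```python
# Batch-hard triplet loss: hardest positive and hardest negative per anchor within a batch.
import torch

def batch_hard_triplet_loss(embeddings, labels, margin=0.3):
    # Pairwise Euclidean distances between all embeddings in the batch.
    dist = torch.cdist(embeddings, embeddings, p=2)
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)

    # Hardest positive: farthest sample with the same label (excluding self).
    pos_dist = dist.masked_fill(~same | eye, float("-inf")).max(dim=1).values
    # Hardest negative: closest sample with a different label.
    neg_dist = dist.masked_fill(same, float("inf")).min(dim=1).values

    return torch.relu(pos_dist - neg_dist + margin).mean()

emb = torch.randn(8, 128)                       # 8 embeddings of matched objects (placeholder)
lbl = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])    # object identities (placeholder)
print(batch_hard_triplet_loss(emb, lbl))
```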

Recognition method of multiple objects for virtual touch using depth information (깊이 정보를 이용한 가상 터치에서 다중 객체 인식 방법)

  • Kwon, Soon-Kak;Lee, Dong-Seok
    • Journal of Korea Society of Industrial Information Systems / v.21 no.1 / pp.27-34 / 2016
  • In this paper, we propose a method for recognizing multi-touch in a virtual touch scheme. Compared with physical touch schemes, virtual touch has the advantage that only a simple depth camera needs to be installed, and it can be implemented at low cost because an object can be extracted exactly from nothing more than the difference in depth values between the object and the background. However, the accuracy of multi-touch implemented this way is lower. This paper presents a method that increases multi-touch accuracy through binarization, labelling, and object tracking for multi-object recognition. Simulation results show that the proposed method can support a variety of multi-touch events.
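A hedged sketch of the binarization and labelling steps described above, using an OpenCV connected-components pass over a depth-difference image; the touch-depth threshold, minimum blob area, and background-depth source are assumptions, and the tracking step is omitted.

```python
# Binarize the depth difference against the background surface, then label touch blobs.
import cv2
import numpy as np

TOUCH_MM = 30   # assumed: a point within 30 mm of the background surface counts as a touch

def touch_points(depth_frame, background_depth, min_area=80):
    # Binarization: pixels whose depth is slightly closer than the stored background surface.
    diff = background_depth.astype(np.int32) - depth_frame.astype(np.int32)
    mask = ((diff > 0) & (diff < TOUCH_MM)).astype(np.uint8) * 255
    # Labelling: keep the centroid of each connected region large enough to be a touch.
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```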