• Title/Summary/Keyword: Object-based Video Recognition

Method of extracting context from media data by using a video sharing site

  • Kondoh, Satoshi;Ogawa, Takeshi
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2009.01a
    • /
    • pp.709-713
    • /
    • 2009
  • Recently, much research applying data acquired from devices such as cameras and RFID tags to context-aware services has been performed in the fields of Life-Log and sensor networks. A variety of analytical techniques have been proposed to recognize information from the raw data, because video and audio data contain a larger volume of information than other sensor data. However, because these techniques generally use supervised learning, manually re-watching a huge amount of media data has been necessary to create supervised data whenever a class is updated or a new class is added. As a result, in most cases applications could use only a recognition function based on fixed supervised data. We therefore propose a method of acquiring supervised data from a video sharing site where users comment on video scenes; such sites are remarkably popular, so many comments are generated. In the first step of this method, words with a high utility value are extracted by filtering the comments about a video. Second, a set of time-series feature data is calculated by applying feature-extraction functions to the media data. Finally, our learning system calculates the correlation coefficient between these two kinds of data, and the coefficient is stored in the system's database. Other applications can obtain a recognition function that generates collective intelligence based on Web comments by applying this correlation coefficient to new media data. In addition, flexible recognition that adapts to new objects becomes possible by regularly acquiring and learning both media data and comments from a video sharing site, while reducing manual work. As a result, recognition of not only the name of the observed object but also indirect information, e.g., the impression of or the action toward the object, is enabled.

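
The correlation step in the abstract above can be sketched in a few lines; this is a minimal illustration under assumed data (the word/feature series and the name `pearson` are not from the paper):

```python
import math

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length series.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: per-scene occurrence counts of the word "goal"
# in comments, and a per-scene motion-energy feature of the video.
word_counts = [0, 3, 1, 0, 5, 0, 2]
motion_feat = [0.1, 0.9, 0.4, 0.2, 1.0, 0.1, 0.6]

r = pearson(word_counts, motion_feat)
# A high r suggests the word is a useful (weak) label for the feature,
# so (word, feature, r) would be stored in the system's DB.
```

Applying stored coefficients to new media data then amounts to looking up which words correlate strongly with the features that fire on that data.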
A Study on Face Recognition Applying the Image-Object spFACS ASM Algorithm (영상객체 spFACS ASM 알고리즘을 적용한 얼굴인식에 관한 연구)

  • Choi, Byungkwan
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.12 no.4
    • /
    • pp.1-12
    • /
    • 2016
  • Digital imaging technology has developed into a state-of-the-art IT convergence and composite industry beyond the limits of the multimedia industry; in the field of smart object recognition in particular, various face recognition techniques have been actively studied in conjunction with mobile phones. Recently, face recognition has evolved, through object recognition technology, into intelligent video detection and recognition technology, and object detection and recognition processes are being applied to IP cameras, making face recognition based on image object recognition an active research area. In this paper, we first review technology trends and the technical elements required for recognizing human image objects, and propose a smile detection scheme based on spFACS (Smile Progress Facial Action Coding System). The scheme (1) applies the ASM algorithm, suggesting a way to effectively evaluate psychological states through image objects, and (2) detects the tooth area from the recognized face object, demonstrating the effect of extracting feature points for recognizing a person's smiling expression.
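
As a rough illustration of the expression-scoring idea, a smile score could be computed from mouth landmarks such as those an ASM fit provides; the landmark choice and weighting below are hypothetical, not the paper's spFACS rules:

```python
# Hypothetical sketch of a smile score from ASM-style mouth landmarks.
# Points are (x, y) with y increasing downward, as in image coordinates.
def smile_score(left_corner, right_corner, upper_lip, lower_lip):
    # Wide, open mouths (exposing the tooth area) score higher;
    # corners lifted above the lip midline add to the score.
    width = right_corner[0] - left_corner[0]
    opening = lower_lip[1] - upper_lip[1]
    lip_mid_y = (upper_lip[1] + lower_lip[1]) / 2
    corner_lift = lip_mid_y - (left_corner[1] + right_corner[1]) / 2
    return opening / width + max(corner_lift, 0.0) / width

neutral = smile_score((40, 60), (60, 60), (50, 58), (50, 62))
smiling = smile_score((35, 58), (65, 58), (50, 56), (50, 66))
# Under this toy geometry, smiling scores higher than neutral.
```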

Vanishing point-based 3D object detection method for improving traffic object recognition accuracy

  • Jeong-In, Park
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.1
    • /
    • pp.93-101
    • /
    • 2023
  • In this paper, we propose a method of creating a 3D bounding box for an object using vanishing points, to increase the accuracy of object recognition when a traffic object is recognized in video from a traffic camera. Recently, when vehicles captured by a traffic video camera are detected using artificial intelligence, this 3D bounding-box generation algorithm can be applied. The vertical vanishing point (VP1) and horizontal vanishing point (VP2) are derived by analyzing the camera installation angle and the direction of the captured image, and based on these, the moving object in the video under analysis is specified. With this algorithm, object information such as the location, type, and size of a detected object is easy to obtain, and for moving objects such as cars, the location, coordinates, speed, and direction of each object can be determined by tracking it. As a result of application to actual roads, tracking improved by 10%; in particular, the recognition rate and tracking of shaded areas (extremely small vehicle parts hidden by large cars) improved by 100%, and traffic data analysis accuracy was improved.
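
Geometrically, the horizontal vanishing point used above is the image intersection of lines that are parallel on the road plane; a minimal sketch with made-up coordinates:

```python
def line_through(p, q):
    # Homogeneous line coefficients (a, b, c) with a*x + b*y + c = 0.
    (x1, y1), (x2, y2) = p, q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def intersect(l1, l2):
    # Intersection point of two homogeneous lines (cross product).
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    w = a1 * b2 - a2 * b1
    return ((b1 * c2 - b2 * c1) / w, (a2 * c1 - a1 * c2) / w)

# Two lane edges that are parallel on the road converge in the image;
# their intersection approximates the horizontal vanishing point.
vp2 = intersect(line_through((100, 700), (460, 300)),
                line_through((900, 700), (540, 300)))
# Edges of a vehicle's 3D box along the road direction are then drawn
# toward vp2 (and vertical edges toward the vertical vanishing point).
```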

Optimization of Action Recognition based on Slowfast Deep Learning Model using RGB Video Data (RGB 비디오 데이터를 이용한 Slowfast 모델 기반 이상 행동 인식 최적화)

  • Jeong, Jae-Hyeok;Kim, Min-Suk
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.8
    • /
    • pp.1049-1058
    • /
    • 2022
  • HAR (Human Action Recognition), together with anomaly and object detection, has become a research trend focused on using artificial intelligence (AI) methods to analyze patterns of human action in crime-ridden areas, media services, and industrial facilities. Especially in real-time systems using video streaming data, HAR has become an important AI-based research field for application development, and many research fields using HAR are currently being developed and improved. In this paper, we propose and analyze a deep-learning-based HAR scheme that can be applied to media services using RGB video streaming data without feature-extraction pre-processing. For the method, we adopt SlowFast, a deep neural network (DNN) model, on open datasets (HMDB-51 and UCF101) to improve prediction accuracy.
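
SlowFast's central idea, two pathways sampling the same clip at different temporal rates, can be sketched as follows (the parameter names `tau` and `alpha` follow the SlowFast paper's convention; the values here are illustrative):

```python
# Minimal sketch of SlowFast's two-pathway frame sampling.
def slowfast_sample(num_frames, tau=16, alpha=8):
    # Slow pathway: low temporal rate, every tau-th frame.
    slow = list(range(0, num_frames, tau))
    # Fast pathway: alpha times higher temporal rate (stride tau/alpha).
    fast = list(range(0, num_frames, tau // alpha))
    return slow, fast

slow, fast = slowfast_sample(64)
# The slow pathway carries spatial semantics with many channels; the
# fast pathway captures motion with few channels, and lateral
# connections fuse fast-pathway features into the slow pathway.
```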

Recognition and tracking system of moving objects based on artificial neural network and PWM control

  • Sugisaka, M.
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1992.10b
    • /
    • pp.573-574
    • /
    • 1992
  • We developed a recognition and tracking system for moving objects. The system consists of one CCD video camera, two DC motors with encoders on the horizontal and vertical axes, a pulse-width modulation (PWM) driving unit, a 16-bit NEC 9801 microcomputer, and their interfaces. The system is able to recognize the shape and size of a moving object and to track the object within a certain range of errors. This paper presents a brief introduction to the recognition and tracking system developed in our laboratory.

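
The camera-pointing loop described above can be approximated with proportional control from pixel error to PWM duty cycle; a hypothetical sketch, not the authors' actual controller:

```python
# Map the tracked object's pixel offset from image center to a PWM
# duty cycle for each motor (gain kp and limits are assumptions).
def duty_cycle(offset_px, half_width, kp=0.8, max_duty=1.0):
    # Proportional control: larger offset -> larger duty cycle,
    # clamped to [-max_duty, max_duty]; the sign sets the direction.
    duty = kp * offset_px / half_width
    return max(-max_duty, min(max_duty, duty))

# Object detected 80 px right and 30 px below center of a 640x480 frame.
pan = duty_cycle(80, 320)    # drive the horizontal-axis motor right
tilt = duty_cycle(-30, 240)  # drive the vertical-axis motor up
```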
Extensible Hierarchical Method of Detecting Interactive Actions for Video Understanding

  • Moon, Jinyoung;Jin, Junho;Kwon, Yongjin;Kang, Kyuchang;Park, Jongyoul;Park, Kyoung
    • ETRI Journal
    • /
    • v.39 no.4
    • /
    • pp.502-513
    • /
    • 2017
  • For video understanding, namely analyzing who did what in a video, actions along with objects are primary elements. Most studies on actions have handled recognition problems for well-trimmed videos and focused on enhancing classification performance. However, action detection, including localization as well as recognition, is required because, in general, actions intersect in time and space. In addition, most studies have not considered extensibility to newly added actions that were not previously trained. Therefore, this paper proposes an extensible hierarchical method for detecting generic actions, which combine object movements and spatial relations between two objects, and inherited actions, which are determined from the related objects through an ontology- and rule-based methodology. The hierarchical design enables the method to detect any interactive action based on the spatial relations between two objects. The method, using object information, achieves an F-measure of 90.27%. Moreover, this paper describes the extensibility of the method to a new action contained in a video from a domain different from that of the dataset used.
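
The generic-action building block, spatial relations between two object boxes, might look like this in outline (relation names and thresholds are illustrative, not the paper's ontology):

```python
# Rule-based spatial relations between two boxes (x1, y1, x2, y2).
def relations(a, b):
    rels = set()
    # Intersection extent along each axis.
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    if ix > 0 and iy > 0:
        rels.add("overlap")
    # Centers closer than the combined widths count as "near".
    ca = ((a[0] + a[2]) / 2, (a[1] + a[3]) / 2)
    cb = ((b[0] + b[2]) / 2, (b[1] + b[3]) / 2)
    if abs(ca[0] - cb[0]) < (a[2] - a[0] + b[2] - b[0]):
        rels.add("near")
    return rels

person = (100, 100, 180, 300)
cup = (170, 180, 210, 220)
rels = relations(person, cup)
# Overlapping, nearby boxes plus movement over time could trigger a
# generic interactive action such as "person holds cup".
```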

Character Recognition and Search for Media Editing (미디어 편집을 위한 인물 식별 및 검색 기법)

  • Park, Yong-Suk;Kim, Hyun-Sik
    • Journal of Broadcast Engineering
    • /
    • v.27 no.4
    • /
    • pp.519-526
    • /
    • 2022
  • Identifying and searching for characters appearing in scenes during multimedia video editing is an arduous and time-consuming process. Applying artificial intelligence to such labor-intensive media editing tasks can greatly reduce media production time and improve the efficiency of the creative process. In this paper, a method is proposed that combines existing AI-based techniques to automate character recognition and search tasks for video editing. Object detection, face detection, and pose estimation are used for character localization, and face recognition and color-space analysis are used to extract unique representation information.
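
The color-space analysis mentioned above could, for example, summarize a character's clothing as a coarse color histogram compared by intersection; a hedged sketch with made-up pixel data:

```python
# Coarse clothing-color histogram as an extra identity cue alongside
# the face embedding (binning and data are assumptions, not the paper's).
def color_histogram(pixels, bins=4):
    # pixels: iterable of (r, g, b) in 0..255; coarse 3D histogram.
    hist = {}
    step = 256 // bins
    for r, g, b in pixels:
        key = (r // step, g // step, b // step)
        hist[key] = hist.get(key, 0) + 1
    total = sum(hist.values())
    return {k: v / total for k, v in hist.items()}

def similarity(h1, h2):
    # Histogram intersection in [0, 1]; 1 means identical distributions.
    return sum(min(h1.get(k, 0), h2.get(k, 0)) for k in set(h1) | set(h2))

red_shirt = color_histogram([(200, 30, 30)] * 90 + [(20, 20, 20)] * 10)
same_red = color_histogram([(210, 40, 25)] * 85 + [(10, 25, 15)] * 15)
# A similar outfit scores high; a different-colored outfit scores low,
# helping disambiguate characters across scenes.
```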

Real-time Identification of Traffic Light and Road Sign for the Next Generation Video-Based Navigation System (차세대 실감 내비게이션을 위한 실시간 신호등 및 표지판 객체 인식)

  • Kim, Yong-Kwon;Lee, Ki-Sung;Cho, Seong-Ik;Park, Jeong-Ho;Choi, Kyoung-Ho
    • Journal of Korea Spatial Information System Society
    • /
    • v.10 no.2
    • /
    • pp.13-24
    • /
    • 2008
  • A next-generation video-based car navigation system is being researched to supplement the drawbacks of existing 2D-based navigation and to provide various services for safe driving. The components of such a navigation system include a road object database, a lane identification module, and a crossroad identification module. In this paper, we propose a traffic light and road sign recognition method that can be effectively exploited for crossroad recognition in video-based car navigation systems. The method uses object color information and other spatial features in the video image. The results show an average 90% recognition rate at 30-60 m distance for traffic lights and 97% at 40-90 m for road signs. The algorithm also achieves a processing time of 46 ms/frame, which indicates its appropriateness for real-time processing.

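
The color cue for traffic-light state can be sketched with a hue threshold in HSV space; the thresholds below are illustrative, not the paper's calibrated values:

```python
import colorsys

# Classify a traffic-light blob by the hue of its average RGB color.
def light_state(avg_rgb):
    r, g, b = (c / 255.0 for c in avg_rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    if s < 0.3 or v < 0.2:
        return "unknown"          # too dull or dark to decide
    deg = h * 360.0
    if deg < 20 or deg > 340:
        return "red"
    if 20 <= deg < 70:
        return "yellow"
    if 90 <= deg < 180:
        return "green"
    return "unknown"

state = light_state((220, 40, 35))   # a reddish blob
```

In practice such a color test would be combined with the spatial features the abstract mentions (blob position within the housing, size at the expected distance) before accepting a detection.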
FPGA-based Object Recognition System (FPGA기반 객체인식 시스템)

  • Shin, Seong-Yoon;Cho, Gwang-Hyun;Cho, Seung-Pyo;Shin, Kwang-Seong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.10a
    • /
    • pp.407-408
    • /
    • 2022
  • In this paper, we examine the components of the FPGA-based object recognition system one by one, looking at the function of each: the camera, the DLM, the service system, the video output monitor, the deep trainer software, and the external deep-learning software.

Real-Time Object Recognition for Children Education Applications based on Augmented Reality (증강현실 기반 아동 학습 어플리케이션을 위한 실시간 영상 인식)

  • Park, Kang-Kyu;Yi, Kang
    • Journal of Korea Multimedia Society
    • /
    • v.20 no.1
    • /
    • pp.17-31
    • /
    • 2017
  • The aim of this paper is to present an object recognition method for an augmented reality system that utilizes existing education instruments, which were designed without any consideration of image processing and recognition. The light reflection, sizes, shapes, and color range of the existing target education instruments are major hurdles for our object recognition. In addition, the real-time performance requirements on embedded devices and the user-experience constraints for child users are challenging issues for our image processing and object recognition approach. To meet these requirements, we cascade lightweight weak classification methods that complement each other, producing a complicated and highly accurate object classifier with a practically reasonable precision ratio. We implemented the proposed method and tested its performance on video containing more than 11,700 frames of an actual play scenario. The experimental result showed a 0.54% miss ratio and a 1.35% false-hit ratio.
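
The cascade of complementary lightweight classifiers described above has this general shape (the stage tests here are hypothetical placeholders, not the paper's actual classifiers):

```python
# A candidate region is rejected at the first failed stage, so most
# non-targets are discarded by the cheap early stages.
def cascade(region, stages):
    for stage in stages:
        if not stage(region):
            return False     # rejected early
    return True              # accepted only if every stage passes

stages = [
    lambda r: r["area"] > 400,            # size filter
    lambda r: 0.5 < r["aspect"] < 2.0,    # shape filter
    lambda r: r["red_ratio"] > 0.3,       # color-range filter
]

card = {"area": 900, "aspect": 1.4, "red_ratio": 0.6}
glare = {"area": 900, "aspect": 1.2, "red_ratio": 0.05}
# The card passes all stages; the glare blob fails the color stage.
```

Ordering the cheapest, highest-rejection stages first is what makes such a cascade fast enough for real-time use on embedded devices.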