• Title/Summary/Keyword: object detection system

Automatic Pedestrian Removal Algorithm Using Multiple Frames (다중 프레임에서의 보행자 검출 및 삭제 알고리즘)

  • Kim, ChangSeong;Lee, DongSuk;Park, Dong Sun
    • Smart Media Journal
    • /
    • v.4 no.2
    • /
    • pp.26-33
    • /
    • 2015
  • In this paper, we propose an efficient system for automatically removing pedestrians from frames in a video sequence. It first finds pedestrians in a frame using a Histogram of Oriented Gradients (HOG) / Linear Support Vector Machine (L-SVM) classifier, searches for suitable background patches, and then uses those patches to replace the deleted pedestrians. Background patches are retrieved from a reference video sequence, and a modified feather-blending algorithm is applied to make the boundaries of the replaced blocks look natural. The proposed system is designed to detect objects and generate natural-looking patches automatically, whereas most existing systems require the search to be performed manually. In the experiment, the average PSNR of the replaced blocks is 19.246.
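The average PSNR figure quoted at the end is computable per replaced block; a minimal pure-Python sketch (the 8-bit pixel range and the flat example blocks are illustrative assumptions, not the paper's data):

```python
import math

def psnr(original, replaced, max_val=255):
    """Peak signal-to-noise ratio between two equally sized pixel blocks."""
    # Mean squared error over all pixels.
    diffs = [(a - b) ** 2 for a, b in zip(original, replaced)]
    mse = sum(diffs) / len(diffs)
    if mse == 0:
        return float("inf")  # identical blocks
    return 10 * math.log10(max_val ** 2 / mse)

# Example: a flat 8x8 block vs. a slightly noisy replacement patch.
block = [120] * 64
patch = [120 + (8 if i % 2 else -8) for i in range(64)]
print(round(psnr(block, patch), 2))
```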

Fast Scene Understanding in Urban Environments for an Autonomous Vehicle equipped with 2D Laser Scanners (무인 자동차의 2차원 레이저 거리 센서를 이용한 도시 환경에서의 빠른 주변 환경 인식 방법)

  • Ahn, Seung-Uk;Choe, Yun-Geun;Chung, Myung-Jin
    • The Journal of Korea Robotics Society
    • /
    • v.7 no.2
    • /
    • pp.92-100
    • /
    • 2012
  • A map of a complex environment can be generated by a robot carrying sensors. However, representing environments directly from integrated sensor data conveys only spatial occupancy. To execute high-level applications, robots need semantic knowledge of their environments. This research investigates the design of a system for recognizing objects in 3D point clouds of urban environments. The proposed system is decomposed into five steps: sequential LIDAR scanning, point classification, ground detection and elimination, segmentation, and object classification. This method can classify various objects in urban environments, such as cars, trees, buildings, and posts. Simple methods that minimize time-consuming processing are developed to guarantee real-time performance and to classify data on the fly as it is acquired. To evaluate the performance of the proposed methods, computation time and recognition rate are analyzed. Experimental results demonstrate that the proposed algorithm efficiently extracts semantic knowledge of a dynamic urban environment.
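The ground detection and elimination step can be sketched with a naive flat-ground height threshold (an illustrative assumption; the abstract does not give the paper's actual classification rules):

```python
def remove_ground(points, ground_z=0.0, tolerance=0.2):
    """Drop points within `tolerance` of an assumed flat ground plane.

    `points` is a list of (x, y, z) tuples in meters; a simplified
    flat-ground model, not the paper's method.
    """
    return [p for p in points if p[2] > ground_z + tolerance]

scan = [(1.0, 0.5, 0.05), (2.0, 1.0, 0.10),   # ground returns
        (3.0, 1.5, 1.20), (3.1, 1.4, 1.80)]   # a post or tree trunk
objects = remove_ground(scan)
print(len(objects))  # 2 non-ground points remain
```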

3-Dimensional Simulation for the Design of Automated Container Terminal (자동화 컨테이너터미널의 설계를 위한 3차원 시뮬레이션)

  • 최용석;하태영;양창호
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2004.04a
    • /
    • pp.471-477
    • /
    • 2004
  • In this study, we introduce a 3-dimensional simulation to support the design of an ACT (Automated Container Terminal). The simulation system was developed to simulate virtual operations of an ACT in 3D and to animate the simulated results in real time. The system applies object-oriented design and C++ programming to increase reusability and extensibility. We select several performance-evaluation items for objects used in the ACT, in terms of problem detection, problem forecasting, and logic feasibility, and provide evaluation criteria for the design of an ACT.

Hardware implementation of CIE1931 color coordinate system transformation for color correction (색상 보정을 위한 CIE1931 색좌표계 변환의 하드웨어 구현)

  • Lee, Seung-min;Park, Sangwook;Kang, Bong-Soon
    • Journal of IKEEE
    • /
    • v.24 no.2
    • /
    • pp.502-506
    • /
    • 2020
  • With the development of autonomous driving technology, the importance of object recognition is increasing. Haze removal is required because hazy weather reduces visibility and detectability in object recognition. However, an image from which haze has been removed may not properly reflect the original colors, causing detection errors. In this paper, we use the CIE1931 color coordinate system to extend or reduce the color area, providing an algorithm and hardware that reflect the colors of the real world. In addition, as image media develop, we implement hardware capable of real-time processing in a 4K environment. The hardware was written in Verilog and implemented on an SoC verification board.
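The CIE1931 side of the pipeline can be illustrated with the standard linear-sRGB-to-XYZ matrix followed by the xy chromaticity projection (a software sketch only; the paper implements the transform in Verilog hardware, and its gamut extension/reduction step is not shown in the abstract):

```python
def linear_rgb_to_xy(r, g, b):
    """Convert linear sRGB to CIE1931 xy chromaticity via XYZ.

    Uses the standard sRGB-to-XYZ matrix (D65 white point).
    """
    x_ = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y_ = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z_ = 0.0193 * r + 0.1192 * g + 0.9505 * b
    s = x_ + y_ + z_
    return (x_ / s, y_ / s)

# Linear white lands on the D65 white point, roughly (0.3127, 0.3290).
x, y = linear_rgb_to_xy(1.0, 1.0, 1.0)
```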

Simulation of Traffic Signal Control with Adaptive Priority Order through Object Extraction in Images (영상에서 객체 추출을 통한 적응형 통행 우선순위 교통신호 제어 시뮬레이션)

  • Youn, Jae-Hong;Ji, Yoo-Kang
    • Journal of Korea Multimedia Society
    • /
    • v.11 no.8
    • /
    • pp.1051-1058
    • /
    • 2008
  • Advances in image processing and communication technology make it possible for current traffic signal controllers and vehicle detection technology to incorporate both emergency-vehicle preemption and transit priority strategies into an integrated system. Presently, traffic signals at crosswalks are controlled by fixed timing: regular signal cycles continue even when there is no traffic, and when traffic is present, it must wait until the signal is given. This waiting time increases the risk of traffic accidents and of congestion caused by signal violations. To help reduce these risks, this paper describes a traffic signal control system with adaptive priority order, so that signals are given preferentially according to the on-site situation, based on objects detected in images.
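The adaptive priority idea can be sketched as a phase selector driven by per-approach object counts (a hypothetical policy for illustration; the paper's actual priority rules are not given in the abstract):

```python
def next_green(phase_counts, current, hold_threshold=1):
    """Pick the next green phase adaptively.

    `phase_counts` maps phase name -> number of objects detected waiting.
    Policy sketch: grant green to the phase with the most waiting objects;
    hold the current phase when no demand is detected anywhere.
    """
    busiest = max(phase_counts, key=phase_counts.get)
    if phase_counts[busiest] < hold_threshold:
        return current  # no demand anywhere: keep the current phase
    return busiest

counts = {"north-south": 0, "east-west": 4, "crosswalk": 7}
print(next_green(counts, current="north-south"))  # crosswalk
```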

Hierarchical Object Recognition Algorithm Based on Kalman Filter for Adaptive Cruise Control System Using Scanning Laser

  • Eom, Tae-Dok;Lee, Ju-Jang
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 1998.10a
    • /
    • pp.496-500
    • /
    • 1998
  • Not merely running at a designated constant speed as in classical cruise control, adaptive cruise control (ACC) maintains a safe headway distance when the road ahead is blocked by other vehicles. One of the most essential parts of an ACC system is the range sensor, which must continuously measure the position and speed of all objects in front, ignore all irrelevant objects, distinguish vehicles in different lanes, and lock on to the closest vehicle in the same lane. In this paper, a hierarchical object recognition algorithm (HORA) is proposed to process raw scanning-laser data and acquire a valid distance to the target vehicle. HORA contains two principal concepts. First, the concept of life quantifies the reliability of range data, filtering out spurious detections and preserving missing target positions. Second, the concept of conformation checks the mobility of each obstacle and tracks position shifts. A Kalman filter is used to estimate and predict the vehicle position; its repeatedly updated covariance matrix determines the bounds of valid data. The algorithm is emulated on a computer and tested on-line with our ACC vehicle.
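The estimate-and-predict cycle described above can be sketched as a plain 1D constant-velocity Kalman filter (a generic formulation, not HORA's exact equations; the noise parameters and measurement sequence are illustrative):

```python
def kf_track(measurements, dt=0.1, q=0.01, r=0.5):
    """1D constant-velocity Kalman filter over range measurements.

    State is [position, velocity]; F = [[1, dt], [0, 1]], H = [1, 0].
    Generic sketch of the predict/update cycle, not the paper's HORA.
    """
    x = [measurements[0], 0.0]          # initial state
    P = [[1.0, 0.0], [0.0, 1.0]]        # state covariance
    for z in measurements[1:]:
        # Predict: x = F x,  P = F P F^T + Q (q on the diagonal).
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q]]
        # Update with scalar position measurement z.
        s = P[0][0] + r                  # innovation covariance
        k = [P[0][0] / s, P[1][0] / s]   # Kalman gain
        y = z - x[0]                     # innovation
        x = [x[0] + k[0] * y, x[1] + k[1] * y]
        P = [[(1 - k[0]) * P[0][0], (1 - k[0]) * P[0][1]],
             [P[1][0] - k[1] * P[0][0], P[1][1] - k[1] * P[0][1]]]
    return x

# Target vehicle receding at 1 m/s, ranged every 0.1 s.
zs = [10.0 + 0.1 * i for i in range(50)]
pos, vel = kf_track(zs)
```

The gain `k` shrinks as the covariance `P` shrinks, which is what lets the covariance bound separate valid returns from spurious ones in the scheme the abstract describes.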

Obstacle Detection and Recognition System for Autonomous Driving Vehicle (자율주행차를 위한 장애물 탐지 및 인식 시스템)

  • Han, Ju-Chan;Koo, Bon-Cheol;Cheoi, Kyung-Joo
    • Journal of Convergence for Information Technology
    • /
    • v.7 no.6
    • /
    • pp.229-235
    • /
    • 2017
  • In recent years, research on detecting and recognizing objects based on large amounts of data has been actively carried out. In this paper, we propose a system that extracts objects considered to be obstacles in road driving images and classifies them as car, person, or motorcycle. Objects are extracted using Optical Flow, taking into account the direction and magnitude of their motion. The extracted objects are recognized using AlexNet, one of the CNN (Convolutional Neural Network) recognition models. For the experiment, various road images were collected with a black-box camera. The results show an object extraction accuracy of 92% and an object recognition accuracy of 96%.
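The direction-and-magnitude criterion for extracting moving objects can be sketched as a threshold over per-pixel flow vectors (a simplified stand-in; the paper's exact Optical Flow grouping is not given in the abstract):

```python
import math

def moving_mask(flow, min_mag=1.0):
    """Flag flow vectors whose magnitude suggests a moving object.

    `flow` is a list of (dx, dy) displacement vectors, e.g. one per
    pixel; a simplified stand-in for the paper's extraction criterion.
    """
    return [math.hypot(dx, dy) >= min_mag for dx, dy in flow]

flow = [(0.1, 0.0), (2.0, 1.0), (0.0, -0.2), (1.5, -1.5)]
print(moving_mask(flow))  # [False, True, False, True]
```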

Traffic Signal Recognition System Based on Color and Time for Visually Impaired

  • P. Kamakshi
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.4
    • /
    • pp.48-54
    • /
    • 2023
  • Nowadays, a blind person finds it very difficult to cross roads and must be vigilant with every step. To address this problem, a Convolutional Neural Network (CNN) is an effective method for analyzing data and automating the model without human intervention. In this work, a traffic signal recognition system for the visually impaired is designed using a CNN. To provide a safe walking environment, a voice message is given according to the light state and timer state at that instant. The developed model consists of two phases. In the first phase, the CNN model is trained to classify images captured from traffic signals. The Common Objects in Context (COCO) labelled dataset is used, which includes images of classes such as traffic lights, bicycles, and cars; traffic light objects are detected with the help of an object detection model trained on this dataset. The CNN model then detects the color of the traffic light and the timer displayed in the traffic image. In the second phase, a text message is generated from the detected light color and timer value and sent to a text-to-speech conversion model to produce voice guidance for the blind person. The developed model recognizes the traffic light color and the countdown timer displayed on the signal for safe crossing; the countdown timer, which is very useful, was not considered in existing models. The proposed model gives accurate results in different scenarios compared to other models.
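The second phase, turning the detected color and timer value into a guidance message, can be sketched as follows (the thresholds and wording are hypothetical; the paper's actual message rules are not published in the abstract):

```python
def guidance_message(color, seconds_left, min_cross_time=8):
    """Compose voice-guidance text from the detected light color and timer.

    A hypothetical message policy: only announce crossing as safe when
    the countdown leaves enough time to finish crossing.
    """
    if color == "green" and seconds_left >= min_cross_time:
        return f"Green light, {seconds_left} seconds left. Safe to cross."
    if color == "green":
        return f"Only {seconds_left} seconds left. Please wait for the next green."
    return "Red light. Please wait."

print(guidance_message("green", 15))
print(guidance_message("green", 4))
print(guidance_message("red", 30))
```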

Development of a deep-learning based tunnel incident detection system on CCTVs (딥러닝 기반 터널 영상유고감지 시스템 개발 연구)

  • Shin, Hyu-Soung;Lee, Kyu-Beom;Yim, Min-Jin;Kim, Dong-Gyou
    • Journal of Korean Tunnelling and Underground Space Association
    • /
    • v.19 no.6
    • /
    • pp.915-936
    • /
    • 2017
  • In this study, the current status of the Korean hazard mitigation guideline for tunnel operation is summarized. It shows that requirements for CCTV installation have gradually become stricter, and the need for tunnel incident detection systems working in conjunction with in-tunnel CCTVs has greatly increased. Despite this, the mathematical-algorithm-based incident detection systems commonly applied in current tunnel operation show very low detection rates, below 50%. The putative major reasons are (1) very weak illumination, (2) dust in the tunnel, and (3) the low installation height of CCTVs, about 3.5 m. Therefore, this study attempts to develop a deep-learning-based tunnel incident detection system that is relatively insensitive to very poor visibility conditions. Its theoretical background is given, and validating investigations are undertaken, focused on moving vehicles and persons outside vehicles in the tunnel, which are the official major objects to be detected. Two scenarios are set up: (1) training and prediction in the same tunnel, and (2) training in one tunnel and prediction in another. In both cases, the targeted detection rate in prediction mode exceeds 80% when the training and prediction periods are similar, but it drops to about 40% when the prediction time is far from the training time and no further training takes place. However, it is believed that the predictability of the AI-based system would be enhanced automatically as further training follows with accumulated CCTV big data, without any revision or calibration of the incident detection system.

A Collaborative Video Annotation and Browsing System using Linked Data (링크드 데이터를 이용한 협업적 비디오 어노테이션 및 브라우징 시스템)

  • Lee, Yeon-Ho;Oh, Kyeong-Jin;Sean, Vi-Sal;Jo, Geun-Sik
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.203-219
    • /
    • 2011
  • Previously, common users just wanted to watch video content without any specific requirements or purposes. Today, however, while watching a video, users attempt to learn and discover more about the things that appear in it. Therefore, with the increasing use of multimedia, the demand for finding multimedia or browsing information about objects of interest is spreading, not only on internet-capable devices such as computers but also on smart TVs and smartphones. To meet these requirements, labor-intensive annotation of objects in video content is inevitable, and many researchers have actively studied methods of annotating objects that appear in video. In keyword-based annotation, related information about an object appearing in the video content is added immediately, and annotation data including all related information about the object must be managed individually; users have to input all of this information themselves. Consequently, when a user browses for information related to an object, only the limited resources existing in the annotated data can be found. Placing annotations on objects also demands a huge workload from the user. To reduce this workload and minimize the work involved in annotation, existing object-based annotation approaches attempt automatic annotation using computer vision techniques such as object detection, recognition, and tracking. With such techniques, the wide variety of objects appearing in the video content must all be detected and recognized, which remains difficult for fully automated annotation. To overcome these difficulties, we propose a system consisting of two modules.
The first module is the annotation module, which enables many annotators to collaboratively annotate objects in video content so that semantic data can be accessed using Linked Data. Annotation data managed by the annotation server is represented using an ontology so that the information can easily be shared and extended. Since the annotation data does not include all relevant information about an object, existing objects in Linked Data are simply connected with objects appearing in the video content to obtain all related information. In other words, annotation data containing only a URI and metadata such as position, time, and size are stored on the annotation server; when a user needs other related information about the object, it is retrieved from Linked Data through the relevant URI. The second module enables viewers to browse interesting information about an object using the annotation data collaboratively generated by many users while watching the video. With this system, a query is automatically generated through simple user interaction, all related information is retrieved from Linked Data, and the additional information about the object is offered to the user. In the future Semantic Web environment, our proposed system is expected to establish a better video content service environment by offering users relevant information about objects that appear on the screen of any internet-capable device, such as a PC, smart TV, or smartphone.
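The annotation-server design described above, a URI plus position/time/size metadata with everything else fetched from Linked Data, can be sketched like this (the record layout, helper names, and the DBpedia URI are illustrative assumptions, not the paper's schema):

```python
def annotation_record(uri, x, y, width, height, start, end):
    """Annotation entry as stored on the annotation server: only a URI
    plus region and time metadata, per the design described above."""
    return {"uri": uri, "region": (x, y, width, height), "time": (start, end)}

def describe_query(annotation):
    """Build a SPARQL DESCRIBE query that pulls the object's related
    information from Linked Data through its URI."""
    return f"DESCRIBE <{annotation['uri']}>"

# Hypothetical annotation: a landmark visible from 12.5 s to 18.0 s.
ann = annotation_record("http://dbpedia.org/resource/Eiffel_Tower",
                        x=120, y=40, width=200, height=360,
                        start=12.5, end=18.0)
print(describe_query(ann))  # DESCRIBE <http://dbpedia.org/resource/Eiffel_Tower>
```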