• Title/Summary/Keyword: Vision Based Sensor


Improved Environment Recognition Algorithms for Autonomous Vehicle Control (자율주행 제어를 위한 향상된 주변환경 인식 알고리즘)

  • Bae, Inhwan;Kim, Yeounghoo;Kim, Taekyung;Oh, Minho;Ju, Hyunsu;Kim, Seulki;Shin, Gwanjun;Yoon, Sunjae;Lee, Chaejin;Lim, Yongseob;Choi, Gyeungho
    • Journal of Auto-vehicle Safety Association
    • /
    • v.11 no.2
    • /
    • pp.35-43
    • /
    • 2019
  • This paper describes improved environment recognition algorithms using sensors such as LiDAR and cameras, together with an integrated control algorithm for an autonomous vehicle. The integrated algorithm was implemented in C++ and supported the stability of the overall driving control algorithms. For the improved vision algorithms, lane tracing and traffic sign recognition were performed with three cameras. Two algorithms were developed for lane tracing, Improved Lane Tracing (ILT) and Histogram Extension (HIX), and these were combined into one algorithm, Enhanced Lane Tracing with Histogram Extension (ELIX). For enhanced traffic sign recognition, an integrated Mutual Validation Procedure (MVP) using three algorithms - Cascade, Reinforced DSIFT SVM, and YOLO - was developed. Comparison of the results shows that the precision of traffic sign recognition is substantially increased. With the LiDAR sensor, the work focused on static and dynamic obstacle detection and obstacle avoidance algorithms. The proposed environment recognition algorithms therefore achieve higher accuracy and faster processing than the previous ones. Moreover, optimization with the integrated control algorithm prevented the memory issue that caused irregular system shutdowns. As a result, the maneuvering stability of the autonomous vehicle in severe environments was enhanced.
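
The abstract does not specify how the Mutual Validation Procedure combines the three detectors; a minimal sketch under the assumption of two-of-three majority agreement (the function name and labels are hypothetical) might look like:

```python
from collections import Counter

def mutual_validation(predictions):
    """Accept a traffic-sign label only when at least two of the three
    detectors (e.g. Cascade, DSIFT-SVM, YOLO) agree on it.
    `predictions` is one label per detector; None means that detector
    found nothing. Returns the agreed label or None."""
    votes = Counter(p for p in predictions if p is not None)
    if not votes:
        return None
    label, count = votes.most_common(1)[0]
    return label if count >= 2 else None

# Hypothetical per-frame outputs from the three detectors:
agreed = mutual_validation(["stop", "stop", "yield"])
rejected = mutual_validation(["stop", None, "yield"])
```

Requiring agreement between independent detectors trades some recall for precision, which matches the reported precision increase.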

A method of improving the quality of 3D images acquired from RGB-depth camera (깊이 영상 카메라로부터 획득된 3D 영상의 품질 향상 방법)

  • Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.25 no.5
    • /
    • pp.637-644
    • /
    • 2021
  • In general, in the fields of computer vision, robotics, and augmented reality, 3D space and 3D object detection and recognition technologies have become increasingly important. In particular, since RGB images and depth images can be acquired in real time through an image sensor using the Microsoft Kinect method, studies on object detection, tracking, and recognition have changed considerably. In this paper, we propose a method to improve the quality of 3D reconstructed images by processing images acquired through depth-based (RGB-Depth) cameras in a multi-view camera system. We propose a method of removing noise outside the object by applying a mask acquired from the color image, and a method of applying a joint filtering operation based on the difference in depth information between pixels inside the object. The experimental results confirmed that the proposed method effectively removes noise and improves the quality of the 3D reconstructed image.
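
The mask-based noise removal step can be sketched as follows; the array shapes and the zeroing convention are illustrative assumptions, not the paper's exact pipeline:

```python
import numpy as np

def mask_depth(depth, mask):
    """Remove noise outside the object: keep depth values only where
    the mask (derived from the color image) marks the object, and
    zero everything else. `depth` and `mask` must have the same shape."""
    return np.where(mask, depth, 0)

# Toy 3x3 depth map with stray values outside a 1-pixel "object":
depth = np.array([[5, 0, 7],
                  [0, 42, 0],
                  [9, 0, 3]])
mask = np.array([[False, False, False],
                 [False, True,  False],
                 [False, False, False]])
cleaned = mask_depth(depth, mask)
```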

IoT Based Intelligent Position and Posture Control of Home Wellness Robots (홈 웰니스 로봇의 사물인터넷 기반 지능형 자기 위치 및 자세 제어)

  • Lee, Byoungsu;Hyun, Chang-Ho;Kim, Seungwoo
    • Journal of IKEEE
    • /
    • v.18 no.4
    • /
    • pp.636-644
    • /
    • 2014
  • This paper presents a technical implementation of the sensing platform for a Home Wellness Robot. First, the self-localization technique is based on a smart home, objects in the home environment, and IoT (Internet of Things) communication with the Home Wellness Robot. RF tags are installed in the smart home, and absolute coordinate information is acquired by an object equipped with an RF reader. Bluetooth communication between the object and the robot then provides the absolute coordinate information to the robot. After that, the relative coordinates of the robot are found through a stereo camera mounted on the robot, completing self-localization. Second, this paper proposes a fuzzy control method based on a vision sensor for the robot's approach to an object. Using the stereo camera mounted on the face of the robot, depth information to the object is extracted. The angle difference between the object and the robot is then obtained by calculating the angular deviation from the center of the image. The obtained information is written to a look-up table used for attitude control while approaching the object. Experiments with the robot in the smart home environment confirm the performance of the proposed self-localization and posture control methods.
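
The angle difference computed from the image center can be illustrated with a pinhole-camera sketch; the horizontal field of view and the function name are assumptions, since the abstract does not give the camera model:

```python
import math

def bearing_from_pixel(cx_obj, img_width, hfov_deg):
    """Estimate the horizontal angle (degrees) between the camera's
    optical axis and an object detected at pixel column cx_obj,
    assuming a pinhole camera with horizontal field of view hfov_deg.
    Positive means the object lies to the right of the image center."""
    # Focal length in pixels from the field of view:
    f = (img_width / 2) / math.tan(math.radians(hfov_deg / 2))
    offset = cx_obj - img_width / 2
    return math.degrees(math.atan2(offset, f))

# Object exactly at the image center -> zero angular deviation:
angle_center = bearing_from_pixel(320, 640, 60)
```

Pairs of (depth, angle) values like this could populate the look-up table the abstract describes for attitude control.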

Inferring Pedestrian Level of Service for Pathways through Electrodermal Activity Monitoring

  • Lee, Heejung;Hwang, Sungjoo
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.1247-1248
    • /
    • 2022
  • Due to rapid urbanization and population growth, it has become crucial to analyze the various volumes and characteristics of pedestrian pathways to understand the capacity and level of service (LOS) for pathways to promote a better walking environment. Different indicators have been developed to measure pedestrian volume. The pedestrian level of service (PLOS), tailored to analyze pedestrian pathways based on the concept of the LOS in transportation in the Highway Capacity Manual, has been widely used. PLOS is a measurement concept used to assess the quality of pedestrian facilities, from grade A (best condition) to grade F (worst condition), based on the flow rate, average speed, occupied space, and other parameters. Since the original PLOS approach has been criticized for producing idealistic results, several modified versions of PLOS have also been developed. One of these modified versions is perceived PLOS, which measures the LOS for pathways by considering pedestrians' awareness levels. However, this method relies on survey-based measurements, making it difficult to continuously deploy the technique to all the pathways. To measure PLOS more quantitatively and continuously, researchers have adopted computer vision technologies to automatically assess pedestrian flows and PLOS from CCTV videos. However, there are drawbacks even with this method because CCTVs cannot be installed everywhere, e.g., in alleyways. Recently, a technique to monitor bio-signals, such as electrodermal activity (EDA), through wearable sensors that can measure physiological responses to external stimuli (e.g., when another pedestrian passes), has gained popularity. It has the potential to continuously measure perceived PLOS. In their previous experiment, the authors of this study found that there were many significant EDA responses in crowded places when other pedestrians acting as external stimuli passed by. 
Therefore, we hypothesized that the EDA responses would be significantly higher in places where relatively more dynamic objects pass, i.e., in crowded areas with low PLOS levels (e.g., level F). To this end, the authors conducted an experiment to confirm the validity of EDA in inferring the perceived PLOS. The EDA of the subjects was measured and analyzed while watching both the real-world and virtually created videos with different pedestrian volumes in a laboratory environment. The results showed the possibility of inferring the amount of pedestrian volume on the pathways by measuring the physiological reactions of pedestrians. Through further validation, the research outcome is expected to be used for EDA-based continuous measurement of perceived PLOS at the alley level, which will facilitate modifying the existing walking environments, e.g., constructing pathways with appropriate effective width based on pedestrian volume. Future research will examine the validity of the integrated use of EDA and acceleration signals to increase the accuracy of inferring the perceived PLOS by capturing both physiological and behavioral reactions when walking in a crowded area.
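
Counting significant EDA responses to passing pedestrians could be sketched as a simple rise-threshold peak counter; this is a crude illustration with an assumed threshold, not the authors' analysis pipeline (real EDA analysis typically separates tonic and phasic components first):

```python
def count_scr_peaks(eda, min_rise=0.05):
    """Count skin-conductance responses in a sampled EDA signal:
    each response is a rise of at least min_rise (in microsiemens,
    an assumed unit) above the preceding trough, counted once until
    the signal turns down again."""
    peaks = 0
    trough = eda[0]
    rising = False
    for prev, cur in zip(eda, eda[1:]):
        if cur < trough:
            trough = cur               # track the running minimum
        if not rising and cur - trough >= min_rise:
            peaks += 1                 # count this response once
            rising = True
        if rising and cur < prev:      # signal turned down: response over
            rising = False
            trough = cur
    return peaks

# A toy trace with two clear rises:
n = count_scr_peaks([0.1, 0.1, 0.2, 0.3, 0.2, 0.15, 0.15, 0.3, 0.1])
```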


Study of Sensor Technology Analysis and Site Application Model for 3D-based Global Modeling of Construction Field (건설 시공현장의 3D기반 광대역 모델링을 위한 Sensor 기술 분석과 향후 현장적용 모델 연구)

  • Kwon, Hyuk-Do;Koh, Min-Hyeok;Yoon, Su-Won;Kwon, Soon-Wook;Chin, Sang-Yoon;Kim, Yea-Sang
    • Proceedings of the Korean Institute Of Construction Engineering and Management
    • /
    • 2007.11a
    • /
    • pp.938-942
    • /
    • 2007
  • The importance of process improvement during construction has arisen from a recent issue, low productivity on construction sites. Various 3D modeling programs are utilized in the construction procedure as alternative solutions, but consideration of specific technical applications is still lacking. The purpose of this study is to help improve the productivity of construction sites using 3D realization of the construction site as one of the wide-area modeling technologies. This leads not only to efficient site management, allowing people to check the real-time situation on site, but also to the revitalization of information flow about building process control and progress. Therefore, this paper investigates modeling algorithms and wide-area construction site realization technology. 3D realization of the building site would reduce safety concerns by providing real-time information about the construction site, and it could ease access to similar projects through collecting and applying a database of sites. Furthermore, it can be an opportunity to develop the production procedure in the construction industry and to upgrade the image of this field.


Design and Implementation of the Stop line and Crosswalk Recognition Algorithm for Autonomous UGV (자율 주행 UGV를 위한 정지선과 횡단보도 인식 알고리즘 설계 및 구현)

  • Lee, Jae Hwan;Yoon, Heebyung
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.3
    • /
    • pp.271-278
    • /
    • 2014
  • Although the stop line and crosswalk are among the most basic objects a transportation system must recognize, the features that can be extracted from them are very limited, and they are difficult to recognize not only with image-based recognition technology but also with laser, RF, and GPS/INS technologies. For this reason, little research has been done in this area. In this paper, an algorithm to recognize the stop line and crosswalk is designed and implemented using image-based recognition on images input through a vision sensor. The algorithm consists of three functions: 'Region of Interest', which selects in advance the area needed for feature extraction in order to speed up data processing; 'Color Pattern Inspection', which processes only images in which white is detected above a certain proportion in order to remove unnecessary operations; and 'Feature Extraction and Recognition', which extracts edge features and compares them to previously modeled ones to identify the stop line and crosswalk. In particular, by using a case-based feature comparison algorithm, it can determine whether both the stop line and crosswalk exist or only one of them. The proposed algorithm also extends existing research by comparing and analyzing the effect of in-vehicle camera installation, changes in recognition rate with distance estimation, and various constraints such as backlight and shadow.
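
The 'Color Pattern Inspection' gate, which skips frames without enough white pixels, might be sketched as follows (the thresholds are illustrative, not the paper's values):

```python
import numpy as np

def should_process(gray_roi, white_thresh=200, min_white_ratio=0.1):
    """'Color Pattern Inspection' step: process the region of interest
    only if the share of near-white pixels exceeds min_white_ratio,
    skipping frames with no stop-line / crosswalk candidates."""
    white_ratio = np.mean(gray_roi >= white_thresh)
    return white_ratio >= min_white_ratio

# Toy grayscale ROI: the lower half is painted white, like a stop-line band.
roi = np.zeros((10, 10), dtype=np.uint8)
roi[5:, :] = 255
```

Gating on a cheap pixel statistic before running edge extraction is what lets the pipeline avoid unnecessary operations on empty frames.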

RFID Based Mobile Robot Docking Using Estimated DOA (방향 측정 RFID를 이용한 로봇 이동 시스템)

  • Kim, Myungsik;Kim, Kwangsoo
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37C no.9
    • /
    • pp.802-810
    • /
    • 2012
  • This paper describes an RFID (Radio Frequency Identification) based target acquisition and docking system. RFID is a non-contact identification system that can send a relatively large amount of information using an RF signal. A robot employing an RFID reader can identify neighboring tag-attached objects without any other sensing or supporting systems, such as vision sensors. However, since current RFID does not provide spatial information about the identified object, the target docking problem remains when executing a task in a real environment. To address this, a direction-sensing RFID reader was developed using a dual-directional antenna: an antenna set composed of two identical directional antennas positioned perpendicular to each other. By comparing the received signal strength in each antenna, the robot can estimate the DOA (Direction of Arrival) of the transmitted RF signal. In practice, DOA estimation poses a significant technical challenge, since the RF signal is easily distorted by the surrounding environmental conditions, and the robot can lose its way to the target in an electromagnetically disturbed environment. For this problem, a g-filter based error correction algorithm is developed in this paper. The algorithm reduces the error using the difference of variances between the current estimated direction and the previously filtered directions. The simulation and experiment results clearly demonstrate that a robot equipped with the developed system can successfully dock to a target tag in an obstacle-cluttered environment.
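
Comparing the received signal strengths of the two perpendicular antennas to estimate the DOA can be illustrated under an idealized gain model; this is an assumption for illustration, and the paper's actual antenna pattern and g-filter details are not reproduced here:

```python
import math

def estimate_doa(rss_a, rss_b):
    """Estimate the direction of arrival (degrees) from the received
    signal strengths of two identical directional antennas mounted
    perpendicular to each other, assuming idealized gain patterns so
    the amplitude ratio encodes the angle between the antenna axes."""
    return math.degrees(math.atan2(rss_b, rss_a))

# Equal strength on both antennas -> the signal arrives at 45 degrees:
doa_mid = estimate_doa(1.0, 1.0)
# All the energy on antenna A -> the signal arrives along A's axis:
doa_axis = estimate_doa(1.0, 0.0)
```

In a distorted RF environment these raw estimates fluctuate, which is why the paper adds a variance-based filtering stage on top.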

Images Grouping Technology based on Camera Sensors for Efficient Stitching of Multiple Images (다수의 영상간 효율적인 스티칭을 위한 카메라 센서 정보 기반 영상 그룹핑 기술)

  • Im, Jiheon;Lee, Euisang;Kim, Hoejung;Kim, Kyuheon
    • Journal of Broadcast Engineering
    • /
    • v.22 no.6
    • /
    • pp.713-723
    • /
    • 2017
  • Since a panoramic image can overcome the limitation of the camera's viewing angle and provide a wide field of view, it has been studied actively in the fields of computer vision and stereo cameras. To generate a panoramic image, stitching images taken by multiple general cameras is widely used instead of a wide-angle camera, which introduces distortion, because it reduces image distortion. The image stitching technique creates descriptors of feature points extracted from multiple images, compares the similarities of the feature points, and links the images together into one image. Each feature point carries several hundred dimensions of information, and data processing time increases as more images are stitched. In particular, when a panorama is generated from images of an object photographed by a number of unspecified cameras, the extraction processing time of overlapping feature points for similar images becomes long. In this paper, we propose a preprocessing step to efficiently stitch images obtained from a number of unspecified cameras for one object or environment. Data processing time is reduced by pre-grouping images based on camera sensor information, which reduces the number of images to be stitched at one time; stitching is then done hierarchically to create one large panorama. Experimental results confirmed that the grouping preprocessing proposed in this paper greatly reduces the stitching time for a large number of images.
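
Pre-grouping by camera sensor information could, for example, bin images by compass heading so only images likely to overlap are stitched together; the metadata field and bin size here are assumptions for illustration, not the paper's exact grouping criterion:

```python
def group_by_heading(images, bin_deg=30):
    """Pre-group images by the camera's compass heading (from sensor
    metadata) before stitching. `images` is a list of
    (name, heading_deg) pairs; returns a dict mapping heading bin ->
    list of image names, so each group can be stitched separately
    and the groups merged hierarchically afterwards."""
    groups = {}
    for name, heading in images:
        key = int(heading % 360) // bin_deg
        groups.setdefault(key, []).append(name)
    return groups

# Hypothetical shots: two facing roughly north, one facing south-west.
shots = [("a.jpg", 10), ("b.jpg", 25), ("c.jpg", 200)]
groups = group_by_heading(shots)
```

Because feature matching is quadratic in the number of candidate image pairs, shrinking each group shrinks the dominant cost of stitching.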

A Study on u-CCTV Fire Prevention System Development of System and Fire Judgement (u-CCTV 화재 감시 시스템 개발을 위한 시스템 및 화재 판별 기술 연구)

  • Kim, Young-Hyuk;Lim, Il-Kwon;Li, Qigui;Park, So-A;Kim, Myung-Jin;Lee, Jae-Kwang
    • Proceedings of the Korean Institute of Information and Commucation Sciences Conference
    • /
    • 2010.05a
    • /
    • pp.463-466
    • /
    • 2010
  • This paper aims to develop a CCTV-based fire surveillance system. The advantages and disadvantages of existing sensor-based and video-based fire surveillance systems are analyzed, and a fire surveillance system model and fire judgment technology appropriate to nationally supported ubiquitous environments such as U-City, U-Home, and U-Campus are proposed. For this study, images were captured with a Microsoft LifeCam VX-1000 and analyzed using apple and tomato test objects, and H.264 was used for encoding. The client was built on an ARM9 S3C2440 board running Linux; its role is to pass the captured images to the server for processing. Client and server communicate basically as 1:1 video communication, so multicast support will be required to deliver the video to multiple receivers; the fire surveillance system is designed for multiple video communication. Video data is converted from RGB to YUV format before transfer, and fire detection uses the Y value, since changes in Y indicate movement. The red color of a fire is detected, and the Y value at the fire region is continuously calculated to detect the movement of the flame.
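
The RGB-to-Y conversion and the Y-based flame movement check can be sketched per pixel as follows; the thresholds are illustrative assumptions, not the paper's values:

```python
def luma(r, g, b):
    """BT.601 luma (Y) from RGB; the paper uses changes in Y between
    frames to detect the flickering movement of a flame."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def flame_candidate(pixel_prev, pixel_cur, y_delta=10, red_min=150):
    """Flag a pixel as a flame candidate when it is strongly red and
    its luma changed noticeably between consecutive frames (flicker).
    Thresholds are illustrative only."""
    r, g, b = pixel_cur
    moved = abs(luma(*pixel_cur) - luma(*pixel_prev)) >= y_delta
    return moved and r >= red_min and r > g and r > b

# A dark pixel turning bright red between frames is a candidate;
# a static red pixel is not.
hit = flame_candidate((20, 20, 20), (200, 80, 40))
miss = flame_candidate((200, 80, 40), (200, 80, 40))
```

Combining a color test with a temporal test is what separates a flame from, say, a static red object in the scene.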


Comparative Study on Feature Extraction Schemes for Feature-based Structural Displacement Measurement (특징점 추출 기법에 따른 구조물 동적 변위 측정 성능에 관한 연구)

  • Junho Gong
    • Journal of the Korea institute for structural maintenance and inspection
    • /
    • v.28 no.3
    • /
    • pp.74-82
    • /
    • 2024
  • In this study, feature point detection and displacement measurement performance were compared and analyzed across feature extraction algorithms, environmental changes, and target types in a feature-point-based displacement measurement algorithm. A three-story frame structure was designed for performance evaluation, and the displacement response of the structure was digitized at FHD (1920×1080) resolution. For performance analysis, the measurement distance started at 10 m and increased to 40 m in increments of 10 m, with illuminance fixed at 450 lux or 120 lux during the experiments. The artificial and natural targets mounted on the structure were set as regions of interest and used for feature point detection, and various feature detection algorithms were implemented for comparison. The analysis found the Shi-Tomasi corner and KAZE algorithms to be robust to target type, illuminance change, and increasing measurement distance, and displacement measurement accuracy using these two algorithms was also the highest. However, when using natural targets, displacement measurement accuracy was lower than with artificial targets, indicating a limitation: the effective resolution of the natural target decreases as the measurement distance increases, making feature points harder to extract.
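
The Shi-Tomasi response the study found robust is the smaller eigenvalue of the windowed structure tensor; a minimal NumPy sketch (window size and toy image are illustrative, and production code would use a library detector such as OpenCV's):

```python
import numpy as np

def box_sum(a, k=3):
    """Sum values over a k x k window around each pixel (zero padding)."""
    out = np.zeros_like(a)
    pad = np.pad(a, k // 2)
    for i in range(k):
        for j in range(k):
            out += pad[i:i + a.shape[0], j:j + a.shape[1]]
    return out

def shi_tomasi_response(gray):
    """Shi-Tomasi corner response: the smaller eigenvalue of the
    windowed 2x2 structure tensor at each pixel. High values mark
    corners that can be tracked reliably across frames."""
    gy, gx = np.gradient(gray.astype(float))
    ixx = box_sum(gx * gx)
    iyy = box_sum(gy * gy)
    ixy = box_sum(gx * gy)
    tr = ixx + iyy
    det = ixx * iyy - ixy * ixy
    disc = np.sqrt(np.maximum(tr * tr / 4 - det, 0))
    return tr / 2 - disc  # smaller eigenvalue of the tensor

# A bright square on a dark background: its corners should respond,
# while flat regions score zero.
img = np.zeros((12, 12))
img[4:8, 4:8] = 1.0
resp = shi_tomasi_response(img)
```

Tracking such feature points between frames is what converts pixel motion into the structure's dynamic displacement.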