• Title/Summary/Keyword: rear camera

Search results: 73 (processing time: 0.023 seconds)

Characteristics of Temperature Distribution of Pen for Exhaust Fan of Ventilation System (돈사용 환기팬을 위한 돈사 내 온도 분포 특성)

  • Kim, Hyeon-Tae; Kim, Woong
    • Journal of Animal Environmental Science / v.20 no.4 / pp.155-160 / 2014
  • This study was conducted to provide data for improving the ventilation system of an optimum environmental control system. The windowless swine house used a negative-pressure ventilation system in which a circular pipe duct for the air inlet was installed on the ceiling and an axial-flow exhaust fan was installed on the sidewall. Temperatures in the pen were measured using an infrared thermography camera and thermocouples connected to a data logger. The measurement points, selected with the infrared thermography camera, were the alley (G), inlet (A), front-upper (B), front-lower (C), rear-upper (D), rear-lower (E), and the area in front of the fan (F). The temperatures measured at these points stayed close to the setting temperature of $28^{\circ}C$, and the average deviation between point F and points A~E was less than $0.5^{\circ}C$. The air velocities measured at these points were also suitable for raising pigs.
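
Purely as an illustration of the deviation figures quoted in this abstract (the readings below are invented, not the paper's measurements), a minimal Python sketch of that kind of calculation might look like this:

```python
import numpy as np

SETPOINT_C = 28.0
# Hypothetical readings in degrees Celsius (not the paper's data).
temps = {"A": 27.9, "B": 28.2, "C": 28.1, "D": 27.8, "E": 28.3, "F": 28.0}

others = np.array([temps[k] for k in "ABCDE"])
dev_from_fan = np.abs(others - temps["F"]).mean()   # average deviation of A~E from point F
dev_from_set = np.abs(others - SETPOINT_C).mean()   # average deviation of A~E from the setpoint

print(f"mean |T - T_F|      = {dev_from_fan:.2f} degC")
print(f"mean |T - setpoint| = {dev_from_set:.2f} degC")
```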

The Design of the Obstacle Avoidance System for Unmanned Vehicle Using a Depth Camera (깊이 카메라를 이용한 무인이동체의 장애물 회피 시스템 설계)

  • Kim, Min-Joon; Jang, Jong-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2016.10a / pp.224-226 / 2016
  • With technical progress and a rapid increase in private demand, the new market for unmanned vehicles, which combine the characteristics of 'unmanned automation' and 'vehicle', is growing quickly. Although pilot driving is currently allowed in some countries, no country has yet institutionalized the formal operation of self-driving cars. In existing vehicles, safety incidents frequently occur because of rear-sensor malfunctions, blind spots of the rear camera, or driver carelessness. Once such flaws are addressed, the regulations governing the commercialization of self-driving cars and small drones could be relaxed. In contrast to the ultrasonic and laser sensors used in existing vehicles, this paper attempts distance measurement with a depth sensor. A depth camera calculates distance data by the TOF (time-of-flight) method, which measures the time difference between emitting laser or infrared light onto an object or area and receiving the reflected beam. Because the camera obtains depth data for every pixel of a CCD camera, it can collect depth data in real time. Using this real-time depth data, the paper proposes to solve the problems mentioned above and designs an obstacle avoidance system based on the measured distances; a minimal depth-thresholding sketch follows this entry.

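As an illustration only, and not the paper's implementation, the sketch below shows how a per-pixel depth image of the kind a TOF camera produces could be thresholded to flag a nearby obstacle; the frame size, the 1.5 m range, and the pixel-count cutoff are all assumed values.

```python
import numpy as np

def detect_obstacle(depth_m: np.ndarray, max_range_m: float = 1.5,
                    min_pixels: int = 500) -> bool:
    """Flag an obstacle if enough pixels report a valid depth below max_range_m.

    depth_m     : HxW array of per-pixel distances in metres (0 = no TOF return).
    max_range_m : distance below which a surface counts as an obstacle (assumed).
    min_pixels  : minimum number of close pixels, to avoid reacting to noise (assumed).
    """
    valid = depth_m > 0.0                       # ignore pixels with no return
    close = valid & (depth_m < max_range_m)     # pixels closer than the safety range
    return int(close.sum()) >= min_pixels

# Example with a synthetic 240x320 depth frame: a 60x60 patch 1.0 m away.
frame = np.full((240, 320), 4.0)
frame[90:150, 130:190] = 1.0
print(detect_obstacle(frame))   # True: the patch lies inside the 1.5 m range
```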

Driver Assistance System for Backward Motion Control of a Car with a Trailer (차량견인 트레일러의 후진제어를 위한 운전자 보조 시스템)

  • Roh, Jae-Il; Chung, Woo-Jin
    • The Journal of Korea Robotics Society / v.5 no.4 / pp.286-293 / 2010
  • A trailer system offers efficient transportation capacity, but its backward motion is difficult to control because it is an open-loop unstable problem. To solve this problem, a driver assistance system is proposed. The system helps the driver control the backward motion of the trailer system as if it were forward motion: the driver only watches the rear view of the last passive trailer and selects the control input that drives that trailer. The driver assistance system then converts the driver's input into the velocity and steering angle of the towing vehicle using inverse kinematics, which is made possible by electronic control input devices and a rear-view camera. The effectiveness of the driver assistance system is verified by simulations and experiments.
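
The abstract does not give the inverse-kinematic mapping itself, so as background only, the sketch below simulates the standard kinematic model of a car towing one on-axle trailer, the kind of model such a mapping is usually derived from; the wheelbase, hitch length, and inputs are assumed values.

```python
import math

def step(state, v, steer, L=2.7, d=4.0, dt=0.05):
    """One Euler step of a standard car-with-one-trailer kinematic model.

    state = (x, y, theta0, theta1): tractor position/heading and trailer heading.
    v     : tractor speed (negative = backing up); steer: front-wheel angle [rad].
    L     : tractor wheelbase, d: hitch-to-trailer-axle length (assumed values).
    """
    x, y, th0, th1 = state
    x   += v * math.cos(th0) * dt
    y   += v * math.sin(th0) * dt
    th0 += (v / L) * math.tan(steer) * dt
    th1 += (v / d) * math.sin(th0 - th1) * dt   # hitch-angle coupling
    return (x, y, th0, th1)

# Backing up with a fixed steering angle: the hitch angle (theta0 - theta1)
# drifts away from zero, illustrating the open-loop instability of reversing.
s = (0.0, 0.0, 0.0, 0.0)
for _ in range(100):
    s = step(s, v=-1.0, steer=0.05)
print("hitch angle [deg]:", math.degrees(s[2] - s[3]))
```

With the vehicle in reverse, the hitch angle diverges unless the steering is continuously corrected, which is exactly the burden the proposed assistance system takes off the driver.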

Development of a Camera-based Position Measurement System for the RTGC with Environment Conditions (실외 주행환경을 고려한 카메라 기반의 RTGC 위치계측시스템 개발)

  • Kawai, Hideki; Kim, Young-Bok; Choi, Yong-Woon
    • Journal of Institute of Control, Robotics and Systems / v.17 no.9 / pp.892-896 / 2011
  • This paper describes a camera-based position measurement system for the automatic tracking control of a Rubber Tired Gantry Crane (RTGC). Automatic tracking control of an RTGC depends on the ability to measure its displacement and angle relative to a guide line that the RTGC must follow. The proposed measurement system consists of a camera and a PC mounted on the upper right side of the RTGC, between the front and rear tires. Its measurement accuracy is affected by disturbances such as cracks and stains on the guide line, shadows, and halation caused by fluctuating lighting. To overcome these disturbances, both edges of the guide line are detected as two straight lines in the image taken by the camera, and the parameters of those lines are determined with the Hough transform. The displacement and angle of the RTGC from the guide line are then obtained from these parameters, robustly against the disturbances. In experiments that included such disturbances, the measured displacement and angle from the guide line had standard deviations of 0.95 pixels and 0.22 degrees, respectively.
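
As an illustration of the Hough-transform line-detection step only (not the authors' code), the sketch below finds straight segments of a guide-line marking with OpenCV and reports each segment's angle and lateral offset; the synthetic image and all thresholds are assumptions.

```python
import cv2
import numpy as np

# Synthetic stand-in for a camera frame of the guide line (a real input would be a photo).
img = np.zeros((480, 640), np.uint8)
cv2.line(img, (200, 470), (260, 10), 255, thickness=12)   # slightly slanted guide line

edges = cv2.Canny(img, 50, 150)                            # edge map of the marking
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                        threshold=80, minLineLength=100, maxLineGap=10)

if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        angle_deg = np.degrees(np.arctan2(y2 - y1, x2 - x1))  # line angle vs. image x-axis
        offset_px = (x1 + x2) / 2.0 - img.shape[1] / 2.0      # lateral offset from image centre
        print(f"angle = {angle_deg:.1f} deg, lateral offset = {offset_px:.1f} px")
```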

Real-time Omni-directional Object Detection Using Background Subtraction of Fisheye Image (어안 이미지의 배경 제거 기법을 이용한 실시간 전방향 장애물 감지)

  • Choi, Yun-Won; Kwon, Kee-Koo; Kim, Jong-Hyo; Na, Kyung-Jin; Lee, Suk-Gyu
    • Journal of Institute of Control, Robotics and Systems / v.21 no.8 / pp.766-772 / 2015
  • This paper proposes an object detection method based on motion estimation, using background subtraction on the fisheye images obtained from an omni-directional camera mounted on a vehicle. Most recent vehicles are equipped with a rear camera as a standard option, along with various other camera systems for safety. However, unlike conventional object detection on images from such cameras, the embedded systems installed in vehicles struggle to run complicated algorithms because of their low processing performance; in general, an embedded system needs a system-dependent algorithm because its processing power is lower than that of a computer. In this paper, the location of an object is estimated from its motion, which is obtained by applying a background subtraction method that compares previous frames with the current one. The real-time detection performance of the proposed method is verified experimentally on an embedded board by comparing it with object detection based on LKOF (Lucas-Kanade optical flow).
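
For illustration only, a minimal OpenCV background-subtraction loop of the general kind described above is sketched below; the video file name, the MOG2 subtractor, and the blob-size threshold are assumptions, and the paper's own subtraction method may differ.

```python
import cv2

# Illustrative sketch: background subtraction on a (possibly fisheye) video stream.
cap = cv2.VideoCapture("fisheye.mp4")            # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                # foreground mask from moving pixels
    mask = cv2.medianBlur(mask, 5)                # suppress isolated noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:              # ignore small blobs (assumed threshold)
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("moving objects", frame)
    if cv2.waitKey(1) == 27:                      # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```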

Optical Structural Design using Gaussian Optics for Multiscale Gigapixel Camera (상분할 방식의 기가픽셀 카메라를 위한 가우스 광학적인 구조설계)

  • Rim, Cheon-Seog
    • Korean Journal of Optics and Photonics / v.24 no.6 / pp.311-317 / 2013
  • It was reported in Nature and the Wall Street Journal on June 20th, 2012, that scientists at Duke University had developed a gigapixel camera capable of over 1,000 times the resolution of a normal camera. According to the reports, this super-resolution camera was motivated by the need of US military authorities to surveil the ground and the sky. Because the ripple effect of this technology has spread into national defense and industry, this research was started to pursue such a super-resolution camera as a frontier research topic. As a result, the optical structure of the super-resolution camera's lens system can be understood as a monocentric front objective consisting of a single lens plus 98 rear multiscale camera lenses, and numerical ranges are obtained for the specification factors related to this optical structure, such as the aperture diameter and the focal length.
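
As a reminder of the Gaussian-optics relations that such a structural design builds on, rather than the paper's actual design values, the sketch below combines a front objective and a rear micro-camera lens with the two-thin-lens formula and derives an f-number; every number in it is assumed.

```python
def combined_focal_length(f1_mm: float, f2_mm: float, d_mm: float) -> float:
    """Effective focal length of two thin lenses separated by d (Gaussian optics):
       1/f = 1/f1 + 1/f2 - d/(f1*f2)."""
    return 1.0 / (1.0 / f1_mm + 1.0 / f2_mm - d_mm / (f1_mm * f2_mm))

def f_number(f_mm: float, aperture_diameter_mm: float) -> float:
    """f-number N = f / D for an entrance-pupil diameter D."""
    return f_mm / aperture_diameter_mm

# Assumed, illustrative values: a 70 mm front objective relayed by a 20 mm
# micro-camera lens placed 60 mm behind it, with a 25 mm aperture diameter.
f = combined_focal_length(70.0, 20.0, 60.0)
print(f"effective focal length = {f:.1f} mm, f-number = {f_number(f, 25.0):.2f}")
```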

Implementation of new gestures on the Multi-touch table

  • Park, Sang Bong; Kim, Beom jin
    • International Journal of Advanced Culture Technology / v.1 no.1 / pp.15-18 / 2013
  • This paper describes new gestures for a multi-touch table. Two new three-finger gestures are used to minimize all currently open windows and to switch Aero mode. We also implement an FTIR (Frustrated Total Internal Reflection) multi-touch table consisting of a sheet of acrylic, infrared LEDs, a camera, and a rear projector. The operation of the proposed gestures is verified on the implemented multi-touch table; a minimal three-finger gesture check is sketched after this entry.

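Purely as an illustration (the paper does not describe its recognizer in the abstract), the sketch below checks whether three simultaneous touch tracks all moved downward far enough to count as a "minimize all windows" style gesture; the coordinate convention and the drag threshold are assumptions.

```python
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def three_finger_swipe_down(tracks: Dict[int, List[Point]],
                            min_drag_px: float = 120.0) -> bool:
    """Return True if exactly three touch tracks each moved downward by min_drag_px.

    tracks: touch-id -> list of (x, y) samples, with y growing downward (assumed convention).
    """
    if len(tracks) != 3:
        return False
    return all(pts[-1][1] - pts[0][1] >= min_drag_px for pts in tracks.values())

# Example: three fingers dragged roughly 150 px downward.
touches = {1: [(100, 200), (102, 350)],
           2: [(160, 210), (158, 362)],
           3: [(220, 205), (223, 355)]}
print(three_finger_swipe_down(touches))   # True
```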

The Running Control for the Mobile Vehicle

  • Sugisaka, Masanori; Adachi, Takuya
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2000.10a / pp.491-491 / 2000
  • In this paper, we report results on controlling the rotation count of the DC motor that drives the mobile vehicle, as a first step toward realizing a mobile vehicle with an artificial brain. First, we introduce the configuration of the mobile vehicle, which carries one CCD camera and is driven by a rear wheel. Secondly, we present the control methods; several control schemes were adopted in this research. Finally, we report the experimental methods and results and state the conclusions of this research.


Tele-Operated Mobile Robot for Visual Inspection of a Reactor Head

  • Choi, Chang-Hwan; Jeong, Kyung-Min; Kim, Seung-Ho
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings / 2003.10a / pp.2063-2065 / 2003
  • The control rod drive mechanisms on a reactor head are arranged too narrowly for a human worker to approach, and the working environment is a high-radiation area. To inspect defects on the surfaces of the reactor head and its welded parts, a visual inspection device that can reach such a narrow, high-radiation area is required. This paper introduces a tele-operated mobile robot for the visual inspection of a reactor head, equipped with a pan/tilt camera, a fixed rear camera, an ultrasonic collision detection system, and other components. In addition, a host controller and a digital video logging system are developed, together with integrated control software. The robot is operated over a wireless link, which gives flexibility during the inspection.


Recognition of Car Manufacturers using Faster R-CNN and Perspective Transformation

  • Ansari, Israfil; Lee, Yeunghak; Jeong, Yunju; Shim, Jaechang
    • Journal of Korea Multimedia Society / v.21 no.8 / pp.888-896 / 2018
  • In this paper, we report the detection and recognition of vehicle logos in images captured by street CCTV. The image data include both front and rear views of the vehicles. The proposed method is a two-step process that combines image preprocessing with a faster region-based convolutional neural network (Faster R-CNN) for logo recognition. Without preprocessing, Faster R-CNN accuracy is high only when the image quality is good, whereas the proposed system targets street CCTV cameras, whose image quality differs from that of a front-facing camera. Using a perspective transformation, the top-view images are transformed into front-view images, and with this transformation the detection and recognition accuracy become much higher than with the existing algorithm. In the experiments, the detection and recognition rates improved by 2% on daytime data, and the detection rate improved by 14% on nighttime data.
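
As an illustration of the perspective-transformation step only (the paper's corner points and output size are not given in the abstract), the sketch below warps an obliquely viewed quadrilateral of a CCTV-style frame into a fronto-parallel patch with OpenCV; all coordinates are assumed.

```python
import cv2
import numpy as np

# Synthetic stand-in for a street-CCTV frame (a real system would read the camera image).
frame = np.zeros((720, 1280, 3), np.uint8)
cv2.rectangle(frame, (420, 300), (820, 560), (0, 0, 255), -1)   # pretend vehicle region

# Assumed corners of the region seen at an angle (top-left, top-right, bottom-right, bottom-left).
src = np.float32([[420, 300], [780, 310], [820, 560], [380, 550]])
dst_w, dst_h = 400, 300
dst = np.float32([[0, 0], [dst_w, 0], [dst_w, dst_h], [0, dst_h]])

H = cv2.getPerspectiveTransform(src, dst)        # 3x3 homography mapping src -> dst
front_view = cv2.warpPerspective(frame, H, (dst_w, dst_h))
print(front_view.shape)                          # (300, 400, 3): patch fed to the recognizer
```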