• Title/Summary/Keyword: camera image

Recognition of Lanes, Stop Lines and Speed Bumps using Top-view Images (탑뷰 영상을 이용한 차선, 정지선 및 과속방지턱 인식)

  • Ahn, Young-Sun;Kwak, Seong Woo;Yang, Jung-Min
    • The Transactions of The Korean Institute of Electrical Engineers / v.65 no.11 / pp.1879-1886 / 2016
  • In this paper, we propose a real-time algorithm for recognizing lanes, stop lines, and speed bumps on roads for autonomous vehicles. First, we generate a top-view from the image transmitted by a camera installed to view the front of the vehicle. To speed up processing, we simplify the mapping algorithm used to construct the top-view so that only the region of interest (ROI) is processed. The features of lanes, stop lines, and speed bumps, all of which are composed of lines, are searched for in the edge image of the top-view, followed by labeling and clustering specialized for detecting straight lines. The line width, distance from the vehicle center, and curvature of each cluster are considered when selecting the final candidates. We verify the proposed algorithm on real roads using a commercial car (KIA K7) converted into an autonomous vehicle.
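
A minimal sketch of the top-view (inverse perspective) construction described above, assuming a planar road and four manually chosen ground-plane calibration points; the point coordinates, output size, and edge thresholds are illustrative, not the paper's values:

```python
# Hypothetical sketch of a top-view mapping of a front-camera ROI; not the paper's code.
import cv2
import numpy as np

def to_top_view(frame, src_pts, out_size=(400, 600)):
    """Warp a front-camera ROI onto a top-view image via a planar homography."""
    w, h = out_size
    # destination corners: bottom-left, bottom-right, top-right, top-left
    dst_pts = np.float32([[0, h - 1], [w - 1, h - 1], [w - 1, 0], [0, 0]])
    H = cv2.getPerspectiveTransform(np.float32(src_pts), dst_pts)
    return cv2.warpPerspective(frame, H, (w, h))

# Example with assumed calibration points on the road plane (pixel coordinates)
# frame = cv2.imread("front_camera.png")
# src = [(220, 700), (1060, 700), (760, 440), (520, 440)]
# top = to_top_view(frame, src)
# edges = cv2.Canny(cv2.cvtColor(top, cv2.COLOR_BGR2GRAY), 80, 160)  # edge image for line search
```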

A Study on Color Management of Input and Output Device in Electronic Publishing (II) (전자출판에서 입.출력 장치의 컬러 관리에 관한 연구 (II))

  • Cho, Ga-Ram;Koo, Chul-Whoi
    • Journal of the Korean Graphic Arts Communication Society / v.25 no.1 / pp.65-80 / 2007
  • Input and output devices require precise color representation and a CMS (Color Management System) because of the growing number of ways in which digital images are used in electronic publishing. However, there are slight differences in the device-dependent color signals among input and output devices, and because the conversion from input signal values to output signal values is non-linear, color differences arise between the original copy and the output copy. Device-dependent color values therefore need to be converted into device-independent color values, and color management of the input and output devices is required when creating an original copy in electronic publishing. Through the three phases of calibration, characterization, and color conversion, device-dependent color is transformed into device-independent color. In this paper, an experiment was conducted in which the input devices performed color transformation using linear multiple regression and the sRGB color space, while the output devices used the GOG, GOGO, and sRGB models. After the color transformations in the input and output devices, the best results were obtained when the scanner and digital camera input devices were characterized by linear multiple regression and the LCD output device was characterized by the GOG model.
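
A minimal sketch of the linear multiple regression step for input-device characterization, assuming a measured patch target; the file names and the offset term are illustrative assumptions, not the paper's exact model:

```python
# Hypothetical sketch: fit a linear model (with offset term) mapping device RGB to
# CIE XYZ from measured target patches, then apply it to new device values.
import numpy as np

def fit_rgb_to_xyz(rgb, xyz):
    """Least-squares fit of a 4x3 coefficient matrix mapping [R, G, B, 1] -> XYZ."""
    rgb_aug = np.hstack([rgb, np.ones((rgb.shape[0], 1))])  # add constant column
    M, *_ = np.linalg.lstsq(rgb_aug, xyz, rcond=None)
    return M

def rgb_to_xyz(M, rgb):
    rgb_aug = np.hstack([rgb, np.ones((rgb.shape[0], 1))])
    return rgb_aug @ M

# rgb = np.loadtxt("scanner_patch_rgb.txt") / 255.0   # N x 3 scanned patch values (assumed file)
# xyz = np.loadtxt("target_patch_xyz.txt")            # N x 3 measured reference values (assumed file)
# M = fit_rgb_to_xyz(rgb, xyz)
# print(np.abs(rgb_to_xyz(M, rgb) - xyz).mean())      # average fitting error
```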

A Real-Time Control of SCARA Robot Based Image Feedback (이미지 피드백에 의한 스카라 로봇의 실시간 제어)

  • Lee, Woo-Song;Koo, Young-Mok;Shim, Hyun-Seok;Lee, Sang-Hoon;Kim, Dong-Yeop
    • Journal of the Korean Society of Industry Convergence / v.17 no.2 / pp.54-60 / 2014
  • The deployment of SCARA robots in processing and assembly lines has increased rapidly. To achieve high productivity and flexibility, it is very important to develop a visual feedback control system together with an Off-Line Programming System (OLPS); using an OLPS saves considerable effort and time when adjusting robots to newly defined workcells. The proposed visual calibration scheme is based on position-based visual feedback. The calibration program first generates predicted images of objects at an assumed end-effector position. The process of generating predicted images consists of projection to screen coordinates, a visible-range test, and construction of simple silhouette figures. The acquired camera images are then compared with the predicted ones to update position and orientation data. Error computation is very simple because the scheme is based on perspective projection, and this simplicity also carries over to the experimental results. Computation time is greatly reduced because the proposed method does not require the precise calculation of three-dimensional object data or the image Jacobian.
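
A minimal sketch of the "predicted image" generation step, projecting assumed 3-D object points to screen coordinates with a pinhole model; the intrinsics K and pose (R, t) below are placeholders, not the paper's calibration:

```python
# Hypothetical sketch: perspective projection of object points for an assumed
# end-effector (camera) pose; K, R, t values are illustrative only.
import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 world points to Nx2 pixel coordinates with a pinhole camera model."""
    cam = R @ points_3d.T + t.reshape(3, 1)   # world -> camera coordinates
    uvw = K @ cam                             # camera -> homogeneous image coordinates
    return (uvw[:2] / uvw[2]).T               # perspective divide

# K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
# R, t = np.eye(3), np.array([0.0, 0.0, 0.5])                   # assumed pose
# pts = np.array([[0.05, 0.0, 0.0], [0.0, 0.05, 0.0], [0.0, 0.0, 0.0]])
# print(project_points(pts, K, R, t))  # predicted screen coordinates to compare with the camera image
```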

A Study on the Optimization of color in Digital Printing (디지털 인쇄에 있어서 컬러의 최적화에 관한 연구)

  • Kim, Jae-Hae;Lee, Sung-Hyung;Cho, Ga-Ram;Koo, Chul-Whoi
    • Journal of the Korean Graphic Arts Communication Society / v.26 no.1 / pp.51-64 / 2008
  • In this paper, an experiment was performed in which the input devices (scanner, digital still camera) and monitors (CRT, LCD) were characterized with linear multiple regression and the GOG (Gain-Offset-Gamma) model to perform color transformations. For the color conversion of the digital printer, a LUT (Look-Up Table) with three-dimensional linear interpolation and a tetrahedral interpolation method were used. The results are as follows. For color reproduction on the monitor, the XYZ values obtained from the linear multiple regression of the input device were multiplied by the inverse matrix and passed through the inverse GOG model to obtain monitor RGB values; after this color conversion, most of the resulting patches showed a color difference below 5. For the printer, the XYZ values transmitted from the input device were first converted to LAB, and when the converted LAB values were mapped to CMY using the LUT and tetrahedral interpolation, the color conversion that took the black quantity into account was more accurate.
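
A minimal sketch of the GOG (Gain-Offset-Gamma) monitor model and its inverse as used in the characterization step; the per-channel parameters shown are illustrative assumptions, not values measured in the paper:

```python
# Hypothetical sketch: forward and inverse GOG model for one monitor channel.
import numpy as np

def gog_forward(d, gain, offset, gamma):
    """Digital count (0-255) -> linearized channel value in [0, 1]."""
    x = gain * (d / 255.0) + offset
    return np.clip(x, 0.0, None) ** gamma

def gog_inverse(lin, gain, offset, gamma):
    """Linearized channel value in [0, 1] -> digital count (0-255)."""
    d = (np.clip(lin, 0.0, 1.0) ** (1.0 / gamma) - offset) / gain
    return np.clip(d, 0.0, 1.0) * 255.0

# Assumed parameters for one LCD channel; a full characterization fits (gain, offset, gamma)
# per channel and combines the three linearized channels with a 3x3 matrix to obtain XYZ.
# gain, offset, gamma = 1.02, -0.02, 2.2
# print(gog_inverse(gog_forward(128, gain, offset, gamma), gain, offset, gamma))  # ~128
```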

Advanced JPEG bit rate control for the mobile multimedia device (이동형 멀티미디어 기기를 위한 개선된 JPEG 비트율 조절 알고리즘)

  • Yang, Yoon-Gi;Lee, Chang-Su;Kim, Jin-Yul
    • Journal of Korea Multimedia Society / v.11 no.5 / pp.579-587 / 2008
  • The file sizes of JPEG-compressed images vary with image complexity even when the image dimensions are the same, so it is not easy to estimate how many more images can be stored in the limited storage of a digital camera. To solve this problem, bit rate control is performed by modifying the quantization table. Previous work assumed a linear relation between image activity and the quantization-table modification factor; in this paper, a more accurate functional relation based on statistics is employed to improve the bit rate control accuracy. Computer simulations reveal that the standard deviation of the bit rate error of the proposed scheme is 50% smaller than that of the conventional method.
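
A minimal sketch of quantization-table-based rate control: an image-activity measure is mapped to a scale factor for the 8x8 table, so busier images are quantized more coarsely. The placeholder table and the activity-to-scale mapping are illustrative; the paper replaces a simple linear mapping with a statistically fitted relation.

```python
# Hypothetical sketch: scale a JPEG quantization table according to image activity.
import numpy as np

def image_activity(gray):
    """Simple activity measure: mean absolute gradient of a grayscale image."""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.mean(np.abs(gx) + np.abs(gy)))

def scaled_qtable(base_qtable, activity, target_bpp, k=0.05):
    """Coarser quantization for higher activity or a lower target bit rate."""
    scale = 1.0 + k * activity / max(target_bpp, 1e-6)
    return np.clip(np.round(np.asarray(base_qtable) * scale), 1, 255).astype(np.uint8)

# gray = ...                    # 2-D numpy array of the image to be encoded
# base = np.full((8, 8), 16)    # placeholder; a real encoder starts from the standard tables
# q = scaled_qtable(base, image_activity(gray), target_bpp=1.0)
```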

A Recognition Method of Container ISO-code for Vision & Information System in Harbors (항만 영상정보시스템 구축을 위한 컨테이너 식별자 인식)

  • Koo, Kyung-Mo;Cha, Eui-Young
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2007.06a / pp.721-723 / 2007
  • The size and location of the container in images acquired while containers are being loaded and unloaded in harbors are not fixed, and it is difficult to obtain a good image for recognition because of variations in the external environment, such as the size of the container and where the yard tractor stops. In this paper, we estimate where the container ISO code is located by applying the top-hat transform to real-time images, and we acquire an image suitable for recognizing the ISO code using a pan/tilt/zoom camera. We extract the container ISO code region using the top-hat transform and histogram projection. After binarization, we extract each character from the complex background using labeling, and a BP (backpropagation) network is used to recognize the extracted characters.
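
A minimal sketch of the code-region search using a morphological top-hat transform and a row-wise histogram projection; the kernel size and thresholds are illustrative, not the paper's settings:

```python
# Hypothetical sketch: highlight bright characters with a top-hat transform, then
# locate the text band with a horizontal (row-wise) projection of the binary image.
import cv2
import numpy as np

def find_code_band(gray, kernel_size=(25, 7), thresh=40):
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, kernel_size)
    tophat = cv2.morphologyEx(gray, cv2.MORPH_TOPHAT, kernel)
    _, binary = cv2.threshold(tophat, thresh, 255, cv2.THRESH_BINARY)
    row_proj = binary.sum(axis=1)                          # histogram projection onto rows
    rows = np.where(row_proj > 0.3 * row_proj.max())[0]
    return (int(rows.min()), int(rows.max())) if rows.size else None

# gray = cv2.imread("container.png", cv2.IMREAD_GRAYSCALE)
# band = find_code_band(gray)   # row range likely to contain the ISO-code characters
```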

Automatic 3D Facial Movement Detection from Mirror-reflected Multi-Image for Facial Expression Modeling (거울 투영 이미지를 이용한 3D 얼굴 표정 변화 자동 검출 및 모델링)

  • Kyung, Kyu-Min;Park, Mignon;Hyun, Chang-Ho
    • Proceedings of the KIEE Conference / 2005.05a / pp.113-115 / 2005
  • This paper presents a method for 3D modeling of facial expression from frontal and mirror-reflected multi-images. Since the proposed system uses only one camera, two mirrors, and simple mirror properties, it is robust, accurate, and inexpensive, and it avoids the problem of synchronizing data among different cameras. Mirrors located near the cheeks reflect the side views of markers on the face. To optimize the system, we must select facial feature points closely associated with human emotions; therefore we refer to the FDP (Facial Definition Parameters) and FAP (Facial Animation Parameters) defined by MPEG-4 SNHC (Synthetic/Natural Hybrid Coding). We place colored dot markers on the selected facial feature points to detect facial deformation as the subject makes various expressions. Before computing the 3D coordinates of the extracted facial feature points, we group these points according to the facial part they belong to, which makes the matching process automatic. We experimented on about twenty Korean subjects in their late twenties and early thirties. Finally, we verify the performance of the proposed method by simulating an animation of 3D facial expressions.
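
A minimal sketch of the marker-extraction step: the colored dot markers are segmented with an HSV threshold and blob centroids are taken as 2-D feature points in both the frontal and mirror views; the color range and area filter are illustrative assumptions:

```python
# Hypothetical sketch: segment colored dot markers and return their centroids.
import cv2
import numpy as np

def marker_centroids(bgr, hsv_low=(40, 80, 80), hsv_high=(80, 255, 255), min_area=10):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low, np.uint8), np.array(hsv_high, np.uint8))
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    # label 0 is the background; keep blobs above the area threshold
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]

# frame = cv2.imread("frontal_with_mirrors.png")
# pts = marker_centroids(frame)  # later grouped by facial part and triangulated using the mirror geometry
```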

Header Data Interpreting S/W Design for MSC(Multi-Spectral Camera) image data

  • Kong, Jong-Pil;Heo, Haeng-Pal;Kim, YoungSun;Park, Jong-Euk;Youn, Heong-Sik
    • Proceedings of the KSRS Conference / 2004.10a / pp.436-439 / 2004
  • Output data streams of the MSC contain flags, headers, and image data according to the established protocols and data formats. In particular, the header added to each data line contains a line sync, a line counter, and ancillary data consisting of an ancillary identification bit and one ancillary data byte. This information is used by the ground station to calculate the geographic coordinates of the image and to obtain the on-board time and several EOS (Electro-Optical Subsystem) parameters used at the time of imaging. Therefore, the EGSE (Electrical Ground Supporting Equipment) used for testing the MSC has to interpret and display this header information correctly according to the protocols. This paper describes the design of the header data processing module in the EOS-EGSE. The module provides users with various test functions, such as header validation, ancillary block validation, and line-counter and in-line counter validation checks, which allow convenient and fast testing of imagery data.
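
A minimal sketch of a header-interpreting routine of the kind such a module provides; the 7-byte layout (3-byte line sync, 2-byte line counter, ancillary ID byte, ancillary data byte) and the sync word are assumed for illustration and are not the actual MSC format:

```python
# Hypothetical sketch: parse an assumed per-line header and validate counter continuity.
import struct

SYNC_WORD = 0xFAF320  # assumed 3-byte line-sync pattern, not the real value

def parse_line_header(raw: bytes):
    """Parse an assumed 7-byte header: sync (3 B), line counter (2 B), ancillary ID and data (1 B each)."""
    sync = int.from_bytes(raw[0:3], "big")
    (counter,) = struct.unpack(">H", raw[3:5])
    return {"sync_ok": sync == SYNC_WORD,
            "counter": counter,
            "anc_id": raw[5] & 0x01,
            "anc_data": raw[6]}

def counters_continuous(headers):
    """Line-counter validation: each counter should increment by one (mod 2**16)."""
    return all(b["counter"] == ((a["counter"] + 1) & 0xFFFF)
               for a, b in zip(headers, headers[1:]))
```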

A Study on a Visual Sensor System for Weld Seam Tracking in Robotic GMA Welding (GMA 용접로봇용 용접선 시각 추적 시스템에 관한 연구)

  • 김재웅;김동호
    • Proceedings of the Korean Society of Precision Engineering Conference / 2000.11a / pp.643-646 / 2000
  • In this study, we constructed a preview-sensing visual sensor system for real-time weld seam tracking in GMA welding. The sensor consists of a CCD camera, a band-pass filter, a diode laser with a cylindrical lens, and a vision board for inter-frame processing. We used a commercial robot system that includes a GMA welding machine. To extract the weld seam, we used inter-frame processing on the vision board, which removes the noise caused by spatter and fume in the image. Since the inter-frame processing yields a clean image, the weld seam can be extracted with very simple methods such as the first derivative and the central difference method. We also applied a moving average to the successive weld-seam position data to reduce fluctuation. In experiments, the developed robot system with the visual sensor was able to track the most common weld seams, such as a fillet joint, a V-groove, and a lap joint, whose seams include planar and height-directional variation.
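
A minimal sketch of the inter-frame processing and the moving average filter; the per-pixel minimum used here to suppress spatter and fume transients, and the window length, are illustrative choices, since the abstract does not specify the vision-board operation:

```python
# Hypothetical sketch: combine consecutive frames to drop transient spatter, then
# smooth successive seam positions with a moving average.
import numpy as np

def interframe_clean(prev_frame, curr_frame):
    """Keep only structure present in both frames (e.g., the laser stripe)."""
    return np.minimum(prev_frame, curr_frame)

def moving_average(positions, window=5):
    """Smooth successive weld-seam position measurements."""
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(positions, float), kernel, mode="valid")

# clean = interframe_clean(frame_k_minus_1, frame_k)
# smoothed = moving_average(seam_positions)   # reduces frame-to-frame fluctuation
```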

Implementation of Real-Time Video Transfer System on Android Environment (안드로이드 기반의 실시간 영상전송시스템의 구현)

  • Lee, Kang-Hun;Kim, Dong-Il;Kim, Dae-Ho;Sung, Myung-Yoon;Lee, Young-Kil;Jung, Suk-Yong
    • Journal of the Korea Convergence Society / v.3 no.1 / pp.1-5 / 2012
  • In this paper, we developed a real-time video transfer system based on the Android environment. An Android device with an embedded camera captures images and sends the image frames to a video server, and the video server then forwards the images from the client to a peer client, which is also implemented on Android. The system can send 16 image frames per second without any loss over a 3G mobile network.
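
A minimal sketch of the server's relay role, forwarding length-prefixed JPEG frames from the capturing client to the peer client; the port, the 4-byte framing, and the connection order are assumptions for illustration, since the abstract does not describe the protocol:

```python
# Hypothetical sketch: relay video frames from one client socket to another.
import socket
import struct

def relay(listen_port=5000):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", listen_port))
    srv.listen(2)
    sender, _ = srv.accept()    # capturing Android client connects first (assumed order)
    receiver, _ = srv.accept()  # peer (viewer) client connects second
    while True:
        header = sender.recv(4, socket.MSG_WAITALL)        # 4-byte big-endian frame length
        if len(header) < 4:
            break
        (length,) = struct.unpack(">I", header)
        frame = sender.recv(length, socket.MSG_WAITALL)    # one JPEG frame
        receiver.sendall(header + frame)                   # forward unchanged to the peer

# relay()   # run on the video server; clients send [length][JPEG bytes] repeatedly
```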