• Title/Summary/Keyword: Image Edge

Adaptive Vehicle License Plate Recognition System Using Projected Plane Convolution and Decision Tree Classifier (투영면 컨벌루션과 결정트리를 이용한 상태 적응적 차량번호판 인식 시스템)

  • Lee Eung-Joo;Lee Su Hyun;Kim Sung-Jin
    • Journal of Korea Multimedia Society / v.8 no.11 / pp.1496-1509 / 2005
  • In this paper, an adaptive license plate recognition system that detects and recognizes license plates in real time using projected plane convolution and a decision tree classifier is proposed and tested in environments with complex backgrounds. In general, at expressway tollgates or parking lot entrances, it is difficult to detect and segment license plates because of variations in plate size and vehicle entry angle and because of noise from the CCD camera and the road environment. The proposed algorithm first extracts license plate candidate regions from the real-time input image and then compensates the plate size and the vehicle gradient according to changes in the vehicle's entry position. It detects the license plate precisely using accumulated edges, projected convolution, and chain code labeling, segments the plate characters with adaptive binarization, and recognizes them with a hybrid pattern vector method. Experimental results show that the proposed algorithm can recognize both front and rear license plates in real time against complex backgrounds: the plate detection rates were 98.8% and 96.5%, respectively, and the recognition rates for the segmented characters were 97.3% and 96%, respectively.
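
A minimal sketch of the kind of edge-projection and adaptive-binarization steps described above, assuming an OpenCV/NumPy pipeline; the function names, kernel sizes, and thresholds are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def plate_candidate_rows(gray, smooth=31):
    """Return the row band with the strongest accumulated vertical-edge energy."""
    # Vertical edges dominate in the character strokes of a license plate.
    edges = np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3))
    # Accumulate edge energy row by row and smooth the projection profile,
    # a simple stand-in for the paper's projected-plane convolution step.
    profile = np.convolve(edges.sum(axis=1),
                          np.ones(smooth, np.float32) / smooth, mode="same")
    center = int(profile.argmax())
    half = smooth  # crude band height; tune for the expected plate size
    return max(0, center - half), min(gray.shape[0], center + half)

def binarize_plate(plate_gray):
    """Adaptive binarization of the cropped plate band before character segmentation."""
    return cv2.adaptiveThreshold(plate_gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY_INV, blockSize=21, C=10)

if __name__ == "__main__":
    img = cv2.imread("vehicle.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
    top, bottom = plate_candidate_rows(img)
    characters = binarize_plate(img[top:bottom, :])
```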

DEVELOPMENT OF AN AMPHIBIOUS ROBOT FOR VISUAL INSPECTION OF APR1400 NPP IRWST STRAINER ASSEMBLY

  • Jang, You Hyun;Kim, Jong Seog
    • Nuclear Engineering and Technology / v.46 no.3 / pp.439-446 / 2014
  • An amphibious inspection robot system (AIROS) is being developed to visually inspect the in-containment refueling water storage tank (IRWST) strainers in APR1400 in place of a human diver. Four IRWST strainers are located in the IRWST, which is filled with boric acid water. Each strainer has 108 sub-assembly strainer fin modules that should be inspected with the VT-3 method according to Reg. Guide 1.82 and the operation manual. AIROS has six thrusters for underwater navigation and four legs for walking on top of the strainer. An inverse kinematic algorithm was implemented in the robot controller for exact walking on top of the IRWST strainer. The strainer has several top cross braces protruding from its top to maintain the strainer frame, and these can obstruct walking on the strainer; a robot leg should therefore be placed beside a top cross brace. For this reason, an image processing technique is used to find the top cross brace in the sole camera image: the image is processed in real time with a cross edge detection algorithm to determine whether a brace is present. A 5-DOF robot arm carrying multiple camera modules for simultaneous inspection of both sides can penetrate narrow gaps. For intuitive presentation of the inspection results and for management of the inspection data, inspection images are stored on the control PC together with camera angles and positions so that the images can be synthesized and merged; the synthesized images are then mapped onto a 3D CAD model of the IRWST strainer using the location information. An IRWST strainer mock-up was fabricated to teach the robot arm scanning and gaiting. It is important to arrive at the designated position for inserting the robot arm into all of the gaps, and exact position control without an anchor under water is not easy; the multi-legged design therefore serves both anchoring and positioning. The quadruped design with sole-mounted cameras is a new approach to exact and stable position control on the IRWST strainer, unlike traditional underwater facility inspection robots. The developed robot will be used in practice to enhance the efficiency and reliability of the inspection of nuclear power plant components.
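
The brace detection step lends itself to a short sketch: under the assumption that the top cross brace appears as a long straight edge in the sole-camera frame, Canny edges plus a probabilistic Hough transform can flag its presence. This is an illustrative substitute for the paper's cross edge detection algorithm, not the actual robot code.

```python
import cv2
import numpy as np

def find_brace_lines(frame_bgr, min_len=120):
    """Return line segments long enough to be a top cross brace, or an empty list."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)            # suppress water and particle noise
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=min_len, maxLineGap=10)
    return [] if lines is None else [tuple(l[0]) for l in lines]  # (x1, y1, x2, y2)

# A frame containing any sufficiently long segment is flagged as "brace present",
# telling the leg controller to place the foot beside, not on, the brace.
```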

Optimization Design in Time Domain on Impulse GPIR System (임펄스 GPIR시스템의 시간영역 최적화 설계)

  • Kim, Kwan-Ho;Park, Young-Jin;Yoon, Young-Joong
    • Journal of the Institute of Electronics Engineers of Korea TC / v.46 no.3 / pp.32-39 / 2009
  • In this paper, a time-domain optimization design technique for an impulse ground penetrating image radar (GPIR) is proposed to improve the depth resolution of the system. To this end, a time-domain analysis method for key components such as the impulse generator and the UWB antennas is described, and the parameters of each component are determined by simulation. In particular, by standardizing the impulse signal, the spectral efficiency of the radiated impulse is improved, and a U-shaped planar dipole antenna is developed as the UWB antenna. Equipping the proposed antenna with a parabolic metal reflector shields it from external noise and improves its ability to radiate the input impulse into the ground. In addition, to remove the ringing of the proposed antenna, which seriously degrades system performance, resistors are loaded at the edges of the antenna, and the Tx and Rx UWB antennas are then optimized by time-domain simulation. To image targets buried under the ground, a migration technique is applied; the distortion of the received impulse signals caused by a rough ground surface is reduced with a time-domain noise and distortion reduction technique, and the time resolution is thereby enhanced. To verify the design optimization procedure, a GPIR prototype and an artificial test field were built. Measurement results show that the resolution of the designed system is as good as that of the theoretical model.
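
The migration step mentioned above can be illustrated with a minimal diffraction-summation (back-projection) sketch over a B-scan; the propagation velocity, sampling parameters, and array names are assumptions for illustration, not values from the paper.

```python
import numpy as np

def migrate_bscan(bscan, antenna_x, dt, v=1.0e8, depths=None):
    """bscan: (n_traces, n_samples) A-scans; antenna_x: trace positions in metres;
    dt: sample interval in seconds; v: assumed propagation velocity in the ground."""
    n_traces, n_samples = bscan.shape
    if depths is None:
        depths = np.arange(n_samples) * v * dt / 2.0   # two-way time mapped to depth
    image = np.zeros((len(depths), n_traces))
    for ix, x in enumerate(antenna_x):                 # lateral image position
        for iz, z in enumerate(depths):                # depth image position
            # Two-way travel time from every antenna position to the point (x, z).
            t_idx = np.round(2.0 * np.hypot(antenna_x - x, z) / v / dt).astype(int)
            valid = t_idx < n_samples
            # Sum recorded amplitudes along the diffraction hyperbola of this point.
            image[iz, ix] = bscan[np.nonzero(valid)[0], t_idx[valid]].sum()
    return image
```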

Automated Algorithm for Super Resolution(SR) using Satellite Images (위성영상을 이용한 Super Resolution(SR)을 위한 자동화 알고리즘)

  • Lee, S-Ra-El;Ko, Kyung-Sik;Park, Jong-Won
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.18 no.2 / pp.209-216 / 2018
  • High-resolution satellite imagery is used in diverse fields such as meteorological observation, topographic observation, remote sensing (RS), military facility monitoring, and protection of cultural heritage. Even when images are obtained from the same satellite imaging system, low resolution can result from hardware conditions (e.g., the optical system, satellite operating altitude, or image sensor). Once a satellite is launched, the imaging system cannot be adjusted to improve the resolution of degraded images, so resolution has to be improved using the satellite imagery itself. In this study, a super resolution (SR) algorithm was adopted to improve resolution using such low-resolution satellite imagery. The SR algorithm enhances image resolution by matching multiple low-resolution images. With satellite imagery, however, it is difficult to obtain several images of the same region. To address this problem, this study performed SR after compensating the geometric changes between images through automatic feature point extraction and projective transformation. As a result, edges were reconstructed as clearly as in SR results for which the feature points were obtained manually.
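
The registration step described above (automatic feature point extraction followed by a projective transform) might look roughly like the following OpenCV sketch, which aligns one low-resolution frame to a reference before the frames are combined; it is an illustration under those assumptions, not the authors' code.

```python
import cv2
import numpy as np

def register_to_reference(ref_gray, mov_gray):
    """Warp mov_gray onto ref_gray with an automatically estimated homography."""
    orb = cv2.ORB_create(2000)                             # automatic feature points
    k1, d1 = orb.detectAndCompute(ref_gray, None)
    k2, d2 = orb.detectAndCompute(mov_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d2, d1), key=lambda m: m.distance)[:200]
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # projective transform
    return cv2.warpPerspective(mov_gray, H, ref_gray.shape[::-1])

# Averaging several registered frames, or feeding them to an SR solver, then yields
# sharper edges than any single low-resolution input.
```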

Object Extraction Technique using Extension Search Algorithm based on Bidirectional Stereo Matching (양방향 스테레오 정합 기반 확장탐색 알고리즘을 이용한 물체추출 기법)

  • Choi, Young-Seok;Kim, Seung-Geun;Kang, Hyun-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.45 no.2 / pp.1-9 / 2008
  • In this paper, we propose an enhanced algorithm that extracts object regions from stereo images by combining brightness information with disparity information. An approach using both was studied by Ping and Chaohui: the input image is first segmented using brightness, and the segmented regions are then merged by considering the disparity information within them. Where the brightness of object and background regions is similar, however, a segmented region may contain both object and background, which can lead to incorrect object extraction when merging is performed region by region. To solve this problem, the proposed method performs merging pixel by pixel. In addition, we perform bidirectional stereo matching to improve the reliability of the disparity information and to supplement disparities missed by single-directional matching, and further disparity search is guided by the edge information of the input image. The proposed method extracts objects well because it recovers disparity information that is not extracted by the traditional methods. Finally, we evaluate the method in experiments on images acquired from a real stereoscopic camera.
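
A minimal sketch of the bidirectional idea, using OpenCV block matching and a left-right consistency check so that disparities which disagree between the two matching directions are marked unreliable; the matcher settings and tolerance are illustrative assumptions, not the paper's exact procedure.

```python
import cv2
import numpy as np

def consistent_disparity(left_gray, right_gray, max_disp=64, tol=1.0):
    """Left-image disparity map with pixels failing the left-right check set to -1."""
    flip = lambda a: np.ascontiguousarray(a[:, ::-1])
    matcher = cv2.StereoBM_create(numDisparities=max_disp, blockSize=15)
    disp_l = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    # Matching in the opposite direction by flipping both images horizontally.
    disp_r = flip(matcher.compute(flip(right_gray), flip(left_gray))).astype(np.float32) / 16.0
    h, w = disp_l.shape
    xs = np.tile(np.arange(w), (h, 1))
    xr = np.clip((xs - disp_l).astype(int), 0, w - 1)   # matching column in the right image
    mismatch = np.abs(disp_l - disp_r[np.arange(h)[:, None], xr]) > tol
    disp_l[mismatch | (disp_l <= 0)] = -1               # unreliable or unmatched pixels
    return disp_l
```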

RCS Extraction of Trihedral Corner Reflector for SAR Image Calibration (SAR 영상 보정용 삼각 전파 반사기의 정확한 RCS 추출)

  • Kwon, Soon-Gu;Yoon, Ji-Hyeong;Oh, Yi-Sok
    • The Journal of Korean Institute of Electromagnetic Engineering and Science / v.21 no.9 / pp.979-986 / 2010
  • This paper presents an algorithm for retrieving precise radar cross sections (RCS) of various trihedral corner reflectors (TCRs), which are external calibrators for synthetic aperture radar (SAR) systems. The theoretical RCSs of the TCRs are computed with physical optics (PO), geometrical optics (GO), and physical theory of diffraction (PTD) techniques; that is, the computation includes the single reflections (PO), double reflections (GO-PO), triple reflections (GO-GO-PO), and edge diffractions (PTD) from the TCR. First, we acquire a SAR image of an area in which five TCRs are installed and then extract the RCSs of the TCRs. The RCSs are extracted accurately from the SAR image by adding up, within a square window, the power spill generated by the radar IRF (impulse response function). We compare the extracted RCSs with the theoretical RCSs and analyze the differences between them for various window sizes and various backscattering coefficient levels of the adjacent area. Finally, we propose the minimum size of the integration area and the maximum backscattering coefficient level of the adjacent area.
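
The window-integration idea can be sketched as follows: pixel powers are summed over a square window centred on the TCR peak after subtracting the mean clutter power estimated from a surrounding guard ring; the window size, guard width, and calibration constant are placeholders rather than values from the paper, and the peak is assumed to lie well inside the image.

```python
import numpy as np

def integrated_rcs(intensity, peak_rc, win=16, guard=8, cal_const=1.0, pixel_area=1.0):
    """intensity: SAR image in linear power units; peak_rc: (row, col) of the TCR peak."""
    r, c = peak_rc
    target = intensity[r - win:r + win + 1, c - win:c + win + 1]
    ring = intensity[r - win - guard:r + win + guard + 1,
                     c - win - guard:c + win + guard + 1].astype(float).copy()
    ring[guard:guard + target.shape[0], guard:guard + target.shape[1]] = np.nan
    clutter = np.nanmean(ring)                        # background power per pixel
    energy = (target - clutter).sum() * pixel_area    # power spill added up, clutter removed
    return cal_const * energy                         # absolute RCS still needs system calibration
```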

An Evaluation Method of X-ray Imaging System Resolution for Non-Engineers (비공학도를 위한 X-ray 영상촬영 시스템 해상력 평가 방법)

  • Woo, Jung-Eun;Lee, Yong-Geum;Bae, Seok-Hwan;Kim, Yong-Gwon
    • Journal of radiological science and technology / v.35 no.4 / pp.309-314 / 2012
  • Digital radiography (DR) systems are now widely used in clinical sites and are replacing analog film x-ray imaging systems. The resolution of DR images depends on several factors, such as the characteristic contrast and motion of the object, the focal spot size and the quality of the x-ray beam, x-ray scattering, and the performance of the DR detector (x-ray conversion efficiency and intrinsic resolution). The DR detector is composed of an x-ray capturing element, a coupling element, and a collecting element, which together determine the system resolution. Generally speaking, the resolution of a medical imaging system is its ability to discriminate anatomical structures. The modulation transfer function (MTF) is widely used to quantify the resolution performance of an imaging system. The MTF is defined as the frequency response of the imaging system to a point spread function input and can be obtained by taking the Fourier transform of a line spread function extracted from a test image. In the clinic, radiologic technologists, who are in charge of system maintenance and quality control, have to evaluate and routinely check their imaging systems. However, measuring the MTF accurately is not an easy task for radiologic technologists, who often lack engineering and mathematical backgrounds. The objective of this study is to develop a medical imaging system evaluation tool for radiologic technologists, so that they can measure and quantify system performance easily.
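
As a small worked example of the MTF computation described above, the line spread function extracted from a test image is Fourier-transformed and its normalized modulus is reported against spatial frequency; the array contents and sampling pitch below are assumptions for illustration.

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_pitch_mm=0.1):
    """Return spatial frequencies (cycles/mm) and the normalized MTF of a sampled LSF."""
    lsf = lsf - lsf.min()
    lsf = lsf / lsf.sum()                              # unit area so that MTF(0) = 1
    mtf = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)
    return freqs, mtf / mtf[0]

# Example: a Gaussian LSF with sigma = 0.2 mm sampled at a 0.1 mm pixel pitch.
x = np.arange(-64, 64) * 0.1
freqs, mtf = mtf_from_lsf(np.exp(-x**2 / (2 * 0.2**2)))
```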

Verification of Spatial Resolution in DMC Imagery using Bar Target (Bar 타겟을 이용한 DMC 영상의 공간해상력 검증)

  • Lee, Tae Yun;Lee, Jae One;Yun, Bu Yeol
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.30 no.5 / pp.485-492 / 2012
  • Today, digital airborne imaging sensors play an important role in building much of the National Spatial Data Infrastructure. However, an appropriate quality assessment procedure for the acquired digital images must precede their use as data of high precision and reliability. Many studies have therefore been conducted, at home and abroad, to assess the quality of digital images. In this regard, many test fields have already been established and operated in Europe and America to calibrate digital photogrammetric airborne imaging systems. These test fields contain not only GCPs (ground control points) for testing the geometric performance of a digital camera but also various types of targets for evaluating its spatial and radiometric resolution. The purpose of this paper is to present a method for verifying the spatial resolution of the Intergraph DMC digital camera, together with results based on an experimental field test. In the field test, a simple bar target that is easily identified in the image is used to check the spatial resolution. Images designed for a theoretical GSD (ground sample distance) of 12 cm were used to calculate the actual resolution for all sub-images and virtual images, in the flight direction as well as in the cross-flight direction. The results showed that the actual image resolution was about 0.6 cm worse than the theoretically expected resolution, with the greatest difference, 1.5 cm, found in the image at the block edge.

Robust Vision Based Algorithm for Accident Detection of Crossroad (교차로 사고감지를 위한 강건한 비젼기반 알고리즘)

  • Jeong, Sung-Hwan;Lee, Joon-Whoan
    • The KIPS Transactions:PartB / v.18B no.3 / pp.117-130 / 2011
  • The purpose of this study is to provide a better way to detect accidents at crossroads, including an efficient method for producing background images that accounts for object movement and for preserving and removing candidate accident regions. A prior study proposed using the traffic signal interval at the crossroad to detect accidents, but it may fail to detect an accident if the accident site is occluded by another object. This study adopts inverse perspective mapping to normalize object scale, and proposes methods for producing background images that are robust to surrounding noise, generating candidate accident regions from object movement information, and using edge information to preserve or delete candidate accident regions. To measure the performance of the proposed algorithm, a variety of traffic footage was recorded and used in experiments: rush-hour footage from a DVR installed at the crossroad, accident footage recorded in the daytime, at night, and on rainy days, and footage containing lighting and shadow noise. As a result, accidents were detected in all 20 experimental cases, and the effective accident detection rate averaged 76.9%. In addition, the image processing rate ranged from 10 to 14 frames/sec depending on the size of the detection region, so real-time image processing poses no problem.
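
The inverse perspective mapping step used to normalize object scale can be sketched as follows: four road-plane points in the camera view are mapped to a rectangle in a top-down view, so vehicles keep roughly the same size wherever they appear; the point coordinates and output size are made-up examples, not calibration values from the paper.

```python
import cv2
import numpy as np

def inverse_perspective_map(frame, out_size=(400, 600)):
    """Warp a camera frame of the crossroad to a bird's-eye view."""
    # Four points on the road surface in the camera image (hypothetical values).
    src = np.float32([[420, 320], [860, 320], [1180, 700], [100, 700]])
    w, h = out_size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])   # their top-down destinations
    H = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(frame, H, out_size)

# Background modelling and candidate accident regions are then handled in this
# top-down view, where object scale no longer depends on distance from the camera.
```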

Moving Object Segmentation using Space-oriented Object Boundary Linking and Background Registration (공간기반 객체 외곽선 연결과 배경 저장을 사용한 움직이는 객체 분할)

  • Lee Ho Suk
    • Journal of KIISE:Software and Applications / v.32 no.2 / pp.128-139 / 2005
  • The moving object boundary is very important for moving object segmentation, but the extracted boundary is often broken. We introduce a novel space-oriented boundary linking algorithm to link the broken boundary: it forms a quadrant around a terminating pixel of the broken boundary and searches, within a radius, for another terminating pixel to link to, guaranteeing shortest-distance linking. We also register the background from the image sequence. We construct two object masks, one from the result of boundary linking and the other from the registered background, and use these two complementary masks together for moving object segmentation. We also suppress moving cast shadows using the Roberts gradient operator. The major advantages of the proposed algorithms are more accurate moving object segmentation and, thanks to the two masks, segmentation of objects that have holes in their regions. We evaluate the algorithms on the standard MPEG-4 test sequences and on a real video sequence. The proposed algorithms are very efficient and can process QCIF images at more than 48 fps and CIF images at more than 19 fps on a 2.0 GHz Pentium 4 computer.
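
A small sketch of the Roberts gradient operator mentioned above for cast-shadow suppression: shadow pixels change brightness but carry little texture, so changed pixels with a weak Roberts gradient can be treated as shadow. The threshold value and mask handling are assumptions, not taken from the paper.

```python
import cv2
import numpy as np

def roberts_gradient(gray):
    """Sum of absolute responses of the two 2x2 Roberts cross kernels."""
    g = gray.astype(np.float32)
    g1 = cv2.filter2D(g, -1, np.array([[1, 0], [0, -1]], np.float32))
    g2 = cv2.filter2D(g, -1, np.array([[0, 1], [-1, 0]], np.float32))
    return np.abs(g1) + np.abs(g2)

def suppress_shadow(change_mask, frame_gray, grad_thresh=8.0):
    """Keep changed pixels that also show gradient structure; drop likely shadow pixels."""
    return change_mask & (roberts_gradient(frame_gray) > grad_thresh)
```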