• Title/Summary/Keyword: Image Processing

A Massively Parallel Algorithm for Fuzzy Vector Quantization (퍼지 벡터 양자화를 위한 대규모 병렬 알고리즘)

  • Huynh, Luong Van;Kim, Cheol-Hong;Kim, Jong-Myon
    • The KIPS Transactions:PartA / v.16A no.6 / pp.411-418 / 2009
  • Vector quantization based on fuzzy clustering has been widely used in data compression, since applying fuzzy clustering analysis in the early stages of a vector quantization process makes the process less sensitive to its initialization. However, fuzzy clustering is computationally very intensive because of the complex framework used to quantify the uncertainty in the training vector space. To overcome this computational burden, this paper introduces an array architecture for the implementation of fuzzy vector quantization (FVQ). The array architecture, which consists of 4,096 processing elements (PEs), provides a computationally efficient solution by employing an effective vector assignment strategy during the clustering process. Experimental results indicate that the proposed parallel implementation provides significantly greater performance and efficiency than appropriately scaled alternative array systems. In addition, it provides 1000x greater performance and 100x higher energy efficiency than implementations on today's ARM and TI DSP processors in the same 130nm technology. These results demonstrate the potential of the proposed parallel implementation for improved performance and energy efficiency.
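
For intuition, here is a minimal sequential sketch of the fuzzy c-means clustering loop that such an array architecture would parallelize; the codebook size, fuzziness exponent m, and stopping threshold are illustrative choices, not values from the paper.

```python
# A minimal, sequential sketch of fuzzy c-means vector quantization (FVQ);
# the paper's 4,096-PE array parallelizes this kind of loop across vectors.
import numpy as np

def fuzzy_vq(train_vectors, n_codewords=16, m=2.0, n_iter=50, eps=1e-6):
    """Return a codebook learned by fuzzy c-means clustering."""
    rng = np.random.default_rng(0)
    n, dim = train_vectors.shape
    # Random initial memberships, normalized so each vector's row sums to 1.
    u = rng.random((n, n_codewords))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        # Update codewords as fuzzy-weighted means of the training vectors.
        codebook = (um.T @ train_vectors) / um.sum(axis=0)[:, None]
        # Squared distances from every vector to every codeword.
        d2 = ((train_vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        d2 = np.maximum(d2, eps)
        # Membership update: u_ik proportional to d_ik^(-2/(m-1)).
        inv = d2 ** (-1.0 / (m - 1.0))
        u_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < eps:
            break
        u = u_new
    return codebook
```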

A Study On Memory Optimization for Applying Deep Learning to PC (딥러닝을 PC에 적용하기 위한 메모리 최적화에 관한 연구)

  • Lee, Hee-Yeol;Lee, Seung-Ho
    • Journal of IKEEE / v.21 no.2 / pp.136-141 / 2017
  • In this paper, we propose a memory optimization algorithm for applying deep learning on a PC. The proposed algorithm minimizes memory use and processing time by reducing the amount of computation and data required by a conventional deep learning structure on a general PC. It consists of three steps: constructing the convolution layers from random filters with discriminating power, reducing the data with PCA, and building the CNN structure with an SVM. Because no learning is needed to construct the convolution layers from discriminating random filters, the overall training time is shortened. PCA reduces the memory footprint and computational throughput, and building the CNN structure with an SVM maximizes this reduction. To evaluate the proposed algorithm, we experimented with the Extended Yale B face database. The results show that the proposed algorithm achieves a recognition rate similar to that of the existing CNN algorithm. Based on the algorithm proposed in this paper, we expect that deep learning algorithms with large data and computation demands can be implemented on a general PC.
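
As a rough illustration of the three-step pipeline (fixed random convolution filters, PCA reduction, SVM classifier), here is a hedged Python sketch; the filter count, filter size, PCA dimensionality, and the `extract_features` helper are all hypothetical, not the paper's exact design.

```python
# Sketch: random-filter convolution features -> PCA -> SVM, assuming all
# input images share the same shape (as in Extended Yale B crops).
import numpy as np
from scipy.signal import convolve2d
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
filters = rng.standard_normal((8, 5, 5))  # 8 random 5x5 filters, never trained

def extract_features(images):
    feats = []
    for img in images:
        maps = [np.maximum(convolve2d(img, f, mode="valid"), 0)  # conv + ReLU
                for f in filters]
        # Stride-2 subsampling as a cheap stand-in for pooling, then flatten.
        pooled = [m[::2, ::2] for m in maps]
        feats.append(np.concatenate([p.ravel() for p in pooled]))
    return np.array(feats)

# train_imgs / test_imgs / train_labels are hypothetical dataset arrays.
# X_train = extract_features(train_imgs)
# pca = PCA(n_components=100).fit(X_train)        # step 2: reduce data volume
# clf = SVC(kernel="linear").fit(pca.transform(X_train), train_labels)
# pred = clf.predict(pca.transform(extract_features(test_imgs)))
```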

Reference Frame Memory Compression Using Selective Processing Unit Merging Method (선택적 수행블록 병합을 이용한 참조 영상 메모리 압축 기법)

  • Hong, Soon-Gi;Choe, Yoon-Sik;Kim, Yong-Goo
    • Journal of Broadcast Engineering / v.16 no.2 / pp.339-349 / 2011
  • IBDI (Internal Bit Depth Increase) can significantly improve the coding efficiency of high-definition video compression by increasing the bit depth (precision) of internal arithmetic operations. However, the scheme also increases the internal memory required to store decoded reference frames, which can be significant for higher-definition video content. A reference frame memory compression method has therefore been proposed to reduce this internal memory requirement. The reference memory compression is performed on 4x4 blocks, called processing units, and compresses the decoded image using the correlation of nearby pixel values. This method successfully reduces the reference frame memory while preserving the coding efficiency of IBDI. However, because additional information for each processing unit must also be stored in internal memory, the amount of this side information can limit the effectiveness of the memory compression scheme. To relax this limitation of the previous memory compression scheme, we propose a selective merging-based reference frame memory compression algorithm that dramatically reduces the amount of additional information. Simulation results show that the proposed algorithm incurs much smaller overhead than the previous algorithm while keeping the coding efficiency of IBDI.
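
For intuition only, here is a toy fixed-rate compressor for a single 4x4 processing unit in the same spirit (prediction from a nearby pixel plus quantized residuals); the anchor-pixel predictor and the 3-bit quantizer are illustrative assumptions, not the paper's actual scheme or its merging decision.

```python
# Toy per-unit compressor: one full-precision anchor sample plus coarse
# residuals, assuming 10-bit IBDI samples (range 0..1023).
import numpy as np

def compress_unit(block):                 # block: 4x4 array of IBDI samples
    base = int(block[0, 0])               # store one full-precision anchor
    resid = block.astype(int) - base      # predict every pixel from the anchor
    q = np.clip(np.round(resid / 8), -4, 3).astype(int)  # 3-bit quantizer
    return base, q

def decompress_unit(base, q):
    # Reconstruct with quantization error bounded by the step size.
    return np.clip(base + q * 8, 0, 1023).astype(np.uint16)
```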

2D-to-3D Stereoscopic Conversion: Depth Estimation in Monoscopic Soccer Videos (단일 시점 축구 비디오의 3차원 영상 변환을 위한 깊이지도 생성 방법)

  • Ko, Jae-Seung;Kim, Young-Woo;Jung, Young-Ju;Kim, Chang-Ick
    • Journal of Broadcast Engineering / v.13 no.4 / pp.427-439 / 2008
  • This paper proposes a novel method to convert monoscopic soccer videos to stereoscopic videos. Through a soccer video analysis process, we detect shot boundaries and classify soccer frames as long shots or non-long shots. In the long shot case, the depth map is generated based on the size of the extracted ground region. In the non-long shot case, the shot is further partitioned into three types by considering the number of ground blocks and skin blocks, which are obtained by a simple skin-color detection method. Three different depth assignment methods are then applied, one to each non-long shot type: 1) depth estimation by object region extraction; 2) foreground estimation using the skin blocks, with depth values computed by a Gaussian function; and 3) depth map generation for shots not containing skin blocks. This depth assignment is followed by stereoscopic image generation. Subjective evaluation of the generated depth maps and the corresponding stereoscopic images indicates that the proposed algorithm can yield a sense of depth from single-view images.
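
As an illustration of the Gaussian depth computation in method 2, here is a hedged sketch: pixels near an assumed foreground (skin-block) center receive larger depth values, falling off with a Gaussian. The center location and sigma are hypothetical parameters, not the paper's formulation.

```python
# Gaussian depth assignment sketch: depth peaks at a foreground center and
# decays with distance, producing an 8-bit depth map.
import numpy as np

def gaussian_depth(h, w, center, sigma=40.0):
    ys, xs = np.mgrid[0:h, 0:w]
    d2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    depth = np.exp(-d2 / (2.0 * sigma ** 2))     # 1.0 at center, ~0 far away
    return (255 * depth).astype(np.uint8)        # 8-bit depth map
```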

Measurement of the Phase Fraction of Minor Precipitates in Ni Base Superalloys using Quantitative X-ray Diffraction Technique (정량 x-선 회절분석법을 이용한 니켈기 초내열합급내 미량석출물의 상분율 측정)

  • Kim, S.E.;Cho, C.C.;Hur, B.Y.;Na, Y.S.;Park, N.K.
    • Analytical Science and Technology / v.12 no.3 / pp.235-242 / 1999
  • Precipitates that are neither abundant nor distinguishable on micrographs cannot be measured by point counting or an image analyzer. In this study, the phase fractions of sigma, carbide, and boride, which are important to the mechanical properties of the Ni base superalloy Udimet 720, were measured using a quantitative X-ray diffraction technique combined with electrochemical extraction. The alloys had been exposed at $800^{\circ}C$ for various times up to 3000 hours to vary the amount of the minor precipitates. The amount of sigma increased exponentially with exposure time up to 3000 hours before saturating; it can be argued that precipitation finishes at around 5000 hours, with a maximum sigma content of about 5 wt%. The amounts of $M_{23}C_6$ and $M_3B_2$ remained constant at 0.1~0.5 wt%, regardless of exposure time.
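
The abstract does not spell out the quantification formula; one standard way to convert integrated diffraction peak intensities into weight fractions is the reference intensity ratio (RIR) method, sketched below under that assumption with made-up intensity and RIR values.

```python
# RIR quantification sketch: w_i = (I_i / RIR_i) / sum_j (I_j / RIR_j).
def rir_weight_fractions(intensities, rirs):
    """intensities, rirs: dicts keyed by phase name -> I and RIR values."""
    scaled = {p: intensities[p] / rirs[p] for p in intensities}
    total = sum(scaled.values())
    return {p: 100.0 * s / total for p, s in scaled.items()}  # in wt%

# Example with purely illustrative numbers:
# rir_weight_fractions({"sigma": 120.0, "M23C6": 8.0, "M3B2": 5.0},
#                      {"sigma": 1.4, "M23C6": 2.1, "M3B2": 1.8})
```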

Hardware Design of High Performance In-loop Filter in HEVC Encoder for Ultra HD Video Processing in Real Time (UHD 영상의 실시간 처리를 위한 고성능 HEVC In-loop Filter 부호화기 하드웨어 설계)

  • Im, Jun-seong;Dennis, Gookyi;Ryoo, Kwang-ki
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2015.10a / pp.401-404 / 2015
  • This paper proposes a high-performance in-loop filter in an HEVC (High Efficiency Video Coding) encoder for real-time Ultra HD video processing. HEVC uses an in-loop filter consisting of a deblocking filter and SAO (Sample Adaptive Offset) to address the quantization errors that cause image degradation. In the proposed in-loop filter hardware architecture, the deblocking filter and SAO form a 2-level hybrid pipeline based on the $32{\times}32$ CTU to reduce execution time. The deblocking filter uses a 6-stage pipeline, and the proposed filtering order minimizes memory accesses and simplifies the reference memory structure. The SAO is implemented as a 2-stage pipeline for pixel classification and application of SAO parameters, and it uses two three-layered parallel buffers to simplify pixel processing and reduce operation cycles. The proposed in-loop filter architecture is designed in Verilog HDL and implemented with 205K logic gates in a TSMC 0.13um process. At 110MHz, the proposed in-loop filter encoder can support 4K Ultra HD video encoding at 30fps in real time.
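
For readers unfamiliar with SAO, here is a software sketch of one of its two modes (band offset), the kind of per-pixel classification and offset application the hardware above pipelines; the starting band and offsets would normally be signaled in the bitstream and are illustrative here.

```python
# HEVC SAO band-offset sketch: classify each reconstructed pixel into one of
# 32 equal-width bands (top 5 bits of the sample) and add a signaled offset
# for four consecutive bands.
import numpy as np

def sao_band_offset(pixels, start_band, offsets, bit_depth=8):
    band = pixels >> (bit_depth - 5)            # 32 bands -> top 5 bits
    out = pixels.astype(int)
    for i, off in enumerate(offsets):           # 4 consecutive bands
        out[band == start_band + i] += off
    return np.clip(out, 0, (1 << bit_depth) - 1).astype(pixels.dtype)
```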

Distance Measurement System from Detected Objects within Kinect Depth Sensor's Field of View and Its Applications (키넥트 깊이 측정 센서의 가시 범위 내 감지된 사물의 거리 측정 시스템과 그 응용분야)

  • Niyonsaba, Eric;Jang, Jong-Wook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2017.05a / pp.279-282 / 2017
  • The Kinect depth sensor, a depth camera developed by Microsoft as a natural user interface for games, has become a very useful tool in the computer vision field. In this paper, exploiting the Kinect depth sensor's high frame rate, we developed a distance measurement system using the Kinect camera and tested it for unmanned vehicles, which need vision systems to perceive the surrounding environment as humans do in order to detect objects in their path. The Kinect depth sensor detects objects in its field of view and measures the distance from each object to the vision sensor. Each detected object is checked to determine whether it is a real object or pixel noise, reducing processing time by ignoring pixels that are not part of a real object. Using depth segmentation techniques along with the OpenCV library for image processing, we identify the objects present within the Kinect camera's field of view and measure their distance to the sensor. Tests show promising results: equipped with the Kinect camera as a low-cost range sensor, autonomous vehicles could use this system for further processing, depending on the application, once they come within a certain distance of detected objects.
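
A minimal OpenCV sketch of this depth-segmentation idea follows; the noise-area and range thresholds are illustrative, and `depth_mm` is assumed to be a Kinect depth frame in millimeters.

```python
# Segment a depth frame into connected objects, discard tiny components as
# pixel noise, and report each remaining object's nearest distance.
import cv2
import numpy as np

def object_distances(depth_mm, min_area=500, max_range=4000):
    valid = ((depth_mm > 0) & (depth_mm < max_range)).astype(np.uint8)
    n, labels = cv2.connectedComponents(valid)
    dists = []
    for lbl in range(1, n):                  # label 0 is background
        mask = labels == lbl
        if mask.sum() < min_area:            # ignore pixel noise
            continue
        dists.append(int(depth_mm[mask].min()))  # nearest point, in mm
    return dists
```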

MCBP Neural Network for Efficient Recognition of Tire Classification Code (타이어 분류 코드의 효율적 인식을 위한 MCBP망)

  • Koo, Gun-Seo;O, Hae-Seok
    • The Transactions of the Korea Information Processing Society / v.4 no.2 / pp.465-482 / 1997
  • In this paper, we study the construction of a code-recognition system based on a neural network, using image processing to read the DOT classification code stamped on tire surfaces. Recognizing characters stamped on tires raises several problems: character edges are distorted by diffuse reflection, two adjacent characters can receive the same label, and recognition is very sensitive to illumination. This paper therefore proposes an algorithm for tire codes that accounts for these properties and demonstrates its efficiency through simulation. We also propose the MCBP network, composed of multiple linked recognizers, to efficiently identify the DOT tire classification code. The MCBP network projects each character region onto the X and Y axes, extracts the projection values to delimit each character's region, and normalizes each character to 7$\times$8. Using the MCBP network and a post-processing step that compares results against a DOT code database, we improved the error rate by 3%. With this approach, learning time improved by 60% and the recognition rate rose from 90% to 95% compared with backpropagation; including post-processing, the overall tire recognition rate reached 98%.
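
A sketch of the projection-profile segmentation and 7x8 normalization step described above may help; the input binarization and the minimum-width threshold are assumptions made for the sketch.

```python
# Segment characters by X-axis projection of a binarized code image, then
# normalize each character region to 7x8 for the recognizer.
import cv2
import numpy as np

def segment_and_normalize(binary_img, min_width=3):
    col_profile = binary_img.sum(axis=0)          # X-axis projection
    cols = col_profile > 0                        # columns containing ink
    chars, start = [], None
    for x, on in enumerate(np.append(cols, False)):  # False closes last run
        if on and start is None:
            start = x
        elif not on and start is not None:
            if x - start >= min_width:            # skip specks
                glyph = binary_img[:, start:x]
                chars.append(cv2.resize(glyph, (7, 8)))  # 7x8 normalization
            start = None
    return chars
```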

Current Status of Hyperspectral Data Processing Techniques for Monitoring Coastal Waters (연안해역 모니터링을 위한 초분광영상 처리기법 현황)

  • Kim, Sun-Hwa;Yang, Chan-Su
    • Journal of the Korean Association of Geographic Information Studies / v.18 no.1 / pp.48-63 / 2015
  • In this study, we introduce various hyperspectral data processing techniques for monitoring shallow coastal waters, in order to enlarge the application range and improve the accuracy of end results in Korea. Unlike over land, more accurate atmospheric correction is needed in coastal regions, which show relatively low reflectance at visible wavelengths. Sun glint, which occurs due to the sun-sea surface-sensor geometry, is another issue in processing hyperspectral imagery over the ocean. After preprocessing, a semi-analytical algorithm based on a radiative transfer model and a spectral library can be used for bathymetry mapping in coastal areas, type classification and status monitoring of benthos, or substrate classification. In general, semi-analytical algorithms using the spectral information in hyperspectral imagery show higher accuracy than empirical methods using multispectral data. Water depth and water quality constrain the ocean application of optical data: although a radiative transfer model suggests a theoretical limit of about 25 m in depth for bathymetry and bottom classification, hyperspectral data have in practice been used at depths of up to 10 m in shallow coastal waters. We therefore have to focus on the maximum water depth and the water quality conditions that affect the coastal applicability of hyperspectral data, and on defining a spectral library of coastal waters for classifying types of benthos and substrates.
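
For contrast with the semi-analytical approach the survey emphasizes, here is the well-known empirical band-ratio depth estimator (Stumpf-style) in sketch form; the coefficients m0 and m1 must be fitted to in-situ depth samples, and the values shown are placeholders.

```python
# Empirical band-ratio bathymetry: Z = m1 * ln(n*R_i) / ln(n*R_j) - m0,
# using blue/green water-leaving reflectances.
import numpy as np

def band_ratio_depth(r_blue, r_green, m1=20.0, m0=3.0, n=1000.0):
    """Estimate water depth from blue/green water-leaving reflectances."""
    ratio = np.log(n * r_blue) / np.log(n * r_green)
    return m1 * ratio - m0   # depth in meters once m1, m0 are calibrated
```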

Arctic Sea Ice Motion Measurement Using Time-Series High-Resolution Optical Satellite Images and Feature Tracking Techniques (고해상도 시계열 광학 위성 영상과 특징점 추적 기법을 이용한 북극해 해빙 이동 탐지)

  • Hyun, Chang-Uk;Kim, Hyun-cheol
    • Korean Journal of Remote Sensing / v.34 no.6_2 / pp.1215-1227 / 2018
  • Sea ice motion is an important factor in assessing sea ice change, because the motion affects not only the regional distribution of sea ice but also new ice growth and ice thickness. This study applies multi-temporal high-resolution optical satellite images from Korea Multi-Purpose Satellite-2 (KOMPSAT-2) and Korea Multi-Purpose Satellite-3 (KOMPSAT-3) to measure sea ice motion using the SIFT (Scale-Invariant Feature Transform), SURF (Speeded Up Robust Features), and ORB (Oriented FAST and Rotated BRIEF) feature tracking techniques. To combine satellite images from the two different sensors, spatial and radiometric resolutions were adjusted in preprocessing, and the feature tracking techniques were then applied to the preprocessed images. The matched features extracted by SIFT were evenly distributed across the whole image, whereas the matched features extracted by SURF were concentrated around the boundary between ice and ocean, and this regionally biased distribution was even more prominent for ORB. The processing time of feature tracking decreased in the order of SIFT, SURF, and ORB. Although the number of matched features from ORB decreased to 59.8% of that from SIFT, the processing time decreased to 8.7% of SIFT's, making the ORB technique more suitable for fast measurement of sea ice motion.
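
A minimal OpenCV sketch of ORB-based displacement measurement between two co-registered, time-separated images follows; converting the pixel displacements to physical ice motion still requires scaling by the ground sample distance and the time interval.

```python
# Match ORB features between two grayscale images and return the
# per-feature displacement vectors in pixels.
import cv2
import numpy as np

def ice_motion_vectors(img_t0, img_t1, max_matches=200):
    orb = cv2.ORB_create(nfeatures=2000)
    k0, d0 = orb.detectAndCompute(img_t0, None)
    k1, d1 = orb.detectAndCompute(img_t1, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d0, d1), key=lambda m: m.distance)
    vectors = []
    for m in matches[:max_matches]:          # keep the best matches only
        x0, y0 = k0[m.queryIdx].pt
        x1, y1 = k1[m.trainIdx].pt
        vectors.append((x1 - x0, y1 - y0))   # displacement in pixels
    return np.array(vectors)
```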