• Title/Summary/Keyword: Color Image Data Processing

Inspection System using CIELAB Color Space for the PCB Ball Pad with OSP Surface Finish (OSP 표면처리된 PCB 볼 패드용 CIELAB 색좌표 기반 검사 시스템)

  • Lee, Han-Ju;Kim, Chang-Seok
    • Journal of the Microelectronics and Packaging Society
    • /
    • v.22 no.1
    • /
    • pp.15-19
    • /
    • 2015
  • We demonstrated an inspection system for detecting discoloration of PCB Cu ball pads with an OSP surface finish. Although the OSP surface finish has advantages such as eco-friendliness and low cost, it often shows discoloration caused by heating processes. In this study, the discoloration was analyzed in the device-independent CIELAB color space. First, the PCB samples were imaged with standard lamps and a CCD camera, and the measured data were processed with a LabVIEW program to detect discoloration of the Cu ball pads. From the original PCB sample image, a localized Cu ball pad image was selected by binarization and edge detection to reduce the image size, and it was converted to the device-independent CIELAB color space using a 3×3 conversion matrix. Both the acquisition time and the false acceptance rate were significantly reduced with the proposed inspection system. In addition, the L* and b* values of the CIELAB color space were found to be suitable for inspecting discoloration of the Cu ball pads.
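
The abstract does not give the paper's calibrated conversion matrix, so the following is only a minimal sketch of the usual route: a 3×3 linear-RGB-to-XYZ matrix (sRGB, D65) followed by the standard XYZ-to-CIELAB mapping, plus a placeholder L*/b* discoloration check. The matrix, white point, and thresholds are assumptions, not the paper's values.

    import numpy as np

    # Assumed sRGB (D65) linear-RGB -> XYZ matrix; the paper's calibrated 3x3 matrix may differ.
    RGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                        [0.2126, 0.7152, 0.0722],
                        [0.0193, 0.1192, 0.9505]])
    WHITE_D65 = np.array([0.95047, 1.0, 1.08883])

    def rgb_to_lab(rgb):
        """rgb: (..., 3) array with values in [0, 1]; returns (..., 3) L*, a*, b*."""
        rgb = np.asarray(rgb, dtype=np.float64)
        # Undo the sRGB gamma to obtain linear RGB before applying the 3x3 matrix.
        lin = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
        xyz = lin @ RGB2XYZ.T / WHITE_D65
        # Standard CIELAB nonlinearity.
        eps = (6 / 29) ** 3
        f = np.where(xyz > eps, np.cbrt(xyz), xyz / (3 * (6 / 29) ** 2) + 4 / 29)
        L = 116 * f[..., 1] - 16
        a = 500 * (f[..., 0] - f[..., 1])
        b = 200 * (f[..., 1] - f[..., 2])
        return np.stack([L, a, b], axis=-1)

    def is_discolored(pad_rgb, L_ref=85.0, b_ref=5.0, tol_L=5.0, tol_b=5.0):
        """Compare mean L* and b* of a localized pad image against reference values.

        The reference values and tolerances are placeholders, not the paper's thresholds.
        """
        lab = rgb_to_lab(pad_rgb).reshape(-1, 3)
        return (abs(lab[:, 0].mean() - L_ref) > tol_L) or (abs(lab[:, 2].mean() - b_ref) > tol_b)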

A Study on Data Management Systems for Spatial Assessments of Road Visibilities at Night (야간도로 시인성에 대한 공간적 평가를 위한 자료관리체계 연구)

  • Woo, Hee Sook;Kwon, Kwang Seok;Kim, Byung Guk;Yoon, Chun Joo;Kim, Young Rok
    • Journal of Korean Society for Geospatial Information Science
    • /
    • v.22 no.4
    • /
    • pp.107-115
    • /
    • 2014
  • Road visibility affects safe driving because it determines how well obstacles on the road can be recognized. In this paper, we propose a mobile data acquisition and processing system for evaluating road visibility at night, in which mobile images are converted efficiently and archived for spatial analysis of night-time road visibility. The following techniques were applied to the system: low-power computing units, an open image processing library, GPU-based acceleration, and document database techniques. RGB images were converted to the YUV color system, and the brightness component was integrated with the spatial information. High-performance Android devices were used to collect brightness data on roads, and it was confirmed that the prototype can determine the spatial distribution of brightness, serving as an acquisition and management system for spatial assessments of road visibility at night.
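
As a rough illustration of the RGB-to-YUV conversion and brightness aggregation described above, the sketch below uses the BT.601 coefficients and pairs a per-frame mean luminance with a GPS position. The conversion variant and the record layout for the document database are assumptions, not the paper's schema.

    import numpy as np

    # BT.601 RGB -> YUV matrix (assumed; the paper does not state the exact variant).
    RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                        [-0.147, -0.289,  0.436],
                        [ 0.615, -0.515, -0.100]])

    def rgb_to_yuv(frame_rgb):
        """frame_rgb: (H, W, 3) array in [0, 1]; returns the (H, W, 3) YUV frame."""
        return np.asarray(frame_rgb, dtype=np.float64) @ RGB2YUV.T

    def brightness_record(frame_rgb, lat, lon, timestamp):
        """Build one document combining the Y (brightness) component with position data."""
        y = rgb_to_yuv(frame_rgb)[..., 0]
        return {
            "timestamp": timestamp,
            "position": {"lat": lat, "lon": lon},
            "mean_luminance": float(y.mean()),
            "min_luminance": float(y.min()),
        }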

Real-time Eye Contact System Using a Kinect Depth Camera for Realistic Telepresence (Kinect 깊이 카메라를 이용한 실감 원격 영상회의의 시선 맞춤 시스템)

  • Lee, Sang-Beom;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.4C
    • /
    • pp.277-282
    • /
    • 2012
  • In this paper, we present a real-time eye contact system for realistic telepresence using a Kinect depth camera. In order to generate the eye contact image, we capture a pair of color and depth videos. Then, the foreground single user is separated from the background. Since the raw depth data contain several types of noise, we apply a joint bilateral filtering method. We then apply a discontinuity-adaptive depth filter to the filtered depth map to reduce the disocclusion area. From the color image and the preprocessed depth map, we construct a user mesh model at the virtual viewpoint. The entire system is implemented with GPU-based parallel programming for real-time processing. Experimental results show that the proposed eye contact system realizes eye contact efficiently, providing realistic telepresence.
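
The abstract names joint bilateral filtering of the depth map guided by the color image; the sketch below is a plain NumPy version of that idea (the actual system runs on the GPU), with the window radius and sigma values chosen arbitrarily for illustration.

    import numpy as np

    def joint_bilateral_filter(depth, guide, radius=4, sigma_s=3.0, sigma_r=10.0):
        """Smooth `depth` while preserving edges of the grayscale `guide` image.

        depth, guide: (H, W) float arrays; radius and sigmas are illustrative values.
        """
        H, W = depth.shape
        d = np.pad(depth.astype(np.float64), radius, mode="edge")
        g = np.pad(guide.astype(np.float64), radius, mode="edge")
        gc = g[radius:radius + H, radius:radius + W]            # guide value at the center pixel
        acc = np.zeros((H, W))
        wsum = np.zeros((H, W))
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                ds = d[radius + dy:radius + dy + H, radius + dx:radius + dx + W]
                gs = g[radius + dy:radius + dy + H, radius + dx:radius + dx + W]
                w = (np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))    # spatial weight
                     * np.exp(-((gs - gc) ** 2) / (2 * sigma_r ** 2)))    # color-range weight
                acc += w * ds
                wsum += w
        return acc / wsum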

Proposed TATI Model for Predicting the Traffic Accident Severity (교통사고 심각 정도 예측을 위한 TATI 모델 제안)

  • Choo, Min-Ji;Park, So-Hyun;Park, Young-Ho
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.10 no.8
    • /
    • pp.301-310
    • /
    • 2021
  • The TATI (Traffic Accident Text to RGB Image) model is a methodology proposed in this paper for predicting the severity of traffic accidents. Traffic fatalities are decreasing every year, but Korea still ranks among the lowest of the OECD members. Many studies have been conducted to reduce the death rate of traffic accidents, and among them, studies that predict accident severity in order to reduce the incidence and mortality rates have been carried out steadily. In this regard, research using statistical models and deep learning models to predict the severity of traffic accidents has recently been active. In this paper, the traffic accident dataset is converted to color images to predict the severity of traffic accidents, and the prediction is performed with CNN models. For performance comparison, we train other models on the same data and compare their prediction results with those of the proposed model. Through 10 experiments, we compare the accuracy and error range of four deep learning models. Experimental results show that the accuracy of the proposed model was the highest at 0.85 and its error range of 0.03 was the second lowest, confirming the superiority of its performance.
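
The paper's exact text-to-image encoding is not spelled out in the abstract, so the sketch below only illustrates the general idea under assumed choices: min-max normalize each accident feature and tile the values into a small RGB image that a CNN can consume. The tiling order and image size are illustrative assumptions.

    import numpy as np

    def features_to_rgb_image(row, mins, maxs, side=8):
        """Map one accident record (1-D feature vector) to a (side, side, 3) uint8 image.

        row, mins, maxs: 1-D arrays of equal length; mins/maxs come from the training set.
        """
        row = np.asarray(row, dtype=np.float64)
        scaled = (row - mins) / np.maximum(maxs - mins, 1e-9)    # min-max scale to [0, 1]
        pixels = np.resize(scaled, side * side * 3)              # repeat values to fill the canvas
        return (pixels.reshape(side, side, 3) * 255).astype(np.uint8)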

A CPU-GPU Hybrid System of Environment Perception and 3D Terrain Reconstruction for Unmanned Ground Vehicle

  • Song, Wei;Zou, Shuanghui;Tian, Yifei;Sun, Su;Fong, Simon;Cho, Kyungeun;Qiu, Lvyang
    • Journal of Information Processing Systems
    • /
    • v.14 no.6
    • /
    • pp.1445-1456
    • /
    • 2018
  • Environment perception and three-dimensional (3D) reconstruction tasks are used to provide an unmanned ground vehicle (UGV) with driving awareness interfaces. The speed of obstacle segmentation and surrounding terrain reconstruction crucially influences decision making in UGVs. To increase the processing speed of environment information analysis, we develop a CPU-GPU hybrid system for automatic environment perception and 3D terrain reconstruction based on the integration of multiple sensors. The system consists of three functional modules, namely, multi-sensor data collection and pre-processing, environment perception, and 3D reconstruction. To integrate individual datasets collected from different sensors, the pre-processing function registers the sensed LiDAR (light detection and ranging) point clouds, video sequences, and motion information into a global terrain model after filtering redundant and noisy data according to the redundancy removal principle. In the environment perception module, the registered discrete points are clustered into the ground surface and individual objects by using a ground segmentation method and a connected component labeling algorithm. The estimated ground surface and non-ground objects indicate the terrain to be traversed and the obstacles in the environment, thus creating driving awareness. The 3D reconstruction module calibrates the projection matrix between the mounted LiDAR and cameras to map the local point clouds onto the captured video images. Texture meshes and color particle models are used to reconstruct the ground surface and objects of the 3D terrain model, respectively. To accelerate the proposed system, we apply GPU parallel computation to implement the applied computer graphics and image processing algorithms in parallel.
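
As an illustration of the perception step (ground segmentation followed by connected-component labeling), the sketch below removes ground points by a simple height threshold, rasterizes the remaining LiDAR points onto a 2-D occupancy grid, and labels connected cells. The height threshold and cell size are assumptions; the paper's registration step and GPU implementation are omitted.

    import numpy as np
    from scipy import ndimage

    def segment_obstacles(points, cell=0.2, ground_z=0.3):
        """points: (N, 3) LiDAR points in the vehicle frame (x, y, z in meters).

        Returns a labeled occupancy grid; each positive label is one obstacle cluster.
        """
        non_ground = points[points[:, 2] > ground_z]             # crude ground removal by height
        xy = non_ground[:, :2]
        origin = xy.min(axis=0)
        idx = np.floor((xy - origin) / cell).astype(int)         # grid cell index of each point
        grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
        grid[idx[:, 0], idx[:, 1]] = True                        # 2-D occupancy grid
        labels, n_objects = ndimage.label(grid)                  # connected component labeling
        return labels, n_objects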

Discoloration of teeth due to different intracanal medicaments

  • Afkhami, Farzaneh;Elahy, Sadaf;Nahavandi, Alireza Mahmoudi;Kharazifard, Mohamad Javad;Sooratgar, Aidin
    • Restorative Dentistry and Endodontics
    • /
    • v.44 no.1
    • /
    • pp.10.1-10.11
    • /
    • 2019
  • Objectives: The objective of this study was to assess coronal discoloration induced by the following intracanal medicaments: calcium hydroxide (CH), a mixture of CH paste and chlorhexidine gel (CH/CHX), and triple antibiotic paste (3Mix). Materials and Methods: Seventy extracted single-canal teeth were selected. Access cavities were prepared and each canal was instrumented with a rotary ProTaper system. The specimens were randomly assigned to CH, CH/CHX, and 3Mix paste experimental groups (n = 20 each) or a control group (n = 10). Each experimental group was randomly divided into 2 subgroups (A and B). In subgroup A, medicaments were only applied to the root canals, while in subgroup B, the root canals were completely filled with medicaments and a cotton pellet dipped in medicament was also placed in the pulp chamber. Spectrophotometric readings were obtained from the mid-buccal surface of the tooth crowns immediately after placing the medicaments (T1) and at 1 week (T2), 1 month (T3), and 3 months (T4) after filling. The ΔE was then calculated. Data were analyzed using 2-way analysis of variance (ANOVA), 3-way ANOVA, and the Scheffé post hoc test. Results: The greatest color change (ΔE) was observed at 3 months (p < 0.0001) and in 3Mix subgroup B (p = 0.0057). No significant color change occurred in the CH (p = 0.7865) or CH/CHX (p = 0.1367) groups over time, but the 3Mix group showed a significant ΔE (p = 0.0164). Conclusion: Intracanal medicaments may induce tooth discoloration. Use of 3Mix must be short and it must be carefully applied only to the root canals; the access cavity should be thoroughly cleaned afterwards.
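
The color change reported above is a CIELAB color difference. A minimal sketch of the classical CIE76 formula, ΔE*ab = sqrt((ΔL*)² + (Δa*)² + (Δb*)²), follows; note that the abstract does not state which ΔE variant was used (e.g., CIEDE2000), so this is only the simplest common definition, and the example readings are made up.

    import math

    def delta_e_cie76(lab1, lab2):
        """CIE76 color difference between two (L*, a*, b*) spectrophotometer readings."""
        return math.sqrt(sum((c1 - c2) ** 2 for c1, c2 in zip(lab1, lab2)))

    # Example (hypothetical readings): color change of one specimen between T1 and T4.
    # delta_e_cie76((78.2, 1.5, 14.0), (72.9, 2.1, 18.4))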

Interface of Tele-Task Operation for Automated Cultivation of Watermelon in Greenhouse

  • Kim, S.C.;Hwang, H.
    • Journal of Biosystems Engineering
    • /
    • v.28 no.6
    • /
    • pp.511-516
    • /
    • 2003
  • Computer vision technology has been utilized as one of the most powerful tools to automate various agricultural operations. Though it has demonstrated successful results in various applications, the current state of the technology is still far behind human capability, especially in unstructured and variable task environments. In this paper, a man-machine interactive hybrid decision-making system utilizing the concept of tele-operation is proposed to overcome the limitations of computer image processing and cognitive capability. Tasks of greenhouse watermelon cultivation such as pruning, watering, pesticide application, and harvest require identification of the target object. Identifying watermelons, including their position data, from the field image is very difficult because of the ambiguity among stems, leaves, shades, and fruits, especially when a watermelon is partly covered by leaves or stems. Watermelon identification from the cultivation field image transmitted wirelessly was selected to realize the proposed concept. The system was designed so that the operator (farmer), computer, and machinery share their roles, each contributing its strengths, to accomplish given tasks successfully. The developed system consists of an image monitoring and task control module, a wireless remote image acquisition and data transmission module, and a man-machine interface module. Once a task is selected from the task control and monitoring module, the analog color image signal of the field is captured and transmitted wirelessly to the host computer using an R.F. module. The operator communicates with the computer through a touch-screen interface, and a sequence of algorithms to identify the location and size of the watermelon is then performed based on local image processing. The system showed a practical and feasible way of automating the volatile bio-production process.
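
The abstract only says that "a sequence of algorithms" locates the watermelon in the transmitted color image, so the sketch below is not the paper's method; it shows one simple local-image-processing route (a color rule plus connected-component filtering) under assumed thresholds, purely to make the identification step concrete.

    import numpy as np
    from scipy import ndimage

    def candidate_fruit_regions(rgb, min_area=500):
        """Return bounding boxes of dark, green-dominant blobs in an (H, W, 3) RGB image.

        The color rule and area threshold are illustrative assumptions only.
        """
        r, g, b = (rgb[..., c].astype(np.float64) for c in range(3))
        brightness = (r + g + b) / 3.0
        mask = (g > r) & (g > b) & (brightness < 120)            # crude "dark green" rule
        labels, _ = ndimage.label(mask)
        boxes = []
        for i, sl in enumerate(ndimage.find_objects(labels)):
            if sl is None:
                continue
            area = np.count_nonzero(labels[sl] == i + 1)         # pixel count of this blob
            if area >= min_area:                                 # drop small noise blobs
                boxes.append((sl[0].start, sl[1].start, sl[0].stop, sl[1].stop))
        return boxes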

Simulation of YUV-Aware Instructions for High-Performance, Low-Power Embedded Video Processors (고성능, 저전력 임베디드 비디오 프로세서를 위한 YUV 인식 명령어의 시뮬레이션)

  • Kim, Cheol-Hong;Kim, Jong-Myon
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.13 no.5
    • /
    • pp.252-259
    • /
    • 2007
  • With the rapid development of multimedia applications and wireless communication networks, consumer demand for video-over-wireless capability on mobile computing systems is growing rapidly. In this regard, this paper introduces YUV-aware instructions that enhance performance and efficiency in the processing of color images and video. Traditional multimedia extensions (e.g., MMX, SSE, VIS, and AltiVec) depend solely on generic subword parallelism, whereas the proposed YUV-aware instructions support parallel operations on two packed 16-bit YUV values (6-bit Y, 5-bit U and V) in a 32-bit datapath architecture, providing greater concurrency and efficiency for color image and video processing. Moreover, the reduced data format size lowers system cost. Experimental results on a representative dynamically scheduled embedded superscalar processor show that YUV-aware instructions achieve an average speedup of 3.9x over the baseline superscalar performance. This is in contrast to MMX (a representative Intel multimedia extension), which achieves a speedup of only 2.1x over the same baseline superscalar processor. In addition, YUV-aware instructions outperform MMX instructions in energy reduction (75.8% reduction with YUV-aware instructions versus 54.8% with MMX instructions over the baseline).
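
To make the packed data format concrete, the sketch below packs two 16-bit YUV values (6-bit Y, 5-bit U, 5-bit V) into one 32-bit word and emulates, in software, the kind of saturating subword addition a YUV-aware instruction would perform in hardware. The bit layout (Y in the high bits of each 16-bit pixel) is an assumption; the paper may define the fields differently.

    # Assumed field widths per pixel: Y = 6 bits, U = 5 bits, V = 5 bits.
    Y_MAX, UV_MAX = 0x3F, 0x1F

    def pack_yuv(y, u, v):
        """Pack one pixel into a 16-bit value laid out as [Y(6) | U(5) | V(5)]."""
        return ((y & Y_MAX) << 10) | ((u & UV_MAX) << 5) | (v & UV_MAX)

    def unpack_yuv(p):
        return (p >> 10) & Y_MAX, (p >> 5) & UV_MAX, p & UV_MAX

    def pack_pair(p0, p1):
        """Two 16-bit YUV pixels in one 32-bit word, as in the proposed datapath."""
        return ((p0 & 0xFFFF) << 16) | (p1 & 0xFFFF)

    def yuv_add_pair(word_a, word_b):
        """Emulate a YUV-aware saturating add applied to both packed pixels at once."""
        out = 0
        for shift in (16, 0):                                    # high pixel, then low pixel
            ya, ua, va = unpack_yuv((word_a >> shift) & 0xFFFF)
            yb, ub, vb = unpack_yuv((word_b >> shift) & 0xFFFF)
            y = min(ya + yb, Y_MAX)                              # saturate each subword field
            u = min(ua + ub, UV_MAX)
            v = min(va + vb, UV_MAX)
            out |= pack_yuv(y, u, v) << shift
        return out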

An Experimental Study on the Frequency Characteristics of Cloud Cavitation on Naval Ship Rudder (함정용 방향타에서 발생하는 구름(cloud) 캐비테이션의 주파수 특성에 대한 실험적 연구)

  • Paik, Bu-Geun;Ahn, Jong-Woo;Jeong, Hongseok;Seol, Hanshin;Song, Jae-Yeol;Ko, Yoon-Ho
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.58 no.3
    • /
    • pp.167-174
    • /
    • 2021
  • In this study, the amount and frequency characteristics of cloud cavitation formed on a navy ship rudder were investigated through a cavitation image processing technique and cavitation noise analysis. A high-speed camera with high time resolution was used to observe the cavitation on a full-spade rudder. The deflection angle of the full-spade rudder was set to 8 to 15 degrees so that cloud cavitation was generated on the rudder surface. For images taken at 10^4 fps (frames per second), reference values for detecting cavitation were defined in the RGB (Red, Green, Blue) and HSL (Hue, Saturation, Lightness) color spaces to quantitatively analyze the amount of cavitation. The intrinsic frequency characteristics of cloud cavitation were extracted from the time series of the cavitation amount. The frequency characteristics obtained with the image processing technique matched those found by analyzing the noise signal measured by a hydrophone installed on the hull above the rudder, with the peak in the 30-60 Hz frequency band.
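
The sketch below mimics the two analysis steps described above: threshold each frame to estimate the amount of cavitation, then take the spectrum of that time series to find its dominant frequency. The brightness threshold and the use of a simple grayscale rule (instead of the paper's RGB/HSL reference values) are assumptions.

    import numpy as np

    def cavitation_amount(frames, threshold=200):
        """frames: (T, H, W) grayscale images; returns the pixel count above threshold per frame."""
        return (frames >= threshold).reshape(frames.shape[0], -1).sum(axis=1)

    def dominant_frequency(amount, fps):
        """Peak of the amplitude spectrum of the cavitation-amount time series, in Hz."""
        series = amount - amount.mean()                          # remove the DC component
        spectrum = np.abs(np.fft.rfft(series))
        freqs = np.fft.rfftfreq(series.size, d=1.0 / fps)
        return freqs[np.argmax(spectrum)]

    # Usage, with the frame rate taken as the reported high-speed value:
    # f_peak = dominant_frequency(cavitation_amount(frames), fps=10_000)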

Face Region Extraction using Object Unit Method (객체 단위 방법을 사용한 얼굴 영역 추출)

  • 선영범;김진태;김동욱;이원형
    • Journal of Korea Multimedia Society
    • /
    • v.6 no.6
    • /
    • pp.953-961
    • /
    • 2003
  • This paper suggests an efficient method to extract face regions from a complex background. The input image is transformed into a color space in which the data are independent of brightness, and several regions are extracted using skin color information. Each extracted region is processed as an object. Noise and overlapped objects are removed. Candidate objects that are likely to contain faces are selected by checking the sizes of the extracted objects, their XY ratios, and the distribution ratio of skin colors. In this process, objects without faces are excluded from the candidate regions. The proposed method can be applied to successfully extract face regions under various conditions, such as complex backgrounds, slanted faces, and faces with accessories.
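
A brightness-independent skin-color rule of the kind described above can be sketched as follows: the image is mapped to normalized rg chromaticity, thresholded, and the resulting blobs are filtered by size and aspect ratio. The chromaticity thresholds and size/ratio limits are illustrative assumptions, not the values used in the paper.

    import numpy as np
    from scipy import ndimage

    def face_candidate_boxes(rgb, min_area=400, ratio_range=(0.6, 1.8)):
        """rgb: (H, W, 3) uint8 image; returns bounding boxes of skin-colored candidate objects."""
        rgbf = rgb.astype(np.float64)
        total = rgbf.sum(axis=-1) + 1e-9
        r, g = rgbf[..., 0] / total, rgbf[..., 1] / total        # brightness-independent chromaticity
        skin = (r > 0.36) & (r < 0.47) & (g > 0.27) & (g < 0.36)
        labels, _ = ndimage.label(skin)
        boxes = []
        for i, sl in enumerate(ndimage.find_objects(labels)):
            if sl is None:
                continue
            h, w = sl[0].stop - sl[0].start, sl[1].stop - sl[1].start
            area = np.count_nonzero(labels[sl] == i + 1)
            if area >= min_area and ratio_range[0] <= h / w <= ratio_range[1]:
                boxes.append((sl[0].start, sl[1].start, sl[0].stop, sl[1].stop))
        return boxes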
