• Title/Summary/Keyword: color images

Search Results: 2,715

Analyzing Human's Motion Pattern Using Sensor Fusion in Complex Spatial Environments (복잡행동환경에서의 센서융합기반 행동패턴 분석)

  • Tark, Han-Ho;Jin, Taeseok
    • Journal of the Korean Institute of Intelligent Systems, v.24 no.6, pp.597-602, 2014
  • We propose a hybrid sensing system for human tracking. The system uses laser scanners and image sensors and is applicable to wide, crowded areas such as university hallways. Concretely, tracking is based on the laser scanners, and the image sensors are used for re-identification when the laser scanners lose a person through occlusion, entering a room, or going up stairs. We developed a human identification method for this system, which works as follows: 1. Best-shot images (human images that show a person's features clearly) are obtained with the help of the position and direction data from the laser scanners. 2. Identification is performed by calculating the correlation between the color histograms of best-shot images. Estimating best-shot images makes identification possible even in crowded scenes. An experiment in a station demonstrated the effectiveness of this method.
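The identification step above hinges on comparing color histograms of best-shot images. A minimal sketch of that comparison, assuming simple per-channel RGB histograms and Pearson correlation (the abstract does not specify the exact histogram layout, and the crop images below are synthetic stand-ins):

```python
import numpy as np

def color_histogram(image, bins=8):
    """Per-channel RGB histogram of an H x W x 3 uint8 crop, normalized to unit sum."""
    hist = []
    for c in range(3):
        h, _ = np.histogram(image[:, :, c], bins=bins, range=(0, 256))
        hist.append(h)
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()

def histogram_correlation(h1, h2):
    """Pearson correlation between two histograms (1.0 = identical)."""
    h1 = h1 - h1.mean()
    h2 = h2 - h2.mean()
    return float(np.dot(h1, h2) / (np.linalg.norm(h1) * np.linalg.norm(h2)))

# Synthetic "best-shot" crops: two shots of the same red-clad person,
# and one shot of a different, blue-clad person.
person_a = np.full((32, 16, 3), (200, 30, 30), dtype=np.uint8)
person_a2 = np.full((32, 16, 3), (205, 25, 28), dtype=np.uint8)
person_b = np.full((32, 16, 3), (30, 30, 200), dtype=np.uint8)

same = histogram_correlation(color_histogram(person_a), color_histogram(person_a2))
diff = histogram_correlation(color_histogram(person_a), color_histogram(person_b))
```

Crops of the same person score near 1.0, while a differently dressed person scores much lower, which is what makes the correlation usable as an identity cue.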

The Object Image Detection Method using statistical properties (통계적 특성에 의한 객체 영상 검출방안)

  • Kim, Ji-hong
    • Journal of the Korea Institute of Information and Communication Engineering, v.22 no.7, pp.956-962, 2018
  • As a study of object feature detection in images, we explain methods for identifying tree species in a forest using pictures taken from a drone. Commonly used methods for extracting object features include GLCM (Gray Level Co-occurrence Matrix) and Gabor filters. Because the leaves of a given species are similar, we propose an object extraction method based on the statistical properties of the trees. After extracting sample images from the original images, we detect the objects using cross-correlation between the original image and the sample images. Through this experiment, we found that the mean and standard deviation of the sample images are very important factors for identifying the object. Analysis of the color components of the RGB and HSV models is also used to identify the object.
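The cross-correlation search between sample and original images can be sketched as below. The sliding-window scan with a normalized cross-correlation score is standard; the synthetic "scene" and template are illustrative only, not the paper's drone imagery:

```python
import numpy as np

def normalized_cross_correlation(window, template):
    """Mean-subtracted (normalized) cross-correlation score in [-1, 1]."""
    w = window - window.mean()
    t = template - template.mean()
    denom = np.sqrt((w * w).sum() * (t * t).sum())
    return float((w * t).sum() / denom) if denom else 0.0

def match_template(image, template):
    """Slide the template over a grayscale image; return best score and position."""
    ih, iw = image.shape
    th, tw = template.shape
    best_score, best_pos = -1.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            score = normalized_cross_correlation(image[y:y + th, x:x + tw], template)
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_score, best_pos

# Synthetic grayscale scene with a bright "leaf cluster" starting at (4, 6).
rng = np.random.default_rng(0)
scene = rng.normal(100, 5, (20, 20))
scene[4:8, 6:10] += 80
template = scene[4:8, 6:10].copy()
score, pos = match_template(scene, template)
```

Because the score is mean-subtracted and scale-normalized, the mean and standard deviation of the sample image directly shape the match, echoing the abstract's observation about those statistics.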

Performance Improvement of Tone Compression of HDR Images and Qualitative Evaluations using a Modified iCAM06 Technique (Modified iCAM06 기법을 이용한 HDR 영상의 tone compression 개선과 평가)

  • Jang, Jae-Hoon;Lee, Sung-Hak;Sohng, Kyu-Ik
    • Journal of Korea Multimedia Society, v.12 no.8, pp.1055-1065, 2009
  • High-dynamic-range (HDR) rendering technology compresses the broad dynamic range of real-world scene luminance (up to 9 log units) to the 8-bit dynamic range that is the common output range of displays. Among such techniques, iCAM06 has a superior capacity for rendering HDR images: it makes color appearance predictions for HDR images based on CIECAM02 and incorporates spatial processing models of the human visual system (HVS) for contrast enhancement. However, iCAM06 has several problems, including obscure user-controllable factors that must be set. These factors strongly affect the output image, yet users have difficulty finding adequate values for them. The suggested model therefore gives a quantitative formulation for the user-controllable factors of iCAM06 so that suitable values can be found for different viewing conditions, and it improves the subjective visual quality of displayed images under varying illumination.
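iCAM06 itself combines CIECAM02 appearance prediction with spatial HVS models, which is beyond a short example. The sketch below instead shows the underlying idea of tone compression with a user-controllable exponent, using a Reinhard-style global operator (an assumed stand-in, not the iCAM06 pipeline):

```python
import numpy as np

def tone_compress(luminance, key=0.18, gamma=0.7):
    """Reinhard-style global operator: map HDR luminance onto 8-bit [0, 255].

    key and gamma play the role of user-controllable factors: key sets the
    scene's mid-grey anchor, gamma shapes the final display curve.
    """
    L = np.asarray(luminance, dtype=float)
    Lavg = np.exp(np.mean(np.log(L + 1e-6)))   # log-average luminance (scene key)
    Lm = key * L / Lavg                         # scale scene to the mid-grey anchor
    Ld = Lm / (1.0 + Lm)                        # compress to [0, 1)
    return np.clip(255.0 * Ld ** gamma, 0, 255).astype(np.uint8)

hdr = np.array([1e-2, 1.0, 1e2, 1e4, 1e6])     # ~8 log units of luminance
ldr = tone_compress(hdr)
```

The mapping is monotone, so relative brightness ordering survives the compression; changing `key` or `gamma` shifts how the limited 8-bit budget is distributed, which is exactly the kind of knob the abstract says users struggle to set by hand.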


Semantic Cue based Image Classification using Object Salient Point Modeling (객체 특징점 모델링을 이용한 시멘틱 단서 기반 영상 분류)

  • Park, Sang-Hyuk;Byun, Hye-Ran
    • Journal of KIISE: Computing Practices and Letters, v.16 no.1, pp.85-89, 2010
  • Most images are composed of various objects, each of which conveys its own meaning. Unlike human perception, general computer systems for image processing analyze images based on low-level features such as color, texture, and shape. The semantic gap between low-level image features and the richness of users' semantic knowledge can lead to classification results that fall short of user expectations. To deal with this problem, we propose a semantic cue based image classification method using salient points from objects of interest. The salient points are used to extract low-level features from images and to link them to high-level semantic concepts; they represent distinct semantic information. The proposed algorithm reduces the semantic gap by modeling salient points, enabling classification that is closer to human perception, and it improves the classification accuracy of natural images with respect to the semantic concepts of particular objects. Experimental results show both the high efficiency and the good performance of the proposed method.
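A toy illustration of extracting salient points as the most distinctive pixels of an object region; the paper's actual salient-point detector is not specified in the abstract, so gradient magnitude is used here purely as a stand-in:

```python
import numpy as np

def salient_points(gray, k=5):
    """Return the k pixel coordinates with the largest gradient magnitude."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    idx = np.argsort(mag.ravel())[-k:]          # indices of the k strongest responses
    return [tuple(np.unravel_index(i, mag.shape)) for i in idx]

# A bright square on a dark background: its boundary is where the
# distinctive (high-gradient) points live, not its flat interior.
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
pts = salient_points(img, k=8)
```

Low-level features sampled at such points (rather than over the whole image) are what the method links to high-level semantic concepts.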

Development of PKNU3: A small-format, multi-spectral, aerial photographic system

  • Lee Eun-Khung;Choi Chul-Uong;Suh Yong-Cheol
    • Korean Journal of Remote Sensing, v.20 no.5, pp.337-351, 2004
  • Our laboratory developed the compact, multi-spectral, automatic aerial photographic system PKNU3 to allow greater flexibility in geological and environmental data collection. The PKNU3 system consists of a color-infrared spectral camera capable of simultaneous photography in the visible and near-infrared bands; a thermal infrared camera; two computers, each with 80 gigabytes of storage for images; an MPEG board that can compress and transfer data to the computers in real time; and a helicopter platform. Before actual aerial photographic testing of PKNU3, we tested each sensor, analyzing the lens distortion, the sensitivity of the CCD in each band, and the thermal response of the thermal infrared sensor. As of September 2004, PKNU3 development had reached its second phase of testing. In two aerial photographic tests, R, G, B, and IR images were taken simultaneously, and images with a 70% overlap rate were obtained using automatic recording at 1-s intervals. Further study is warranted to enhance the system with gyroscope and IMU units. We evaluated the PKNU3 system as a method of environmental remote sensing by comparing chlorophyll images derived from PKNU3 photographs. This assessment was supported by an existing study that found a modest linear fit between chlorophyll measurements and the RVI, NDVI, and SAVI images derived from photographs taken by a Duncantech MS3100, which has the same spectral configuration as the MS4000 used in the PKNU3 system.
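The vegetation indices mentioned (RVI, NDVI) have standard definitions in terms of the red and near-infrared bands that PKNU3 captures simultaneously. A minimal sketch with invented pixel values:

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - R) / (NIR + R), in [-1, 1]."""
    red = red.astype(float)
    nir = nir.astype(float)
    return (nir - red) / (nir + red + 1e-9)

def rvi(red, nir):
    """Ratio Vegetation Index: NIR / R."""
    return nir.astype(float) / (red.astype(float) + 1e-9)

# Healthy vegetation reflects strongly in NIR and weakly in red;
# the first pixel is vegetated, the second is bare soil (illustrative values).
red = np.array([[30.0, 120.0]])
nir = np.array([[170.0, 130.0]])
v = ndvi(red, nir)
r = rvi(red, nir)
```

Both indices rank the vegetated pixel above the bare-soil pixel, which is why they correlate with chlorophyll measurements in the comparison the abstract cites.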

Comparison of Clinical Characteristics of Fluorescence in Quantitative Light-Induced Fluorescence Images according to the Maturation Level of Dental Plaque

  • Jung, Eun-Ha;Oh, Hye-Young
    • Journal of Dental Hygiene Science, v.21 no.4, pp.219-226, 2021
  • Background: Proper detection and management of dental plaque are essential for individual oral health. We aimed to evaluate the maturation level of dental plaque using a two-tone disclosing agent and to compare it with the fluorescence of dental plaque in quantitative light-induced fluorescence (QLF) images, to obtain primary data for the development of a new dental plaque scoring system. Methods: Twenty-eight subjects who consented to participate after understanding the purpose of the study were screened. Images of the anterior teeth were obtained using a QLF device. Subsequently, dental plaque was stained with a two-tone disclosing solution and photographed with a digital single-lens reflex (DSLR) camera. Staining scores were assigned as follows: 0 for no staining, 1 for pink staining, and 2 for blue staining. Marked points on the DSLR images were selected for RGB color analysis. The relationship between dental plaque maturation and the red/green (R/G) ratio was evaluated using Spearman's rank correlation. Additionally, differences in red fluorescence values according to dental plaque accumulation were assessed using one-way analysis of variance followed by Scheffe's post-hoc test to identify statistically significant differences between groups. Results: A comparison of the intensity of red fluorescence according to the maturation of the two-tone-stained dental plaque confirmed that the R/G ratio in the QLF images was higher with greater plaque maturation (p<0.001). Correlation analysis between the stained dental plaque and the red fluorescence intensity in the QLF images confirmed an excellent positive correlation (p<0.001). Conclusion: A new plaque scoring system can be developed based on these results, which may also help with dental plaque management in the clinical setting.
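The R/G-ratio and Spearman analysis can be sketched as follows. The sample points are hypothetical (invented staining scores and RGB readings, not the study's data), and the rank correlation here handles tied scores with average ranks:

```python
import numpy as np

def average_ranks(x):
    """Rank the data, assigning tied values the average of their ranks."""
    order = np.argsort(x)
    ranks = np.empty(len(x))
    ranks[order] = np.arange(len(x), dtype=float)
    for v in np.unique(x):                 # replace tied ranks with their mean
        mask = x == v
        ranks[mask] = ranks[mask].mean()
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = average_ranks(x), average_ranks(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(np.dot(rx, ry) / (np.linalg.norm(rx) * np.linalg.norm(ry)))

def rg_ratio(r, g):
    """Red/green intensity ratio at a sampled point."""
    return r / (g + 1e-9)

# Hypothetical points: two-tone staining score (0/1/2) at each point,
# and the R and G intensities measured at the matching QLF-image point.
scores = np.array([0, 0, 1, 1, 2, 2], dtype=float)
red = np.array([90.0, 95.0, 120.0, 130.0, 160.0, 180.0])
green = np.array([100.0, 95.0, 92.0, 87.0, 84.0, 75.0])
rho = spearman_rho(scores, rg_ratio(red, green))
```

With R/G rising monotonically across the maturation scores, the rank correlation comes out strongly positive, mirroring the study's reported relationship.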

Adversarial Example Detection Based on Symbolic Representation of Image (이미지의 Symbolic Representation 기반 적대적 예제 탐지 방법)

  • Park, Sohee;Kim, Seungjoo;Yoon, Hayeon;Choi, Daeseon
    • Journal of the Korea Institute of Information Security & Cryptology, v.32 no.5, pp.975-986, 2022
  • Deep learning has attracted great attention for its excellent performance in image processing, but it is vulnerable to adversarial attacks, which cause a model to misclassify through perturbations of the input data. Adversarial examples generated by such attacks are minimally perturbed, making them difficult to identify, so the visual features of the images are generally unchanged. Unlike deep learning models, people are not fooled by adversarial examples, because they classify images based on those visual features. This paper proposes an adversarial attack detection method using Symbolic Representation, i.e., visual, symbolic features of an image such as color and shape. We detect adversarial examples by comparing the Symbolic Representation derived from the model's classification result for an input image with the Symbolic Representation extracted from the input image itself. Measuring performance on adversarial examples generated by various attack methods, we found that detection rates differed depending on the attack target and method, but reached up to 99.02% for a specific targeted attack.
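A heavily simplified toy version of this detection idea: compare a symbolic feature extracted directly from the pixels with the feature expected for the model's predicted class. Only dominant color is used here, and the class/attribute table is invented for illustration; the paper's Symbolic Representation also covers shape and other features:

```python
import numpy as np

# Hypothetical attribute table: expected dominant color channel per class
# (index 0/1/2 = R/G/B). A real table would hold richer symbolic features.
EXPECTED_COLOR = {"stop_sign": 0, "grass": 1, "sky": 2}

def dominant_channel(image):
    """Symbolic color feature extracted from the pixels, not from the model."""
    return int(np.argmax(image.reshape(-1, 3).mean(axis=0)))

def is_adversarial(image, predicted_label):
    """Flag a mismatch between the model's label and the pixel-level symbols."""
    return EXPECTED_COLOR[predicted_label] != dominant_channel(image)

red_image = np.zeros((8, 8, 3))
red_image[:, :, 0] = 200.0                     # a predominantly red image

clean = is_adversarial(red_image, "stop_sign")  # color agrees with the label
attacked = is_adversarial(red_image, "sky")     # adversarial flip: label says blue
```

Because the adversarial perturbation barely changes the pixels, the pixel-derived symbols stay consistent with the *true* class, so a flipped model prediction produces a detectable mismatch.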

Virtual Fitting System Using Deep Learning Methodology: HR-VITON Based on Weight Sharing, Mixed Precison & Gradient Accumulation (딥러닝 의류 가상 합성 모델 연구: 가중치 공유 & 학습 최적화 기반 HR-VITON 기법 활용)

  • Lee, Hyun Sang;Oh, Se Hwan;Ha, Sung Ho
    • The Journal of Information Systems, v.31 no.4, pp.145-160, 2022
  • Purpose: The purpose of this study is to develop a virtual try-on deep learning model that can efficiently learn from front and back clothing images, in the expectation that virtual try-on services will be revitalized in the fashion and textile industry. Design/methodology/approach: The study used 232,355 clothing and product images. The image data input to the model are divided into five categories: the original clothing image, the wearer image, the clothing segmentation, the wearer's DensePose body heatmap, and the wearer's clothing-agnostic representation. We advanced the HR-VITON model using mixed precision, gradient accumulation, and shared model weights. Findings: We demonstrated that the weight-shared MP-GA HR-VITON model can efficiently learn front and back fashion images. The proposed model quantitatively improves the quality of the generated images compared with the existing technique, and natural fitting is possible for both front and back views: SSIM was 0.8385 for CP-VTON versus 0.9204 for the proposed model, LPIPS 0.2133 versus 0.0642, FID 74.5421 versus 11.8463, and KID 0.064 versus 0.006. The proposed model fits single-color clothes naturally, but complex pictures and logos, as shown in <Figure 6>, produced unnatural patterns in the generated image. Advancing the model with a transformer-based architecture may resolve this problem.
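Mixed precision and gradient accumulation are general training optimizations rather than HR-VITON specifics. The numpy sketch below illustrates both on a toy linear model (not the paper's network): a half-precision forward pass against a float32 master weight, and one weight update per several accumulated micro-batches:

```python
import numpy as np

# Toy regression target y = 3x, trained with gradient accumulation and a
# float16 forward pass over float32 master weights (mixed precision in miniature).
rng = np.random.default_rng(1)
x = rng.normal(size=(64, 1)).astype(np.float32)
y = 3.0 * x

w = np.zeros(1, dtype=np.float32)     # float32 master weight
lr, accum_steps, micro = 0.1, 4, 16   # 4 micro-batches of 16 per update

for epoch in range(50):
    grad_accum = np.zeros_like(w)
    for step in range(accum_steps):
        xb = x[step * micro:(step + 1) * micro]
        yb = y[step * micro:(step + 1) * micro]
        # Forward pass in float16 (saves memory on real models), cast back up.
        pred = (xb.astype(np.float16) * w.astype(np.float16)).astype(np.float32)
        grad = 2.0 * np.mean((pred - yb) * xb, axis=0)   # d(MSE)/dw
        grad_accum += grad / accum_steps                  # accumulate, don't update yet
    w -= lr * grad_accum                                  # single update per "big batch"
```

Accumulation emulates a large effective batch without holding it in memory at once, and keeping the master weight in float32 preserves update precision despite the low-precision forward pass, which is how these tricks let a large model like HR-VITON train under tighter memory budgets.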

Development of Deep Learning AI Model and RGB Imagery Analysis Using Pre-sieved Soil (입경 분류된 토양의 RGB 영상 분석 및 딥러닝 기법을 활용한 AI 모델 개발)

  • Kim, Dongseok;Song, Jisu;Jeong, Eunji;Hwang, Hyunjung;Park, Jaesung
    • Journal of The Korean Society of Agricultural Engineers, v.66 no.4, pp.27-39, 2024
  • Soil texture is determined by the proportions of sand, silt, and clay within the soil, which influence characteristics such as porosity, water retention capacity, electrical conductivity (EC), and pH. Traditional classification of soil texture requires significant sample preparation, including oven drying to remove organic matter and moisture, a process that is both time-consuming and costly. This study explores an alternative method by developing an AI model capable of predicting soil texture from images of pre-sieved soil samples using computer vision and deep learning technologies. Soil samples collected from agricultural fields were pre-processed using sieve analysis, and images of each sample were acquired in a controlled studio environment using a smartphone camera. Color distribution ratios based on the RGB values of the images were analyzed using the OpenCV library in Python. A convolutional neural network (CNN) model, built on PyTorch, was enhanced with digital image processing (DIP) techniques and then trained under nine distinct conditions to evaluate its robustness and accuracy. The model achieved over 80% accuracy in classifying the images of pre-sieved soil samples, as validated by confusion-matrix components and F1 scores, demonstrating its potential to replace traditional experimental methods for soil texture classification. Because it relies on an easily accessible tool, significant time and cost savings can be expected compared to traditional methods.
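The RGB color-distribution analysis can be sketched as per-channel intensity ratios over an image crop. The soil colors below are invented for illustration (the study's actual samples and their colors are not given in the abstract):

```python
import numpy as np

def rgb_ratios(image):
    """Fraction of total intensity contributed by each of the R, G, B channels."""
    sums = image.reshape(-1, 3).sum(axis=0).astype(float)
    return sums / sums.sum()

# Hypothetical studio crops of two sieved samples: a lighter sandy soil
# and a darker, redder clay-rich soil (illustrative colors only).
sandy = np.full((10, 10, 3), (180, 160, 140), dtype=np.uint8)
clayey = np.full((10, 10, 3), (140, 90, 70), dtype=np.uint8)
r_sandy = rgb_ratios(sandy)
r_clay = rgb_ratios(clayey)
```

The ratios sum to 1 per image, giving an illumination-tolerant color signature; features like these can accompany the raw pixels as CNN input or serve as a sanity check on the classifier.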

Development of the Multi-Parametric Mapping Software Based on Functional Maps to Determine the Clinical Target Volumes (임상표적체적 결정을 위한 기능 영상 기반 생물학적 인자 맵핑 소프트웨어 개발)

  • Park, Ji-Yeon;Jung, Won-Gyun;Lee, Jeong-Woo;Lee, Kyoung-Nam;Ahn, Kook-Jin;Hong, Se-Mie;Juh, Ra-Hyeong;Choe, Bo-Young;Suh, Tae-Suk
    • Progress in Medical Physics, v.21 no.2, pp.153-164, 2010
  • To determine clinical target volumes that account for the vascularity and cellularity of tumors, software was developed for mapping the analyzed biological clinical target volumes onto anatomical images using regional cerebral blood volume (rCBV) maps and apparent diffusion coefficient (ADC) maps. The program provides integrated registration functions using mutual information, affine transforms, and non-rigid registration. Registration accuracy is evaluated by calculating the overlap ratio of segmented bone regions and the average contour distance between reference and registered images. The performance of the developed software was tested on multimodal images of a patient with residual high-grade glioma. The bone segmentation and contour extraction evaluations yielded a registration accuracy of about 74% and an average contour distance of 2.3 mm; accuracy could be improved by up to a further 4% using the manual adjustment functions. Advanced MR images are analyzed using color maps for the rCBV maps and region-of-interest (ROI) based quantitative calculation for the ADC maps. Multiple parameters at the same voxels are then plotted on a plane to form multi-functional parametric maps whose x and y axes represent rCBV and ADC values. According to the distributions of these functional parameters, tumor regions showing higher vascularity and cellularity are categorized using criteria corresponding to malignant gliomas. The determined volumes, which reflect the pathological and physiological characteristics of the tumor, are marked on the anatomical images. By applying multi-functional images, errors arising from using a single image type can be reduced, and local regions with a higher probability of containing tumor cells can be delineated for radiation treatment planning. Biological tumor characteristics can thus be expressed through image registration and multi-functional parametric maps in the developed software, which can be used to delineate clinical target volumes from advanced MR images together with anatomical images.
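The overlap ratio of segmented bone regions used to score registration can be computed as a Dice coefficient; the abstract does not state its exact formula, so Dice is an assumed but common choice, and the masks below are synthetic:

```python
import numpy as np

def overlap_ratio(mask_a, mask_b):
    """Dice coefficient of two binary segmentations: 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

# Synthetic bone masks: the registered mask is shifted down by one pixel,
# mimicking a small residual registration error.
ref = np.zeros((20, 20), dtype=bool)
ref[5:15, 5:15] = True
reg = np.zeros((20, 20), dtype=bool)
reg[6:16, 5:15] = True

score = overlap_ratio(ref, reg)
```

A one-pixel shift of a 10x10 region leaves 90 of 100 pixels overlapping, so the score lands at 0.9; perfect alignment would give 1.0, which makes the ratio a convenient registration-quality scale.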