• Title/Summary/Keyword: color images

Search results: 2,715

Simple image artifact removal technique for more accurate iris diagnosis

  • Kim, Jeong-lae; Kim, Soon Bae; Jung, Hae Ri; Lee, Woo-cheol; Jeong, Hyun-Woo
    • International journal of advanced smart convergence, v.7 no.4, pp.169-173, 2018
  • Iris diagnosis based on color and texture information is a novel approach that can reflect the current state of a particular organ inside the body or the overall health condition of a person. In the analysis of iris images, critical image artifacts can prevent proper interpretation of the iris textures. Here, we developed an iris diagnosis system based on a hand-held imaging probe consisting of a single 8-megapixel camera sensor module, two pairs of 400~700 nm LEDs, and a guide beam. Two original images with different light-noise patterns were acquired in succession, and a light-noise-free image was then reconstructed and demonstrated using the proposed artifact removal approach.
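
The abstract does not state the exact fusion rule used to reconstruct the noise-free image. A minimal sketch, assuming the light artifacts appear as bright specular glints at different positions in the two alternately illuminated frames, is a pixel-wise minimum over the pair (the function name and toy data below are hypothetical):

```python
import numpy as np

def remove_light_artifacts(img_a, img_b):
    """Fuse two frames of the same iris taken under alternating LED
    illumination. If glints are bright and land at different positions
    in each frame, the pixel-wise minimum keeps the glint-free value
    at every location (assumed fusion rule, not the paper's exact one)."""
    return np.minimum(img_a, img_b)

# toy example: a 4x4 "iris" with a glint at a different spot per frame
base = np.full((4, 4), 100, dtype=np.uint8)
frame_a = base.copy(); frame_a[1, 1] = 255   # glint from LED pair A
frame_b = base.copy(); frame_b[2, 3] = 255   # glint from LED pair B
clean = remove_light_artifacts(frame_a, frame_b)
print(clean.max())  # 100: both glints removed
```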

Attention-based for Multiscale Fusion Underwater Image Enhancement

  • Huang, Zhixiong; Li, Jinjiang; Hua, Zhen
    • KSII Transactions on Internet and Information Systems (TIIS), v.16 no.2, pp.544-564, 2022
  • Underwater images often suffer from color distortion, blurring, and low contrast, caused by two processes that affect light as it propagates underwater: absorption and scattering. To cope with the poor quality of underwater images, this paper proposes a multiscale fusion underwater image enhancement method based on a channel attention mechanism and the local binary pattern (LBP). The network consists of three modules: feature aggregation, image reconstruction, and LBP enhancement. The feature aggregation module aggregates feature information at different scales of the image, and the image reconstruction module restores the output features to high-quality underwater images. The network also introduces a channel attention mechanism so that it attends more to the channels containing important information. Detail information is preserved by superimposing it on the feature information in real time. Experimental results demonstrate that the proposed method produces results with correct colors and complete details, and outperforms existing methods on quantitative metrics.
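
The LBP enhancement module builds on the standard local binary pattern descriptor. As a reference for how basic LBP encodes texture (independent of the paper's network, which is not reproduced here), a minimal NumPy sketch:

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour local binary pattern: each interior pixel is
    encoded by thresholding its 8 neighbours against the centre value,
    one bit per neighbour in clockwise order."""
    h, w = gray.shape
    centre = gray[1:h-1, 1:w-1]
    offsets = [(-1,-1), (-1,0), (-1,1), (0,1), (1,1), (1,0), (1,-1), (0,-1)]
    out = np.zeros((h - 2, w - 2), dtype=np.int32)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = gray[1+dy:h-1+dy, 1+dx:w-1+dx]
        out += (neigh >= centre).astype(np.int32) << bit
    return out.astype(np.uint8)

# a flat patch has all neighbours equal to the centre, so every bit is set
print(lbp_image(np.full((3, 3), 7, dtype=np.uint8)))  # [[255]]
```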

Development and Implementation of a YOLOv5-based Adhesive Application Defect Detection Algorithm

  • Jung-kyu Park; Doo-Hyun Choi
    • Journal of Sensor Science and Technology, v.33 no.6, pp.510-515, 2024
  • This study investigated the use of YOLOv5 for defect detection in transparent adhesives, comparing two distinct training methods: one without preprocessing and another incorporating edge-operator preprocessing. In the first approach, the original color images were labeled in various ways and trained without transformation. This method failed to distinguish between images with properly applied adhesive and those exhibiting adhesive application defects. The factors contributing to the reduced learning performance were analyzed using histogram comparison and template matching, with performance validated by maximum similarity measurements quantified by Intersection over Union values. Conversely, the preprocessing method transformed the original images using edge operators before training. The experiments confirmed that the Canny edge detection operator was particularly effective for detecting adhesive application defects and proved most suitable for real-time defect detection.
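
The paper's preprocessing uses the full Canny operator. As a rough illustration of edge-operator preprocessing only, here is a simplified gradient-magnitude-and-threshold sketch; it omits Canny's Gaussian smoothing, non-maximum suppression, and hysteresis steps, and the threshold value is arbitrary:

```python
import numpy as np

def sobel_edges(gray, thresh=100):
    """Simplified edge preprocessing: Sobel gradient magnitude followed
    by a fixed threshold (a stand-in for the full Canny pipeline).
    Kernels are applied via explicit array shifts to avoid a SciPy
    dependency; output is a binary 0/255 edge map."""
    g = gray.astype(np.float64)
    gx = (g[:-2, 2:] + 2*g[1:-1, 2:] + g[2:, 2:]
          - g[:-2, :-2] - 2*g[1:-1, :-2] - g[2:, :-2])
    gy = (g[2:, :-2] + 2*g[2:, 1:-1] + g[2:, 2:]
          - g[:-2, :-2] - 2*g[:-2, 1:-1] - g[:-2, 2:])
    mag = np.hypot(gx, gy)
    return (mag > thresh).astype(np.uint8) * 255

# toy check: a vertical step edge is detected at the step, not elsewhere
gray = np.zeros((5, 8)); gray[:, 4:] = 200
print(sobel_edges(gray)[0])  # edge pixels only around column 4
```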

Color-related Query Processing for Intelligent E-Commerce Search

  • Hong, Jung A; Koo, Kyo Jung; Cha, Ji Won; Seo, Ah Jeong; Yeo, Un Yeong; Kim, Jong Woo
    • Journal of Intelligence and Information Systems, v.25 no.1, pp.109-125, 2019
  • As interest in intelligent search engines increases, various studies have been conducted to extract and utilize product features intelligently. In particular, when users search for goods in e-commerce search engines, the 'color' of a product is an important descriptive feature. It is therefore necessary to handle synonyms of color terms in order to produce accurate results for users' color-related queries. Previous studies have suggested dictionary-based approaches to processing synonyms for color features. However, the dictionary-based approach cannot handle color-related terms in user queries that are not registered in the dictionary. To overcome this limitation of conventional methods, this research proposes a model that extracts RGB values from an internet search engine in real time and outputs similar color names based on the designated color information. First, a color term dictionary was constructed that includes color names and the R, G, B values of each color, drawn from the Korean color standard digital palette program and the Wikipedia color list for basic color search. The dictionary was made more robust by adding 138 color names transliterated into Korean from English color names, with their corresponding RGB values; the final color dictionary thus includes a total of 671 color names and corresponding RGB values. The proposed method starts by taking the specific color a user searched for and checking whether it is present in the built-in color dictionary. If the color exists in the dictionary, its RGB values there are used as the reference values of the retrieved color. If the searched color does not exist in the dictionary, the top-5 Google image search results for the color are crawled and average RGB values are extracted from a central area of each image.
To extract the RGB values from images, a variety of approaches were attempted, since simply averaging the RGB values of the center area of an image has limits. Clustering the RGB values in a certain area of the image and taking the average of the densest cluster as the reference values showed the best performance. Based on the reference RGB values of the searched color, the RGB values of all the colors in the previously constructed dictionary are compared, and a candidate list is created of the colors within a range of ${\pm}50$ on each of the R, G, and B values. Finally, using the Euclidean distance between these candidates and the reference RGB values of the searched color, the most similar of up to five colors becomes the final outcome. To evaluate the usefulness of the proposed method, an experiment was performed in which 300 color names and their corresponding RGB values were collected via questionnaires and used to compare the RGB values obtained from four different methods, including the proposed one. The average Euclidean distance in CIE-Lab space using our method was about 13.85, a relatively low distance compared with 30.88 when using the synonym dictionary only and 30.38 when using the dictionary together with the Korean synonym website WordNet. Omitting the clustering step of the proposed method yielded an average Euclidean distance of 13.88, which implies that the DBSCAN clustering of the proposed method can reduce the Euclidean distance. This research suggests a new color synonym processing method based on RGB values that combines the dictionary method with real-time synonym processing for new color names, thereby overcoming the limitation of the conventional dictionary-based approach.
This research can contribute to improving the intelligence of e-commerce search systems, especially their color search features.
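
The final matching steps described in the abstract, a ±50 per-channel filter followed by Euclidean-distance ranking, can be sketched as follows; the dictionary entries here are a tiny hypothetical stand-in for the paper's 671-entry dictionary:

```python
from math import dist  # Euclidean distance (Python 3.8+)

# hypothetical stand-in for the paper's 671-entry color dictionary
COLOR_DICT = {
    "crimson":   (220, 20, 60),
    "scarlet":   (255, 36, 0),
    "firebrick": (178, 34, 34),
    "navy":      (0, 0, 128),
}

def similar_colors(query_rgb, top_k=5):
    """Keep dictionary colors within +/-50 on each of R, G, B, then
    rank the survivors by Euclidean distance to the query RGB values
    (the paper's final candidate-filtering and ranking steps)."""
    candidates = [
        (name, rgb) for name, rgb in COLOR_DICT.items()
        if all(abs(q - c) <= 50 for q, c in zip(query_rgb, rgb))
    ]
    candidates.sort(key=lambda nc: dist(query_rgb, nc[1]))
    return [name for name, _ in candidates[:top_k]]

print(similar_colors((230, 30, 50)))  # ['crimson', 'scarlet']
```

Firebrick is excluded here because its R value differs from the query by 52, outside the ±50 window, even though its overall distance is small.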

Psychological Stability Color for The Fire Escape Mobile App

  • Lee, Sang ki; Park, Hae Rim
    • Journal of Service Research and Studies, v.12 no.2, pp.106-116, 2022
  • As part of a fire evacuation service scenario using mobile applications, this study aims to find appropriate colors for the application's interface and to define and apply colors that can positively and reliably affect the unstable psychology of people evacuating a room during a fire. In a fire situation, proper design and placement of a colored escape-guidance interface is important, taking the occupants' psychology into account. However, the literature and previous research have shown that the colors currently used to guide evacuation are not suitable for effective evacuation in case of fire. The purpose of this study was therefore to identify a color that provides psychological stability during an evacuation, considering the psychological state of those still seeking shelter, and to use it to help induce an efficient evacuation in the event of a disaster. Using an image evaluation method, the form and color of images were derived through frequency analysis with an unspecified number of participants, and the main and secondary colors of the images were analyzed through KSCA color analysis. Finally, the application colors were selected by cross-verifying the image evaluation results against the KSCA color analysis results. Comparative analysis with prior research showed that the green color family can help stabilize the human mind; the main color proposed for a calming fire-escape guidance application is therefore in the green family. A limitation of this study is that the image evaluation experiment cannot accurately isolate the effect of color from other interacting factors. A follow-up study that defines the relationship between color and image through factor analysis would help quantify these results.

A Study on Steganographic Method for Binary Images

  • Ha Soon-Hye; Kang Hyun-Ho; Lee Hye-Joo; Shin Sang-Uk; Park Young-Ran
    • Journal of Korea Multimedia Society, v.9 no.2, pp.215-225, 2006
  • Binary images, such as cartoon character images, text images, and signature images, consist of only two values, black and white, which makes it more difficult to embed imperceptible secret data in them than in color images. Steganography using binary cover images struggles to satisfy both the imperceptibility of stego images and a high embedding rate of secret data at the same time. In this paper, we propose a scheme that achieves both high-quality stego images and a high embedding rate by combining the advantages of previous research. The insertion step of the proposed method changes only existing pixels at imperceptible positions and can embed $[\log_2(mn+1)]-2$ bits of secret data in a block of size $m{\times}n$.
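
The stated embedding rate can be checked with a few lines; reading the brackets in $[\log_2(mn+1)]-2$ as the floor function is an assumption here:

```python
from math import log2, floor

def capacity_bits(m, n):
    """Bits of secret data embeddable in an m x n block, per the
    paper's stated rate of [log2(mn+1)] - 2 (brackets read as floor)."""
    return floor(log2(m * n + 1)) - 2

print(capacity_bits(8, 8))  # 8x8 block: floor(log2(65)) - 2 = 4 bits
```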


Development of Frequency Domain Matching for Automated Mosaicking of Textureless Images

  • Kim, Han-Gyeol; Kim, Jae-In; Kim, Taejung
    • Korean Journal of Remote Sensing, v.32 no.6, pp.693-701, 2016
  • To make a mosaicked image, we need to estimate the geometric relationship between individual images, which requires tiepoint information. In general, feature-based methods are used to extract tiepoints; however, for textureless images, feature-based methods are hardly applicable. In this paper, we propose a frequency-domain matching method for automated mosaicking of textureless images. The proposed method has three steps. The first is to convert color images to grayscale, remove noise, and extract edges. The second is to define a Region Of Interest (ROI). The third is to perform phase correlation between two images and select the point with the best correlation as the tiepoint. For experiments, we used GOCI image slots and general frame-camera images. With these three steps, we produced reliable tiepoints from textureless as well as textured images, demonstrating the applicability of the proposed method.
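
The third step, phase correlation, can be sketched in NumPy: the cross-power spectrum of the two images is normalized so that only phase (i.e., translation) information remains, and the inverse transform then peaks at the relative shift:

```python
import numpy as np

def phase_correlate(img_a, img_b):
    """Estimate the integer translation of img_a relative to img_b by
    phase correlation: normalise the cross-power spectrum to unit
    magnitude, inverse-transform, and locate the correlation peak."""
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return int(dy), int(dx)

# toy check: shift a random image by (3, 5) and recover the shift
rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(a, shift=(3, 5), axis=(0, 1))
print(phase_correlate(b, a))  # (3, 5)
```

Note this only recovers cyclic integer shifts; real mosaicking handles sub-pixel shifts and border effects on top of this core idea.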

Fashion Accessory Design Suggestions Using Firework Images with the OLED Display Platform

  • Kim, Sun-Young
    • Journal of the Korean Society of Clothing and Textiles, v.35 no.10, pp.1188-1198, 2011
  • This study proposes the use of firework shapes in fashion accessory design, judging them appropriate for expressing creative images given that firework displays are both a form of entertainment and a festive symbol. The study promotes the sustained application of firework shapes to develop fashion culture item designs with a distinctive personality and uniqueness. The proposed fashion accessory designs were intended to create an entertaining new atmosphere using an Organic Light Emitting Diode (OLED) display, which draws attention as a futuristic display technology. In terms of methodology, a literature review of firework shapes and OLED was conducted; in addition, Adobe Illustrator CS2 and Adobe Photoshop CS2 were used to develop six standard motif designs with formative design elements representing a variety of firework shapes. Each of the six motifs was further expanded with different color combinations. Rich images are produced using pink, blue, purple, green, yellow, orange, and red, in conjunction with various OLED effects, to express three-dimensional images of fireworks. The motifs are applied to three types of items: bags, bracelets, and necklaces. For the video images, evening and tote bags, pendants, and bangles were used. Shifting images and lights should produce unique impressions as well as satisfy the consumer desire for entertainment. Adobe ImageReady was used to present the firework motifs applied to the fashion accessory designs as video images rather than still cuts, owing to the physical constraints of this paper.

Estimating vegetation index for outdoor free-range pig production using YOLO

  • Sang-Hyon Oh; Hee-Mun Park; Jin-Hyun Park
    • Journal of Animal Science and Technology, v.65 no.3, pp.638-651, 2023
  • The objective of this study was to quantitatively estimate the level of grazing-area damage in outdoor free-range pig production using an Unmanned Aerial Vehicle (UAV) with an RGB image sensor. Ten corn field images were captured by a UAV over approximately two weeks, during which gestating sows were allowed to graze freely on a corn field measuring 100 m × 50 m. The images were corrected to a bird's-eye view and then divided into 32 segments, which were sequentially input to the YOLOv4 detector to detect the corn according to its condition. The 43 raw training images, selected randomly out of 320 segmented images, were flipped to create 86 images, which were further augmented by rotating them in 5-degree increments to create a total of 6,192 images. These 6,192 images were then augmented by applying three random color transformations to each image, resulting in 24,768 datasets. The occupancy rate of corn in the field was estimated efficiently using You Only Look Once (YOLO). Observation began on day 2, and almost all the corn had disappeared by day 9. When grazing 20 sows in a 50 m × 100 m cornfield (250 m2/sow), it appears that the animals should be rotated to other grazing areas after at most about five days to protect the cover crop. In agricultural technology, most research using machine and deep learning concerns the detection of fruits and pests, and research on other application fields is needed. In addition, large-scale image data collected by field experts are required as training data for deep learning. If the data required for deep learning are insufficient, extensive data augmentation is required.
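
The dataset arithmetic in the abstract (43 → 86 → 6,192 → 24,768) can be reproduced directly:

```python
def augmented_count(raw=43, rotation_step_deg=5, color_variants=3):
    """Reproduce the paper's dataset arithmetic: flip each raw image
    (x2), rotate each result in 5-degree steps over 360 degrees (x72),
    then keep each rotated image plus 3 color-transformed copies (x4)."""
    flipped = raw * 2                                # 86
    rotated = flipped * (360 // rotation_step_deg)   # 6,192
    return rotated * (1 + color_variants)            # 24,768

print(augmented_count())  # 24768
```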