• Title/Summary/Keyword: deep color

Changes of Chromaticity and Mineral Contents of Laver Dishes using Various Cooking Methods (조리 방법에 따른 김의 색도와 무기 성분 함량 변화)

  • 한재숙;이연정;윤미라
    • Journal of the East Asian Society of Dietary Life / v.13 no.4 / pp.326-333 / 2003
  • The purpose of this study was to investigate the effect of various cooking methods (roasted, salad, deep-fried, seasoned-roasted, and commercial laver) on the mineral contents, color, and sensory evaluation of laver. The mineral contents of dried laver prepared by the various cooking methods were analyzed using an Inductively Coupled Plasma (ICP) system. The results are summarized as follows: the contents of crude protein, moisture, ash, and crude fat in dried laver were 35.1%, 10.6%, 9.7%, and 0.8%, respectively. Among the minerals of dried laver, the potassium content was the highest (2268.0 mg/100 g d.w.), and the calcium and iron contents were comparatively high (495.1 mg/100 g and 13.5 mg/100 g). The Ca/P ratio of dried laver was approximately 1:1. Among the laver dishes, the total mineral content was highest in the roasted laver but low in the deep-fried laver. Among the color values by cooking method, the L (lightness) and -a (greenness) values were highest in the roasted laver, and the b (yellowness) value was highest in the deep-fried laver. The seasoned-roasted laver received high scores in the sensory evaluation.

Deep Multi-task Network for Simultaneous Hazy Image Semantic Segmentation and Dehazing (안개영상의 의미론적 분할 및 안개제거를 위한 심층 멀티태스크 네트워크)

  • Song, Taeyong;Jang, Hyunsung;Ha, Namkoo;Yeon, Yoonmo;Kwon, Kuyong;Sohn, Kwanghoon
    • Journal of Korea Multimedia Society / v.22 no.9 / pp.1000-1010 / 2019
  • Image semantic segmentation and dehazing are key tasks in computer vision. In recent years, research on both tasks has achieved substantial performance improvements with the development of Convolutional Neural Networks (CNNs). However, most previous works on semantic segmentation assume that images are captured in clear weather, and they show degraded performance on hazy images with low contrast and faded color. Meanwhile, dehazing aims to recover a clear image from an observed hazy image, which is an ill-posed problem that can be alleviated with additional information about the image. In this work, we propose a deep multi-task network for simultaneous semantic segmentation and dehazing. The proposed network takes a single hazy image as input and predicts a dense semantic segmentation map and a clear image. The visual information refined during the dehazing process can help the recognition task of semantic segmentation. Conversely, semantic features obtained during the segmentation process can provide color priors for objects, which can help the dehazing process. Experimental results demonstrate the effectiveness of the proposed multi-task approach, showing improved performance compared to separate networks. (A hedged sketch of such a shared-encoder multi-task design follows below.)
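
A minimal PyTorch sketch of a shared-encoder multi-task network in the spirit of the abstract above: one encoder feeds both a dehazing head and a segmentation head. The layer widths, depth, and class count are illustrative assumptions, not the authors' architecture.

```python
# Hedged sketch: shared encoder, one head regressing a clear image,
# one head predicting a dense segmentation map. All sizes are assumed.
import torch
import torch.nn as nn

NUM_CLASSES = 19  # assumed, e.g. a Cityscapes-like label set

class MultiTaskDehazeSeg(nn.Module):
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        # Shared encoder extracts features used by both tasks.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.dehaze_head = nn.Conv2d(64, 3, 3, padding=1)            # clear RGB image
        self.seg_head = nn.Conv2d(64, num_classes, 3, padding=1)     # per-pixel class logits

    def forward(self, hazy):
        feats = self.encoder(hazy)
        clear = torch.sigmoid(self.dehaze_head(feats))  # clear image in [0, 1]
        seg_logits = self.seg_head(feats)               # dense segmentation map
        return clear, seg_logits

# Joint training would combine the two losses, e.g.
# loss = l1(clear, gt_clear) + cross_entropy(seg_logits, gt_labels)
model = MultiTaskDehazeSeg()
clear, seg = model(torch.rand(1, 3, 256, 256))
print(clear.shape, seg.shape)  # (1, 3, 256, 256) and (1, 19, 256, 256)
```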

Deep Learning-based Interior Design Recognition (딥러닝 기반 실내 디자인 인식)

  • Wongyu Lee;Jihun Park;Jonghyuk Lee;Heechul Jung
    • IEMEK Journal of Embedded Systems and Applications / v.19 no.1 / pp.47-55 / 2024
  • We spend a great deal of time in indoor spaces, and those spaces have a huge impact on our lives. Interior design plays a significant role in making an indoor space attractive and functional, but it must consider many complex elements such as color, pattern, and material. With the increasing demand for interior design, there is a growing need for technologies that analyze these design elements accurately and efficiently. To address this need, this study proposes a deep learning-based design analysis system. The proposed system consists of a semantic segmentation model that classifies spatial components and an image classification model that classifies attributes such as color, pattern, and material from the segmented components. The semantic segmentation model was trained on a dataset of 30,000 indoor interior images collected for this research, and during inference it separates the input image pixels into 34 categories. Experiments were conducted with various backbones to obtain the best performance of the deep learning model on the collected interior dataset; the final model achieved 89.05% accuracy and a mean intersection over union (mIoU) of 0.5768. For the classification part, a convolutional neural network (CNN) model that has recorded high performance in other image recognition tasks was used. To improve the performance of the classification model, we suggest an approach for handling data that is imbalanced and vulnerable to changes in light intensity. Using these methods, we achieve satisfactory results in classifying the attributes of interior design components. In this paper, we propose an indoor space design analysis system that automatically analyzes and classifies the attributes of indoor images using deep learning-based models. This analysis system, used as a core module in an AI interior recommendation service, can help users pursuing self-interior design to complete their designs more easily and efficiently. (A hedged sketch of the two-stage segment-then-classify pipeline follows below.)
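
A minimal PyTorch sketch of the two-stage pipeline the abstract describes: a segmentation model splits the image into component regions, then a CNN classifies attributes of one masked component. The tiny models, the attribute class count, and the `analyze` helper are assumptions for illustration only.

```python
# Hedged sketch: segment spatial components, then classify attributes
# (color / pattern / material) of a single masked component.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_COMPONENTS = 34   # from the abstract: 34 spatial component categories
NUM_ATTRIBUTES = 10   # assumed number of attribute classes per head

class TinySegmenter(nn.Module):
    """Stand-in for the backbone-based segmentation model."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, NUM_COMPONENTS, 1),
        )
    def forward(self, x):
        return self.net(x)  # (N, 34, H, W) logits

class AttributeClassifier(nn.Module):
    """Stand-in for the CNN that classifies component attributes."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, NUM_ATTRIBUTES)
    def forward(self, x):
        return self.head(self.features(x))

def analyze(image, segmenter, classifier, component_id):
    """Mask one segmented component and classify its attributes."""
    seg = segmenter(image).argmax(dim=1, keepdim=True)   # (N, 1, H, W) labels
    mask = (seg == component_id).float()
    masked = image * mask                                # zero out other pixels
    return F.softmax(classifier(masked), dim=1)

img = torch.rand(1, 3, 128, 128)
probs = analyze(img, TinySegmenter(), AttributeClassifier(), component_id=3)
print(probs.shape)  # (1, 10) attribute probabilities
```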

Current Status of the Infrared Medium Deep Survey

  • Jun, Hyun-Sung;Jeon, Yi-Seul;Im, Myung-Shin;CEOUIMSteam
    • The Bulletin of The Korean Astronomical Society / v.35 no.2 / pp.37.2-37.2 / 2010
  • The IMS (Infrared Medium-deep Survey) program for the search for z~7 quasars has been running since last year. To discover a sufficient number of quasars at z~7, we chose a strategy that satisfies both the survey-area requirement (~150 square degrees) and the image-depth requirement (23 AB mag in the J filter), together with the use of existing multi-wavelength data. We have been carrying out imaging observations with the UKIRT 4m telescope, now covering ~50 square degrees of J-band data (including the UKIDSS survey area). We then used selection in color-color space to choose high-z quasar candidates having the rest-frame Ly-alpha break and to exclude contamination from stars and galaxies at low redshift. We show quasar candidates at z~7 and z~6 from the 25 square degrees of data analyzed so far, and note implications and future plans. (A hedged sketch of a color-color candidate selection follows below.)
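
A minimal NumPy sketch of a color-color candidate selection of the kind described above, keeping sources with a strong break (red z-Y) and a blue continuum (Y-J). The bands, cut values, and the toy catalog are illustrative assumptions; the survey's actual selection criteria are not given in the abstract.

```python
# Hedged sketch: select Ly-alpha-break (high-z quasar) candidates with
# hypothetical color-color cuts. Magnitudes and thresholds are illustrative.
import numpy as np

# Example catalog: z-band, Y-band, and J-band AB magnitudes per source.
z_mag = np.array([24.8, 22.1, 23.9, 25.5])
y_mag = np.array([22.6, 21.8, 23.5, 23.0])
j_mag = np.array([22.3, 21.5, 23.3, 22.6])

z_y = z_mag - y_mag   # red z-Y color traces the Ly-alpha break at z~7
y_j = y_mag - j_mag   # blue Y-J color rejects cool stars and low-z galaxies

# Hypothetical cuts: strong break plus a blue continuum redward of the break,
# limited to sources brighter than the assumed J-band depth.
candidates = (z_y > 1.5) & (y_j < 0.5) & (j_mag < 23.0)
print(np.where(candidates)[0])  # indices of z~7 quasar candidates
```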

Hair Removal on Face Images using a Deep Neural Network (심층 신경망을 이용한 얼굴 영상에서의 헤어 영역 제거)

  • Lumentut, Jonathan Samuel;Lee, Jungwoo;Park, In Kyu
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.06a / pp.163-165 / 2019
  • The task of image denoising is gaining popularity in the computer vision research field. Its main objective, restoring a sharp image from a given noisy input, is demanded throughout image processing pipelines. In this work, we treat the removal of residual hair on face images as a task similar to image denoising. In particular, our method removes the residual hair present on frontal or profile face images and in-paints it with the relevant skin color. To achieve this objective, we employ a deep neural network that is able to perform both tasks at once. Furthermore, a simple technique of residual hair color augmentation is introduced to increase the amount of training data. This approach is beneficial for improving the robustness of the network. Finally, experimental results demonstrate the superiority of our network in both quantitative and qualitative performance. (A hedged sketch of such a color augmentation follows below.)
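
A minimal NumPy sketch of a residual-hair color augmentation in the spirit of the abstract: pixels inside a given hair mask get a random color shift to synthesize extra training samples. The mask source, jitter ranges, and function name are assumptions, not the authors' exact technique.

```python
# Hedged sketch: randomly recolor masked hair pixels to enlarge the training set.
import numpy as np

def augment_hair_color(image, hair_mask, rng=None, strength=0.3):
    """image: float32 HxWx3 in [0, 1]; hair_mask: bool HxW of residual hair pixels."""
    rng = np.random.default_rng() if rng is None else rng
    # Random per-channel gain and brightness offset applied only to hair pixels.
    gain = rng.uniform(1.0 - strength, 1.0 + strength, size=3)
    offset = rng.uniform(-0.1, 0.1)
    out = image.copy()
    out[hair_mask] = np.clip(out[hair_mask] * gain + offset, 0.0, 1.0)
    return out

# Toy usage: a gray image with a fake hair region in the top rows.
img = np.full((64, 64, 3), 0.5, dtype=np.float32)
mask = np.zeros((64, 64), dtype=bool)
mask[:16] = True
aug = augment_hair_color(img, mask)
print(aug[:16].mean(axis=(0, 1)), aug[16:].mean(axis=(0, 1)))  # only the masked rows change
```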

Implementation and Verification of Deep Learning-based Automatic Object Tracking and Handy Motion Control Drone System (심층학습 기반의 자동 객체 추적 및 핸디 모션 제어 드론 시스템 구현 및 검증)

  • Kim, Youngsoo;Lee, Junbeom;Lee, Chanyoung;Jeon, Hyeri;Kim, Seungpil
    • IEMEK Journal of Embedded Systems and Applications / v.16 no.5 / pp.163-169 / 2021
  • In this paper, we implemented a deep learning-based automatic object tracking and handy motion control drone system and analyzed its performance. The drone system automatically detects and tracks targets by analyzing images from the drone's camera with deep learning algorithms consisting of YOLO, MobileNet, and DeepSORT. These deep learning-based detection and tracking algorithms offer both higher target detection accuracy and higher processing speed than the conventional color-based CAMShift algorithm. In addition, to facilitate controlling the drone by hand from the ground control station, we classified hand motions and generated flight control commands through motion recognition using the YOLO algorithm. We confirmed that this deep learning-based target tracking and handy motion control system tracks the target stably and allows the drone to be controlled easily. (A hedged sketch of the detect-track-command loop follows below.)
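
A minimal Python sketch of the detect-track-command loop described above. `DummyDetector` and `DummyTracker` are stand-ins for the YOLO/MobileNet detector and the DeepSORT tracker, and the motion-to-command mapping is an illustrative assumption, not the authors' control logic.

```python
# Hedged sketch: per-frame detection, ID assignment, and command generation.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    box: Tuple[float, float, float, float]  # x, y, w, h in image coordinates
    label: str                              # target class or hand-motion class
    score: float

class DummyDetector:
    """Stand-in detector: pretends every frame contains one tracked person."""
    def detect(self, frame) -> List[Detection]:
        return [Detection((100, 80, 40, 90), "person", 0.9)]

class DummyTracker:
    """Stand-in tracker: hands out incrementing IDs (a real tracker such as
    DeepSORT would keep IDs consistent across frames)."""
    def __init__(self):
        self.next_id = 0
    def update(self, detections):
        tracks = [(self.next_id + i, d) for i, d in enumerate(detections)]
        self.next_id += len(detections)
        return tracks

MOTION_COMMANDS = {"hand_up": "ASCEND", "hand_down": "DESCEND", "hand_left": "YAW_LEFT"}

def control_loop(frames, detector, tracker, send_command):
    for frame in frames:
        tracks = tracker.update(detector.detect(frame))    # detect, then assign IDs
        for track_id, det in tracks:
            if det.label in MOTION_COMMANDS:
                send_command(MOTION_COMMANDS[det.label])   # hand motion -> flight command
            else:
                send_command(f"FOLLOW target {track_id}")  # keep tracked target in view

control_loop(frames=[None, None], detector=DummyDetector(),
             tracker=DummyTracker(), send_command=print)
```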

Automated ground penetrating radar B-scan detection enhanced by data augmentation techniques

  • Donghwi Kim;Jihoon Kim;Heejung Youn
    • Geomechanics and Engineering / v.38 no.1 / pp.29-44 / 2024
  • This research investigates the effectiveness of data augmentation techniques in the automated analysis of B-scan images from ground-penetrating radar (GPR) using deep learning. In spite of the growing interest in automating GPR data analysis and advancements in deep learning for image classification and object detection, many deep learning-based GPR data analysis studies have been limited by the scarce availability of large, diverse GPR datasets. Data augmentation techniques are widely used in deep learning to improve model performance. In this study, we applied four data augmentation techniques (geometric transformation, color-space transformation, noise injection, and kernel filtering) to GPR datasets obtained from a testbed. Deep learning models for GPR data analysis were developed using three architectures (Faster R-CNN ResNet, SSD ResNet, and EfficientDet) based on transfer learning. It was found that data augmentation significantly enhances model performance in all cases, with the mAP and AR of the Faster R-CNN ResNet model increasing by approximately 4%, achieving a maximum mAP (Intersection over Union = 0.5:1.0) of 87.5% and a maximum AR of 90.5%. These results highlight the importance of data augmentation in improving the robustness and accuracy of deep learning models for GPR B-scan analysis. The enhanced detection capabilities achieved through these techniques contribute to more reliable subsurface investigations in geotechnical engineering. (A hedged sketch of the four augmentation families follows below.)
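
A minimal NumPy sketch of the four augmentation families named in the abstract, applied to a GPR B-scan stored as a 2-D array (depth samples × scan positions). The parameter values are illustrative assumptions, not those used in the study.

```python
# Hedged sketch: one simple representative of each augmentation family.
import numpy as np

rng = np.random.default_rng(0)

def geometric(bscan):
    """Geometric transformation: flip the scan direction (columns)."""
    return bscan[:, ::-1]

def color_space(bscan, gamma=1.2):
    """Color-space (intensity) transformation: gamma-adjust the amplitudes."""
    lo, hi = bscan.min(), bscan.max()
    norm = (bscan - lo) / (hi - lo + 1e-8)
    return norm ** gamma * (hi - lo) + lo

def noise_injection(bscan, sigma=0.02):
    """Additive Gaussian noise scaled to the signal range."""
    return bscan + rng.normal(0.0, sigma * np.ptp(bscan), size=bscan.shape)

def kernel_filter(bscan):
    """Kernel filtering: 3x3 mean (box) filter via a simple sliding window."""
    padded = np.pad(bscan, 1, mode="edge")
    out = np.zeros_like(bscan, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy: 1 + dy + bscan.shape[0],
                          1 + dx: 1 + dx + bscan.shape[1]]
    return out / 9.0

bscan = rng.normal(size=(128, 256))  # toy B-scan: 128 depth samples x 256 traces
augmented = [geometric(bscan), color_space(bscan), noise_injection(bscan), kernel_filter(bscan)]
print([a.shape for a in augmented])  # all keep the original shape
```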

A Study on the Preferences of the Children's Clothing and Color Image. (아동복의 선호이미지와 선호색채 이미지에 관한 조사연구)

  • 추선형
    • Journal of the Korean Society of Costume / v.50 / pp.23-32 / 2000
  • This study was performed to investigate mothers' preferences for their children's clothing images and color images. Questionnaires were analyzed by factor analysis, cluster analysis, one-way ANOVA, and the χ²-test. The results are as follows. First, the preferred clothing images for children show no differences across gender; the preferred images are active, tidy, and fashionable. Second, the preferred clothing images vary according to season; in the case of clothing color images, the preferences for spring and summer differ from those for fall and winter. Third, the factors of boys' clothing image were fashionable, cute, splendid, and classic, and the factors of girls' clothing image were lively, tidy, fashionable, and classic; these factors revealed differences in preference between boys' and girls' clothing images. Fourth, the preferred clothing color images for children changed across seasons: bright and neat color images were preferred in spring and summer, while warm and deep color images were preferred in fall and winter. Fifth, the preferences for clothing images were classified into four groups, and each group had a different preference in color tone. Finally, season and gender were revealed to be the more important variables in the preferences for children's clothing image and color image.

A Study on the Color of Natural Solvent for the Red Color Reproduction of Safflower

  • Lee, Mi Young;Wi, Koang Chul
    • Journal of Conservation Science / v.37 no.1 / pp.13-24 / 2021
  • Safflower, a natural dye representing red, is a dye whose materials and dyeing methods are recorded in the literature. Although the safflower itself is the same, the ash used as a mordant is recorded differently in each source, which greatly affects the aesthetic outcome when realizing the traditional safflower red. Therefore, the optimal conditions for realizing the traditional safflower red were sought. The experiment investigated pH and performed dyeing and color analysis by dyeing solution water, concentration, and temperature for each ash, and the unique red color was confirmed. As a result of the test, the unique color appeared at pH 11.53 with goosefoot ash (in natural bedrock water), at a concentration of 1:100 and a temperature of 70℃; red pigment was extracted more easily than with the other ashes, indicating that goosefoot ash is suitable for safflower dyeing. Analysis of the ashes showed that K and Si play an important role in dyeing, especially Si, an element that inhibits carthamin. With goosefoot ash, the red color was similar to the KS Standard vivid purplish red, while the other ashes gave a deep purplish pink. In light of these findings, it was possible to quantify the dyeing method using traditional materials and to find the standard red color, and the results are judged to provide basic data for studying the unique colors of natural materials.

Traffic Light Recognition Using a Deep Convolutional Neural Network (심층 합성곱 신경망을 이용한 교통신호등 인식)

  • Kim, Min-Ki
    • Journal of Korea Multimedia Society / v.21 no.11 / pp.1244-1253 / 2018
  • The color of a traffic light is sensitive to illumination conditions; in particular, it loses hue information when the lighting area becomes oversaturated. This paper proposes a traffic light recognition method that is robust to these illumination variations. The method consists of two steps, traffic light detection and recognition. It uses only intensity and saturation in the first step of traffic light detection and delays the use of hue information until the second step of recognizing the traffic light signal. We utilized a deep learning technique in the second step: a deep convolutional neural network (DCNN) composed of three convolutional layers and two fully connected layers. Twelve video clips were used to evaluate the performance of the proposed method. Experimental results show a traffic light detection precision of 93.9%, a recall of 91.6%, and a recognition accuracy of 89.4%. Considering that the maximum distance between the camera and the traffic lights is 70 m, the results show that the proposed method is effective. (A hedged sketch of a three-convolution, two-fully-connected classifier follows below.)
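
A minimal PyTorch sketch of a classifier with three convolutional layers and two fully connected layers, matching the layer counts mentioned above. Filter sizes, channel widths, input resolution, and the number of signal classes are assumptions, not the paper's configuration.

```python
# Hedged sketch: small DCNN (3 conv + 2 FC) classifying a cropped traffic-light patch.
import torch
import torch.nn as nn

NUM_SIGNAL_CLASSES = 4  # assumed, e.g. red / yellow / green / green-arrow

class TrafficLightDCNN(nn.Module):
    def __init__(self, num_classes=NUM_SIGNAL_CLASSES):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
        )
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(inplace=True),
            nn.Linear(128, num_classes),
        )

    def forward(self, crop):
        # crop: an assumed 64x64 RGB patch around a detected traffic light.
        return self.fc(self.conv(crop))

logits = TrafficLightDCNN()(torch.rand(1, 3, 64, 64))
print(logits.shape)  # (1, 4) class logits
```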