• Title/Summary/Keyword: deep color

Behavioral response of purpleback flying squid Sthenoteuthis oualaniensis (Mollusk; Cephalopod) to the flashlight artificial bait colors

  • Lefrand Manoppo; Silvester Benny Pratasik; Effendi P. Sitanggang; Lusia Manu; Juliaan Cheyvert Watung
    • Fisheries and Aquatic Sciences / v.26 no.5 / pp.336-343 / 2023
  • This study aimed to determine the response of the deep-sea squid Sthenoteuthis oualaniensis to the light colors of artificial bait. The experiment used commercial flashlight baits commonly sold in fishing shops, each offering several light color combinations. The light colors were modified by inactivating certain colors, and these modifications served as treatments; the goal was to identify the most effective flashlight-bait color for squid fishing. Red-green, green, blue, and unmodified commercial bait lights were applied, with 3 replications per treatment. The effect was expressed as the number of squid caught, and the data were analyzed by one-way analysis of variance. The results showed a significant treatment effect: catches differed significantly among the light colors. This indicates that the artificial flashlight bait could be tuned to maximize squid catches, a finding that can support local fishermen's income and the development of squid fisheries.
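
The analysis step above is a standard one-way ANOVA across bait-color treatments. A minimal sketch in Python, assuming hypothetical catch counts for the three replications per treatment (the actual catch data are not given in the abstract):

```python
# One-way ANOVA across bait-color treatments (catch counts are assumed).
from scipy import stats

# Each list holds the squid catch for the 3 replications of one treatment.
catches = {
    "red-green": [12, 15, 11],   # illustrative values, not the paper's data
    "green": [22, 19, 24],
    "blue": [30, 27, 33],
    "commercial": [18, 20, 17],
}

f_stat, p_value = stats.f_oneway(*catches.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Catches differ significantly among bait colors.")
```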

Image-based fire area segmentation method by removing the smoke area from the fire scene videos (화재 현장 영상에서 연기 영역을 제외한 이미지 기반 불의 영역 검출 기법)

  • Kim, Seungnam; Choi, Myungjin; Kim, Sun-Jeong; Kim, Chang-Hun
    • Journal of the Korea Computer Graphics Society / v.28 no.4 / pp.23-30 / 2022
  • In this paper, we propose an algorithm that can accurately segment fire even when it is surrounded by smoke of a similar color. Existing fire area segmentation algorithms cannot separate fire from smoke in fire images. Here, fire was successfully separated from smoke by applying color compensation and fog removal as preprocessing steps before the fire area segmentation algorithm. We confirmed that the method segments fire more effectively than existing methods in smoke-covered fire scene images. In addition, we propose a way to use the fire segmentation algorithm for efficient fire detection in factories and homes.
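
The abstract names the pipeline but not the specific methods. A minimal sketch of the preprocess-then-segment idea, assuming gray-world color compensation, CLAHE as a stand-in for fog removal, and an HSV threshold as a stand-in for the segmentation step:

```python
# Preprocessing-then-segmentation pipeline (sketch; methods assumed, not the paper's).
import cv2
import numpy as np

def gray_world(img):
    # Gray-world color compensation: scale each channel to a common mean.
    b, g, r = cv2.split(img.astype(np.float32))
    mean = (b.mean() + g.mean() + r.mean()) / 3.0
    b *= mean / (b.mean() + 1e-6)
    g *= mean / (g.mean() + 1e-6)
    r *= mean / (r.mean() + 1e-6)
    return np.clip(cv2.merge([b, g, r]), 0, 255).astype(np.uint8)

def reduce_haze(img):
    # Stand-in for fog removal: contrast equalization on the luminance channel.
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    lab[..., 0] = cv2.createCLAHE(clipLimit=2.0).apply(lab[..., 0])
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

def fire_mask(img):
    # Simple HSV threshold standing in for the fire segmentation algorithm.
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    return cv2.inRange(hsv, (0, 120, 150), (35, 255, 255))

frame = cv2.imread("fire_scene.jpg")          # hypothetical input frame
mask = fire_mask(reduce_haze(gray_world(frame)))
```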

Correlation between optimized thicknesses of capping layer and thin metal electrode for efficient top-emitting blue organic light-emitting diodes

  • Hyunsu Cho; Chul Woong Joo; Byoung-Hwa Kwon; Chan-mo Kang; Sukyung Choi; Jin Wook Sin
    • ETRI Journal / v.45 no.6 / pp.1056-1064 / 2023
  • The optical properties of the materials composing organic light-emitting diodes (OLEDs) must be considered when designing their optical structure. Because of the microcavity effect in top-emitting OLEDs, the optical design governs device properties such as efficiency, emission spectrum, and color coordinates. In this study, top-emitting blue OLEDs were optimized by adjusting the thicknesses of the thin metal electrode and the capping layer (CPL). Deep blue emission was achieved in an OLED structure with a second cavity length, even when the transmittance of the thin metal layer was high. The range of thin-metal-film thicknesses applicable to OLEDs with a second microcavity structure is wide; instead, it is the thickness of the thin metal layer that determines the CPL thickness optimized for high efficiency. A thinner metal layer yields higher efficiency in devices with a second microcavity structure, and such OLEDs also showed less color change as a function of viewing angle.
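
The microcavity effect referred to above follows the standard Fabry-Perot resonance condition; a worked form of it (textbook optics, not a formula quoted from the paper), showing why the CPL and metal thicknesses are coupled:

```latex
% Fabry-Perot resonance condition for a top-emitting OLED microcavity:
% constructive interference at wavelength \lambda for cavity order m
% (sign conventions for the reflection phases vary).
\[
  \frac{4\pi}{\lambda}\sum_i n_i d_i
    \;-\; \bigl(\phi_{\mathrm{top}} + \phi_{\mathrm{bot}}\bigr)
    \;=\; 2\pi m, \qquad m = 1, 2, \ldots
\]
% n_i, d_i      : refractive index and thickness of each layer in the cavity
% \phi_{top}    : reflection phase at the thin metal electrode + CPL stack
% \phi_{bot}    : reflection phase at the bottom mirror
% A "second cavity length" corresponds to m = 2. The CPL sits outside the
% cavity but shifts \phi_{top}, which is why the optimal CPL thickness
% depends on the metal layer thickness, as the abstract states.
```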

Comparison on the Deep Learning Performance of a Field of View Variable Color Images of Uterine Cervix (컬러 자궁경부 영상에서 딥러닝 기법에서의 영상영역 처리 방법에 따른 성능 비교 연구)

  • Seol, Yu Jin; Kim, Young Jae; Nam, Kye Hyun; Kim, Kwang Gi
    • Journal of Korea Multimedia Society / v.23 no.7 / pp.812-818 / 2020
  • Cervical cancer is the second most common cancer among women worldwide. In Korea, it accounts for 13 percent of female cancers, with 4,200 cases occurring annually [1]. The purpose of this study is to use a deep learning model to identify possible lesions in the cervix and to evaluate efficient image preprocessing for diagnosing cervixes of diverse shapes. The study used 4,107 normal and 6,285 abnormal photographs of the uterine cervix. Two types of preprocessing were used to make the images square: cropping based on height, and padding the top and bottom with black. All images were then resampled to 256×256. The average accuracy was 94.15% for the cropped cases and 93.41% for the padded cases, so the model performed slightly better on cropped data. However, several images were not accurately classified; an additional experiment with a cropping-based preprocessing step is therefore needed to cover cervix images in more detail.
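
The two square-making strategies are simple to express; a minimal sketch with OpenCV, assuming landscape-oriented input photographs (function names are illustrative, not from the paper):

```python
# Two ways to make a cervix photo square before resizing to 256x256 (sketch).
import cv2

def crop_to_square(img):
    # Crop based on height: keep a centered (height x height) window.
    h, w = img.shape[:2]
    x0 = (w - h) // 2
    return img[:, x0:x0 + h]

def pad_to_square(img):
    # Fill the space above and below with black to reach (width x width).
    h, w = img.shape[:2]
    pad = (w - h) // 2
    return cv2.copyMakeBorder(img, pad, w - h - pad, 0, 0,
                              cv2.BORDER_CONSTANT, value=0)

img = cv2.imread("cervix.jpg")                        # hypothetical input
cropped = cv2.resize(crop_to_square(img), (256, 256))
padded = cv2.resize(pad_to_square(img), (256, 256))
```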

PROPERTIES OF DUST OBSCURED GALAXIES IN THE NEP-DEEP FIELD

  • Oi, Nagisa; Matsuhara, Hideo; Pearson, Chris; Buat, Veronique; Burgarella, Denis; Malkan, Matt; Miyaji, Takamitsu; AKARI-NEP team
    • Publications of The Korean Astronomical Society / v.32 no.1 / pp.245-249 / 2017
  • We selected 47 dust-obscured galaxies (DOGs) at z ~ 1.5 using optical R (or r'), AKARI 18 μm, and 24 μm colors in the AKARI North Ecliptic Pole (NEP) Deep survey field. Using the colors among the 3, 4, 7, and 9 μm bands, we classified them into three groups: bump DOGs (23 sources), power-law DOGs (16 sources), and unknown DOGs (8 sources). We built spectral energy distributions (SEDs) from optical to far-infrared photometric data and investigated their properties using an SED-fitting method. We found that AGN activity indicators, such as the AGN contribution to the infrared luminosity and the Chandra detection rate, differ significantly between bump and power-law DOGs, while stellar properties, such as stellar mass and star-formation rate, are similar. The specific star-formation rates of power-law DOGs are slightly higher than those of bump DOGs, with wide overlap. Herschel/PACS detection rates are almost the same for bump and power-law DOGs, whereas SPIRE detection rates differ considerably. These results might be explained by differences in dust temperature: both groups of DOGs host hot and/or warm dust (~50 K), and many bump DOGs also contain cooler dust (≤30 K).
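
The bump vs. power-law split above is based on mid-infrared SED shape. A minimal sketch of one way to make such a call, assuming hypothetical flux densities and a simple linearity test in log-log space (the paper's actual color criteria are not given in the abstract):

```python
# Classify a DOG as "bump" or "power-law" from 3-9 micron photometry (sketch).
import numpy as np

def classify_dog(wavelengths_um, flux_mjy):
    # Fit log(flux) vs log(wavelength); a power-law SED is nearly linear,
    # while a stellar bump (rest-frame 1.6 um) leaves curvature.
    logw, logf = np.log10(wavelengths_um), np.log10(flux_mjy)
    slope, intercept = np.polyfit(logw, logf, 1)
    residual = np.max(np.abs(logf - (slope * logw + intercept)))
    return "power-law" if residual < 0.05 else "bump"  # threshold is assumed

# Hypothetical 3, 4, 7, 9 um flux densities (mJy) for one source:
print(classify_dog([3, 4, 7, 9], [0.08, 0.12, 0.35, 0.60]))
```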

High-Capacity Robust Image Steganography via Adversarial Network

  • Chen, Beijing; Wang, Jiaxin; Chen, Yingyue; Jin, Zilong; Shim, Hiuk Jae; Shi, Yun-Qing
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.1 / pp.366-381 / 2020
  • Steganography has been successfully employed in various applications, e.g., copyright control of materials, smart identity cards, and video error correction during transmission. Deep learning-based steganography models can hide information adaptively through network learning, and they have drawn increasing attention. However, the capacity, security, and robustness of existing deep learning-based steganography models are still not fully satisfactory. In this paper, three models are proposed for different cases: a basic model, a secure model, and a secure and robust model. In the basic model, high-capacity secret-information hiding and extraction are realized through an encoding network and a decoding network, respectively. High-capacity steganography is implemented by hiding a secret image in a carrier image of the same resolution with the help of concat operations, InceptionBlocks, and convolutional layers. Moreover, the secret image is hidden only in the B channel of the carrier image to resolve the problem of color distortion. In the secure model, a steganalysis network is added to the basic model to form an adversarial network and enhance security. In the secure and robust model, an attack network is inserted into the secure model to further improve robustness. Experimental results demonstrate that the proposed secure model and the secure and robust model perform better overall than some existing high-capacity deep learning-based steganography models: the secure model performs best in invisibility and security, and the secure and robust model is the most robust against attacks.
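
A minimal PyTorch sketch of the B-channel hiding idea, heavily simplified: plain convolutions instead of the paper's InceptionBlocks, a single-channel secret, and no steganalysis or attack networks:

```python
# B-channel image hiding: encoder writes a grayscale secret into the carrier's
# blue channel; decoder recovers it. Simplified sketch, not the paper's model.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: carrier B channel concatenated with the secret image (2 ch).
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),  # new B channel
        )

    def forward(self, carrier_rgb, secret):
        b = carrier_rgb[:, 2:3]               # blue channel (RGB order assumed)
        stego_b = self.net(torch.cat([b, secret], dim=1))
        return torch.cat([carrier_rgb[:, :2], stego_b], dim=1)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, stego_rgb):
        return self.net(stego_rgb[:, 2:3])    # read only the blue channel

carrier = torch.rand(1, 3, 64, 64)            # toy data
secret = torch.rand(1, 1, 64, 64)
stego = Encoder()(carrier, secret)
recovered = Decoder()(stego)
```

Confining the modification to the B channel leaves R and G untouched, which is the abstract's stated remedy for color distortion.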

Irrigation Frequency for Kentucky Bluegrass (Poa pratensis) Growth (관수빈도에 따른 Kentucky Bluegrass 생육)

  • Lee, Sang-Kook
    • Asian Journal of Turfgrass Science / v.26 no.2 / pp.123-128 / 2012
  • Kentucky bluegrass (Poa pratensis) is the turfgrass most widely used on golf courses and athletic fields. Its weaknesses are a shallow root zone and poor shade tolerance, and one of its biggest disadvantages is a high demand for water. Water content is an important factor in maintaining excellent turfgrass color and quality. There are two irrigation methods: 'deep and infrequent (DI)' and 'light and frequent (LI)'. The objective of this study was to investigate Kentucky bluegrass growth under different irrigation frequencies. Three frequencies were tested: no irrigation, every other day, and weekly. The same total amount of water was applied in the every-other-day and weekly treatments; no irrigation meant no artificial water supply, precipitation only. The no-irrigation treatment produced turfgrass quality below the acceptable rating of six in July and August, so under the 2011 weather conditions, no irrigation could not maintain acceptable turfgrass quality. No significant differences in Kentucky bluegrass quality were found between DI and LI.

A New Self-Incompatible Interspecific Hybrid Pasqueflower Variety, 'Yeonhong' (자가불화합성 종간교잡종 할미꽃 신품종 '연홍')

  • Lee, Ya-Seong; Kim, Dong-Kwan; Choi, Duck-Soo; Choi, Jin-Kyung; Son, Dong-Mo; Choi, Kyeong-Ju; Baek, Hyeong-Jin; Rim, Yo-Sup
    • Korean Journal of Breeding Science / v.42 no.3 / pp.241-244 / 2010
  • A new hybrid variety of pasqueflower, 'Yeonhong', was derived from an interspecific cross between Pulsatilla davurica and P. koreana at the Jeollanamdo Agricultural Research and Extension Services (JARES) in Naju, Korea. The original cross was performed in 2001, and the 'Yeonhong' hybrid was established in 2003 after two years of selective breeding. 'Yeonhong' is characterized by deep pink flowers and is self-incompatible. The inherent characteristics of the variety are deep yellow anthers, a deep pink stigma, six petals, and green leaf color. 'Yeonhong' blooms twice a year. The agronomic characteristics of the variety are 26.5 flowers per plant, a flower height of 58.6 cm, a cut-flower length of 46.8 cm, a bract width of 11.5 cm, and a leaf length of 34.4 cm. Flower characteristics include a longevity of 8.4 days for cut flowers and 34 days for bracts. The new variety is suitable for use as a cut flower.

Data Augmentation Method for Deep Learning based Medical Image Segmentation Model (딥러닝 기반의 대퇴골 영역 분할을 위한 훈련 데이터 증강 연구)

  • Choi, Gyujin; Shin, Jooyeon; Kyung, Joohyun; Kyung, Minho; Lee, Yunjin
    • Journal of the Korea Computer Graphics Society / v.25 no.3 / pp.123-131 / 2019
  • In this study, we modified CT images of the femoral head in consideration of anatomically meaningful structure, proposing a method to augment the training data of a convolutional neural network for femur segmentation. First, a femur mesh model is obtained from the CT image. The mesh model is then divided into meaningful parts by cluster analysis of the geometric characteristics of the mesh surface. Finally, the segments are transformed with an appropriate mesh deformation algorithm, and new CT images are created by warping the original CT images accordingly. Deep learning models trained with this augmentation method show better segmentation performance than models trained with commonly used augmentations such as geometric or color transformations.
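
A minimal sketch of the clustering step, assuming vertex positions plus surface normals as the geometric features and k-means as the cluster analysis (the paper's exact features and algorithm are not specified in the abstract):

```python
# Cluster mesh vertices into anatomically meaningful parts (sketch).
import numpy as np
from sklearn.cluster import KMeans

def segment_mesh(vertices, normals, n_parts=4):
    # Feature per vertex: normalized position plus surface normal, so
    # clusters follow both location and local surface orientation.
    pos = (vertices - vertices.mean(axis=0)) / vertices.std()
    features = np.hstack([pos, normals])
    return KMeans(n_clusters=n_parts, n_init=10).fit_predict(features)

# Toy data: 1,000 random vertices with unit normals.
verts = np.random.rand(1000, 3) * 100
norms = np.random.randn(1000, 3)
norms /= np.linalg.norm(norms, axis=1, keepdims=True)
labels = segment_mesh(verts, norms)      # part index per vertex
```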

Estimating vegetation index for outdoor free-range pig production using YOLO

  • Sang-Hyon Oh; Hee-Mun Park; Jin-Hyun Park
    • Journal of Animal Science and Technology / v.65 no.3 / pp.638-651 / 2023
  • The objective of this study was to quantitatively estimate grazing-area damage in outdoor free-range pig production using an unmanned aerial vehicle (UAV) with an RGB image sensor. Ten corn field images were captured by the UAV over approximately two weeks, during which gestating sows were allowed to graze freely on a 100 m × 50 m corn field. The images were corrected to a bird's-eye view, divided into 32 segments each, and sequentially input into a YOLOv4 detector to detect the corn according to its condition. From the 320 segmented images, 43 raw training images were selected at random and flipped to create 86 images; these were further augmented by rotating them in 5-degree increments to create 6,192 images, and then by applying three random color transformations to each image, resulting in 24,768 training images. The occupancy rate of corn in the field was estimated efficiently using You Only Look Once (YOLO). Relative to the first day of observation (day 2), almost all the corn had disappeared by the ninth day. When grazing 20 sows in a 50 m × 100 m cornfield (250 m²/sow), it appears that the animals should be rotated to other grazing areas after about five days to protect the cover crop. In agricultural technology, most research using machine and deep learning concerns the detection of fruits and pests, and work in other application fields is needed. In addition, applying deep learning requires large-scale image data collected by field experts as training data; when such data are insufficient, extensive data augmentation is required.
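
The flip-rotate-color augmentation arithmetic above (43 → 86 → 6,192 → 24,768) is easy to reproduce; a minimal sketch with OpenCV, assuming brightness/saturation jitter for the "random color transformations" and hypothetical file names (neither is specified in the abstract):

```python
# Flip, rotate in 5-degree steps, then color-jitter: 43 -> 86 -> 6,192 -> 24,768.
import cv2
import numpy as np

def rotations(img, step_deg=5):
    h, w = img.shape[:2]
    for angle in range(0, 360, step_deg):            # 72 rotations per image
        m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        yield cv2.warpAffine(img, m, (w, h))

def color_jitter(img, rng):
    # Assumed transform: random brightness and saturation scaling in HSV.
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] *= rng.uniform(0.7, 1.3)             # saturation
    hsv[..., 2] *= rng.uniform(0.7, 1.3)             # brightness
    return cv2.cvtColor(np.clip(hsv, 0, 255).astype(np.uint8), cv2.COLOR_HSV2BGR)

rng = np.random.default_rng(0)
raw = [cv2.imread(f"segment_{i:03d}.png") for i in range(43)]  # hypothetical paths
flipped = raw + [cv2.flip(im, 1) for im in raw]                # 86 images
rotated = [r for im in flipped for r in rotations(im)]         # 86 * 72 = 6,192
augmented = rotated + [color_jitter(im, rng)                   # + 3 copies each
                       for im in rotated for _ in range(3)]    # = 24,768 images
```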