• Title/Summary/Keyword: deep color

Pill Identification Algorithm Based on Deep Learning Using Imprinted Text Feature (음각 정보를 이용한 딥러닝 기반의 알약 식별 알고리즘 연구)

  • Lee, Seon Min;Kim, Young Jae;Kim, Kwang Gi
    • Journal of Biomedical Engineering Research / v.43 no.6 / pp.441-447 / 2022
  • In this paper, we propose a pill identification model that uses engraved (imprinted) text features together with image features such as shape and color, and compare it with an identification model that does not use the engraved text, to verify that a higher recognition rate for the engraved text improves identification performance. The data consisted of 100 classes with 10 images per class. The engraved text features were obtained with deep-learning-based Keras OCR and a 1D CNN, and the image features were obtained with a 2D CNN. The accuracy of the text recognition model was 90%, and the accuracies of the comparative model and the proposed model were 91.9% and 97.6%, respectively. The accuracy, precision, recall, and F1-score of the proposed model were better than those of the comparative model with statistical significance. As a result, we confirmed that expanding the range of features improved the performance of the identification model.
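The reported accuracy, precision, recall, and F1-score can be computed from predicted and true class labels in the usual way; a minimal NumPy sketch (the label arrays in the comments are illustrative, not the paper's data):

```python
import numpy as np

def classification_metrics(y_true, y_pred):
    """Accuracy plus macro-averaged precision, recall, and F1 from label arrays."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    labels = np.unique(np.concatenate([y_true, y_pred]))
    precisions, recalls = [], []
    for c in labels:
        tp = np.sum((y_pred == c) & (y_true == c))  # true positives for class c
        fp = np.sum((y_pred == c) & (y_true != c))  # false positives
        fn = np.sum((y_pred != c) & (y_true == c))  # false negatives
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    precision = float(np.mean(precisions))
    recall = float(np.mean(recalls))
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = float(np.mean(y_true == y_pred))
    return accuracy, precision, recall, f1
```

For the paper's 100-class setting, macro-averaging over classes as above is a common choice; the abstract does not state which averaging was used.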

Depth Map Completion using Nearest Neighbor Kernel (최근접 이웃 커널을 이용한 깊이 영상 완성 기술)

  • Jeong, Taehyun;Uddin, Kutub;Oh, Byung Tae
    • Journal of Broadcast Engineering / v.27 no.6 / pp.906-913 / 2022
  • In this paper, we propose a new deep network architecture that uses a nearest neighbor kernel to estimate a dense depth map from a sparse depth map and the corresponding color information. First, we decompose the depth map signal into structure and details for easier prediction. We then propose two separate subnetworks that predict the structure and the details using classification and regression approaches, respectively. Moreover, a nearest neighbor kernel method is newly proposed for accurate prediction of the structure signal. As a result, the proposed method shows better results than other methods both quantitatively and qualitatively.
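The idea of propagating sparse depth samples by spatial proximity, which the paper's learned nearest neighbor kernel builds on, can be illustrated with a brute-force nearest-neighbor fill (a sketch only; the paper's kernel is a trainable network component, and zeros here are assumed to mark missing depths):

```python
import numpy as np

def nearest_neighbor_fill(sparse_depth):
    """Fill missing (zero) entries with the depth of the nearest valid pixel."""
    h, w = sparse_depth.shape
    valid = np.argwhere(sparse_depth > 0)            # (N, 2) coordinates of known depths
    values = sparse_depth[valid[:, 0], valid[:, 1]]  # (N,) known depth values
    dense = sparse_depth.astype(float).copy()
    for i in range(h):
        for j in range(w):
            if dense[i, j] == 0:
                # squared distance to every valid pixel; copy the closest one
                d2 = (valid[:, 0] - i) ** 2 + (valid[:, 1] - j) ** 2
                dense[i, j] = values[np.argmin(d2)]
    return dense
```

In the paper this hard copy is replaced by a learned kernel inside the structure-prediction subnetwork, with a second subnetwork regressing the residual details.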

The faintest quasar luminosity function at z ~ 5 from Deep Learning and Bayesian Inference

  • Shin, Suhyun;Im, Myungshin
    • The Bulletin of The Korean Astronomical Society / v.46 no.1 / pp.31.2-31.2 / 2021
  • To estimate the contribution of quasars to keeping the IGM ionized, building a quasar luminosity function (LF) is necessary. Quasar LFs derived from multiple quasar surveys, however, are mutually inconsistent, especially in the faint regime, emphasizing the need for deep images. In this study, we construct a quasar LF reaching M1450 ~ -21.5 AB magnitude at z ~ 5, which is 1.5 mag deeper than previously reported LFs, using deep images from the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP). We trained an artificial neural network (ANN), taking colors as inputs, to separate quasars at z ~ 5 from late-type stars and low-redshift galaxies. The accuracy of the ANN is > 99%. We also adopted the Bayesian information criterion to further screen the quasar-like objects. As a result, we recovered 5/5 confirmed quasars and reduced the contamination from high-redshift galaxies by up to a factor of six compared to color selection alone. The constructed parametric quasar LF shows a flatter faint-end slope, α = -1.27 (+0.16, -0.15), similar to recent LFs. The number of faint quasars (M1450 < -23.5) is too small for them to be the main contributor of IGM-ionizing photons.
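The Bayesian information criterion used to screen candidates is BIC = k ln n − 2 ln L̂, where k is the number of model parameters, n the number of data points, and L̂ the maximized likelihood; a minimal sketch (how the paper applied it to SED fits is not detailed in the abstract):

```python
import numpy as np

def bic(log_likelihood, n_params, n_data):
    """Bayesian information criterion; lower values indicate the preferred model."""
    return n_params * np.log(n_data) - 2.0 * log_likelihood
```

Comparing, say, a quasar template fit against a galaxy template fit by their BIC values penalizes the model with more free parameters unless its likelihood gain justifies them.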

Counterfactual image generation by disentangling data attributes with deep generative models

  • Jieon Lim;Weonyoung Joo
    • Communications for Statistical Applications and Methods / v.30 no.6 / pp.589-603 / 2023
  • Deep generative models aim to infer the underlying true data distribution, which has led to great success in generating realistic synthetic data. From this perspective, data attributes can be a crucial factor in the data generation process, since non-existent counterfactual samples can be generated by altering certain factors. For example, we can generate new portrait images by flipping the gender attribute or altering the hair color attribute. This paper proposes counterfactual disentangled variational autoencoder generative adversarial networks (CDVAE-GAN), specialized for attribute-level counterfactual data generation. The structure of the proposed CDVAE-GAN consists of variational autoencoders and generative adversarial networks. Specifically, we adopt a Gaussian variational autoencoder to extract low-dimensional disentangled data features, with auxiliary Bernoulli latent variables modeling the data attributes separately, and we use a generative adversarial network to generate data with high fidelity. By combining the benefits of the variational autoencoder with additional Bernoulli latent variables and the generative adversarial network, the proposed CDVAE-GAN can control the data attributes, enabling the production of counterfactual data. Our experimental results on the CelebA dataset qualitatively show that the samples generated by CDVAE-GAN are realistic. The quantitative results also show that the proposed model can produce data with altered attributes that deceive other machine learning classifiers.

Surface Water Mapping of Remote Sensing Data Using Pre-Trained Fully Convolutional Network

  • Song, Ah Ram;Jung, Min Young;Kim, Yong Il
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.36 no.5 / pp.423-432 / 2018
  • Surface water mapping has been widely used in various remote sensing applications. Water indices are commonly used to distinguish water bodies from land; however, determining the optimal threshold and discriminating water bodies from similar objects such as shadows and snow are difficult. Deep learning algorithms have greatly advanced image segmentation and classification. In particular, the FCN (Fully Convolutional Network) is state-of-the-art in per-pixel image segmentation and is used in most benchmarks such as PASCAL VOC2012 and Microsoft COCO (Common Objects in Context). However, these data sets are designed for everyday scenes, and few studies have applied FCNs to large-scale remotely sensed data. This paper aims to fine-tune a pre-trained FCN using the CRMS (Coastwide Reference Monitoring System) data set for surface water mapping. The CRMS provides color infrared aerial photos and ground truth maps for the monitoring and restoration of wetlands in Louisiana, USA. To effectively learn the characteristics of surface water, we used the pre-trained DeepWaterMap network, which classifies water, land, snow, ice, clouds, and shadows in Landsat satellite images, and fine-tuned it for the CRMS data set using two classes: water and land. The fine-tuned network then classifies surface water without any additional learning process. The experimental results show that the proposed method enables high-quality surface water mapping from the CRMS data set and demonstrate the suitability of pre-trained FCNs for surface water mapping with remote sensing data.
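The water-index baseline the paper contrasts against can be sketched with the NDWI, one common such index, (G − NIR)/(G + NIR), thresholded per pixel (band values and the zero threshold below are illustrative; choosing this threshold well is exactly the difficulty the FCN approach avoids):

```python
import numpy as np

def ndwi_water_mask(green, nir, threshold=0.0):
    """Label pixels as water when NDWI = (G - NIR) / (G + NIR) exceeds a threshold."""
    green = np.asarray(green, dtype=float)
    nir = np.asarray(nir, dtype=float)
    ndwi = (green - nir) / (green + nir + 1e-12)  # small epsilon avoids division by zero
    return ndwi > threshold
```

Water reflects strongly in green and absorbs near-infrared, so water pixels yield positive NDWI, while vegetation and bare soil yield negative values; shadows and snow are where this simple rule breaks down.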

DEEP-South: The Progress and the Plans of the First Year

  • Moon, Hong-Kyu;Kim, Myung-Jin;Roh, Dong-Goo;Park, Jintae;Yim, Hong-Suh;Lee, Hee-Jae;Choi, Young-Jun;Oh, Young-Seok;Bae, Young-Ho
    • The Bulletin of The Korean Astronomical Society / v.41 no.2 / pp.48.2-48.2 / 2016
  • The wide-field and round-the-clock operation capabilities of the KMTNet enable the discovery, astrometry, and follow-up physical characterization of asteroids and comets in a highly efficient way. We collectively refer to the team members, partner organizations, the dedicated software subsystem, the computing facility, and the research activities as the Deep Ecliptic Patrol of the Southern Sky (DEEP-South). Most of the telescope time for DEEP-South is devoted to targeted photometry of Near Earth Asteroids (NEAs), to raise, in the long run, the fraction of the population with known physical properties from a few percent to several tens of percent. We primarily adopt the Johnson R-band for lightcurve studies, while we employ BVI filters for taxonomic classification and simultaneous detection of any possible color variations of an object. In this presentation, the progress and new findings since the last KAS meeting are outlined. We report DEEP-South preliminary lightcurves of several dozen NEAs obtained at three KMTNet stations during the first-year runs. We also present a physical model of asteroid (5247) Krylov, the first non-principal-axis (NPA) rotator confirmed in the main belt (MB). A new asteroid taxonomic classification scheme is introduced, with an emphasis on its utility in the LSST era. The progress on the current version of the automated mover detection software is also summarized.

Current Status of KMTNet/DEEP-South Collaboration Research for Comets and Asteroids Research between SNU and KASI

  • BACH, Yoonsoo P.;YANG, Hongu;KWON, Yuna G.;LEE, Subin;KIM, Myung-Jin;CHOI, Young-Jun;Park, Jintae;ISHIGURO, Masateru;Moon, Hong-Kyu
    • The Bulletin of The Korean Astronomical Society / v.42 no.2 / pp.82.2-82.2 / 2017
  • The Korea Microlensing Telescope Network (KMTNet) is one of the most powerful tools for investigating primordial objects in the inner solar system, in that it covers a large area of the sky (2 × 2 deg²) with a high observational cadence. The Deep Ecliptic Patrol of the Southern sky (DEEP-South) survey has been scanning the southern sky using KMTNet during non-bulge time (45 full nights per year) [1] since 2015 to examine the color, albedo, rotation, and shape of solar system bodies. In January 2017, we launched a new collaborative group between the Korea Astronomy and Space Science Institute (KASI) and Seoul National University (SNU), with support from KASI, to reinforce mutual collaboration between these institutes and further enhance human resources development by utilizing the KMTNet/DEEP-South data. In particular, we focus on the detection of comets and asteroids serendipitously scanned in DEEP-South for (1) investigating secular changes in cometary activity and (2) analyzing precovery and recovery images of objects in NASA's NEOWISE survey region. In this presentation, we describe our scientific objectives and the current status of our use of KMTNet data, which includes improving the accuracy of the world coordinate system (WCS) information, an algorithm for finding solar system bodies in the images, and non-sidereal photometry.

Tomato Crop Disease Classification Using an Ensemble Approach Based on a Deep Neural Network (심층 신경망 기반의 앙상블 방식을 이용한 토마토 작물의 질병 식별)

  • Kim, Min-Ki
    • Journal of Korea Multimedia Society / v.23 no.10 / pp.1250-1257 / 2020
  • Early detection of diseases is important in agriculture because diseases are a major threat to farmers' crop yields. The shape and color of plant leaves change differently depending on the disease, so a disease can be detected and identified by inspecting visual features of the leaves. This study presents a vision-based leaf classification method for detecting diseases of the tomato crop. A ResNet-50 model was used to extract visual features from leaves and classify tomato crop diseases, since it showed higher accuracy than the other ResNet models with different depths. We propose a new ensemble approach using several DCNN classifiers that have the same structure but have been trained over different ranges of the DCNN layers. Experiments achieved an accuracy of 97.19% on the PlantVillage dataset, validating that the proposed method effectively classifies tomato crop diseases.
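One common way to combine several classifiers of the same structure, consistent with (though not confirmed by) the abstract, is soft voting: average the per-class probabilities and take the argmax. A minimal sketch:

```python
import numpy as np

def ensemble_predict(prob_list):
    """Soft-voting ensemble over classifiers.

    prob_list holds (n_samples, n_classes) probability arrays, one per
    classifier; the averaged probabilities are reduced to class labels.
    """
    avg = np.mean(np.stack(prob_list, axis=0), axis=0)  # average over classifiers
    return np.argmax(avg, axis=1)
```

Soft voting tends to help when the member classifiers make uncorrelated errors, which is the motivation for fine-tuning each member over a different range of layers.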

Deep red electrophosphorescent organic light-emitting diodes based on new iridium complexes

  • Gong, Doo-Won;Kim, Jun-Ho;Lee, Kum-Hee;Yoon, Seung-Soo;Kim, Young-Kwan
    • Korean Information Display Society: Conference Proceedings / 2006.08a / pp.1075-1078 / 2006
  • A new iridium complex was synthesized and demonstrated deep red light emission in organic light-emitting diodes (OLEDs). A maximum luminance of 8320 cd/m² at 15 V and a luminance efficiency of 2.5 cd/A at 20 mA/cm² were achieved. The peak wavelength of the electroluminescence was 626 nm, with CIE coordinates of (0.69, 0.30), and the device also showed stable color chromaticity over various voltages.

Synthesis and Characterization of 1,2,4-Oxadiazole-Based Deep-Blue and Blue Color Emitting Polymers

  • Agneeswari, Rajalingam;Tamilavan, Vellaiappillai;Hyun, Myung Ho
    • Bulletin of the Korean Chemical Society / v.35 no.2 / pp.513-517 / 2014
  • Two donor-acceptor-donor monomers, 3,5-bis(4-bromophenyl)-1,2,4-oxadiazole (BOB) and 3,5-bis(5-bromothiophen-2-yl)-1,2,4-oxadiazole (TOT), incorporating the electron-transporting and hole-blocking 1,2,4-oxadiazole moiety, were copolymerized with a light-emitting fluorene derivative via Suzuki polycondensation to afford two new polymers, PFBOB and PFTOT, respectively. Optical studies of PFBOB and PFTOT revealed band gaps of 3.10 eV and 2.72 eV, respectively; polymer PFBOB exhibited deep-blue emission while polymer PFTOT showed blue emission both in chloroform and as a thin film. The photoluminescence quantum efficiencies (Φf) of PFBOB and PFTOT in chloroform, calculated against the highly blue-emitting 9,10-diphenylanthracene (DPA, Φf = 0.90), were 1.00 and 0.44, respectively.
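A band gap can be translated into an absorption onset wavelength via E = hc/λ, with hc ≈ 1239.84 eV·nm; a one-line sketch (the conversion itself is standard, though the abstract does not state how the band gaps were extracted):

```python
def bandgap_to_onset_nm(e_gap_ev):
    """Absorption onset wavelength (nm) for an optical band gap (eV), via E = hc/lambda."""
    return 1239.84 / e_gap_ev
```

The reported gaps of 3.10 eV (PFBOB) and 2.72 eV (PFTOT) correspond to onsets near 400 nm and 456 nm, consistent with the deep-blue versus blue emission described.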