• Title/Summary/Keyword: Sky Image


Analysis of Observation Environment with Sky Line and Skyview Factor using Digital Elevation Model (DEM), 3-Dimensional Camera Image and Radiative Transfer Model at Radiation Site, Gangneung-Wonju National University (수치표고모델, 3차원 카메라이미지자료 및 복사모델을 이용한 Sky Line과 Skyview Factor에 따른 강릉원주대학교 복사관측소 관측환경 분석)

  • Jee, Joon-Bum;Zo, Il-Sung;Kim, Bu-Yo;Lee, Kyu-Tae;Jang, Jeong-Pil
    • Atmosphere
    • /
    • v.29 no.1
    • /
    • pp.61-74
    • /
    • 2019
  • To investigate the observational environment, the sky line and sky-view factor (SVF) are calculated using a digital elevation model (DEM; 10 m spatial resolution) and a three-dimensional (3D) sky image at the radiation site of Gangneung-Wonju National University (GWNU). Solar radiation is calculated using the GWNU solar radiation model with and without the sky line and SVF retrieved from the 3D sky image and the DEM. Compared with the maximum sky-line elevation from Skyview, the result from the 3D camera is higher by $3^{\circ}$ and that from the DEM is lower by $7^{\circ}$. The SVF calculated from the 3D camera, DEM, and Skyview is 0.991, 0.998, and 0.993, respectively. When the solar path is analyzed on an astronomical solar map over time, the sky line from the 3D camera shields the direct solar radiation up to a solar altitude of $14^{\circ}$ at the winter solstice. Solar radiation is calculated at one-minute intervals, and accumulated monthly and annually, using the GWNU model. At the summer and winter solstices, the GWNU radiation site is shielded from direct solar radiation by the western mountains 40 and 60 minutes before sunset, respectively. The maximum monthly difference between the plane and real surfaces is $29.18\;MJ\;m^{-2}$ with the 3D camera in November, while that with the DEM is $4.87\;MJ\;m^{-2}$ in January. The differences in the annual accumulated solar radiation are $208.50\;MJ\;m^{-2}$ (2.65%) and $47.96\;MJ\;m^{-2}$ (0.63%) for direct solar radiation and $30.93\;MJ\;m^{-2}$ (0.58%) and $3.84\;MJ\;m^{-2}$ (0.07%) for global solar radiation, respectively.
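The sky-view factor in the abstract above can be sketched from a sky-line profile. A minimal illustration, assuming the common isotropic-sky approximation $SVF = \overline{\cos^2\theta}$ over azimuth bins; the paper's exact retrieval from the 3D image and DEM may differ:

```python
import numpy as np

def sky_view_factor(horizon_elev_deg):
    """Sky-view factor from a sky-line (horizon) elevation profile.

    horizon_elev_deg: horizon elevation angle per azimuth bin, in degrees.
    Uses the isotropic-sky approximation SVF = mean(cos^2(theta)).
    """
    theta = np.radians(np.asarray(horizon_elev_deg, dtype=float))
    return float(np.mean(np.cos(theta) ** 2))

# A low surrounding ridge barely reduces the SVF, consistent with the
# near-unity values (0.991-0.998) reported for the GWNU site.
svf_open = sky_view_factor(np.zeros(360))       # unobstructed horizon
svf_ridge = sky_view_factor(np.full(360, 3.0))  # 3-degree ridge all around
```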

Merging Features and Optical-NIR Color Gradient of Early-type Galaxies

  • Kim, Du-Ho;Im, Myeong-Sin
    • The Bulletin of The Korean Astronomical Society
    • /
    • v.35 no.2
    • /
    • pp.41.1-41.1
    • /
    • 2010
  • It has been suggested that merging plays an important role in the formation and evolution of early-type galaxies. Optical-NIR color gradients of early-type galaxies in high-density environments are found to be less steep than those in low-density environments, hinting at frequent merger activity among early-type galaxies in high-density environments. To confirm whether the flat color gradients are the result of dry mergers, we decided to look deeper for merging features and relate them to the color gradient. We selected samples showing extreme values of the optical-NIR color gradient based on the data of a previous study, and observed them with the Maidanak Observatory 1.5 m telescope with long exposures. After masking out overlaid sources, our analysis reveals that these galaxies do not have extreme color-gradient values. A high-accuracy sky-flat technique was used during the observations to aid the discovery of faint, extended features. However, the flatness of the detector (SNUCAM) was already good enough that we could not see any marked improvement in image quality compared to images using normal sky flats. Additionally, we noticed a feature that looks like a merging tidal tail in the CFHT archival image, but it does not show up in the image we obtained. This demonstrates that flatness and correct sky estimation are very important when looking for faint merging features. In the future we plan to enlarge the sample.


Accuracy of the Point-Based Image Registration Method in Integrating Radiographic and Optical Scan Images: A Pilot Study

  • Mai, Hai Yen;Lee, Du-Hyeong
    • Journal of Korean Dental Science
    • /
    • v.13 no.1
    • /
    • pp.28-34
    • /
    • 2020
  • Purpose: The purpose of this study was to investigate the influence of different implant computer software on the accuracy of image registration between radiographic and optical scan data. Materials and Methods: Cone-beam computed tomography and optical scan data of a partially edentulous jaw were collected and transferred to three different software programs: Blue Sky Plan (Blue Sky Bio), Implant Studio (3Shape), and Geomagic DesignX (3D Systems). In each program, the two image sets were aligned using a point-based automatic image registration algorithm. Image matching error was evaluated by measuring the linear discrepancies between the two images at the anterior and posterior areas in the direction of the x-, y-, and z-axes. The Kruskal-Wallis test and a post hoc Mann-Whitney U-test with Bonferroni correction were used for statistical analyses. The significance level was set at 0.05. Result: Overall discrepancy values ranged from 0.08 to 0.30 µm. The image registration accuracy among the programs differed significantly in the x- and z-axes (P=0.009 and <0.001, respectively), but not in the y-axis (P=0.064). Conclusion: The accuracy of image registration performed by point-based automatic image matching can differ depending on the software used.
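Point-based registration of two landmark sets, as described above, is commonly solved with the Kabsch algorithm: the least-squares rigid transform (rotation plus translation) between corresponding points. A NumPy sketch of that generic method; the commercial packages named in the study use their own implementations:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping corresponding
    points src -> dst (Kabsch algorithm). Points are rows."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0] * (H.shape[0] - 1) + [d])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Pure translation is recovered exactly; residuals per point play the
# role of the per-axis discrepancies measured in the study.
src = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10.0]])
R, t = rigid_register(src, src + [1.0, 2.0, 3.0])
```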

DEVELOPMENT OF WIDE-FIELD IMAGING CAMERA FOR ZODIACAL LIGHT OBSERVATION

  • KWON S. M.;HONG S. S.;SHIN K. J.
    • Journal of The Korean Astronomical Society
    • /
    • v.37 no.4
    • /
    • pp.179-184
    • /
    • 2004
  • We have developed a wide-field imaging camera system, called WICZO, to monitor the light of the night sky over extended periods. Such monitoring is necessary for studying the morphology of the interplanetary dust cloud as well as the temporal and spatial variations of airglow emission. The system consists of an electric cooler, a CCD camera with $60\%$ quantum efficiency at 500 nm, and a fish-eye lens with a $180^{\circ}$ field of view. Wide-field imaging is highly desirable in light-of-the-night-sky observations in general, because the zodiacal light and the airglow emission extend over the entire sky. This paper illustrates the design of WICZO, reports the results of its laboratory performance test, and presents the first night-sky image, which was taken, in collaboration with Byulmaro Observatory, on top of Mt. Bongrae at Yongweol in January 2004.

Effect of All Sky Image Correction on Observations in Automatic Cloud Observation (자동 운량 관측에서 전천 영상 보정이 관측치에 미치는 효과)

  • Yun, Han-Kyung
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.15 no.2
    • /
    • pp.103-108
    • /
    • 2022
  • Various studies on cloud observation using all-sky images acquired with wide-angle camera systems have been conducted since the early 21st century, but an automatic observation system that can completely replace visual observation has not yet been achieved. In this study, to verify the quantification of cloud cover, the final step of the algorithm proposed to automate observation, the cloud distributions of the all-sky image and of the corrected image were compared and analyzed. The motivation is that clouds form at a certain height depending on their type, but, as in a retinal image, the center of the lens is enlarged and the edges are reduced, and the effect of human learning ability and spatial awareness on cloud observation is unknown. As a result of this study, the average cloud-cover difference between the all-sky image and the corrected image was 1.23%. Therefore, when compared with visual observation in tenths, the error due to correction is 1.23% of the observed amount, far smaller than the allowable error of visual observation, and since it does not include human error, accurately quantified data can be collected. Since the change in cloud cover due to the correction is insignificant, it was confirmed that accurate observations can be obtained even by omitting the correction step and observing cloud cover in the uncorrected image.
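Cloud cover from an all-sky image is typically quantified by classifying each sky pixel and taking the cloudy fraction. A hedged sketch using the classic red/blue-ratio test with an illustrative threshold; the paper's own classification algorithm and threshold are not reproduced here:

```python
import numpy as np

def cloud_cover(rgb, mask=None, rb_thresh=0.6):
    """Fractional cloud cover from an all-sky RGB image.

    Red/blue-ratio test: cloud droplets scatter red and blue almost
    equally (R/B near 1), while clear sky is strongly blue (R/B low).
    rb_thresh is an illustrative value, not taken from the paper.
    """
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float) + 1e-6     # avoid division by zero
    ratio = r / b
    if mask is None:
        mask = np.ones(ratio.shape, bool)    # use a circular fisheye mask in practice
    cloudy = (ratio > rb_thresh) & mask
    return cloudy.sum() / mask.sum()

# Synthetic frame: left half clear (blue), right half cloudy (grey).
img = np.zeros((4, 8, 3), np.uint8)
img[:, :4] = (60, 90, 200)    # clear sky: R/B = 0.3
img[:, 4:] = (180, 180, 185)  # cloud: R/B close to 1
cover = cloud_cover(img)      # half the pixels classified cloudy
```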

FPGA-Based Real-Time Multi-Scale Infrared Target Detection on Sky Background

  • Kim, Hun-Ki;Jang, Kyung-Hyun
    • Journal of the Korea Society of Computer and Information
    • /
    • v.21 no.11
    • /
    • pp.31-38
    • /
    • 2016
  • In this paper, we propose a multi-scale infrared target detection algorithm that varies the filter size using integral images. Filter-based target detection is widely used for small targets, but its suitability depends on the filter size. When there are multi-scale targets against a sky background, a detection filter with a small size cannot capture the whole shape of a large target. In contrast, a detection filter with a large size is unsuited to small-target detection and also requires a large amount of processing time. The proposed algorithm integrates the filtering results of varied filter sizes to detect both small and large targets, and performs well for both. Furthermore, it requires less processing time, since it uses the integral image to compute the mean images at different filter sizes for subtraction between the original image and the respective mean images. In addition, we propose a real-time embedded implementation using an FPGA.
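The integral-image trick that makes box means cheap at any filter size can be sketched as follows. This is a generic summed-area-table implementation for illustration, not the authors' FPGA design:

```python
import numpy as np

def integral_image(img):
    """Summed-area table, padded with a zero row/column at the top-left
    so that box sums need no boundary special cases."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_mean(ii, r0, c0, r1, c1):
    """Mean of img[r0:r1, c0:c1] in O(1) via four table lookups."""
    s = ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]
    return s / ((r1 - r0) * (c1 - c0))

img = np.arange(16, dtype=float).reshape(4, 4)
ii = integral_image(img)
m = box_mean(ii, 1, 1, 3, 3)   # mean of the central 2x2 block, O(1)
```

Because each box mean costs four lookups regardless of window size, a bank of filter sizes can be evaluated from one table, which is what makes the multi-scale subtraction affordable in real time.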

THE AKARI PROJECT: LEGACY AND DATA PROCESSING STATUS

  • Nakagawa, Takao;Yamamura, Issei
    • Publications of The Korean Astronomical Society
    • /
    • v.32 no.1
    • /
    • pp.5-9
    • /
    • 2017
  • This paper provides an overview of the AKARI mission, the first Japanese satellite dedicated to infrared astronomy. The AKARI satellite was launched in 2006 and performed both an all-sky survey and pointed observations during its 550 days in the He-cooled mission phases (Phases 1 and 2). After the He ran out, we continued near-infrared observations with mechanical cryocoolers (Phase 3). Due to a failure of its power supply, AKARI was turned off in 2011. The AKARI data are unique in terms of the observed wavelengths as well as the sky coverage, and provide a unique legacy resource for many astronomical studies. Since April 2013, a dedicated new team has been working to refine the AKARI data processing. The goal of this activity is to provide processed datasets for most of the AKARI observations in a science-ready form, so that more users can utilize the AKARI data in their astronomical research. The data to be released will include revised All-Sky Point Source Catalogues and All-Sky Image Maps, as well as high-sensitivity images and spectra obtained in pointed observations. We expect the data to be made public in the spring of 2016.

Development of A Prototype Device to Capture Day/Night Cloud Images based on Whole-Sky Camera Using the Illumination Data (정밀조도정보를 이용한 전천카메라 기반의 주·야간 구름영상촬영용 원형장치 개발)

  • Lee, Jaewon;Park, Inchun;Cho, Jungho;Ki, GyunDo;Kim, Young Chul
    • Atmosphere
    • /
    • v.28 no.3
    • /
    • pp.317-324
    • /
    • 2018
  • In this study, we review a ground-based whole-sky camera (WSC) developed to continuously capture day and night cloud images using illumination data from a precision Lightmeter with high temporal resolution. The WSC combines a precision Lightmeter, developed during the IYA (International Year of Astronomy) for analysis of artificial light pollution at night, with a DSLR camera equipped with a fish-eye lens widely used in observational astronomy. The WSC is designed to adjust the shutter speed and ISO of the camera according to the illumination data in order to capture cloud images stably. A Raspberry Pi automatically controls the process of taking cloud and sky images every minute, 24 hours a day, under the various conditions indicated by the Lightmeter illumination data. It is also used to post-process and store the cloud images and to upload the data to a web page in real time. Finally, through analysis of the cloud images captured by the developed device, we assess the technical feasibility of observing the cloud distribution (cover, type, height) quantitatively and objectively with this optical system.
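The illumination-driven exposure control described above amounts to mapping a Lightmeter reading to camera settings. A minimal sketch; the breakpoints and (shutter, ISO) pairs below are purely illustrative assumptions, not the values used for the WSC:

```python
def exposure_settings(lux):
    """Pick (shutter_seconds, iso) from an illuminance reading in lux.

    Illustrative breakpoints only: short exposure and low ISO in
    daylight, long exposure and high ISO under a dark night sky.
    """
    if lux >= 1000:      # daylight
        return 1 / 1000, 100
    if lux >= 10:        # twilight
        return 1 / 30, 400
    if lux >= 0.1:       # moonlight
        return 5, 1600
    return 30, 3200      # dark night sky

day = exposure_settings(20000)    # bright daytime reading
night = exposure_settings(0.01)   # moonless night reading
```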

CONSTRAINING COSMOLOGICAL PARAMETERS WITH IMAGE SEPARATION STATISTICS OF GRAVITATIONALLY LENSED SDSS QUASARS: MEAN IMAGE SEPARATION AND LIKELIHOOD INCORPORATING LENS GALAXY BRIGHTNESS

  • Han, Du-Hwan;Park, Myeong-Gu
    • Journal of The Korean Astronomical Society
    • /
    • v.48 no.1
    • /
    • pp.83-92
    • /
    • 2015
  • Recent large-scale surveys such as the Sloan Digital Sky Survey have produced homogeneous samples of multiple-image gravitationally lensed quasars with well-defined selection effects. Statistical analysis of these can yield independent constraints on cosmological parameters. Here we use the image separation statistics of lensed quasars from the Sloan Digital Sky Survey Quasar Lens Search (SQLS) to derive constraints on cosmological parameters. Our analysis does not require knowledge of the magnification bias, which can only be estimated from detailed knowledge of the quasar luminosity function at all redshifts, and it accounts for the bias against small-image-separation quasars caused by the selection against faint lens galaxies in the follow-up confirmation observations. We first use the mean image separation of the lensed quasars as a function of redshift to find that cosmological models with extreme curvature are inconsistent with the observed lensed quasars. We then apply the maximum likelihood test to the statistical sample of 16 lensed quasars that have both a measured redshift and a magnitude for the lens galaxy. The likelihood incorporates the probability that the observed image separation is realized given the luminosity of the lens galaxy, in the same manner as Im et al. (1997). We find that the 95% confidence range for the cosmological constant (i.e., the vacuum energy density) is $0.72{\leq}{\Omega}_{\Lambda}{\leq}1.0$ for a flat universe. We also find that the equation-of-state parameter can be consistent with -1 as long as the matter density is ${\Omega}_m{\leq}0.4$ (95% confidence range). We conclude that image separation statistics incorporating the brightness of lens galaxies can provide robust constraints on the cosmological parameters.

Implementation of Virtual Maritime Environment for LWIR Homing Missile Test (원적외선 호밍 유도탄 시험을 위한 가상 해상 환경의 구현)

  • Park, Hyeryeong
    • Journal of the Korea Institute of Military Science and Technology
    • /
    • v.19 no.2
    • /
    • pp.185-194
    • /
    • 2016
  • Generating synthetic images is essential for testing and evaluating a guided missile system in a hardware-in-the-loop simulation. For the evaluation results to be reliable, the fidelity and rendering performance of the synthetic image cannot be ignored. Simulating the LWIR sensor signature of the sea surface as a function of incident angle poses numerous challenges, especially in the maritime environment. In this paper, we investigate the key factors that determine the apparent temperature of the sea surface and propose an approximate formula combining the optical characteristics of the sea surface with the sky radiance. We find that the reflectivity of the sea surface grows as the incident angle increases, and that the sky radiance grows as the water vapor concentration in the atmosphere increases. On this basis, we generate a virtual maritime environment in the LWIR region using SE-WORKBENCH, a physically based rendering software package. The margin of error is under seven percentage points.
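The approximate formula described above combines an emitted and a reflected component. A sketch of the generic form $L = \epsilon L_{sea} + (1-\epsilon)L_{sky}$, with an assumed angular emissivity falloff that is illustrative only and not the paper's fit:

```python
import math

def apparent_radiance(L_sea, L_sky, incident_deg):
    """Apparent LWIR radiance of the sea surface: emitted radiance
    plus reflected sky radiance, L = eps*L_sea + (1 - eps)*L_sky.

    The emissivity model eps(theta) is a crude illustrative curve
    (high near nadir, falling toward grazing incidence), standing in
    for the paper's measured optical characteristics.
    """
    theta = math.radians(incident_deg)
    eps = 0.98 * math.cos(theta) ** 0.15   # assumed angular falloff
    return eps * L_sea + (1 - eps) * L_sky

# The reflected sky contribution grows with incident angle, matching
# the trend reported in the abstract (L_sea > L_sky in arbitrary units).
near_nadir = apparent_radiance(100.0, 40.0, 10.0)
grazing = apparent_radiance(100.0, 40.0, 80.0)
```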