• Title/Summary/Keyword: single camera


Relation between Black Hole Mass and Bulge in Hard X-ray selected Type 1 AGNs

  • Son, Suyeon;Kim, Minjin;Barth, Aaron J.;Ho, Luis C.
    • The Bulletin of The Korean Astronomical Society / v.45 no.1 / pp.62.1-62.1 / 2020
  • We present a scaling relation between black hole (BH) mass and bulge luminosity for 35 nearby (z<0.1) type 1 active galaxies selected from the 70-month Swift-BAT X-ray source catalog. Thanks to the unbiased selection and proximity of the parent sample, our sample is well suited to studying the physical connection between central black holes and their host galaxies. We use F814W images obtained with the Advanced Camera for Surveys on the Hubble Space Telescope to perform imaging decomposition with GALFIT. With careful treatment of the PSF model, we measure the I-band bulge brightness robustly. In combination with BH masses estimated from single-epoch spectroscopic data, we present the correlation between BH mass and bulge luminosity for the target AGNs. We demonstrate that our sample lies marginally off the M(BH)-L(bulge) relation of inactive galaxies, and we discuss possible physical origins of this discrepancy. Finally, we present how the relation depends on the photometric properties of the AGNs and host galaxies, which may provide useful insight into the co-evolution of BHs and their host galaxies.

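The photometric bookkeeping behind an M(BH)-L(bulge) relation can be sketched in a few lines: convert the measured bulge apparent magnitude to an absolute magnitude via the distance modulus, convert that to a luminosity in solar units, and evaluate a linear scaling relation. The coefficients, the example magnitude and distance, and the solar I-band magnitude below are illustrative assumptions, not values from the abstract.

```python
import math

# Illustrative photometric steps behind an M(BH)-L(bulge) scaling relation.
# All numeric constants here are hypothetical placeholders.

M_SUN_I = 4.10  # approximate absolute magnitude of the Sun in the I band (assumed)

def absolute_magnitude(apparent_mag, distance_mpc):
    """Apparent magnitude -> absolute magnitude via the distance modulus."""
    return apparent_mag - 5.0 * math.log10(distance_mpc * 1e6 / 10.0)

def bulge_luminosity_lsun(abs_mag_i):
    """I-band absolute magnitude -> luminosity in solar units."""
    return 10.0 ** (-0.4 * (abs_mag_i - M_SUN_I))

def log_mbh_from_lbulge(log_l, alpha=8.0, beta=1.0):
    """Hypothetical linear scaling: log M_BH = alpha + beta * (log L - 10.5)."""
    return alpha + beta * (log_l - 10.5)

# Example: a bulge with apparent m_I = 13.5 at 100 Mpc (made-up numbers)
M_I = absolute_magnitude(13.5, 100.0)                      # -21.5
log_l = math.log10(bulge_luminosity_lsun(M_I))             # 10.24
print(round(M_I, 2), round(log_l, 2), round(log_mbh_from_lbulge(log_l), 2))
```

The actual relation in the paper is fit to the 35-object sample; the point here is only the magnitude-to-luminosity chain feeding such a fit.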

Performance evaluation of the 76 cm telescope at Kyung Hee Astronomical Observatory (KHAO)

  • Ji, Tae-Geun;Han, Jimin;Ahn, Hojae;Lee, Sumin;Kim, Dohoon;Kim, Kyung Tae;Im, Myungshin;Pak, Soojong
    • The Bulletin of The Korean Astronomical Society / v.46 no.1 / pp.49.3-49.3 / 2021
  • The 76 cm telescope at Kyung Hee Astronomical Observatory is participating in the small-telescope network of the SomangNet project, which started in 2020. Since the installation of the telescope in 1992, the system configuration has been changed several times. The telescope has a Ritchey-Chrétien optical configuration with a 76 cm aperture and a focal ratio of f/7. The mount is a single-fork equatorial type, and its control system is operated by TheSkyX software. We use a science camera with a 4k × 4k CCD and standard Johnson-Cousins UBVRI filters, which cover a field of view of 23.7 × 23.7 arcmin. We are also developing the Kyung Hee Automatic Observing Software for the 76 cm telescope (KAOS76) for efficient operations. In this work, we present the standard-star calibration results, the current status of the system, and the expected science capabilities.

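The quoted field of view follows from standard plate-scale arithmetic. A quick sanity check, assuming a 9 µm pixel pitch for the 4k × 4k CCD (the pixel size is our assumption; aperture, focal ratio, and the 23.7 arcmin figure come from the abstract):

```python
# Back-of-the-envelope check of the quoted 23.7 arcmin field of view.
# The 9 um pixel pitch is an assumed value, not stated in the abstract.

APERTURE_MM = 760.0
FOCAL_RATIO = 7.0
PIXEL_UM = 9.0        # assumed pixel pitch
N_PIXELS = 4096

focal_length_mm = APERTURE_MM * FOCAL_RATIO      # 5320 mm
plate_scale = 206265.0 / focal_length_mm         # arcsec per mm at the focal plane
pixel_scale = plate_scale * PIXEL_UM / 1000.0    # arcsec per pixel
fov_arcmin = pixel_scale * N_PIXELS / 60.0       # arcmin per side

print(f"{pixel_scale:.3f} arcsec/px, {fov_arcmin:.1f} arcmin")
```

With these assumptions the result (~0.349 arcsec/px, ~23.8 arcmin) lands close to the stated 23.7 arcmin, consistent with a 9 µm-class sensor.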

Optical telescope with spectro-polarimetric camera on the moon

  • KIM, Ilhoon;HONG, Sukbum;KIM, Joohyun;Seo, Haingja;Kim, Jeong hyun;Choi, Hwajin
    • The Bulletin of The Korean Astronomical Society / v.46 no.2 / pp.78.1-78.1 / 2021
  • A lunar observatory not only provides ideas and experience for space settlements from the Moon to Mars, but also puts a telescope in an optimal position to compete with space telescopes. Earth observation from the Moon's surface has the advantages of no atmospheric scattering and no light pollution, and the Moon is a stable, fuel-free observation platform that allows all longitudes and latitudes of the Earth to be observed over a month. Observing the entire globe with a single observation instrument, which has never been attempted before, and calculating the global albedo would significantly help predict weather and climate change. Spectropolarimetric observations can reveal the physical and chemical properties of the Earth's atmosphere, track the global distribution and migration paths of aerosols and air pollutants, and help detect the very small space debris whose risk has increased recently. In addition, the zodiacal light, which is difficult to observe from Earth, is very easy to observe from a lunar observatory, offering an opportunity to reveal the origin of the solar system and to come a step closer to understanding exoplanetary systems. In conclusion, building and developing a lunar observatory would be a groundbreaking first step in expanding human experience and intelligence.


Robust 3D Object Detection through Distance based Adaptive Thresholding (거리 기반 적응형 임계값을 활용한 강건한 3차원 물체 탐지)

  • Eunho Lee;Minwoo Jung;Jongho Kim;Kyongsu Yi;Ayoung Kim
    • The Journal of Korea Robotics Society / v.19 no.1 / pp.106-116 / 2024
  • Ensuring robust 3D object detection is a core challenge for autonomous driving systems operating in urban environments. To tackle this issue, various 3D representations, including point clouds, voxels, and pillars, have been widely adopted, making use of LiDAR, camera, and radar sensors. These representations have improved 3D object detection performance, but real-world urban scenarios with unexpected situations can still lead to numerous false positives, posing a challenge for robust 3D models. This paper presents a post-processing algorithm that dynamically adjusts object detection thresholds based on the distance from the ego-vehicle. Conventional perception algorithms typically employ a single threshold in post-processing, yet 3D models perform well in detecting nearby objects while exhibiting suboptimal performance for distant ones. The proposed algorithm tackles this issue by employing adaptive thresholds based on the distance from the ego-vehicle, minimizing false negatives and reducing false positives in the 3D model. The results show performance enhancements across a range of scenarios, encompassing not only typical urban road conditions but also scenarios involving adverse weather.
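The post-processing idea in this abstract can be sketched compactly: interpolate the confidence threshold between a strict near value and a lenient far value, then filter detections against the threshold for their own range. The specific threshold values, distance breakpoints, and linear schedule below are illustrative assumptions, not the paper's tuned parameters.

```python
import math

# Minimal sketch of distance-adaptive confidence thresholding for 3D
# detections. All numeric parameters are illustrative assumptions.

def adaptive_threshold(distance_m, near_thr=0.6, far_thr=0.3,
                       near_d=10.0, far_d=50.0):
    """Linearly interpolate the score threshold between near_d and far_d."""
    if distance_m <= near_d:
        return near_thr
    if distance_m >= far_d:
        return far_thr
    t = (distance_m - near_d) / (far_d - near_d)
    return near_thr + t * (far_thr - near_thr)

def filter_detections(detections):
    """Keep detections whose score passes the distance-dependent threshold.

    Each detection is (x, y, score) in the ego-vehicle frame.
    """
    kept = []
    for x, y, score in detections:
        dist = math.hypot(x, y)
        if score >= adaptive_threshold(dist):
            kept.append((x, y, score))
    return kept

dets = [(5.0, 0.0, 0.55),    # near, below the strict near threshold -> dropped
        (40.0, 0.0, 0.45),   # far, modest score -> kept by the relaxed threshold
        (60.0, 0.0, 0.25)]   # very far, below even the far threshold -> dropped
print(filter_detections(dets))
```

A single fixed threshold of 0.6 would have discarded the 40 m detection; relaxing the cut with range is exactly the trade-off (fewer false negatives at distance) the abstract describes.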

Development of an Alignment Method for Retarders in isoSTED Microscopy

  • Ilkyu Park;Dong-Ryoung Lee
    • Current Optics and Photonics / v.8 no.4 / pp.421-426 / 2024
  • The use of stimulated emission depletion (STED) microscopy has significantly improved resolution beyond the limits imposed by diffraction. Furthermore, STED microscopy can adopt a 4Pi geometry to achieve an isotropic improvement in resolution. In isoSTED microscopy, a polarizing beam splitter and retarders are used in a 4Pi cavity to split beams of identical power, generating constructive and destructive interference for lateral and axial resolution improvements, respectively. Precise alignment of the retarders is crucial for optimizing the performance of isoSTED microscopy, because their orientation affects the quality of the depletion focus, which requires zero intensity at its center. Incomplete destructive interference can lead to unwanted fluorescence inhibition, resulting in degraded resolution and contrast. However, measuring the intensity and polarization state in each optical path of the 4Pi cavity is complex and requires additional devices such as a power meter. Here, we propose a simple and accurate alignment method for the 4Pi cavity in isoSTED microscopy. Our approach demonstrates the equal allocation of power between the upper and lower beam paths and achieves complete destructive interference using a polarizing beam displacer and a single CCD camera positioned outside the 4Pi cavity.
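Why the equal power split matters can be seen from the textbook two-beam interference formula: the destructive minimum reaches zero intensity only when the two beam amplitudes are equal. This is a toy scalar model with made-up intensity values, not a model of the actual isoSTED optics.

```python
import math

# Toy two-beam interference model: I = I1 + I2 + 2*sqrt(I1*I2)*cos(phase).
# Illustrates why unequal power splitting leaves a residual at the null.

def interference_intensity(i1, i2, phase):
    """Intensity of two coherent beams with intensities i1, i2 and a
    relative phase in radians."""
    return i1 + i2 + 2.0 * math.sqrt(i1 * i2) * math.cos(phase)

# Perfectly balanced beams: destructive interference is complete.
print(interference_intensity(0.5, 0.5, math.pi))   # 0.0

# A 60/40 imbalance leaves residual intensity at the nominal null,
# which in isoSTED would cause unwanted fluorescence inhibition.
print(interference_intensity(0.6, 0.4, math.pi))   # ~0.02
```

The residual for a 60/40 split is about 2% of the total power, enough to degrade the depletion-focus null that the alignment method is designed to protect.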

An implementation of 2D/3D Complex Optical System and its Algorithm for High Speed, Precision Solder Paste Vision Inspection (솔더 페이스트의 고속, 고정밀 검사를 위한 이차원/삼차원 복합 광학계 및 알고리즘 구현)

  • 조상현;최흥문
    • Journal of the Institute of Electronics Engineers of Korea SP / v.41 no.3 / pp.139-146 / 2004
  • A 2D/3D complex optical system and its vision inspection algorithm are proposed and implemented as a single-probe system for high-speed, precise vision inspection of solder pastes. A one-pass run-length labeling algorithm is proposed in place of the conventional two-pass labeling algorithm for fast extraction of the 2D shape of the solder paste image from a line-scan camera as well as a conventional area-scan camera, and optical probe path generation is also proposed for efficient 2D/3D inspection. A Moiré interferometry-based phase-shift algorithm and its optical system implementation are introduced, instead of the conventional laser slit-beam method, for high-precision 3D vision inspection. All of the time-critical algorithms are parallel-coded with MMX SIMD instructions for further speedup. The proposed system is implemented for simultaneous 2D/3D inspection of a 10 mm × 10 mm FOV with resolutions of 10 µm for both the x and y axes and 1 µm for the z axis. Experiments conducted on several PCBs show that 2D/3D inspection of an FOV, excluding image capture, takes about 0.011 s and 0.01 s, respectively, with ±1 µm height accuracy.
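Run-length based connected-component labeling, the 2D-shape step this abstract speeds up, can be sketched as follows: extract the runs of foreground pixels in each row and merge each run with overlapping runs of the previous row via union-find. This is a generic sketch of the technique in the paper's spirit; the paper's exact one-pass formulation may differ.

```python
# Sketch of run-length connected-component labeling (4-connectivity).
# Row runs are merged with overlapping runs of the previous row using
# union-find with path halving.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def label_runs(image):
    """image: list of rows of 0/1 values. Returns the component count."""
    parent = []          # union-find forest over run labels
    prev_runs = []       # (start, end, label) runs of the previous row
    for row in image:
        runs, c = [], 0
        while c < len(row):
            if row[c]:
                start = c
                while c < len(row) and row[c]:
                    c += 1               # advance to the end of the run
                label = len(parent)
                parent.append(label)
                # merge with runs above whose column range overlaps [start, c)
                for ps, pe, pl in prev_runs:
                    if ps < c and pe > start:
                        ra, rb = find(parent, label), find(parent, pl)
                        if ra != rb:
                            parent[rb] = ra
                runs.append((start, c, label))
            else:
                c += 1
        prev_runs = runs
    roots = {find(parent, i) for i in range(len(parent))}
    return len(roots)

img = [[1, 1, 0, 1],
       [0, 1, 0, 1],
       [0, 0, 0, 0],
       [1, 0, 0, 0]]
print(label_runs(img))  # 3 separate blobs
```

Working on runs rather than individual pixels is what makes the approach attractive for line-scan data: each scan line is processed once, as it arrives.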

An Implementation of Gaze Recognition System Based on SVM (SVM 기반의 시선 인식 시스템의 구현)

  • Lee, Kue-Bum;Kim, Dong-Ju;Hong, Kwang-Seok
    • The KIPS Transactions: Part B / v.17B no.1 / pp.1-8 / 2010
  • Research on gaze recognition, which determines where a user is currently looking, has advanced rapidly and found many applications. Most existing gaze recognition studies have relied on expensive equipment such as infrared (IR) LEDs, IR cameras, and head-mounted devices. This study proposes and implements an SVM-based gaze recognition system that uses a single PC web camera. The proposed system divides the screen's 36 gaze locations into 9 and 4 regions to recognize the user's gaze in 9 and 4 directions. It also applies an image-filtering method based on difference image entropy to improve recognition performance. To evaluate the proposed system, we compared it against a gaze recognition system using the eye corners and eye center and against a PCA-based gaze recognition system. In the experiments, the proposed SVM-based system achieved recognition rates of 94.42% for 4 directions and 81.33% for 9 directions, which improved to 95.37% and 82.25%, respectively, when the difference-image-entropy filtering was applied. The experimental results demonstrate higher performance than the existing gaze recognition systems.
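The difference-image-entropy filter mentioned in the abstract can be sketched as: take the pixel-wise difference between two frames, histogram the differences, and compute the Shannon entropy of that histogram. The bin layout and the sample frames below are illustrative assumptions; the paper's exact filtering criterion may differ.

```python
import math

# Sketch of difference-image entropy: the Shannon entropy of the
# histogram of pixel-wise differences between two grey-level frames.

def difference_image_entropy(frame_a, frame_b, levels=256):
    """frame_a, frame_b: equal-size lists of rows of grey levels 0..255."""
    # Differences range over [-(levels-1), levels-1]; shift them to >= 0.
    hist = [0] * (2 * levels - 1)
    n = 0
    for row_a, row_b in zip(frame_a, frame_b):
        for a, b in zip(row_a, row_b):
            hist[a - b + levels - 1] += 1
            n += 1
    entropy = 0.0
    for count in hist:
        if count:
            p = count / n
            entropy -= p * math.log2(p)
    return entropy

identical = [[10, 20], [30, 40]]
shifted = [[12, 18], [33, 47]]
print(difference_image_entropy(identical, identical))  # 0.0: no change between frames
print(difference_image_entropy(identical, shifted))    # 2.0: four distinct differences
```

Identical frames give a single-bin histogram and hence zero entropy, while changing frames spread the histogram and raise the entropy, making it a cheap measure of how much an image pair has changed.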

Methodology for Vehicle Trajectory Detection Using Long Distance Image Tracking (원거리 차량 추적 감지 방법)

  • Oh, Ju-Taek;Min, Joon-Young;Heo, Byung-Do
    • International Journal of Highway Engineering / v.10 no.2 / pp.159-166 / 2008
  • Video image processing systems (VIPS) offer numerous benefits to transportation models and applications due to their ability to monitor traffic in real time. VIPS based on a wide-area detection algorithm provide traffic parameters such as flow and velocity as well as occupancy and density. However, most current commercial VIPS utilize a tripwire detection algorithm that examines image intensity changes in the detection regions to indicate vehicle presence and passage; i.e., they do not identify individual vehicles as unique targets. If VIPS are developed to track individual vehicles and thus trace vehicle trajectories, many existing transportation models will benefit from more detailed information on individual vehicles. Furthermore, additional information obtained from the vehicle trajectories will improve incident detection by identifying lane-change maneuvers and acceleration/deceleration patterns. However, unlike human vision, VIPS cameras have difficulty recognizing vehicle movements over a detection zone longer than 100 meters; over such a distance, camera operators need to zoom in to recognize objects. As a result, vehicle tracking with a single camera is limited to detection zones under 100 m. This paper develops a methodology capable of monitoring individual vehicle trajectories based on image processing. To improve traffic flow surveillance, a long-distance tracking algorithm covering over 200 m is developed with multiple closed-circuit television (CCTV) cameras. The algorithm is capable of recognizing individual vehicle maneuvers and increasing the effectiveness of incident detection.

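Extending tracking past a single camera's ~100 m zone requires handing a trajectory over from one camera's zone to the next. A minimal sketch of that handover, assuming each track is a time-sorted list of (time, downstream-distance) samples and using made-up matching tolerances (the paper's multi-CCTV algorithm is more involved):

```python
# Sketch of trajectory handover between two CCTV detection zones.
# A track is a list of (t_seconds, x_metres) samples; tolerances are
# illustrative assumptions.

def stitch_tracks(zone1_tracks, zone2_tracks, max_dt=1.0, max_dx=5.0):
    """Pair each zone-1 track with the zone-2 track whose first sample
    best continues it in time and position, then concatenate them."""
    stitched, used = [], set()
    for t1 in zone1_tracks:
        end_t, end_x = t1[-1]
        best, best_cost = None, None
        for j, t2 in enumerate(zone2_tracks):
            if j in used:
                continue
            start_t, start_x = t2[0]
            dt, dx = start_t - end_t, start_x - end_x
            if 0.0 <= dt <= max_dt and abs(dx) <= max_dx:
                cost = dt + abs(dx)          # simple continuity cost
                if best_cost is None or cost < best_cost:
                    best, best_cost = j, cost
        if best is not None:
            used.add(best)
            stitched.append(t1 + zone2_tracks[best])
        else:
            stitched.append(t1)              # no continuation found
    return stitched

zone1 = [[(0.0, 0.0), (4.0, 95.0)]]       # vehicle leaving camera 1's zone
zone2 = [[(4.5, 97.0), (9.0, 195.0)]]     # the same vehicle entering camera 2's zone
print(stitch_tracks(zone1, zone2))
```

With both cameras' positions registered to a common downstream coordinate, the stitched track spans the full 200 m, which is what makes lane-change and acceleration/deceleration analysis over long zones possible.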

Gamma Ray Detection Processing in PET/CT scanner (PET/CT 장치의 감마선 검출과정)

  • Park, Soung-Ock;Ahn, Sung-Min
    • Journal of radiological science and technology / v.29 no.3 / pp.125-132 / 2006
  • The PET/CT scanner is an evolution in imaging technology, in which the CT and PET modalities are complementary. PET images are known for providing only low-resolution anatomic landmarks, a limitation that a detailed anatomic framework such as that provided by CT can help resolve during interpretation. PET/CT thus offers several advantages, including improved lesion localization and identification and more accurate tumor staging. Conventional PET employs a transmission scan requiring around 4 min per bed position and 30 min for a whole-body scan, whereas a PET/CT scanner can reduce the whole-body scan time by 50%. Modern PET scanners have moved from BGO to LSO scintillators, operate without septa in 3-D acquisition mode, and are combined with multidetector CT. In PET/CT, the image fusion problem is solved through hardware rather than software: such a device can acquire accurately aligned anatomic and functional images in a single scan. Effective detection of the gamma rays reaching the PET detector is critical for producing high-quality diagnostic images, so we studied the detection process of the PET detector and the resulting high-quality imaging process.

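The core of gamma-ray detection in PET is coincidence sorting: the two 511 keV annihilation photons are paired when they arrive at different detectors within a short timing window. A minimal sketch, with an assumed 6 ns window and a made-up event list (real scanners also apply energy windows, randoms correction, and more):

```python
# Sketch of PET coincidence sorting: pair gamma events detected on
# different detectors within a timing window. Window and events are
# illustrative assumptions.

def find_coincidences(events, window_ns=6.0):
    """events: list of (time_ns, detector_id), assumed time-sorted.
    Returns pairs of events on different detectors within the window."""
    pairs = []
    i = 0
    while i < len(events) - 1:
        t0, d0 = events[i]
        t1, d1 = events[i + 1]
        if t1 - t0 <= window_ns and d0 != d1:
            pairs.append((events[i], events[i + 1]))
            i += 2          # both photons consumed as one coincidence
        else:
            i += 1          # unpaired single event, discarded
    return pairs

events = [(0.0, 3), (2.5, 17),      # a true coincidence
          (40.0, 5),                # an unpaired single
          (80.0, 9), (83.0, 21)]    # another coincidence
print(find_coincidences(events))
```

Each accepted pair defines a line of response between the two detectors, and the set of lines of response is what the reconstruction turns into an image.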

Use of Unmanned Aerial Vehicle for Multi-temporal Monitoring of Soybean Vegetation Fraction

  • Yun, Hee Sup;Park, Soo Hyun;Kim, Hak-Jin;Lee, Wonsuk Daniel;Lee, Kyung Do;Hong, Suk Young;Jung, Gun Ho
    • Journal of Biosystems Engineering / v.41 no.2 / pp.126-137 / 2016
  • Purpose: The overall objective of this study was to evaluate the vegetation fraction of soybeans grown under different cropping conditions using an unmanned aerial vehicle (UAV) equipped with a red, green, and blue (RGB) camera. Methods: Test plots were prepared based on different cropping treatments, i.e., soybean single-cropping with and without herbicide application, and soybean and barley cover cropping with and without herbicide application. The UAV flights were manually controlled using a remote flight controller on the ground with 2.4 GHz radio-frequency communication. For image pre-processing, the acquired images were pre-treated and georeferenced using a fisheye distortion removal function, and ground control points were collected using Google Maps. Tarpaulin panels of different colors were used to calibrate the multi-temporal images by converting the RGB digital number values into RGB reflectance, utilizing a linear regression method. Excess Green (ExG) vegetation indices for each of the test plots were compared using the M-statistic method in order to quantitatively evaluate the greenness of soybean fields under the different cropping systems. Results: The reflectance calibration methods used in the study showed high coefficients of determination, ranging from 0.8 to 0.9, indicating the feasibility of a linear regression fitting method for monitoring multi-temporal RGB images of soybean fields. As expected, the ExG vegetation indices changed according to the soybean growth stages, showing clear differences among the test plots with different cropping treatments in the early season of <60 days after sowing (DAS). With the M-statistic method, the test plots under different treatments could be discriminated in the early season of <41 DAS, showing a value of M > 1.
Conclusion: Therefore, multi-temporal images obtained with a UAV and an RGB camera can be applied to quantify overall vegetation fractions and crop growth status, and this information can contribute to determining proper treatments for the vegetation fraction.
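The two quantities this study relies on are simple to compute: the Excess Green index ExG = 2g - r - b on chromatic coordinates, and the M-statistic M = |m1 - m2| / (s1 + s2), with M > 1 taken as separable. A minimal sketch with made-up sample pixel values:

```python
import math

# Sketch of ExG and the M-statistic used in the study.
# Sample RGB values below are illustrative, not data from the paper.

def excess_green(r, g, b):
    """ExG = 2g - r - b computed on chromatic coordinates."""
    total = r + g + b
    if total == 0:
        return 0.0
    rn, gn, bn = r / total, g / total, b / total
    return 2.0 * gn - rn - bn

def m_statistic(values_a, values_b):
    """Separability of two samples: |mean difference| / (std_a + std_b)."""
    def mean(v):
        return sum(v) / len(v)
    def std(v):
        m = mean(v)
        return math.sqrt(sum((x - m) ** 2 for x in v) / len(v))
    return abs(mean(values_a) - mean(values_b)) / (std(values_a) + std(values_b))

# A green (vegetated) plot vs a grey (bare-soil) plot, two pixels each
plot_a = [excess_green(60, 120, 50), excess_green(55, 130, 45)]
plot_b = [excess_green(90, 95, 80), excess_green(95, 90, 85)]
print(round(m_statistic(plot_a, plot_b), 2))  # well above the M > 1 criterion
```

In the study this comparison is run per treatment and per date, which is how the <41 DAS discrimination window was identified.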