• Title/Summary/Keyword: camera image


Suppression of Moiré Fringes Using Hollow Glass Microspheres for LED Screen (중공 미소 유리구를 이용한 LED 스크린 모아레 억제)

  • Songeun Hong;Jeongpil Na;Mose Jung;Gieun Kim;Jongwoon Park
    • Journal of the Semiconductor & Display Technology / v.22 no.3 / pp.28-35 / 2023
  • Moiré patterns emerge from interference between the non-emission area of an LED screen and the grid lines of the image sensor when a video recording device captures the screen. To reduce the moiré intensity, we have fabricated an anti-moiré filter using hollow glass microspheres (HGMs) by slot-die coating. Because of its large pitch (distance between LED chips), the LED screen has a large non-emission area and thus exhibits a more severe moiré phenomenon than a display panel with a very narrow black matrix (BM). It is shown that HGMs diffuse light in such a way that the periodicity of the screen is broken and the moiré intensity weakens. To quantitatively analyze the moiré suppression capability, we have calculated the spatial frequencies of the moiré fringes using the fast Fourier transform. As the HGM concentration is increased, the moiré phenomenon is suppressed and the amplitude of each discrete spatial-frequency term is reduced. Using the filter with an HGM concentration of 9 wt%, the moiré fringes, which depend sensitively on the distance between the LED screen and the camera, are almost completely removed, and the visibility of a natural image is enhanced at the sacrifice of luminance.
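As a minimal illustration of the spatial-frequency analysis described in this abstract (a 1D toy sketch, not the authors' code; the grating frequencies `f1` and `f2` are arbitrary assumptions), the moiré beat between two periodic patterns appears as a discrete low-frequency term at |f1 - f2| in the FFT:

```python
import numpy as np

# Two 1D periodic transmittance patterns whose product mimics the LED
# pixel grid as sampled by the camera sensor grid (illustrative values).
N = 1000
x = np.arange(N) / N
f1, f2 = 50.0, 46.0                                # grating frequencies (cycles/unit)
screen = 0.5 * (1 + np.cos(2 * np.pi * f1 * x))    # LED screen emission pattern
sensor = 0.5 * (1 + np.cos(2 * np.pi * f2 * x))    # camera sensor sampling grid
captured = screen * sensor                         # recorded superposition

spectrum = np.abs(np.fft.rfft(captured)) / N       # discrete spatial-frequency terms
moire_bin = int(abs(f1 - f2))                      # moiré beat appears at |f1 - f2|
```

Diffusing light (as the HGMs do) broadens these discrete peaks, which is why the amplitude of each term drops with HGM concentration.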


The nasoalveolar molding technique versus DynaCleft nasal elevator application in infants with unilateral cleft lip and palate

  • Abdallah Bahaa;Nada El-Bagoury;Noura Khaled;Sameera Mohamed;Ahmed Bahaa;Ahmed Mohamed Ibrahim;Khaled Mohamad Taha;Mohsena Ahmad Abdarrazik
    • Archives of Craniofacial Surgery / v.25 no.3 / pp.123-132 / 2024
  • Background: The introduction of presurgical nasoalveolar molding represented a significant departure from traditional molding methods. Developed by Grayson and colleagues in 1993, this technique combines an intraoral molding device with a nasal molding stent. This study aimed to compare the Grayson nasoalveolar molding appliance and the DynaCleft appliance as two methods of presurgical nasoalveolar molding. Methods: A single-blinded, randomized, parallel-arm clinical trial was conducted. Sixteen infants with complete unilateral cleft lip and palate were enrolled and divided into two groups of eight. Group 1 was treated with a modified Grayson nasoalveolar molding appliance that included a nasal stent, while group 2 was treated with DynaCleft elastic adhesive tape and an external nasal elevator. Standardized digital photographs of each infant were taken at baseline and post-treatment using a professional camera. Nine extraoral anthropometric measurements were obtained from each image using image measurement software. Results: The modified Grayson nasoalveolar appliance demonstrated significantly greater improvement than DynaCleft in alar length projection (on both sides), columella angle, and nasal tip projection. Symmetry ratios also showed enhancement, with significant improvements observed in nasal width, nasal basal width, and alar length projection (p < 0.05). Conclusion: Both the modified Grayson nasoalveolar appliance and DynaCleft appear to be effective presurgical infant orthopedics treatment options, demonstrating improvements in nasolabial aesthetics. The modified Grayson appliance, equipped with a nasal stent, improved nasal symmetry more effectively than DynaCleft, resulting in a straighter columella and a more medially positioned nasal tip.

A high-density gamma white spots-Gaussian mixture noise removal method for neutron images denoising based on Swin Transformer UNet and Monte Carlo calculation

  • Di Zhang;Guomin Sun;Zihui Yang;Jie Yu
    • Nuclear Engineering and Technology / v.56 no.2 / pp.715-727 / 2024
  • During fast neutron imaging, besides the dark current noise and readout noise of the CCD camera, the main noise comes from high-energy gamma rays generated by neutron nuclear reactions in and around the experimental setup. These high-energy gamma rays result in high-density gamma white spots (GWS) in the fast neutron image. Due to the microscopic quantum characteristics of the neutron beam itself and environmental scattering effects, fast neutron images also typically exhibit Gaussian noise. Existing neutron-image denoising methods have difficulty handling a mixture of GWS and Gaussian noise. Herein we put forward a deep learning approach based on the Swin Transformer UNet (SUNet) model to remove high-density GWS-Gaussian mixture noise from fast neutron images. The denoising model is trained with a customized loss function that combines perceptual loss and mean squared error loss to avoid the grid-like artifacts caused by using a perceptual loss alone. To address the high cost of acquiring real fast neutron images, this study introduces a Monte Carlo method to simulate noise data with GWS characteristics by computing the interaction between gamma rays and sensors based on the principle of GWS generation. Ultimately, experiments on both simulated neutron noise images and real fast neutron images demonstrate that the proposed method not only improves the quality and signal-to-noise ratio of fast neutron images but also preserves the details of the original images during denoising.
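The GWS-Gaussian mixture noise model described above can be sketched as follows (a simplified stand-in for the paper's Monte Carlo simulation; the spot density, spot amplitude, and noise sigma are illustrative assumptions):

```python
import numpy as np

def add_gws_gaussian_noise(img, spot_density=0.02, spot_amp=1.0, sigma=0.05, seed=0):
    """Sketch of the mixture noise model: additive Gaussian noise plus
    randomly located saturated white spots from gamma-ray hits.
    All parameter values here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    noisy = img + rng.normal(0.0, sigma, img.shape)   # beam/scatter Gaussian noise
    hits = rng.random(img.shape) < spot_density       # gamma white spot (GWS) locations
    noisy[hits] = spot_amp                            # spots saturate the sensor pixels
    return np.clip(noisy, 0.0, 1.0)
```

Pairs of clean and synthetically corrupted images like this are what make supervised training feasible when real fast neutron images are scarce.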

Design of a Mapping Framework on Image Correction and Point Cloud Data for Spatial Reconstruction of Digital Twin with an Autonomous Surface Vehicle (무인수상선의 디지털 트윈 공간 재구성을 위한 이미지 보정 및 점군데이터 간의 매핑 프레임워크 설계)

  • Suhyeon Heo;Minju Kang;Jinwoo Choi;Jeonghong Park
    • Journal of the Society of Naval Architects of Korea / v.61 no.3 / pp.143-151 / 2024
  • In this study, we present a mapping framework for 3D spatial reconstruction of a digital twin model using navigation and perception sensors mounted on an Autonomous Surface Vehicle (ASV). To improve the realism of digital twin models, 3D spatial information should be reconstructed as a digitalized spatial model and integrated with the component and system models of the ASV. In particular, for the 3D spatial reconstruction, color and 3D point cloud data acquired from a camera and a LiDAR sensor, corresponding to the navigation information at a specific time, must be mapped while minimizing noise. To ensure clear and accurate reconstruction of the acquired data, the proposed mapping framework includes an image preprocessing step that enhances the brightness of low-light images and a preprocessing step for the 3D point cloud data that filters out unnecessary data. Subsequently, a point matching process between consecutive 3D point cloud data is conducted using the Generalized Iterative Closest Point (G-ICP) approach, and the color information is mapped onto the matched 3D point cloud data. The feasibility of the proposed mapping framework was validated on a data set acquired from field experiments in an inland water environment, and the results are described.
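The core color-to-point-cloud mapping step can be sketched as a pinhole projection (a generic sketch under the assumption of calibrated camera intrinsics `K` and LiDAR-to-camera extrinsics `T`; this is not the paper's implementation):

```python
import numpy as np

def colorize_points(points, image, K, T):
    """Map image colors onto LiDAR points via pinhole projection.
    K (3x3 intrinsics) and T (4x4 extrinsics) are assumed calibrated."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])   # homogeneous coords
    cam = (T @ pts_h.T).T[:, :3]                             # LiDAR -> camera frame
    in_front = cam[:, 2] > 0                                 # keep points ahead of camera
    uv = (K @ cam[in_front].T).T
    uv = np.round(uv[:, :2] / uv[:, 2:3]).astype(int)        # perspective divide
    h, w = image.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(in_front)[ok]                       # indices of colorized points
    colors = image[uv[ok, 1], uv[ok, 0]]                     # sample pixel colors
    return idx, colors
```

In the framework described above, this mapping would run after G-ICP has aligned consecutive scans, so the sampled colors attach to a consistent global point cloud.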

F-18-FDG Whole Body Scan using Gamma Camera equipped with Ultra High Energy Collimator in Cancer Patients: Comparison with FDG Coincidence PET (종양 환자에서 초고에너지(511 keV) 조준기를 이용한 전신 F-18-FDG 평면 영상: Coincidence 감마카메라 단층 촬영 영상과의 비교)

  • Pai, Moon-Sun;Park, Chan-H.;Joh, Chul-Woo;Yoon, Seok-Nam;Yang, Seung-Dae;Lim, Sang-Moo
    • The Korean Journal of Nuclear Medicine / v.33 no.1 / pp.65-75 / 1999
  • Purpose: The aim of this study is to demonstrate the feasibility of 2-[fluorine-18]fluoro-2-deoxy-D-glucose (F-18-FDG) whole body scanning (FDG W/B scan) using a dual-head gamma camera equipped with ultra-high-energy collimators in patients with various cancers, and to compare the results with those of coincidence imaging. Materials and Methods: Phantom studies of planar imaging with ultra-high-energy collimators and coincidence tomography (FDG CoDe PET) were performed. Fourteen patients with known or suspected malignancy were examined. The F-18-FDG whole body scan was performed using a dual-head gamma camera with high-energy (511 keV) collimators, and regional FDG CoDe PET immediately followed it. Radiological and clinical follow-up and histologic results were correlated with the F-18-FDG findings. Results: The planar phantom study showed 13.1 mm spatial resolution at 10 cm with a sensitivity of 2638 cpm/MBq/ml. In coincidence PET, spatial resolution was 7.49 mm and sensitivity was 5351 cpm/MBq/ml. Eight out of 14 patients showed hypermetabolic sites in primary or metastatic tumors on FDG CoDe PET. The lesions showing no hypermetabolic FDG uptake by either method were all less than 1 cm, except one 2-cm metastatic lymph node. The metastatic lymph nodes with positive FDG uptake were more than 1.5 cm in size or conglomerated lesions of lymph nodes less than 1 cm in size. The FDG W/B scan showed similar results but had additional false positive and false negative cases, and could not visualize liver metastasis in one case that showed multiple metastatic sites on FDG CoDe PET. Conclusion: The FDG W/B scan with specially designed collimators depicted some cancers and their metastatic sites, although its image quality was limited compared to that of FDG CoDe PET. This study suggests that F-18-FDG positron imaging using a dual-head gamma camera is feasible in oncology and would be helpful if FDG became more available through regional distribution.


The Evaluation of Reconstructed Images in 3D OSEM According to Iteration and Subset Number (3D OSEM 재구성 법에서 반복연산(Iteration) 횟수와 부분집합(Subset) 개수 변경에 따른 영상의 질 평가)

  • Kim, Dong-Seok;Kim, Seong-Hwan;Shim, Dong-Oh;Yoo, Hee-Jae
    • The Korean Journal of Nuclear Medicine Technology / v.15 no.1 / pp.17-24 / 2011
  • Purpose: In the nuclear medicine field, high-speed image reconstruction algorithms such as OSEM are now widely used as an alternative to filtered back projection, owing to the rapid development and application of digital computers. However, the optimal reconstruction parameters have not been clearly determined. In this research, we analyze how image quality changes with the number of iterations and the number of subsets in a 3D OSEM reconstruction method that applies 3D beam modeling, using a Jaszczak phantom experiment and brain SPECT patient data. Materials and Methods: Patient data from five patients who underwent brain SPECT between August and September 2010 in the nuclear medicine department of ASAN Medical Center were studied and analyzed. For the phantom image, a Jaszczak phantom filled with water and 99mTc (500 MBq) was imaged on a Siemens Symbia T2 dual-head gamma camera. When reconstructing each image, for both patient and phantom data, we varied the iteration number (1, 4, 8, 12, 24, and 30) and the subset number (2, 4, 8, 16, and 32). For each reconstructed image, the coefficient of variation (as a measure of image noise), the image contrast, and the FWHM were computed and compared. Results: In both the patient and phantom data, image contrast and spatial resolution increased approximately linearly with the number of iterations and subsets, but the coefficient of variation did not improve with either parameter. In the comparison according to scan time (10, 20, and 30 seconds per projection), image contrast and FWHM likewise improved linearly with increasing iterations and subsets, while the coefficient of variation again did not improve.
Conclusion: This experiment confirms that, as with the existing 1D and 2D OSEM reconstruction methods, image contrast in 3D OSEM reconstruction with 3D beam modeling improves linearly with the number of iterations and subsets. However, this is a simple phantom experiment combined with results from a limited number of patients, and various other variables may exist. Generalizing from these results would therefore be premature, and 3D OSEM reconstruction should be evaluated further through additional experiments.
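The iteration/subset scheme that OSEM uses can be sketched on a toy system (a minimal sketch assuming a fully known system matrix `A`; real SPECT reconstruction additionally models geometry, attenuation, and the 3D beam):

```python
import numpy as np

def osem(y, A, n_iter=50, n_subsets=2):
    """Toy OSEM sketch: multiplicative EM updates applied subset by subset.
    y: measured projections; A: assumed system matrix (illustrative only)."""
    x = np.ones(A.shape[1])
    subsets = np.array_split(np.arange(len(y)), n_subsets)
    for _ in range(n_iter):
        for rows in subsets:                                  # one sub-iteration per subset
            As, ys = A[rows], y[rows]
            ratio = ys / np.maximum(As @ x, 1e-12)            # measured / forward projection
            x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(len(rows)), 1e-12)
    return x
```

Each pass through all subsets performs `n_subsets` updates, which is why increasing the subset number accelerates contrast recovery much like increasing the iteration count, consistent with the linear trends reported above.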


3D Modeling from 2D Stereo Image using 2-Step Hybrid Method (2단계 하이브리드 방법을 이용한 2D 스테레오 영상의 3D 모델링)

  • No, Yun-Hyang;Go, Byeong-Cheol;Byeon, Hye-Ran;Yu, Ji-Sang
    • Journal of KIISE:Software and Applications / v.28 no.7 / pp.501-510 / 2001
  • Generally, exact disparity estimation is essential for 3D modeling from stereo images. Because existing methods calculate disparities over the whole image, they require too much computational time and suffer from mismatching. In this article, exploiting the fact that disparity vectors in stereo images are not distributed evenly over the whole image but exist only around the background and the object, we apply a wavelet transform to the stereo images and, in the first step, estimate coarse disparity fields from the reduced lowpass band using an area-based method. From these coarse disparity vectors, we generate a disparity histogram and use it to separate the object from the background area. Afterwards, we restore only the object area to the original resolution and estimate dense, accurate disparity with our two-step pixel-based method, which uses the second gradient rather than pixel brightness. We also extract feature points from the separated object area and estimate depth information by applying the disparity vectors and camera parameters. Finally, we generate the 3D model using the feature points and their z coordinates. The proposed method considerably reduces computation time and estimates precise disparity through the additional pixel-based step using a LoG filter. Furthermore, the proposed foreground/background separation solves the mismatching problem of existing Delaunay triangulation and generates an accurate 3D model.
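The first-step area-based matching mentioned above can be sketched as classic SAD block matching (a minimal sketch with illustrative block size and disparity range; the paper applies this on the wavelet lowpass band before its pixel-based refinement):

```python
import numpy as np

def sad_disparity(left, right, block=3, max_disp=8):
    """Area-based matching sketch: for each left-image pixel, pick the
    horizontal shift minimizing the sum of absolute differences (SAD)."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    r = block // 2
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.abs(patch - right[y - r:y + r + 1,
                                          x - d - r:x - d + r + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))   # best-matching shift = disparity
    return disp
```

Running this on the wavelet-reduced lowpass band rather than the full image is what cuts the matching cost, since both the search area and the image size shrink.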


A Stereo Video Avatar for Supporting Visual Communication in a $CAVE^{TM}$-like System ($CAVE^{TM}$-like 시스템에서 시각 커뮤니케이션 지원을 위한 스테레오 비디오 아바타)

  • Rhee Seon-Min;Park Ji-Young;Kim Myoung-Hee
    • Journal of KIISE:Computer Systems and Theory / v.33 no.6 / pp.354-362 / 2006
  • This paper suggests a method for generating a high-quality stereo video avatar to support visual communication in a CAVE$^{TM}$-like system. In such a system, because the light projected onto the screens around the user changes frequently, it is not easy to robustly extract the user silhouette, which is an essential step in generating a video avatar. In this study, we use an infrared reflective image acquired by a grayscale camera with a longpass filter, so that changes of visible light on the screens are blocked, to extract a robust user silhouette. In addition, using two color cameras positioned at the binocular disparity of human eyes, we acquire two stereo images of the user for fast generation and stereoscopic display of a high-quality video avatar without 3D reconstruction. We also suggest an algorithm for fitting the silhouette mask obtained from the infrared reflective image onto the acquired color image to remove the background. The generated stereo images of the video avatar are texture-mapped onto a plane in the virtual world and can be displayed stereoscopically using the frame-sequential stereo method. The suggested method has the advantages that it generates a high-quality video avatar faster than a 3D approach and gives the user a stereoscopic feeling that a 2D-based approach cannot provide.
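The silhouette-extraction idea can be sketched in a few lines (a simplified sketch under the assumption that the IR and color images are already registered; the fixed threshold is an illustrative stand-in for the paper's fitting algorithm):

```python
import numpy as np

def silhouette_mask(ir_image, threshold=0.3):
    """Sketch: the user reflects IR light while the visible-light projection
    on the screens is blocked by the longpass filter, so a simple threshold
    on the grayscale IR image yields a robust silhouette (assumed setup)."""
    return ir_image > threshold

def cut_out_avatar(color_image, mask):
    """Apply the silhouette mask to a registered color image: background -> 0."""
    out = np.zeros_like(color_image)
    out[mask] = color_image[mask]
    return out
```

Doing this per eye for the two color cameras yields the left/right avatar textures directly, which is why no 3D reconstruction step is needed.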

Back-Propagation Neural Network Based Face Detection and Pose Estimation (오류-역전파 신경망 기반의 얼굴 검출 및 포즈 추정)

  • Lee, Jae-Hoon;Jun, In-Ja;Lee, Jung-Hoon;Rhee, Phill-Kyu
    • The KIPS Transactions:PartB / v.9B no.6 / pp.853-862 / 2002
  • Face detection can be defined as follows: given an arbitrary digitized image or image sequence, the goal is to determine whether or not any human face is present in the image and, if so, return its location, direction, size, and so on. This technique underlies many applications such as face recognition, facial expression analysis, and head gesture recognition, and is one of their important quality factors. But detecting a face in a given image is considerably difficult because facial expression, pose, facial size, lighting conditions, and so on change the overall appearance of faces, making it difficult to detect them rapidly and exactly. Therefore, this paper proposes a fast and exact face detection method that overcomes these restrictions by using a neural network. The proposed system can detect faces rapidly, irrespective of facial expression, background, and pose. Face detection is performed by a neural network, and the detection response time is shortened by reducing the search region and decreasing the calculation time of the neural network. The search region is reduced by using skin-color segmentation and frame differencing, and the neural network calculation time is decreased by reducing the size of its input vector with Principal Component Analysis (PCA), which can reduce the dimension of the data. The pose is also estimated in the extracted facial image and the eye region is located, which provides more information about the face. The experiments measured success rate and processing time using the squared Mahalanobis distance. Both still images and image sequences were tested; in the case of skin-color segmentation, the success rate differed depending on the camera setting. Pose estimation experiments were carried out under the same conditions, and the presence or absence of glasses led to different results in eye-region detection. The experimental results show a satisfactory detection rate and a processing time suitable for a real-time system.
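The PCA step used above to shrink the network's input vector can be sketched with a plain SVD (a generic sketch, not the paper's code; the number of components `k` is an illustrative parameter):

```python
import numpy as np

def pca_reduce(X, k):
    """Project each (flattened) candidate face region onto the top-k
    principal axes of the data matrix X (rows = samples)."""
    mu = X.mean(axis=0)
    Xc = X - mu                                   # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                           # top-k principal axes
    return Xc @ components.T, components, mu      # reduced vectors + basis
```

Feeding the k-dimensional projection instead of the raw pixel vector into the back-propagation network is what cuts its calculation time.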

Distance and Speed Measurements of Moving Object Using Difference Image in Stereo Vision System (스테레오 비전 시스템에서 차 영상을 이용한 이동 물체의 거리와 속도측정)

  • 허상민;조미령;이상훈;강준길;전형준
    • Journal of the Korea Computer Industry Society / v.3 no.9 / pp.1145-1156 / 2002
  • A method to measure the speed and distance of a moving object is proposed using a stereo vision system. One of the most important factors in measuring the speed and distance of a moving object is the accuracy of object tracking. Accordingly, a background image algorithm is adopted to track the rapidly moving object, and a local opening operator algorithm is used to remove the shadow and noise of the object. The extraction efficiency of the moving object is improved by an adaptive threshold algorithm that is independent of brightness variation. Since the left and right central points are compensated, the speed and distance of the object can be measured more exactly. Using the background image algorithm and the local opening operator algorithm, the computational load is reduced and real-time processing of the speed and distance of the moving object becomes possible. The simulation results show that the background image algorithm can track the moving object more rapidly than other algorithms. The adaptive threshold algorithm improved the extraction efficiency of the target by reducing the candidate areas. Since the central point of the target is compensated using the binocular parallax, the measurement error for the speed and distance of the moving object is reduced. The measurement error rates for the distance from the stereo camera to the moving object and for the speed of the moving object are 2.68% and 3.32%, respectively.
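The two core steps above, locating the moving object from a difference image and converting the left/right disparity of its central point into distance, can be sketched as follows (a minimal sketch: a fixed threshold stands in for the paper's adaptive one, and the focal length and baseline are assumed calibrated):

```python
import numpy as np

def moving_centroid(prev, curr, thresh=0.1):
    """Locate the moving object's central point from a thresholded
    difference image (fixed threshold here for illustration)."""
    mask = np.abs(curr.astype(float) - prev.astype(float)) > thresh
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()                   # object central point (x, y)

def distance_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Depth from binocular parallax: Z = f * B / d, where d is the
    horizontal disparity of the matched left/right centroids."""
    return focal_px * baseline_m / (x_left - x_right)
```

Speed then follows from the change in the recovered 3D position between two frames divided by the frame interval.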
