• Title/Summary/Keyword: camera image

Search Result 4,917

End to End Autonomous Driving System using Out-layer Removal (Out-layer를 제거한 End to End 자율주행 시스템)

  • Seung-Hyeok Jeong;Dong-Ho Yun;Sung-Hun Hong
    • Journal of Internet of Things and Convergence / v.9 no.1 / pp.65-70 / 2023
  • In this paper, we propose an autonomous driving system using an end-to-end model to reduce lane departure and misrecognition of traffic lights in a vision sensor-based system. End-to-end learning can be extended to a variety of environmental conditions. Driving data is collected with a vision-sensor-based model car, and from the collected data two datasets are built: the original data and the data with out-layers removed. With camera images as input and speed and steering values as output, each dataset is used to train the end-to-end model, and the reliability of the trained models is verified. The learned end-to-end model is then applied to the model car to predict the steering angle from image data. The results show that the model trained on the data with out-layers removed performs better than the model trained on the original data.
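The abstract does not publish the network itself; the sketch below is only an illustration of the kind of end-to-end model it describes (one camera frame in, steering and speed out), and the layer sizes and input resolution are assumptions rather than the paper's architecture.

```python
# Minimal sketch of an end-to-end driving model: one camera frame in,
# steering angle and speed out. Layer sizes and input resolution are
# illustrative assumptions, not the architecture used in the paper.
import torch
import torch.nn as nn

class EndToEndDriver(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(               # convolutional feature extractor
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, 5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),
        )
        self.head = nn.Sequential(                   # regression head
            nn.Linear(64 * 4 * 4, 100), nn.ReLU(),
            nn.Linear(100, 2),                       # [steering, speed]
        )

    def forward(self, x):
        return self.head(self.features(x))

model = EndToEndDriver()
frame = torch.randn(1, 3, 120, 160)                  # one RGB camera frame
steering, speed = model(frame)[0]
```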

Visual Tracking Technique Based on Projective Modular Active Shape Model (투영적 모듈화 능동 형태 모델에 기반한 영상 추적 기법)

  • Kim, Won
    • Journal of the Korea Society for Simulation / v.18 no.2 / pp.77-89 / 2009
  • Visual tracking is an essential technique in many major fields of modern society. Contour tracking is particularly attractive because it runs fast using only the target's external contour information, but it sometimes fails to follow the target motion because it is disturbed by edges surrounding the target and by weak edges on the target boundary. To overcome these weaknesses, this research suggests that point distribution models (PDMs) can be obtained by generating virtual 6-DOF motions of a mobile robot carrying a CCD camera, and that an image tracking system robust to local minima around the target can be configured by constructing the Active Shape Model in a modular fashion. To show the effectiveness of the proposed method, experiments are performed on an image stream obtained by a real mobile robot, and the improved performance is confirmed by comparing the experimental results with those of other major tracking techniques.
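As a rough, hedged illustration of the point distribution model (PDM) idea only (not the paper's modular Active Shape Model), a PDM can be built by running PCA over aligned contour landmark sets such as those generated from the virtual camera motions; the landmark data below are synthetic placeholders.

```python
# Rough illustration of building a point distribution model (PDM):
# PCA over aligned contour landmark sets. The landmark data here are
# synthetic placeholders, not the paper's virtual 6-DOF training shapes.
import numpy as np

def build_pdm(shapes, var_kept=0.95):
    """shapes: (n_samples, 2*n_landmarks) array of aligned contours."""
    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape
    cov = np.cov(centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]              # largest variance first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    k = np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), var_kept) + 1
    return mean_shape, eigvecs[:, :k], eigvals[:k]

# A new shape is approximated as mean_shape + modes @ b, with each
# coefficient b_i usually clamped to +/- 3*sqrt(eigval_i).
shapes = np.random.randn(50, 2 * 30)               # 50 samples, 30 landmarks
mean_shape, modes, variances = build_pdm(shapes)
```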

An Accurate Forward Head Posture Detection using Human Pose and Skeletal Data Learning

  • Jong-Hyun Kim
    • Journal of the Korea Society of Computer and Information / v.28 no.8 / pp.87-93 / 2023
  • In this paper, we propose a system that accurately and efficiently determines forward head posture through network learning by analyzing the user's skeletal posture. Forward head posture syndrome is a condition in which the head shifts forward after the neck is kept in a bent-forward position for a long time, causing pain in the back, shoulders, and lower back; correcting daily posture habits is known to be more effective than surgery or drug treatment. Existing methods use convolutional neural networks with webcams, and these approaches are affected by image brightness, lighting, skin color, and so on, so they tend to work only for specific people. To alleviate this problem, this paper extracts the skeleton from the image and learns from side-view rather than frontal-view data, finding the forward head posture more efficiently and accurately than the previous method. The results show that the accuracy is improved over the previous method in various experimental scenes.
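The paper itself learns a network over skeletal data; as a much simpler, hedged illustration of why a side-view skeleton is informative, the angle between the shoulder-to-ear line and the vertical can be computed from any pose estimator's 2D keypoints. The 15-degree threshold below is an assumption for illustration, not the paper's learned decision rule.

```python
# Hedged illustration: flag forward head posture from side-view 2D
# keypoints (ear and shoulder) produced by any pose estimator. The
# 15-degree threshold is an illustrative assumption, not the paper's
# learned classifier.
import math

def forward_head_angle(ear_xy, shoulder_xy):
    """Angle (deg) between the shoulder->ear line and the vertical axis.
    Image coordinates: x grows to the right, y grows downward."""
    dx = ear_xy[0] - shoulder_xy[0]
    dy = shoulder_xy[1] - ear_xy[1]                # positive when ear is above shoulder
    return math.degrees(math.atan2(abs(dx), dy))

def is_forward_head(ear_xy, shoulder_xy, threshold_deg=15.0):
    return forward_head_angle(ear_xy, shoulder_xy) > threshold_deg

print(is_forward_head(ear_xy=(260, 120), shoulder_xy=(200, 300)))  # True: head far forward
```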

Learning efficiency checking system by measuring human motion detection (사람의 움직임 감지를 측정한 학습 능률 확인 시스템)

  • Kim, Sukhyun;Lee, Jinsung;Yu, Eunsang;Park, Seon-u;Kim, Eung-Tae
    • Proceedings of the Korean Society of Broadcast Engineers Conference / fall / pp.290-293 / 2021
  • In this paper, we implement a learning efficiency checking system that inspires learning motivation and helps improve concentration by detecting what the user is doing while studying. To this end, data on learning attitude and concentration are measured by extracting the movement of the user's face and body from a real-time camera feed. A Jetson board was used to implement the real-time embedded system, and a convolutional neural network (CNN) was implemented for image recognition. After the feature parts of the subject are detected with the CNN, motion detection is performed. The captured image is shown in a GUI written with PyQt5, and data are collected by sending push messages whenever one of the monitored actions occurs. In addition, each function can be executed from the main screen of the GUI, including a statistical graph computed from the collected data, a to-do list, and white noise playback. Through the learning efficiency checking system, various functions including data collection and analysis were provided to users.
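As a hedged, simplified stand-in for the monitoring loop the abstract describes, the sketch below detects movement by frame differencing with OpenCV; the paper's system first localizes features with a CNN, which is omitted here, and the motion threshold is an illustrative assumption.

```python
# Simplified motion-detection sketch using frame differencing with OpenCV.
# The CNN feature-detection stage described in the abstract is omitted,
# and the motion threshold is an illustrative assumption.
import cv2

cap = cv2.VideoCapture(0)                          # real-time camera
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)            # pixel-wise frame difference
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    motion_ratio = cv2.countNonZero(mask) / mask.size
    if motion_ratio > 0.02:                        # illustrative threshold
        print("movement detected:", round(motion_ratio, 4))
    prev_gray = gray
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
```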

Automatic Defects Recognition System for Visual Inspection on Concrete Tunnel Lining (콘크리트 터널 라이닝의 외관조사를 위한 자동화 결함인식 시스템 개발)

  • Park, Seok-Kyun;Lee, Kang-Moon
    • KSCE Journal of Civil and Environmental Engineering Research / v.28 no.6A / pp.873-880 / 2008
  • Regular visual inspection plays a very important role in checking the state of deterioration or damage of structures. At present, visual inspection is performed mainly by inspectors sketching or photographing the structure with a camera, which takes a great deal of effort and time to survey appearance damage. The purpose of this study is to develop an automatic recognition system for a more efficient and effective inspection of appearance damage. In the process, image processing technology and a data management and analysis system for damage recognition are developed and applied. This automatic recognition system enables inspectors or clients to obtain correct data for recognizing damage such as cracks, water leakage, efflorescence, delamination (peeling), and spalling. In addition, this study aims at securing safety, maintaining function, and extending design lifetime by building up a continuous and systematic data management system.
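The abstract does not detail its image-processing pipeline; the sketch below is only an assumed example of the kind of crack-highlighting step such a system might use (adaptive thresholding plus morphological cleanup), with a placeholder file name.

```python
# Minimal crack-highlighting sketch: adaptive thresholding plus
# morphological cleanup. Assumed for illustration; the paper's actual
# image-processing pipeline is not described in the abstract.
import cv2
import numpy as np

def highlight_cracks(path):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)        # suppress sensor noise
    # Cracks are darker than the surrounding lining, so invert the threshold.
    mask = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY_INV, 31, 10)
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # bridge small gaps
    return mask

mask = highlight_cracks("lining.jpg")                  # placeholder file name
cv2.imwrite("lining_cracks.png", mask)
```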

Suppression of Moiré Fringes Using Hollow Glass Microspheres for LED Screen (중공 미소 유리구를 이용한 LED 스크린 모아레 억제)

  • Songeun Hong;Jeongpil Na;Mose Jung;Gieun Kim;Jongwoon Park
    • Journal of the Semiconductor & Display Technology / v.22 no.3 / pp.28-35 / 2023
  • Moiré patterns emerge from interference between the non-emission area of an LED screen and the grid lines of the image sensor of a video recording device when recording in front of the LED screen. To reduce the moiré intensity, we have fabricated an anti-moiré filter using hollow glass microspheres (HGMs) by slot-die coating. The LED screen has a large non-emission area because of its large pitch (distance between LED chips), causing a more severe moiré phenomenon than a display panel with a very narrow black matrix (BM). It is shown that the HGMs diffuse light in such a way that the periodicity of the screen is broken and thus the moiré intensity weakens. To quantitatively analyze its moiré suppression capability, we have calculated the spatial frequencies of the moiré fringes using the fast Fourier transform. It is shown that the moiré phenomenon is suppressed, and thus the amplitude of each discrete spatial frequency term is reduced, as the HGM concentration is increased. Using the filter with an HGM concentration of 9 wt%, the moiré fringes, which appear depending sensitively on the distance between the LED screen and the camera, are almost completely removed, and the visibility of a natural image is enhanced at the sacrifice of some luminance.
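The abstract quantifies moiré with the fast Fourier transform; a hedged sketch of extracting the dominant spatial-frequency terms from a fringe intensity profile is shown below, with a synthetic profile standing in for a line scanned from a recorded image.

```python
# Hedged sketch: extract the dominant spatial-frequency peaks of a moiré
# fringe profile with the fast Fourier transform. The sinusoidal profile
# below is synthetic, standing in for an intensity line scanned from a
# recorded image of the LED screen.
import numpy as np

pixels = np.arange(1024)
profile = (1.0
           + 0.4 * np.sin(2 * np.pi * 0.012 * pixels)    # moiré fringe term
           + 0.1 * np.sin(2 * np.pi * 0.150 * pixels))   # screen pitch term

spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
freqs = np.fft.rfftfreq(pixels.size, d=1.0)              # cycles per pixel

peaks = np.argsort(spectrum)[-2:]                        # two strongest terms
for i in sorted(peaks):
    print(f"frequency {freqs[i]:.3f} cycles/pixel, amplitude {spectrum[i]:.1f}")
```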

The nasoalveolar molding technique versus DynaCleft nasal elevator application in infants with unilateral cleft lip and palate

  • Abdallah Bahaa;Nada El-Bagoury;Noura Khaled;Sameera Mohamed;Ahmed Bahaa;Ahmed Mohamed Ibrahim;Khaled Mohamad Taha;Mohsena Ahmad Abdarrazik
    • Archives of Craniofacial Surgery / v.25 no.3 / pp.123-132 / 2024
  • Background: The introduction of presurgical nasoalveolar molding represented a significant departure from traditional molding methods. Developed by Grayson and colleagues in 1993, this technique combines an intraoral molding device with a nasal molding stent. This study aimed to compare the Grayson nasoalveolar molding appliance versus the DynaCleft appliance as two methods of presurgical nasoalveolar molding. Methods: A single-blinded, randomized, parallel-arm clinical trial was conducted. Sixteen infants with complete unilateral cleft lip and palate were enrolled and divided into two groups of eight. Group 1 was treated with a modified Grayson nasoalveolar molding appliance that included a nasal stent, while group 2 was treated with DynaCleft elastic adhesive tape and an external nasal elevator. Standardized digital photographs of each infant were taken at baseline and post-treatment using a professional camera. Nine extraoral anthropometric measurements were obtained from each image using image measurement software. Results: The modified Grayson nasoalveolar appliance demonstrated a more significant improvement compared to DynaCleft in terms of alar length projection (on both sides), columella angle, and nasal tip projection. Symmetry ratios also showed enhancement, with significant improvements observed in nasal width, nasal basal width, and alar length projection (p < 0.05). Conclusion: Both the modified Grayson nasoalveolar appliance and DynaCleft appear to be effective presurgical infant orthopedics treatment options, demonstrating improvements in nasolabial aesthetics. The modified Grayson appliance, equipped with a nasal stent, improved nasal symmetry more effectively than DynaCleft, resulting in a straighter columella and a more medially positioned nasal tip.

A high-density gamma white spots-Gaussian mixture noise removal method for neutron images denoising based on Swin Transformer UNet and Monte Carlo calculation

  • Di Zhang;Guomin Sun;Zihui Yang;Jie Yu
    • Nuclear Engineering and Technology / v.56 no.2 / pp.715-727 / 2024
  • During fast neutron imaging, besides the dark current noise and readout noise of the CCD camera, the main noise comes from high-energy gamma rays generated by neutron nuclear reactions in and around the experimental setup. These high-energy gamma rays result in high-density gamma white spots (GWS) in the fast neutron image. Due to the microscopic quantum characteristics of the neutron beam itself and environmental scattering effects, fast neutron images also typically exhibit a mixture of Gaussian noise. Existing denoising methods for neutron images struggle to handle a mixture of GWS and Gaussian noise. Herein we put forward a deep learning approach based on the Swin Transformer UNet (SUNet) model to remove high-density GWS-Gaussian mixture noise from fast neutron images. The improved denoising model is trained with a customized loss function that combines perceptual loss and mean squared error loss to avoid the grid-like artifacts caused by using a perceptual loss alone. To address the high cost of acquiring real fast neutron images, this study introduces a Monte Carlo method to simulate noise data with GWS characteristics by computing the interaction between gamma rays and sensors based on the principle of GWS generation. Ultimately, experiments on simulated neutron noise images and real fast neutron images demonstrate that the proposed method not only improves the quality and signal-to-noise ratio of fast neutron images but also preserves the details of the original images during denoising.
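As a hedged sketch of the combined perceptual and mean squared error loss the abstract describes (written in PyTorch; the VGG16 feature layer and the 0.1 weighting are illustrative assumptions, not the paper's configuration):

```python
# Hedged sketch of a combined perceptual + MSE loss of the kind the
# abstract describes. The VGG16 feature slice and the 0.1 weighting are
# illustrative assumptions, not the paper's actual configuration.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class CombinedLoss(nn.Module):
    def __init__(self, perceptual_weight=0.1):
        super().__init__()
        vgg = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
        for p in vgg.parameters():
            p.requires_grad = False                # frozen feature extractor
        self.vgg = vgg
        self.mse = nn.MSELoss()
        self.w = perceptual_weight

    def forward(self, denoised, clean):
        pixel_loss = self.mse(denoised, clean)
        # Perceptual term: distance between VGG feature maps
        # (single-channel neutron images are tiled to three channels).
        feat_loss = self.mse(self.vgg(denoised.repeat(1, 3, 1, 1)),
                             self.vgg(clean.repeat(1, 3, 1, 1)))
        return pixel_loss + self.w * feat_loss

criterion = CombinedLoss()
loss = criterion(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))
```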

Design of a Mapping Framework on Image Correction and Point Cloud Data for Spatial Reconstruction of Digital Twin with an Autonomous Surface Vehicle (무인수상선의 디지털 트윈 공간 재구성을 위한 이미지 보정 및 점군데이터 간의 매핑 프레임워크 설계)

  • Suhyeon Heo;Minju Kang;Jinwoo Choi;Jeonghong Park
    • Journal of the Society of Naval Architects of Korea / v.61 no.3 / pp.143-151 / 2024
  • In this study, we present a mapping framework for 3D spatial reconstruction of a digital twin model using navigation and perception sensors mounted on an Autonomous Surface Vehicle (ASV). To improve the realism of digital twin models, 3D spatial information should be reconstructed as a digitalized spatial model and integrated with the component and system models of the ASV. In particular, for the 3D spatial reconstruction, color and 3D point cloud data acquired from a camera and a LiDAR sensor, corresponding to the navigation information at a specific time, must be mapped while minimizing noise. To ensure clear and accurate reconstruction of the acquired data, the proposed mapping framework includes an image preprocessing step designed to enhance the brightness of low-light images and a preprocessing step for the 3D point cloud data to filter out unnecessary points. Subsequently, a point matching process between consecutive 3D point clouds was conducted using the Generalized Iterative Closest Point (G-ICP) approach, and the color information was mapped onto the matched 3D point cloud data. The feasibility of the proposed mapping framework was validated on a field data set acquired from field experiments in an inland water environment, and the results are described.
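The abstract registers consecutive LiDAR point clouds with G-ICP; a hedged sketch using Open3D's registration module is shown below, where the file names, voxel size, and correspondence distance are placeholders rather than the paper's settings.

```python
# Hedged sketch of registering consecutive LiDAR scans with Generalized ICP
# using Open3D; the paper's own implementation details are not given.
# File names, voxel size, and correspondence distance are placeholders.
import numpy as np
import open3d as o3d

def register_gicp(source, target, voxel=0.2, max_dist=1.0):
    src = source.voxel_down_sample(voxel)          # thin out the clouds first
    tgt = target.voxel_down_sample(voxel)
    for pcd in (src, tgt):
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    result = o3d.pipelines.registration.registration_generalized_icp(
        src, tgt, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationForGeneralizedICP())
    return result.transformation                   # 4x4 pose of source in target

source = o3d.io.read_point_cloud("scan_t0.pcd")    # placeholder file names
target = o3d.io.read_point_cloud("scan_t1.pcd")
T = register_gicp(source, target)
source.transform(T)                                # align, then map colors onto points
```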

F-18-FDG Whole Body Scan using Gamma Camera equipped with Ultra High Energy Collimator in Cancer Patients: Comparison with FDG Coincidence PET (종양 환자에서 초고에너지(511 keV) 조준기를 이용한 전신 F-18-FDG 평면 영상: Coincidence 감마카메라 단층 촬영 영상과의 비교)

  • Pai, Moon-Sun;Park, Chan-H.;Joh, Chul-Woo;Yoon, Seok-Nam;Yang, Seung-Dae;Lim, Sang-Moo
    • The Korean Journal of Nuclear Medicine / v.33 no.1 / pp.65-75 / 1999
  • Purpose: The aim of this study is to demonstrate the feasibility of a 2-[fluorine-18] fluoro-2-deoxy-D-glucose (F-18-FDG) whole body scan (FDG W/B scan) using a dual-head gamma camera equipped with ultra-high-energy collimators in patients with various cancers, and to compare the results with those of coincidence imaging. Materials and Methods: Phantom studies of planar imaging with ultra-high-energy collimators and coincidence tomography (FDG CoDe PET) were performed. Fourteen patients with known or suspected malignancy were examined. An F-18-FDG whole body scan was performed using the dual-head gamma camera with high-energy (511 keV) collimators, immediately followed by regional FDG CoDe PET. Radiological and clinical follow-up and histologic results were correlated with the F-18-FDG findings. Results: The planar phantom study showed 13.1 mm spatial resolution at 10 cm with a sensitivity of 2638 cpm/MBq/ml. In coincidence PET, spatial resolution was 7.49 mm and sensitivity was 5351 cpm/MBq/ml. Eight out of 14 patients showed hypermetabolic sites in primary or metastatic tumors on FDG CoDe PET. The lesions showing no hypermetabolic FDG uptake in both methods were all less than 1 cm, except one 2-cm metastatic lymph node. The metastatic lymph nodes with positive FDG uptake were more than 1.5 cm in size or were conglomerated lesions of lymph nodes less than 1 cm in size. The FDG W/B scan showed similar results but had additional false positive and false negative cases. The FDG W/B scan could not visualize liver metastases in one case that showed multiple metastatic sites on FDG CoDe PET. Conclusion: The FDG W/B scan with specially designed collimators depicted some cancers and their metastatic sites, although it was limited in image quality compared to FDG CoDe PET. This study suggests that F-18-FDG positron imaging using a dual-head gamma camera is feasible in oncology and would be helpful if FDG became more available through regional distribution.
