• Title/Summary/Keyword: Approach Detection System


Detecting Incipient Caries Using Front-illuminated Infrared Light Scattering Imaging

  • Kim, Ji-Young; Ro, Jung-Hoon; Jeon, Gye-Rok; Kim, Jin-Bom; Ye, Soo-Young
    • Transactions on Electrical and Electronic Materials / v.13 no.6 / pp.310-316 / 2012
  • A new method for early caries diagnosis was proposed and tested with a home-made optical examination system that combined quantitative light-induced fluorescence (QLF) and digital imaging fiber-optic transillumination (FOTI/DIFOTI), with light sources spanning a wide spectral range from 350 nm to 1,000 nm. The front-illuminated infrared light scattering image (FIR) showed diagnostic ability similar to that of DIFOTI. The FIR method was devised from the observation that caries lesions lose the high transmittance and low scattering of sound enamel tissue. There are various methods for the early diagnosis of caries, such as visual examination, exploration, X-ray radiography, QLF, FOTI, and infrared fluorescence (DIAGNOdent); among them, methods based on optical properties are regarded as having the most potential. A comparative study was performed between the FOTI, QLF, DIAGNOdent, optical coherence tomography (OCT), and FIR scattering image methods, using 20 extracted teeth with early caries. A scale of lesion measurement based on optical image contrast was proposed (see the sketch below). Statistical analysis showed a significant correlation between the DIFOTI and FIR methods (r = 0.35, p < 0.05), whereas the QLF and DIAGNOdent methods showed little association with FIR images, since their detection principles differ from that of FIR. Tomographic images obtained by OCT with a 1,330 nm superluminescent LED, used as a gold standard of tooth structure, verified that the FOTI and FIR results correctly represented the inhomogeneity of the dental tissue. The newly proposed FIR method attained diagnostic results similar to those of FOTI, but with an easier approach.
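A minimal sketch of how an image-contrast lesion score and the reported correlation between modalities could be computed. The contrast definition, the region masks, and the synthetic per-tooth scores below are assumptions for illustration; the abstract does not give the paper's exact formulas.

```python
import numpy as np
from scipy import stats

def lesion_contrast(image, lesion_mask, sound_mask):
    """Contrast-based lesion score against sound enamel.
    The (I_sound - I_lesion) / I_sound form is an assumed definition;
    the abstract does not specify the paper's exact formula."""
    i_lesion = float(image[lesion_mask].mean())
    i_sound = float(image[sound_mask].mean())
    return (i_sound - i_lesion) / i_sound

# Hypothetical per-tooth lesion scores from two modalities (20 samples,
# matching the study size) and their Pearson correlation.
rng = np.random.default_rng(0)
difoti_scores = rng.random(20)
fir_scores = 0.4 * difoti_scores + 0.6 * rng.random(20)
r, p = stats.pearsonr(difoti_scores, fir_scores)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```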

Human Risk Assessment of Perchloroethylene Considering Multi-media Exposure (다매체 노출을 고려한 Perchloroethylene의 인체위해성평가연구)

  • Seo, Jungkwan; Kim, Taksoo; Jo, Areum; Kim, Pilje; Choi, Kyunghee
    • Journal of Environmental Health Sciences / v.40 no.5 / pp.397-406 / 2014
  • Objectives: Perchloroethylene (PCE) is a volatile chemical widely used as a solvent in the dry-cleaning and textile-processing industries. It was evaluated as Group 2, "probably carcinogenic to humans," by the Integrated Risk Information System (IRIS) of the United States Environmental Protection Agency (U.S. EPA) in 2012. In order to provide a scientific basis for establishing risk-management measures for chemicals on the national priority substances list, an aggregate risk assessment was conducted for PCE, which is included among the top-10 substances. Methods: We investigated and monitored PCE exposure (e.g., exposure scenarios, detection levels, and exposure factors) and assessed its multi-media (outdoor air, indoor air, and ground water) exposure risk with deterministic and probabilistic approaches. Results: In the human risk assessment (HRA), the level of human exposure was higher in the younger age groups, and inhalation at home was the highest among the exposure routes. Outdoor air and uptake of drinking water together contributed less than 1% of total PCE exposure. These findings suggest that the level of risk is negligible, since the Hazard Index (HI) derived from the HRA was below 1 in all age groups, with a maximum HI of 0.17 when reasonable maximum exposure was applied (the HI aggregation is sketched below). Conclusion: Despite the low exposure risk, further studies are needed that consider the main sources, including occupational exposure.
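A minimal sketch of the multi-route hazard index aggregation behind the Results (HI below 1 implies negligible risk). The route names, exposure doses, and reference doses are placeholders, not the study's actual data.

```python
# Minimal sketch of a multi-media (multi-route) hazard index calculation.
# All numeric values below are placeholders, not the values used in the study.

def hazard_quotient(exposure_dose, reference_dose):
    """HQ = exposure dose / reference (safe) dose; HQ < 1 means low concern."""
    return exposure_dose / reference_dose

# Hypothetical chronic daily doses (mg/kg/day) and reference doses per route
# for one age group: (exposure_dose, reference_dose).
routes = {
    "indoor_air_inhalation": (1.0e-3, 1.7e-2),
    "outdoor_air_inhalation": (5.0e-6, 1.7e-2),
    "drinking_water_ingestion": (2.0e-6, 6.0e-3),
}

# The hazard index is the sum of hazard quotients over all exposure routes.
hi = sum(hazard_quotient(dose, rfd) for dose, rfd in routes.values())
print(f"Hazard Index = {hi:.3f} -> {'acceptable' if hi < 1 else 'of concern'}")
```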

DCT-based Digital Dropout Detection using SVM (SVM을 이용한 DCT 기반의 디지털 드롭아웃 검출)

  • Song, Gihun; Ryu, Byungyong; Kim, Jaemyun; Ahn, Kiok; Chae, Oksam
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.7 / pp.190-200 / 2014
  • The video-based systems of broadcasters and video-related institutions have shifted from analog to digital worldwide. This migration process can introduce a defect, digital dropout, that degrades content quality. Moreover, little research has focused on this kind of defect, and the existing studies have limitations. For that reason, we propose a new feature-extraction method, based on the discrete cosine transform (DCT), that emphasizes the block pattern peculiar to digital dropout. For classifying error blocks, we use a support vector machine (SVM), which handles such feature vectors efficiently (a minimal sketch appears below). Furthermore, the proposed method overcomes the limitation of previous approaches that rely on frame-to-frame continuity: it uses only the information of a single frame and works well even in the presence of fast-moving objects, without requiring a specific model or parameter estimation. Therefore, this approach can detect digital dropout with minimal complexity.
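A minimal sketch of block-wise DCT feature extraction followed by SVM classification, in the spirit of the abstract. The 8x8 block size, the low-frequency coefficient descriptor, and the random training data are assumptions; the paper's actual feature design is not given here.

```python
import numpy as np
from scipy.fft import dctn          # 2-D DCT of an image block
from sklearn.svm import SVC

BLOCK = 8  # assumed block size, typical for DCT-coded video

def block_dct_feature(block):
    """Feature vector from the 2-D DCT of one block: magnitudes of the
    low-frequency coefficients (an assumed descriptor, 16-D)."""
    coeffs = dctn(block.astype(np.float64), norm="ortho")
    return np.abs(coeffs[:4, :4]).ravel()

def frame_features(frame):
    """One feature vector per non-overlapping 8x8 block of a single frame."""
    h, w = frame.shape
    feats = [block_dct_feature(frame[y:y + BLOCK, x:x + BLOCK])
             for y in range(0, h - BLOCK + 1, BLOCK)
             for x in range(0, w - BLOCK + 1, BLOCK)]
    return np.array(feats)

# Train an SVM on labelled blocks (1 = dropout, 0 = normal); data is hypothetical.
X_train = np.random.rand(200, 16)
y_train = np.random.randint(0, 2, 200)
clf = SVC(kernel="rbf").fit(X_train, y_train)

# Classify every block of a (synthetic) test frame.
test_frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
dropout_blocks = clf.predict(frame_features(test_frame))
```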

Sensorless Vector Control Using Tabu Search Algorithm (타부 탐색을 이용한 센서리스 벡터 제어)

  • Lee, Yang-Woo; Park, Kyung-Hun
    • Journal of the Korea Institute of Information and Communication Engineering / v.13 no.12 / pp.2625-2632 / 2009
  • Recently, induction motor speed control based on vector control theory has been applied in high-efficiency industrial applications. Speed sensors attached to the motor are conventionally used to detect the rotating speed, but they have drawbacks: cabling must be installed to minimize electrical noise, maintenance becomes more difficult, and cost increases. Therefore, speed-sensorless vector control has been actively studied. This paper studies the design of a sensorless vector controller for an induction motor using tabu search. The proposed sensorless vector control for the induction motor is composed of two parts: the first optimizes the speed estimation with initial PI parameters, and the second optimizes the speed control with initial PI parameters using tabu search. The proposed tabu search is improved by generating neighbor solutions from a triangular random distribution (see the sketch below). To show the usefulness of the proposed method, we apply the controller to the sensorless speed control of an actual AC induction motor system; the performance of this approach is verified through simulation and experiment.
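A minimal sketch of tabu search with triangular-distribution neighbor generation for tuning a (Kp, Ki) pair, as the abstract describes. The cost function, bounds, tabu-list length, and candidate count are placeholders; in the paper the cost would come from the speed-estimation or speed-control error of the drive.

```python
import random

def tabu_search_pi(cost, init, bounds, iters=200, tabu_len=20, spread=0.2):
    """Tune (Kp, Ki) by tabu search; neighbors are drawn from a triangular
    distribution centred on the current solution, following the abstract's
    idea. All numeric settings here are illustrative assumptions."""
    current = best = tuple(init)
    best_cost = cost(best)
    tabu = [current]
    for _ in range(iters):
        candidates = []
        for _ in range(10):
            cand = []
            for v, (lo, hi) in zip(current, bounds):
                step = spread * (hi - lo)
                # Triangular perturbation centred on the current value.
                x = random.triangular(v - step, v + step, v)
                cand.append(min(hi, max(lo, x)))
            cand = tuple(cand)
            if cand not in tabu:
                candidates.append(cand)
        if not candidates:
            continue
        current = min(candidates, key=cost)   # best non-tabu neighbor
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)                       # expire the oldest tabu entry
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    return best

# Placeholder cost: in practice, run the speed-estimation/control simulation.
demo_cost = lambda kpki: (kpki[0] - 3.0) ** 2 + (kpki[1] - 0.5) ** 2
print(tabu_search_pi(demo_cost, init=(1.0, 1.0), bounds=[(0, 10), (0, 5)]))
```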

AUTOMATIC PRECISION CORRECTION OF SATELLITE IMAGES

  • Im, Yong-Jo; Kim, Tae-Jung
    • Proceedings of the KSRS Conference / 2002.10a / pp.40-44 / 2002
  • Precision correction is the process of geometrically aligning images to a reference coordinate system using GCPs (ground control points). Many applications of remote sensing data, such as change detection, mapping, and environmental monitoring, rely on the accuracy of precision correction. However, it is a very time-consuming and laborious process: it requires GCP collection, i.e., the identification of image points and their corresponding reference coordinates. At typical satellite ground stations, GCP collection consumes most of the manpower spent on processing satellite images, so a method for automatic registration of satellite images is in demand. In this paper, we propose a new algorithm for automatic precision correction using GCP chips and RANSAC (random sample consensus). The algorithm is divided into two major steps. The first is the automated generation of ground control points, using automated stereo matching based on normalized cross correlation; we improved the matching accuracy by determining the size and shape of the match windows according to the incidence angle and scene orientation from ancillary data. The second is the robust estimation of the mapping function from the control points; we used the RANSAC algorithm for this step and effectively removed outliers from the matching results (a sketch of this step is given below). We carried out experiments with SPOT images over three test sites, taken at different times and look angles. The left image was used to select GCP chips and the right image was matched against the GCP chips to perform automatic registration. As a result, we showed that our approach of automated matching and robust estimation works well for automated registration.
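A minimal sketch of the second step, robust estimation of the mapping function from matched control points with RANSAC. The affine model, inlier threshold, and synthetic matches are illustrative assumptions; the paper's exact mapping function is not specified in the abstract.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform (2x3 matrix) mapping src -> dst, both (N, 2)."""
    A = np.hstack([src, np.ones((len(src), 1))])      # (N, 3)
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)  # (3, 2)
    return params.T                                    # (2, 3)

def ransac_affine(src, dst, iters=500, thresh=2.0):
    """Estimate an affine mapping from noisy GCP matches with RANSAC,
    rejecting gross matching outliers. Threshold and model are assumptions."""
    best_inliers = np.zeros(len(src), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)   # minimal sample
        M = fit_affine(src[idx], dst[idx])
        pred = (M[:, :2] @ src.T).T + M[:, 2]
        inliers = np.linalg.norm(pred - dst, axis=1) < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refit on all inliers of the best consensus set.
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers

# Hypothetical matched control points (image coords -> reference coords).
src = np.random.rand(50, 2) * 1000
dst = src @ np.array([[1.01, 0.02], [-0.02, 0.99]]).T + np.array([12.0, -7.0])
dst[:5] += 50                                   # a few gross matching outliers
M, inliers = ransac_affine(src, dst)
print(M, inliers.sum(), "inliers")
```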


Radionuclide Angiocardiographic Evaluation of Left-to-Right Cardiac Shunts: Analysis of Time-Activity Curves (핵의학적 심혈관 촬영술에 의한 좌우 심단락의 진단 : 시간-방사능 곡선의 분석)

  • Kim, Ok-Hwa; Bahk, Yong-Whee; Kim, Chi-Kyung
    • The Korean Journal of Nuclear Medicine / v.21 no.2 / pp.155-165 / 1987
  • The noninvasive nature of radionuclide angiocardiography provides a useful approach for the evaluation of left-to-right cardiac shunts (LRCS). While qualitative information can be obtained by inspecting serial radionuclide angiocardiograms, quantitative information can be obtained by analyzing time-activity curves with a computer system. The count-ratio method and the pulmonary-to-systemic flow ratio (QP/QS) obtained by the gamma-variate fit method were used to evaluate the accuracy of detection and localization of LRCS. One hundred and ten time-activity curves were analyzed: 46 LRCS (11 atrial septal defects, 22 ventricular septal defects, 13 patent ductus arteriosus) and 64 normal subjects. By computer analysis of the time-activity curves of the right atrium, right ventricle, and lungs separately, count ratios modified by adding the mean cardiac transit time were calculated for each anatomic site. In normal subjects the mean count ratio in the right atrium, ventricle, and lungs was 0.24 on average. In atrial septal defects the count ratios were high in the right atrium, ventricle, and lungs, whereas in ventricular septal defects the count ratios were higher only in the right ventricle and lungs. Patent ductus arteriosus showed normal count ratios in the heart but high count ratios in the lungs. Thus, the count-ratio method could separate normal subjects from those with intracardiac or extracardiac shunts and, moreover, could localize the shunt level in LRCS. Another method that could differentiate intracardiac from extracardiac shunts was measuring QP/QS in the left and right lungs separately (the gamma-variate fit is sketched below). In patent ductus arteriosus the left-lung QP/QS was higher than that of the right lung, whereas in atrial and ventricular septal defects the QP/QS ratios were equal in both lungs. From this study, it was found that by measuring QP/QS separately in each lung, intracardiac shunts could be differentiated from extracardiac shunts.
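A sketch of the classic gamma-variate approach to QP/QS from a pulmonary time-activity curve: one gamma variate is fit to the first pass (area A1) and one to the early shunt recirculation (area A2), with QP/QS = A1 / (A1 - A2). This is offered as an assumed, textbook-style reading of the abstract's "gamma variate fit method", not the paper's exact implementation; initial guesses and fitting windows are left to the user.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.integrate import trapezoid

def gamma_variate(t, k, t0, alpha, beta):
    """Gamma-variate model commonly fit to first-pass time-activity curves."""
    dt = np.clip(t - t0, 0, None)
    return k * dt ** alpha * np.exp(-dt / beta)

def qp_qs(t, lung_curve, p0_first, p0_shunt):
    """Estimate QP/QS from a pulmonary time-activity curve.
    p0_first / p0_shunt are initial (k, t0, alpha, beta) guesses taken from
    inspection of the curve; this is an illustrative sketch."""
    # Fit the first-pass peak and subtract it from the curve.
    popt1, _ = curve_fit(gamma_variate, t, lung_curve, p0=p0_first, maxfev=10000)
    first_pass = gamma_variate(t, *popt1)
    residual = np.clip(lung_curve - first_pass, 0, None)
    # Fit the early recirculation (shunt) component to the residual.
    popt2, _ = curve_fit(gamma_variate, t, residual, p0=p0_shunt, maxfev=10000)
    a1 = trapezoid(first_pass, t)
    a2 = trapezoid(gamma_variate(t, *popt2), t)
    return a1 / (a1 - a2)
```

In practice the first-pass fit would typically be restricted to samples around the initial peak, before recirculation contaminates the curve.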


Improved Feature Extraction Method for the Contents Polluter Detection in Social Networking Service (SNS에서 콘텐츠 오염자 탐지를 위한 개선된 특징 추출 방법)

  • Han, Jin Seop; Park, Byung Joon
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.11 / pp.47-54 / 2015
  • The number of users of SNS such as Twitter and Facebook has increased with the development of the internet and the spread of mobile devices such as smartphones. At the same time, content pollution problems, in which SNS are polluted by posts such as product advertisements, defamatory comments, and adult content, are also increasing. This paper proposes an improved method of extracting content-polluter features for detecting content polluters in SNS. In particular, it presents an incremental approach that considers only the increment in the data, rather than batch processing of the entire data set, so that feature values for new user data can be extracted efficiently at the prediction and classification stage (a sketch of such an incremental update is given below). Experiments comparatively assess whether the proposed method maintains classification accuracy while improving time efficiency in comparison with the batch-processing method.
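A minimal sketch of an incremental (per-post) feature update, as opposed to batch re-computation over a user's whole history. The specific features (post count, URL ratio, mean posting interval) are illustrative assumptions; the paper's feature set is not listed in the abstract.

```python
class IncrementalUserFeatures:
    """Per-user features maintained incrementally for content-polluter
    classification: each new post updates the features in O(1) instead of
    re-scanning the user's entire history (batch processing)."""

    def __init__(self):
        self.post_count = 0
        self.url_posts = 0
        self.last_post_time = None
        self.mean_interval = 0.0

    def update(self, post_time, contains_url):
        self.post_count += 1
        if contains_url:
            self.url_posts += 1
        if self.last_post_time is not None:
            interval = post_time - self.last_post_time
            n = self.post_count - 1            # number of observed intervals
            # Running-mean update: new_mean = old_mean + (x - old_mean) / n
            self.mean_interval += (interval - self.mean_interval) / n
        self.last_post_time = post_time

    def vector(self):
        url_ratio = self.url_posts / self.post_count if self.post_count else 0.0
        return [self.post_count, url_ratio, self.mean_interval]

# Usage: feed posts one by one, then hand the vector to any classifier.
u = IncrementalUserFeatures()
for t, has_url in [(0.0, True), (30.0, True), (45.0, False)]:
    u.update(t, has_url)
print(u.vector())   # [3, 0.666..., 22.5]
```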

Complexity Estimation Based Work Load Balancing for a Parallel Lidar Waveform Decomposition Algorithm

  • Jung, Jin-Ha; Crawford, Melba M.; Lee, Sang-Hoon
    • Korean Journal of Remote Sensing / v.25 no.6 / pp.547-557 / 2009
  • LIDAR (LIght Detection And Ranging) is an active remote sensing technology that provides 3D coordinates of the Earth's surface by performing range measurements from the sensor. Early small-footprint LIDAR systems recorded multiple discrete returns from the back-scattered energy; recent advances in LIDAR hardware make it possible to record full digital waveforms of the returned energy. LIDAR waveform decomposition involves separating the return waveform into a mixture of components which are then used to characterize the original data, the most common statistical mixture model for this process being the Gaussian mixture. Waveform decomposition plays an important role in LIDAR waveform processing, since the resulting components are expected to represent reflection surfaces within waveform footprints; hence the decomposition results ultimately affect the interpretation of LIDAR waveform data. Computational requirements in the waveform decomposition process result from two factors: (1) estimation of the number of components in a mixture and of the resulting parameters, which are inter-related and cannot be solved separately, and (2) parameter optimization, which has no closed-form solution and must therefore be solved iteratively. Current state-of-the-art airborne LIDAR systems acquire more than 50,000 waveforms per second, so decomposing the enormous number of waveforms is challenging on a traditional single-processor architecture. To tackle this issue, four parallel LIDAR waveform decomposition algorithms with different work-load balancing schemes - (1) no weighting, (2) decomposition-results-based linear weighting, (3) decomposition-results-based squared weighting, and (4) decomposition-time-based linear weighting - were developed and tested with varying numbers of processors (8-256), and the results were compared in terms of efficiency. Overall, the decomposition-time-based linear weighting approach yielded the best performance among the four (a sketch of weight-based work assignment is given below).
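A minimal sketch of how per-waveform weights (e.g., measured decomposition times, as in the time-based linear weighting scheme) could drive work-load balancing across processors. The greedy longest-processing-time assignment shown here is an illustrative strategy, not necessarily the paper's exact partitioning.

```python
import heapq

def balance_workload(weights, n_procs):
    """Greedy longest-processing-time assignment of waveforms to processors.
    weights[i] is the estimated cost of decomposing waveform i (e.g., its
    measured decomposition time from a previous pass)."""
    # Min-heap of (accumulated load, processor id).
    heap = [(0.0, p) for p in range(n_procs)]
    heapq.heapify(heap)
    assignment = [[] for _ in range(n_procs)]
    # Place the heaviest waveforms first, each on the least-loaded processor.
    for i in sorted(range(len(weights)), key=lambda i: -weights[i]):
        load, p = heapq.heappop(heap)
        assignment[p].append(i)
        heapq.heappush(heap, (load + weights[i], p))
    return assignment

# Equal weights reproduce the "no weighting" split; unequal weights model the
# decomposition-time-based linear weighting scheme.
times = [0.8, 0.1, 0.4, 0.9, 0.2, 0.3, 0.7, 0.5]
print(balance_workload(times, n_procs=3))
```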

Deep Learning in Radiation Oncology

  • Cheon, Wonjoong; Kim, Haksoo; Kim, Jinsung
    • Progress in Medical Physics / v.31 no.3 / pp.111-123 / 2020
  • Deep learning (DL) is a subset of machine learning and artificial intelligence that uses deep neural networks, whose structure is loosely inspired by the human neural system, trained on big data. DL narrows the gap between data acquisition and meaningful interpretation without explicit programming; it has so far outperformed most classification and regression methods and can automatically learn data representations for specific tasks. The application areas of DL in radiation oncology include classification, semantic segmentation, object detection, image translation and generation, and image captioning. This article examines the potential role of DL and what more can be achieved by utilizing it in radiation oncology. With the advances in DL, studies contributing to the development of radiation oncology were investigated comprehensively. The radiation treatment workflow was divided into six consecutive stages: patient assessment, simulation, target and organs-at-risk segmentation, treatment planning, quality assurance, and beam delivery. Studies using DL were classified and organized according to each stage, state-of-the-art studies were identified, and their clinical utility was examined. DL models can provide faster and more accurate solutions to problems faced by oncologists. While the benefit of a data-driven approach for improving the quality of care for cancer patients is clear, implementing these methods will require cultural changes at both the professional and institutional levels. We believe this paper will serve as a guide for both clinicians and medical physicists on issues that need to be addressed in time.

Development of a Deep Learning-Based Automated Analysis System for Facial Vitiligo Treatment Evaluation (안면 백반증 치료 평가를 위한 딥러닝 기반 자동화 분석 시스템 개발)

  • Sena Lee; Yeon-Woo Heo; Solam Lee; Sung Bin Park
    • Journal of Biomedical Engineering Research / v.45 no.2 / pp.95-100 / 2024
  • Vitiligo is a condition characterized by the destruction or dysfunction of melanin-producing cells in the skin, resulting in a loss of skin pigmentation. Facial vitiligo, specifically affecting the face, significantly impacts patients' appearance, thereby diminishing their quality of life. Evaluating the efficacy of facial vitiligo treatment typically relies on subjective assessments, such as the Facial Vitiligo Area Scoring Index (F-VASI), which can be time-consuming and subjective due to its reliance on clinical observations like lesion shape and distribution. Various machine learning and deep learning methods have been proposed for segmenting vitiligo areas in facial images, showing promising results. However, these methods often struggle to accurately segment vitiligo lesions irregularly distributed across the face. Therefore, our study introduces a framework aimed at improving the segmentation of vitiligo lesions on the face and providing an evaluation of vitiligo lesions. Our framework for facial vitiligo segmentation and lesion evaluation consists of three main steps. Firstly, we perform face detection to minimize background areas and identify the face area of interest using high-quality ultraviolet photographs. Secondly, we extract facial area masks and vitiligo lesion masks using a semantic segmentation network-based approach with the generated dataset. Thirdly, we automatically calculate the vitiligo area relative to the facial area. We evaluated the performance of facial and vitiligo lesion segmentation using an independent test dataset that was not included in the training and validation, showing excellent results. The framework proposed in this study can serve as a useful tool for evaluating the diagnosis and treatment efficacy of vitiligo.
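A minimal sketch of the framework's third step, computing the vitiligo area relative to the facial area from the two segmentation masks. The mask format and the restriction of lesion pixels to the face region are assumptions for illustration.

```python
import numpy as np

def vitiligo_area_ratio(face_mask, lesion_mask):
    """Vitiligo area relative to the facial area, computed from two binary
    segmentation masks (True = pixel belongs to the region). Restricting the
    lesion mask to the face region is an assumed implementation detail."""
    face = face_mask.astype(bool)
    lesion = lesion_mask.astype(bool) & face
    return lesion.sum() / face.sum()

# Hypothetical masks standing in for the segmentation network's output.
face = np.zeros((512, 512), dtype=bool)
face[100:400, 150:350] = True
lesion = np.zeros_like(face)
lesion[200:250, 200:260] = True
print(f"Vitiligo covers {100 * vitiligo_area_ratio(face, lesion):.1f}% of the face")
```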