• Title/Summary/Keyword: Image Edge

Search Results: 2,464 (Processing Time: 0.031 seconds)

Automatic analysis of golf swing from single-camera video sequences (단일 카메라 영상으로부터 골프 스윙의 자동 분석)

  • Kim, Pyeoung-Kee
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.14 no.5
    • /
    • pp.139-148
    • /
    • 2009
  • In this paper, I propose an automatic analysis method of golf swings from single-camera video sequences. I define the swing features necessary for automatic swing analysis in a 2-dimensional environment and present efficient swing analysis methods using various image processing techniques, including line and edge detection. The proposed method has two characteristics compared with previous swing analysis systems and related studies. First, it enables automatic swing analysis in two dimensions, while previous systems require a 3-dimensional environment that is relatively complex and expensive to run. Second, swing analysis is done automatically without human intervention, while other 2-dimensional systems necessarily require analysis by a golf expert. I tested the method on 20 swing video sequences and found that it works effectively for automatic analysis of golf swings.
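The line- and edge-detection building blocks the abstract mentions can be illustrated with a minimal Sobel gradient-magnitude edge map (a generic NumPy sketch; the paper's actual swing-feature pipeline and club-line detection are not specified in the abstract):

```python
import numpy as np

def sobel_edges(img):
    """Gradient-magnitude edge map from 3x3 Sobel kernels (generic
    edge detection; not the paper's specific pipeline)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(3):          # accumulate the 3x3 cross-correlation
        for j in range(3):
            win = pad[i:i + h, j:j + w]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)     # gradient magnitude

# A vertical step edge produces a strong response along the step only.
img = np.zeros((8, 8))
img[:, 4:] = 255.0
edges = sobel_edges(img)
```

In practice, a Hough-style line detector would typically run on such an edge map to locate a line feature like the club shaft.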

Real-time Tooth Region Detection in Intraoral Scanner Images with Deep Learning (딥러닝을 이용한 구강 스캐너 이미지 내 치아 영역 실시간 검출)

  • Na-Yun Park;Ji-Hoon Kim;Tae-Min Kim;Kyeong-Jin Song;Yu-Jin Byun;Min-Ju Kang;Kyungkoo Jun;Jae-Gon Kim
    • Journal of Korean Society of Industrial and Systems Engineering
    • /
    • v.46 no.3
    • /
    • pp.1-6
    • /
    • 2023
  • In the realm of dental prosthesis fabrication, obtaining accurate impressions has historically been a challenging and inefficient process, often hindered by hygiene concerns and patient discomfort. Addressing these limitations, Company D recently introduced a cutting-edge solution by harnessing the potential of intraoral scan images to create 3D dental models. However, the complexity of these scan images, encompassing not only teeth and gums but also the palate, tongue, and other structures, posed a new set of challenges. In response, we propose a sophisticated real-time image segmentation algorithm that selectively extracts pertinent data, specifically focusing on teeth and gums, from oral scan images obtained through Company D's oral scanner for 3D model generation. A key challenge we tackled was the detection of the intricate molar regions, common in dental imaging, which we effectively addressed through intelligent data augmentation for enhanced training. By placing significant emphasis on both accuracy and speed, critical factors for real-time intraoral scanning, our proposed algorithm demonstrated exceptional performance, boasting an impressive accuracy rate of 0.91 and an unrivaled FPS of 92.4. Compared to existing algorithms, our solution exhibited superior outcomes when integrated into Company D's oral scanner. This algorithm is scheduled for deployment and commercialization within Company D's intraoral scanner.

Advanced Abdominal MRI Techniques and Problem-Solving Strategies (복부 자기공명영상 고급 기법과 문제 해결 전략)

  • Yoonhee Lee;Sungjin Yoon;So Hyun Park;Marcel Dominik Nickel
    • Journal of the Korean Society of Radiology
    • /
    • v.85 no.2
    • /
    • pp.345-362
    • /
    • 2024
  • MRI plays an important role in abdominal imaging because of its ability to detect and characterize focal lesions. However, MRI examinations have several challenges, such as comparatively long scan times and motion management through breath-holding maneuvers. Techniques for reducing scan time with acceptable image quality, such as parallel imaging, compressed sensing, and cutting-edge deep learning techniques, have been developed to enable problem-solving strategies. Additionally, free-breathing techniques for dynamic contrast-enhanced imaging, such as extra-dimensional-volumetric interpolated breath-hold examination, golden-angle radial sparse parallel, and liver acceleration volume acquisition Star, can help patients with severe dyspnea or those under sedation to undergo abdominal MRI. We aimed to present various advanced abdominal MRI techniques for reducing the scan time while maintaining image quality and free-breathing techniques for dynamic imaging and illustrate cases using the techniques mentioned above. A review of these advanced techniques can assist in the appropriate interpretation of sequences.

Simulation and Measurement of Signal Intensity for Various Tissues near Bone Interface in 2D and 3D Neurological MR Images (2차원과 3차원 신경계 자기공명영상에서 뼈 주위에 있는 여러 조직의 신호세기 계산 및 측정)

  • Yoo, Done-Sik
    • Progress in Medical Physics
    • /
    • v.10 no.1
    • /
    • pp.33-40
    • /
    • 1999
  • Purpose: To simulate and measure the signal intensity of various tissues near the bone interface in 2D and 3D neurological MR images. Materials and Methods: In neurological proton density (PD) weighted images, every component in the head, including cerebrospinal fluid (CSF), muscle and scalp, with the exception of bone, is visualised. It is possible to acquire images in 2D or 3D. A 2D fast spin-echo (FSE) sequence was chosen for the 2D acquisition and a 3D gradient-echo (GE) sequence for the 3D acquisition. To find the signal intensities of CSF, muscle and fat (or scalp) for the 2D spin-echo (SE) and 3D gradient-echo (GE) imaging sequences, the theoretical signal intensities for 2D SE and 3D GE were calculated. For the 2D FSE sequence, a long TR (4000 ms) and a short effective TE (22 ms) were employed to produce the PD-weighted image. For the 3D GE sequence, a low flip angle (8°) with a short TR (35 ms) and a short TE (3 ms) was used to produce PD-weighted contrast. Results: The 2D FSE sequence showed CSF, muscle and scalp with superior image contrast and an SNR of 39-57, while the 3D GE sequence showed them with broadly similar image contrast and an SNR of 26-33. SNRs in the FSE image were better than those in the GE image, and the skull edges appeared very clearly in the FSE image due to the edge enhancement effect of the FSE sequence. Furthermore, the contrast between CSF, muscle and scalp in the 2D FSE image was significantly better than in the 3D GE image, due to the strong signal intensities (or SNR) from CSF, muscle and scalp and the enhanced edges of CSF. Conclusion: The signal intensity of various tissues near the bone interface in neurological MR images has been simulated and measured. Both the simulation and imaging of the 2D SE and 3D GE sequences gave CSF, fat and muscle broadly similar image intensities and SNRs, succeeding in bringing all tissues to about the same signal. Moreover, in the 2D FSE sequence, image contrast between CSF, muscle and scalp was good, SNR was relatively high, and imaging time was relatively short.
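The theoretical signal intensities the abstract refers to are typically computed from the standard steady-state expressions for SE and spoiled GE sequences, sketched below (our notation; whether the paper includes additional relaxation or multi-echo terms is not stated):

```latex
% Spin-echo (2D SE/FSE) signal, proton density N(H):
S_{\mathrm{SE}} \propto N(H)\left(1 - e^{-TR/T_1}\right) e^{-TE/T_2}

% Spoiled gradient-echo (3D GE) signal at flip angle \alpha:
S_{\mathrm{GE}} \propto N(H)\,
  \frac{\sin\alpha \left(1 - e^{-TR/T_1}\right)}{1 - \cos\alpha\, e^{-TR/T_1}}\,
  e^{-TE/T_2^{*}}
```

With the quoted parameters (TR = 4000 ms and TE = 22 ms for FSE; TR = 35 ms, TE = 3 ms and α = 8° for GE), both expressions suppress T1 and T2 weighting and leave predominantly proton-density contrast, consistent with the abstract.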


Evaluation of Image Quality for Various Electronic Portal Imaging Devices in Radiation Therapy (방사선치료의 다양한 EPID 영상 질평가)

  • Son, Soon-Yong;Choi, Kwan-Woo;Kim, Jung-Min;Jeong, Hoi-Woun;Kwon, Kyung-Tae;Cho, Jeong-Hee;Lee, Jea-Hee;Jung, Jae-Yong;Kim, Ki-Won;Lee, Young-Ah;Son, Jin-Hyun;Min, Jung-Whan
    • Journal of radiological science and technology
    • /
    • v.38 no.4
    • /
    • pp.451-461
    • /
    • 2015
  • In megavoltage (MV) radiotherapy, it is important to deliver the dose to the target volume while protecting the surrounding normal tissue. The purpose of this study was to evaluate the modulation transfer function (MTF), the noise power spectrum (NPS), and the detective quantum efficiency (DQE) using an edge block in megavoltage X-ray imaging (MVI). We used an edge block consisting of tungsten with dimensions of 19 (thickness) × 10 (length) × 1 (width) cm³ and measured the pre-sampling MTF at 6 MV energy. Various radiation therapy (RT) devices were used: TrueBeam™ (Varian), BEAMVIEW PLUS (Siemens), iViewGT (Elekta) and Clinac® iX (Varian). As for the MTF results, TrueBeam™ (Varian) in flattening filter free (FFF) mode showed the highest values of 0.46 mm⁻¹ and 1.40 mm⁻¹ for MTF 0.5 and 0.1. In the NPS, iViewGT (Elekta) showed the lowest noise distribution. In the DQE, iViewGT (Elekta) showed the best efficiency, with a peak DQE of 0.0026 and a DQE at 1 mm⁻¹ of 0.00014. This study could be used not only for traditional QA imaging but also for quantitative MTF, NPS, and DQE measurement in the development of electronic portal imaging devices (EPIDs).
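The pre-sampling MTF measurement from an edge block generally follows the standard edge method: differentiate the edge spread function (ESF) into a line spread function (LSF) and take the magnitude of its Fourier transform. A minimal NumPy sketch, assuming an already oversampled and aligned ESF (the study's exact oversampling and windowing details are not given in the abstract):

```python
import numpy as np

def presampling_mtf(esf, pitch_mm):
    """Edge method: ESF -> LSF (derivative) -> |FFT|, normalized at DC."""
    lsf = np.gradient(esf)
    lsf = lsf * np.hanning(lsf.size)        # taper the tails against noise
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                           # MTF(0) = 1 by definition
    freqs = np.fft.rfftfreq(lsf.size, d=pitch_mm)   # cycles/mm
    return freqs, mtf

# Synthetic oversampled edge: an ideal step blurred over ~4 samples.
x = np.arange(256)
esf = 0.5 * (1.0 + np.tanh((x - 128) / 4.0))
freqs, mtf = presampling_mtf(esf, pitch_mm=0.1)
```

Quantities like the MTF 0.5 and MTF 0.1 frequencies reported above would then be read off by interpolating `mtf` against `freqs`.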

Dual Dictionary Learning for Cell Segmentation in Bright-field Microscopy Images (명시야 현미경 영상에서의 세포 분할을 위한 이중 사전 학습 기법)

  • Lee, Gyuhyun;Quan, Tran Minh;Jeong, Won-Ki
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.3
    • /
    • pp.21-29
    • /
    • 2016
  • Cell segmentation is an important but time-consuming and laborious task in biological image analysis. An automated, robust, and fast method is required to overcome such burdensome processes. These needs are, however, challenging to meet due to varying cell shapes and intensities and incomplete boundaries. Precise cell segmentation allows making a pathological diagnosis of tissue samples. A vast body of literature exists on cell segmentation in microscopy images [1]. The majority of existing work is based on input images and predefined feature models only - for example, using a deformable model to extract edge boundaries in the image. Only a handful of recent methods employ data-driven approaches, such as supervised learning. In this paper, we propose a novel data-driven cell segmentation algorithm for bright-field microscopy images. The proposed method minimizes an energy formula defined by two dictionaries - one for input images and the other for their manual segmentation results - and a common sparse code, which aims to find a pixel-level classification by deploying the learned dictionaries on new images. In contrast to deformable models, we do not need prior knowledge of the objects. We also employed convolutional sparse coding and the Alternating Direction Method of Multipliers (ADMM) for fast dictionary learning and energy minimization. Unlike an existing method [1], our method trains both dictionaries concurrently and is implemented on the GPU for faster performance.
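The core idea of the dual-dictionary scheme - a code shared by an image dictionary and a label dictionary - can be sketched in NumPy as follows (illustrative only: ordinary least squares stands in for the paper's convolutional sparse coding with ADMM, and the dictionaries here are random rather than learned):

```python
import numpy as np

rng = np.random.default_rng(2)
n_atoms, patch = 16, 25
D_img = rng.standard_normal((patch, n_atoms))   # dictionary for image patches
D_seg = rng.standard_normal((patch, n_atoms))   # dictionary for label patches

def predict_label_patch(x):
    """Recover the code shared by both dictionaries from an image patch,
    then decode it with the label dictionary."""
    code, *_ = np.linalg.lstsq(D_img, x, rcond=None)
    return D_seg @ code

# If a patch lies exactly in the span of D_img, its label prediction is
# the same code rendered through D_seg.
z_true = rng.standard_normal(n_atoms)
x = D_img @ z_true
y = predict_label_patch(x)
```

Replacing the least-squares step with a sparsity-constrained solver (as the paper does) makes the shared code robust to noise and clutter in real patches.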

Adaptive Block Watermarking Based on JPEG2000 DWT (JPEG2000 DWT에 기반한 적응형 블록 워터마킹 구현)

  • Lim, Se-Yoon;Choi, Jun-Rim
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.44 no.11
    • /
    • pp.101-108
    • /
    • 2007
  • In this paper, we propose and verify an adaptive block watermarking algorithm based on the JPEG2000 DWT, which determines the watermark for the original image by two scaling factors in order to overcome image degradation and blocking problems at the edges. The adaptive block watermarking algorithm uses two scaling factors: one is calculated as the ratio of the current block average to the next block average, and the other as the ratio of the total LL subband average to each block average. The adaptive block watermark signals are obtained from the original image itself, and the strength of the watermark is automatically controlled by the image characteristics. Instead of conventional methods using an identical watermark intensity, the proposed method uses an adaptive watermark whose intensity is controlled per block. Thus, the adaptive block watermark improves the visual quality of images by 4-14 dB, and it is robust against attacks such as filtering, JPEG2000 compression, resizing and cropping. We also implemented the algorithm as an ASIC using Hynix 0.25 μm CMOS technology to integrate it into a JPEG2000 codec chip.
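The two scaling factors can be read as simple block statistics on the LL subband. A minimal NumPy sketch of that reading (the function name and the raster-order notion of the "next block" are our assumptions, not the paper's specification):

```python
import numpy as np

def block_scaling_factors(ll, block=4):
    """Per-block scaling factors as described in the abstract:
    s1 = mean(current block) / mean(next block),
    s2 = mean(whole LL subband) / mean(current block)."""
    h, w = ll.shape
    means = ll.reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    flat = means.ravel()
    s1 = flat / np.roll(flat, -1)           # current / next (wraps at the end)
    s2 = ll.mean() / flat
    return s1.reshape(means.shape), s2.reshape(means.shape)

ll = np.arange(64, dtype=float).reshape(8, 8) + 1.0   # toy LL subband
s1, s2 = block_scaling_factors(ll, block=4)
```

Scaling the embedded watermark by such per-block factors is what lets its strength follow local image content instead of being uniform.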

Automated Improvement of RapidEye 1-B Geo-referencing Accuracy Using 1:25,000 Digital Maps (1:25,000 수치지도를 이용한 RapidEye 위성영상의 좌표등록 정확도 자동 향상)

  • Oh, Jae Hong;Lee, Chang No
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.32 no.5
    • /
    • pp.505-513
    • /
    • 2014
  • Based on its constellation of five satellites, RapidEye can acquire satellite imagery with a 6.5 m spatial resolution and a daily revisit. The image products are available in two processing levels, Basic 1B and Ortho 3A. Basic 1B images have radiometric and sensor corrections applied and include RPC (Rational Polynomial Coefficient) data. In Korea, the geometric accuracy of RapidEye imagery can be improved based on the national digital maps that have been built at various scales. In this paper, we present a fully automated procedure to georegister the 1B data using 1:25,000 digital maps. Map layers are selected if they appear clearly in the RapidEye image, and the selected layers are then projected with the RPCs into the RapidEye 1B image space to generate vector images. Automated edge-based matching between the vector image and the RapidEye image improves the accuracy of the RPCs. The experimental results showed an accuracy improvement from 2.8 to 0.8 pixels in RMSE when compared to the maps.

Subpixel Shift Estimation in Noisy Image Using Iterative Phase Correlation of A Selected Local Region (잡음 영상에서 국부 영역의 반복적인 위상 상관도를 이용한 부화소 이동량 추정방법)

  • Ha, Ho-Gun;Jang, In-Su;Ko, Kyung-Woo;Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.47 no.1
    • /
    • pp.103-119
    • /
    • 2010
  • In this paper, we propose a subpixel shift estimation method using phase correlation with a local region for the registration of noisy images. Phase correlation is commonly used to estimate the subpixel shift between images, derived from analyzing shifted and downsampled images. However, when the images are affected by additive white Gaussian noise and aliasing artifacts, the estimation error increases. Thus, instead of using the whole image, the proposed method uses a specific local region that is less affected by noise. In addition, to improve the estimation accuracy, iterative phase correlation is applied between the selected local regions rather than using a fitting function. The restricted range is determined by analyzing the maximum peak and the two adjacent values of the inverse Fourier transform of the normalized cross power spectrum. In the experiments, the proposed method shows higher accuracy in registering noisy images than the other methods, and the edge sharpness and clearness of the super-resolved image are also improved.
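The whole-pixel core of phase correlation - the peak of the inverse FFT of the normalized cross power spectrum - can be sketched in NumPy as follows (integer shifts only; the paper's local-region selection and iterative subpixel refinement are not reproduced here):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Integer translation from b to a via the peak of the inverse FFT
    of the normalized cross power spectrum."""
    cps = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cps /= np.abs(cps) + 1e-12              # keep phase only
    corr = np.real(np.fft.ifft2(cps))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # peaks past the midpoint correspond to negative (wrapped) shifts
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=(5, -3), axis=(0, 1))
dy, dx = phase_correlation_shift(shifted, img)
```

Subpixel variants then examine the values adjacent to the peak (as the abstract describes) instead of fitting a closed-form function to it.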

Image Contrast Enhancement by Illumination Change Detection (조명 변화 감지에 의한 영상 콘트라스트 개선)

  • Odgerel, Bayanmunkh;Lee, Chang Hoon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.24 no.2
    • /
    • pp.155-160
    • /
    • 2014
  • Many image processing algorithms and applications fail when an illumination change occurs. The illumination change therefore has to be detected, and the affected images enhanced, to keep such algorithms working reliably in real environments. In this paper, a new method for detecting illumination changes efficiently in real time using local region information and fuzzy logic is introduced. To detect illumination changes effectively, the mean and variance of the histogram are analyzed for the lighting area and for the edge of that area, and their changing trends relative to the previous frame's mean and variance are used as inputs. The changes of mean and variance form different patterns when an illumination change occurs, and fuzzy rules were defined based on these input patterns to detect illumination changes. The proposed method was tested on different datasets using evaluation metrics; in particular, specificity, recall and precision were high. An automatic parameter selection method for contrast limited adaptive histogram equalization (CLAHE) was also proposed, using the entropy of the image through an adaptive neuro-fuzzy inference system. The results showed that the contrast of the images could be enhanced. The proposed algorithm is robust in detecting global illumination changes, and it is also computationally efficient in real applications.
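The image entropy fed to the neuro-fuzzy parameter selector can be computed directly from the intensity histogram. A minimal NumPy sketch (the fuzzy mapping from entropy to the CLAHE clip limit is not reproduced; names are ours):

```python
import numpy as np

def histogram_entropy(img, bins=256):
    """Shannon entropy (bits) of the 8-bit intensity histogram - the
    image statistic used to drive automatic parameter selection."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                            # empty bins contribute 0
    return float(-np.sum(p * np.log2(p)))

flat = np.full((32, 32), 100, dtype=np.uint8)   # single gray level -> 0 bits
noisy = np.random.default_rng(1).integers(0, 256, size=(32, 32), dtype=np.uint8)
```

Low-entropy (low-contrast) frames would then be assigned a stronger CLAHE setting than frames whose histograms already spread across the full range.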