• Title/Summary/Keyword: Image Labeling


Development of $^{99m}Tc$-Transferrin as an Imaging Agent of Infectious Foci

  • Kim, Seong-Min; Song, Ho-Chun
    • Nuclear Medicine and Molecular Imaging / v.40 no.3 / pp.177-185 / 2006
  • Purpose: The purpose of this study was to synthesize $^{99m}Tc$-labeled transferrin for infection imaging and to compare it with $^{67}Ga$-citrate for the detection of infectious foci. Materials and methods: Succinimidyl 6-hydrazino-nicotinate hydrochloride-chitosan-transferrin (Transferrin) was synthesized and radiolabeled with $^{99m}Tc$. Labeling efficiencies of $^{99m}Tc$-Transferrin were determined at 10 min, 30 min, 1 hr, 2 hr, 4 hr and 8 hr. Biodistribution and imaging studies with $^{99m}Tc$-Transferrin and $^{67}Ga$-citrate were performed in a rat abscess model induced with approximately 2×10^8 colony-forming units of Staphylococcus aureus ATCC 25923. Results: Successful synthesis of Transferrin was confirmed by mass spectrometry. The labeling efficiency of $^{99m}Tc$-Transferrin was 96.2±0.7%, 96.4±0.5%, 96.6±1.0%, 96.9±0.5%, 97.0±0.7% and 95.5±0.7% at 10 min, 30 min, 1 hr, 2 hr, 4 hr and 8 hr, respectively. The injected dose per gram of tissue of $^{99m}Tc$-Transferrin was 0.18±0.01 and 0.18±0.01 in the lesion and 0.05±0.01 and 0.04±0.01 in normal muscle, and the lesion-to-normal muscle uptake ratio was 3.7±0.6 and 4.7±0.4 at 30 min and 3 hr, respectively. On images, the lesion-to-background ratio of $^{99m}Tc$-Transferrin was 2.18±0.03, 2.56±0.11, 3.08±0.18, 3.77±0.17, 4.70±0.45 and 5.59±0.40 at 10 min, 30 min, 1 hr, 2 hr, 4 hr and 10 hr, and that of $^{67}Ga$-citrate was 3.06±0.84, 4.12±0.54 and 4.55±0.74 at 2 hr, 24 hr and 48 hr, respectively. Conclusion: Transferrin was successfully labeled with $^{99m}Tc$; its labeling efficiency was higher than 95% and stable for 8 hours. $^{99m}Tc$-Transferrin scintigraphy produced higher image quality in a shorter time than $^{67}Ga$-citrate imaging, and is expected to be useful for detecting infectious foci.

Text extraction in images using color simplification and edge pattern analysis

  • Yang, Jae-Ho; Park, Young-Soo; Lee, Sang-Hun
    • Journal of the Korea Convergence Society / v.8 no.8 / pp.33-40 / 2017
  • In this paper, we propose a text extraction method based on pattern analysis of contours for effective text detection in images. Edge-based text extraction algorithms perform well on images with simple backgrounds but poorly on images with complex backgrounds. The proposed method simplifies the colors of the image using K-means clustering in a preprocessing step to detect character regions. Because object boundaries become inaccurate during color simplification, they are enhanced with a high-pass filter. Then the edges of objects are detected using the difference between the dilation and erosion of morphological operations, and character candidate regions are discriminated by analyzing the contour patterns of the acquired regions to remove unnecessary regions (pictures, background). The final results show that the characters contained in the candidate regions are extracted once the unnecessary regions are removed.
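A minimal sketch of the pipeline this abstract describes: K-means color simplification, high-pass sharpening of the blurred boundaries, and an edge map from the difference between dilation and erosion. The cluster count k=8 and the kernel sizes are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

def simplify_and_extract_edges(image_bgr, k=8):
    # 1) Color simplification: K-means clustering over pixel colors.
    pixels = image_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(pixels, k, None, criteria, 3,
                                    cv2.KMEANS_PP_CENTERS)
    simplified = centers[labels.flatten()].astype(np.uint8)
    simplified = simplified.reshape(image_bgr.shape)

    # 2) High-pass filter to sharpen boundaries blurred by quantization.
    high_pass = np.array([[-1, -1, -1],
                          [-1,  9, -1],
                          [-1, -1, -1]], dtype=np.float32)
    sharpened = cv2.filter2D(simplified, -1, high_pass)

    # 3) Edges as the difference between dilation and erosion.
    gray = cv2.cvtColor(sharpened, cv2.COLOR_BGR2GRAY)
    se = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    edges = cv2.subtract(cv2.dilate(gray, se), cv2.erode(gray, se))
    return simplified, edges
```

Contour patterns of the resulting edge regions would then be analyzed (e.g., with cv2.findContours) to reject non-text regions; that filtering rule is paper-specific and omitted here.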

Object Feature Extraction and Matching for Effective Multiple Vehicles Tracking

  • Cho, Du-Hyung; Lee, Seok-Lyong
    • KIPS Transactions on Software and Data Engineering / v.2 no.11 / pp.789-794 / 2013
  • A vehicle tracking system makes it possible to infer vehicle movement paths for avoiding traffic congestion and to prevent traffic accidents in advance by recognizing traffic flow, monitoring vehicles, and detecting road accidents. To track vehicles effectively, those that appear in a sequence of video frames need to be identified by extracting the features of each object in the frames. Next, identical vehicles across consecutive frames need to be recognized by matching the objects' feature values. In this paper, we identify objects by binarizing the difference image between a target and a reference image and applying a labeling technique. As feature values, we use the center coordinate of the minimum bounding rectangle (MBR) of each identified object and the averages of 1D FFT (fast Fourier transform) coefficients along the horizontal and vertical directions of the MBR. A vehicle is tracked by regarding the pair of objects with the highest similarity between two consecutive images as the same object. Experimental results show that the proposed method outperforms existing methods that use geometrical features in tracking accuracy.
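The identification and feature steps map naturally onto OpenCV and NumPy. The sketch below assumes a fixed FFT length of 16 so that feature vectors are comparable across objects, and the noise-area threshold is illustrative rather than the paper's value.

```python
import cv2
import numpy as np

def extract_vehicle_features(frame_gray, reference_gray, diff_thresh=30):
    # Binarize the difference image between target and reference frames.
    diff = cv2.absdiff(frame_gray, reference_gray)
    _, binary = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)

    # Connected-component labeling identifies candidate vehicle objects.
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)

    features = []
    for i in range(1, n):                # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < 100:                   # drop small noise blobs
            continue
        mbr = frame_gray[y:y + h, x:x + w].astype(np.float32)
        # Averages of 1D FFT magnitudes along the horizontal and
        # vertical directions of the MBR (zero-padded to length 16).
        fft_h = np.abs(np.fft.fft(mbr, n=16, axis=1)).mean(axis=0)
        fft_v = np.abs(np.fft.fft(mbr, n=16, axis=0)).mean(axis=1)
        features.append({"center": tuple(centroids[i]),
                         "signature": np.concatenate([fft_h, fft_v])})
    return features
```

Tracking would then pair objects across consecutive frames by the highest similarity between signatures, e.g., the smallest Euclidean distance.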

Chemical Imaging Analysis of the Micropatterns of Proteins and Cells Using Cluster Ion Beam-based Time-of-Flight Secondary Ion Mass Spectrometry and Principal Component Analysis

  • Shon, Hyun Kyong; Son, Jin Gyeong; Lee, Kyung-Bok; Kim, Jinmo; Kim, Myung Soo; Choi, Insung S.; Lee, Tae Geol
    • Bulletin of the Korean Chemical Society / v.34 no.3 / pp.815-819 / 2013
  • Micropatterns of streptavidin and human epidermal carcinoma A431 cells were successfully imaged, as received and without any labeling, using cluster $Au_3{^+}$ ion beam-based time-of-flight secondary ion mass spectrometry (TOF-SIMS) together with principal component analysis (PCA). Three analysis ion beams ($Ga^+$, $Au^+$ and $Au_3{^+}$) were compared to obtain label-free TOF-SIMS chemical images of streptavidin micropatterns, which were subsequently used for generating cell patterns. The total positive-ion image obtained with the $Au_3{^+}$ primary ion beam corresponded to the actual streptavidin micropatterns, whereas the total positive-ion images from the $Ga^+$ and $Au^+$ primary ion beams did not. A PCA of the TOF-SIMS spectra was first performed to identify characteristic secondary ions of streptavidin. Chemical images of each characteristic ion were then reconstructed from the raw data and used in a second PCA run, which yielded a contrasted and corrected image of the streptavidin micropatterns even for the $Ga^+$ and $Au^+$ ion beams. These findings suggest that combining cluster-ion analysis beams with multivariate data analysis for TOF-SIMS chemical imaging is an effective way to produce label-free chemical images of micropatterns of biomolecules, including proteins and cells.
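The two-stage PCA workflow can be sketched with scikit-learn, assuming the raw measurement has already been binned into a (rows × cols × m/z-channels) hyperspectral cube; the component counts and the top-20 ion cut-off below are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA

def two_stage_pca(cube, n_char_ions=20):
    rows, cols, n_mz = cube.shape
    spectra = cube.reshape(-1, n_mz)            # one spectrum per pixel

    # Stage 1: PCA over the spectra; m/z channels with the largest
    # PC1 loadings are taken as characteristic secondary ions.
    pca1 = PCA(n_components=5).fit(spectra)
    char_ions = np.argsort(np.abs(pca1.components_[0]))[-n_char_ions:]

    # Reconstruct single-ion images for the characteristic ions only.
    ion_images = cube[:, :, char_ions].reshape(-1, n_char_ions)

    # Stage 2: PCA over the ion images; the PC1 score image is the
    # contrast-enhanced view of the micropattern.
    pca2 = PCA(n_components=2).fit(ion_images)
    return pca2.transform(ion_images)[:, 0].reshape(rows, cols)
```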

Improved Lung and Pulmonary Vessels Segmentation and Numerical Algorithms of Necrosis Cell Ratio in Lung CT Image

  • Cho, Joon-Ho; Moon, Sung-Ryong
    • Journal of Digital Convergence / v.16 no.2 / pp.19-26 / 2018
  • We propose improved lung segmentation, pulmonary vessel segmentation, and a numerical calculation of the proportion of necrotic cells at lung disease sites for the diagnosis of lung disease from chest CT images. The first step separates the lungs and bronchi by applying a three-dimensional labeling technique and a three-dimensional region growing method to the chest CT image. The second step segments the pulmonary vessels by applying a rate of change computed with first-order polynomial regression, performs noise reduction, and produces the final pulmonary vessel segmentation. The third step finds disease prediction factors in the images from the first two steps and calculates the proportion of necrotic cells.
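A sketch of the first two steps under assumed conventions: 3D connected-component labeling with scipy.ndimage for the air-filled lung volume (the region-growing refinement is omitted), and a first-order polynomial regression whose fitted slope serves as the rate of change used for vessel segmentation. The HU threshold is illustrative.

```python
import numpy as np
from scipy import ndimage

def segment_lungs(volume_hu, air_threshold=-400):
    # Step 1: threshold air-filled voxels, then 3D labeling; keep the
    # two largest components as the lungs (bronchi included).
    mask = volume_hu < air_threshold
    labeled, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    largest_two = np.argsort(sizes)[-2:] + 1
    return np.isin(labeled, largest_two)

def rate_of_change(profile):
    # Step 2: first-order polynomial regression along an intensity
    # profile; the slope is the rate of change that separates vessels
    # from the surrounding parenchyma.
    x = np.arange(len(profile))
    slope, _intercept = np.polyfit(x, profile, 1)
    return slope
```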

An Occupant Sensing System Using Single Video Camera and Ultrasonic Sensor for Advanced Airbag

  • Bae, Tae-Wuk; Lee, Jong-Won; Ha, Su-Young; Kim, Young-Choon; Ahn, Sang-Ho; Sohng, Kyu-Ik
    • Journal of Korea Multimedia Society / v.13 no.1 / pp.66-75 / 2010
  • We propose an occupant sensing system using a single video camera and an ultrasonic sensor for advanced airbags. To detect the occupant's form and face position in real time, we use skin color and motion information. Candidate face-block images are made by thresholding the color-difference signal corresponding to skin color and the difference between the luminance signals of the current and previous images, which provides the motion information. The face is then detected by morphology and labeling. At night, when color and luminance information is unavailable, the face is detected by thresholding the luminance signal obtained with an infra-red LED instead of the color-difference signal. To evaluate the performance of the proposed occupant detection system, various experiments were performed with an IEEE camera, an ultrasonic sensor, and an infra-red LED installed in a vehicle jig.
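A daytime-path sketch of the detection chain described above, assuming YCrCb skin ranges and thresholds that are common defaults rather than the paper's values.

```python
import cv2

def candidate_face_blocks(curr_bgr, prev_bgr, motion_thresh=25):
    # Skin mask from the color-difference (Cr/Cb) signals.
    ycrcb = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2YCrCb)
    skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

    # Motion mask from the luminance difference of successive frames.
    curr_y = ycrcb[:, :, 0]
    prev_y = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2YCrCb)[:, :, 0]
    _, motion = cv2.threshold(cv2.absdiff(curr_y, prev_y),
                              motion_thresh, 255, cv2.THRESH_BINARY)

    # Combine, clean up with morphology, then label the regions.
    cand = cv2.bitwise_and(skin, motion)
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    cand = cv2.morphologyEx(cand, cv2.MORPH_OPEN, se)
    cand = cv2.morphologyEx(cand, cv2.MORPH_CLOSE, se)
    n, _, stats, _ = cv2.connectedComponentsWithStats(cand)
    # Bounding boxes of sufficiently large components are candidates.
    return [tuple(stats[i][:4]) for i in range(1, n) if stats[i][4] > 200]
```

The night path would swap the skin mask for a threshold on the infra-red-illuminated luminance channel.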

Automated Brain Region Extraction Method in Head MR Image Sets

  • Cho, Dong-Uk; Kim, Tae-Woo; Shin, Seung-Soo
    • The Journal of the Korea Contents Association / v.2 no.3 / pp.1-15 / 2002
  • A novel automated brain region extraction method in single-channel MR images for visualization and analysis of the human brain is presented. The method generates a volume of brain masks by automatic thresholding, using a dual curve fitting technique, and by 3D morphological operations. The dual curve fitting reduces the error in curve fitting to the histogram of MR images. The 3D morphological operations, including erosion, labeling of connected components, a max-feature operation, and dilation, are applied to the cubic volume of masks reconstructed from the thresholded brain masks. This method can automatically extract a brain region from any displayed type of sequence, including extreme slices, of SPGR, T1-, T2-, and PD-weighted MR image data sets, which are not required to contain the entire brain. In experiments, the algorithm was applied to 20 sets of MR images and showed a similarity index of over 0.97 in comparison with manual drawing.
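The thresholding and 3D morphology chain can be sketched as below. The "dual curve fitting" is approximated here by fitting a sum of two Gaussians to the intensity histogram and thresholding between the fitted modes, which is an assumption about the technique; all initial guesses and iteration counts are illustrative.

```python
import numpy as np
from scipy import ndimage
from scipy.optimize import curve_fit

def two_gaussians(x, a1, m1, s1, a2, m2, s2):
    g = lambda a, m, s: a * np.exp(-((x - m) ** 2) / (2 * s ** 2))
    return g(a1, m1, s1) + g(a2, m2, s2)

def brain_mask(volume):
    # Fit two Gaussian modes (background and tissue) to the histogram.
    hist, edges = np.histogram(volume, bins=128)
    centers = 0.5 * (edges[:-1] + edges[1:])
    span = centers[-1] - centers[0]
    p0 = [hist.max(), centers[10], span / 20,
          hist.max() / 2, centers[80], span / 10]
    params, _ = curve_fit(two_gaussians, centers, hist, p0=p0, maxfev=5000)
    thresh = 0.5 * (params[1] + params[4])   # midway between the modes
    mask = volume > thresh

    # 3D morphology in the abstract's order: erosion, connected-
    # component labeling, max-feature (keep largest), dilation.
    mask = ndimage.binary_erosion(mask, iterations=2)
    labeled, n = ndimage.label(mask)
    if n > 0:
        sizes = ndimage.sum(mask, labeled, range(1, n + 1))
        mask = labeled == (int(np.argmax(sizes)) + 1)
    return ndimage.binary_dilation(mask, iterations=2)
```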


Vision and Depth Information based Real-time Hand Interface Method Using Finger Joint Estimation

  • Park, Kiseo; Lee, Daeho; Park, Youngtae
    • Journal of Digital Convergence / v.11 no.7 / pp.157-163 / 2013
  • In this paper, we propose a real-time hand gesture interface method based on vision and depth information using finger joint estimation. The areas of the left and right hands are segmented after mapping the visual image onto the depth image, and labeling and boundary noise removal are performed. Then, the centroid point and rotation angle of each hand area are calculated. Afterwards, a circle is expanded from the centroid of the hand, and the joint points and end points of the fingers are detected from the midway points where the circle crosses the hand boundary, from which the hand model is recognized. Experimental results show that our method distinguishes fingertips and recognizes various hand gestures quickly and accurately. In experiments on various hand poses with hidden fingers using both hands, the accuracy was over 90% and the performance over 25 fps. The proposed method can be used as a contact-free input interface in HCI control, education, and game applications.
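A rough sketch of the expanding-circle step: starting at the hand centroid, circles of growing radius are sampled, and the midpoints of the runs where a circle lies on the hand mask are collected as candidate finger points. The run pairing ignores wrap-around at theta = 0, and the step sizes are assumptions, so this is an illustration of the idea rather than the paper's procedure.

```python
import cv2
import numpy as np

def expanding_circle_points(hand_mask, step=5):
    # Centroid of the segmented hand region from image moments.
    m = cv2.moments(hand_mask, binaryImage=True)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]

    candidates = []
    for r in range(step, int(0.5 * max(hand_mask.shape)), step):
        # Sample the circle of radius r around the centroid.
        theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
        xs = np.clip((cx + r * np.cos(theta)).astype(int),
                     0, hand_mask.shape[1] - 1)
        ys = np.clip((cy + r * np.sin(theta)).astype(int),
                     0, hand_mask.shape[0] - 1)
        on_hand = hand_mask[ys, xs] > 0
        # Midpoints of on-hand runs are candidate finger points at
        # this radius (wrap-around ignored for brevity).
        transitions = np.flatnonzero(np.diff(on_hand.astype(int)))
        for start, end in zip(transitions[::2], transitions[1::2]):
            mid = (start + end) // 2
            candidates.append((int(xs[mid]), int(ys[mid]), r))
    return (cx, cy), candidates
```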

Enhancement of Tongue Segmentation by Using Data Augmentation

  • Chen, Hong; Jung, Sung-Tae
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.5 / pp.313-322 / 2020
  • A large volume of data improves the robustness of deep learning models and avoids overfitting. In automatic tongue segmentation, the availability of annotated tongue images is often limited because collecting and labeling tongue image datasets is difficult in practice. Data augmentation can expand the training dataset and increase the diversity of training data through label-preserving transformations, without collecting new data. In this paper, augmented tongue image datasets were developed using seven augmentation techniques, such as image cropping, rotation, flipping, and color transformations. The performance of the data augmentation techniques was studied using state-of-the-art transfer learning models, for instance, InceptionV3, EfficientNet, ResNet, and DenseNet. Our results show that geometric transformations lead to larger performance gains than color transformations, and that segmentation accuracy can be increased by 5% to 20% compared with no augmentation. Furthermore, a dataset augmented with a random linear combination of geometric and color transformations gives superior segmentation performance over all other datasets, reaching an accuracy of 94.98% with the InceptionV3 model.
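A sketch of label-preserving augmentation for segmentation pairs, using torchvision: geometric transforms are applied identically to the image and its mask, while color transforms touch the image only, so the labels survive. The specific seven transforms and their parameter ranges in the paper may differ from the assumed ones below.

```python
import random
import torchvision.transforms.functional as TF

def augment_pair(image, mask):
    # Geometric transforms: apply the same operation to image and mask.
    if random.random() < 0.5:
        image, mask = TF.hflip(image), TF.hflip(mask)
    angle = random.uniform(-30, 30)
    image, mask = TF.rotate(image, angle), TF.rotate(mask, angle)

    # Color transforms: image only, so the mask labels are preserved.
    image = TF.adjust_brightness(image, random.uniform(0.8, 1.2))
    image = TF.adjust_contrast(image, random.uniform(0.8, 1.2))
    return image, mask
```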

Automatic Recognition of Symbol Objects in P&IDs using Artificial Intelligence

  • Shin, Ho-Jin; Jeon, Eun-Mi; Kwon, Do-kyung; Kwon, Jun-Seok; Lee, Chul-Jin
    • Plant Journal / v.17 no.3 / pp.37-41 / 2021
  • P&ID (Piping and Instrument Diagram) is a key drawing in the engineering industry because it contains information about the units and instrumentation of a plant. Until now, simple repetitive tasks such as listing the symbols in P&ID drawings have been done manually, consuming a great deal of time and manpower. Deep learning models based on CNNs (Convolutional Neural Networks) have been studied for drawing object detection, but a detection time of about 30 minutes and an accuracy of about 90% are not sufficient for deployment in the real world. In this study, symbols in a drawing are detected using 1-stage object detection algorithms, which handle both region proposal and detection in a single pass. Specifically, we build the training data using an image labeling tool and present the results of recognizing the symbols in drawings with the trained deep learning model.
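The abstract names only "1-stage object detection" without specifying the model. As one concrete possibility, the sketch below trains a YOLO-family detector with the ultralytics package; the dataset YAML, weights file, and hyperparameters are placeholders, not the paper's setup.

```python
from ultralytics import YOLO

# pid_symbols.yaml (hypothetical) points at the train/val images and
# lists the symbol class names exported from the image labeling tool.
model = YOLO("yolov8n.pt")           # small pretrained 1-stage detector
model.train(data="pid_symbols.yaml", epochs=100, imgsz=1024)

# Detect symbols on a drawing sheet and print class ids and boxes.
results = model.predict("drawing_001.png", conf=0.5)
for box in results[0].boxes:
    print(int(box.cls), box.xyxy.tolist())
```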