• Title/Summary/Keyword: National Images


The Effect of Type of Input Image on Accuracy in Classification Using Convolutional Neural Network Model (컨볼루션 신경망 모델을 이용한 분류에서 입력 영상의 종류가 정확도에 미치는 영향)

  • Kim, Min Jeong;Kim, Jung Hun;Park, Ji Eun;Jeong, Woo Yeon;Lee, Jong Min
    • Journal of Biomedical Engineering Research / v.42 no.4 / pp.167-174 / 2021
  • The purpose of this study was to classify TIFF, PNG, and JPEG images using deep learning and to compare accuracy by verifying classification performance. TIFF, PNG, and JPEG images converted from chest X-ray DICOM images were applied to five deep neural network models used for image recognition and classification. The data consisted of a total of 4,000 X-ray images, converted from DICOM images into 16-bit TIFF images and 8-bit PNG and JPEG images. The learning models were the CNN models VGG16, ResNet50, InceptionV3, DenseNet121, and EfficientNetB0. The accuracy of the five convolutional neural network models on TIFF images was 99.86%, 99.86%, 99.99%, 100%, and 99.89%; on PNG images, 99.88%, 100%, 99.97%, 99.87%, and 100%; and on JPEG images, 100%, 100%, 99.96%, 99.89%, and 100%. Validation of classification performance on test data showed 100% accuracy, precision, recall, and F1 score. These results show that when DICOM images are converted to TIFF, PNG, or JPEG and preprocessed before training, learning works well in all formats. In medical imaging research using deep learning, classification performance is not affected by the format into which DICOM images are converted.
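The 16-bit to 8-bit conversion mentioned for the PNG and JPEG formats can be sketched as a simple rescaling step (the paper does not state its exact windowing; min-max scaling here is an assumption):

```python
def to_8bit(pixels_16bit):
    """Min-max scale 16-bit pixel values into the 8-bit range 0-255.

    pixels_16bit: flat list of integer pixel values (0-65535).
    """
    lo, hi = min(pixels_16bit), max(pixels_16bit)
    if hi == lo:  # constant image: avoid division by zero
        return [0 for _ in pixels_16bit]
    scale = 255.0 / (hi - lo)
    return [round((p - lo) * scale) for p in pixels_16bit]
```

In practice this would be applied to the DICOM pixel array before saving with an imaging library; TIFF can keep the full 16-bit range, so no rescaling is needed there.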

Effect of Voxel Size on the Accuracy of Landmark Identification in Cone-Beam Computed Tomography Images

  • Lee, Kyung-Min;Davami, Kamran;Hwang, Hyeon-Shik;Kang, Byung-Cheol
    • Journal of Korean Dental Science / v.12 no.1 / pp.20-28 / 2019
  • Purpose: This study was performed to evaluate the effect of voxel size on the accuracy of landmark identification in cone-beam computed tomography (CBCT) images. Materials and Methods: CBCT images were obtained from 15 dry human skulls at two voxel sizes: 0.39 mm and 0.10 mm. Three midline landmarks and eight bilateral landmarks were identified by 5 examiners and recorded as three-dimensional coordinates. To compare the accuracy of landmark identification between the large and small voxel size images, the difference between the best estimate (the average of the 5 examiners' measurements) and each examiner's value was calculated for both image sets. Result: In the large voxel size images, landmark identification errors varied considerably across landmarks. The small voxel size images showed small errors for all landmarks, and the errors were smaller for every landmark than in the large voxel size images. Conclusion: These results indicate that landmark identification errors can be reduced by using smaller voxel size scans in CBCT imaging.
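The error measure described above, each examiner's deviation from the best estimate, can be sketched as follows (`identification_errors` is a hypothetical helper, not the authors' code):

```python
import math

def identification_errors(picks):
    """Each examiner's landmark identification error for one landmark.

    picks: list of (x, y, z) coordinates, one per examiner.
    Returns each examiner's Euclidean distance to the best estimate,
    i.e. the mean of all examiners' picks.
    """
    n = len(picks)
    best_estimate = tuple(sum(p[i] for p in picks) / n for i in range(3))
    return [math.dist(p, best_estimate) for p in picks]
```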

Neural correlates of the aesthetic experience using the fractal images : an fMRI study (프랙탈 이미지를 이용하여 본 미적 경험의 뇌 활성화: 기능적 자기공명영상 연구)

  • Lee, Seung-Bok;Jung, Woo-Hyun;Son, Jung-Woo;Jo, Seong-Woo
    • Science of Emotion and Sensibility / v.14 no.3 / pp.403-414 / 2011
  • The current study examined brain regions associated with the aesthetic experience of fractal images using functional MRI. Aesthetic ratings of the images showed a general consensus regarding the perception of beautiful images. Out of 270 fractal images, the fifty rated highest (beautiful images) and the fifty rated lowest (non-beautiful images) were selected and presented to the participants. The two conditions were presented in a block design. The frontal lobes, cingulate gyri, and insula, areas related to cognitive and emotional processing in aesthetic experience, were activated when beautiful images were presented. In contrast, the middle occipital gyri and precuneus, areas associated with the experience of negative emotions, were activated when non-beautiful images were presented. A conjunction analysis showed activation in temporal areas in response to beautiful images and in parietal areas in response to non-beautiful images. These results indicate that beautiful images elicit semantic interpretation, whereas non-beautiful images facilitate abstract processing.


The Design Characteristics of the Figure Skating Costumes for Competitions (경기용 피겨 스케이팅 의상의 디자인 특성)

  • Kim, Ji-Seon;Yum, Hae-Jung
    • Journal of the Korean Society of Costume / v.58 no.10 / pp.21-36 / 2008
  • This study analyzed the morphological characteristics and images of figure skating costume designs in order to identify designs that can effectively demonstrate beauty in actual competition. The study examined the costumes of the Ladies medalists in the 4 largest international competitions held in 2005. The morphological elements of the Ladies costumes include lines, colors, textures, details, and accessories, which are used to design for the visual effect of movement and the maximum expression of the program's image. The image accounting for the largest share of Ladies figure skating costumes was the elegant image, followed by sexy, luxurious, and girlish images. Refined and sexy images dominated the 2005 season; in the 2006 season, many costumes with individual and gorgeous images appeared alongside them. The 2007 season featured many costumes with mature and exotic images, and in 2008 many refined and elegant costumes began to appear. The costume images show slight differences among skaters, suggesting each skater's taste in figure skating costumes.

Quality grading of Hanwoo (Korean native cattle breed) sub-images using convolutional neural network

  • Kwon, Kyung-Do;Lee, Ahyeong;Lim, Jongkuk;Cho, Soohyun;Lee, Wanghee;Cho, Byoung-Kwan;Seo, Youngwook
    • Korean Journal of Agricultural Science / v.47 no.4 / pp.1109-1122 / 2020
  • The aim of this study was to develop a marbling classification and prediction model using small parts of sirloin images and a deep learning algorithm, namely a convolutional neural network (CNN). Samples were purchased from a commercial slaughterhouse in Korea, images were acquired for each grade, and the images (n = 500) were assigned according to grade: 1++, 1+, 1, and 2 & 3 combined. The image acquisition system consisted of a DSLR camera with a polarization filter, to remove diffuse reflectance, and two light sources (55 W). A radial correction algorithm was implemented to correct distortion in the original images. Color images of Hanwoo sirloins (mixed feeder cattle, steers, and calves) were divided into sub-images of size 161 × 161 to train the marbling prediction model. The CNN in this study has four convolution layers and yields predictions according to marbling grade (1++, 1+, 1, and 2&3). Each layer uses a rectified linear unit (ReLU) activation function, and max-pooling is used to extract the edges between fat and muscle and to reduce the variance of the data. Prediction accuracy was measured with an accuracy and kappa coefficient computed from a confusion matrix. We summed the predictions over sub-images and determined the total average prediction accuracy. Training accuracy was 100% and test accuracy was 86%, indicating comparably good performance for the CNN. This study demonstrates the potential of predicting marbling grade using color images and a convolutional neural network algorithm.
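The accuracy and kappa coefficient derived from a confusion matrix, as used above, can be computed as follows (a generic sketch, not the authors' code):

```python
def accuracy_and_kappa(cm):
    """Accuracy and Cohen's kappa from a confusion matrix.

    cm[i][j]: number of samples of true grade i predicted as grade j.
    Returns (accuracy, kappa).
    """
    k = len(cm)
    total = sum(sum(row) for row in cm)
    observed = sum(cm[i][i] for i in range(k)) / total
    # Chance agreement from the row/column marginal products.
    expected = sum(sum(cm[i]) * sum(r[i] for r in cm) for i in range(k)) / total**2
    return observed, (observed - expected) / (1 - expected)
```

Kappa discounts agreement expected by chance, so it is a stricter score than raw accuracy when classes are imbalanced.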

Quantitative Morphology of High Redshift Galaxies Using GALEX Ultraviolet Images of Nearby Galaxies

  • Yeom, Bum-Suk;Rey, Soo-Chang;Kim, Young-Kwang;Kim, Suk;Lee, Young-Dae
    • The Bulletin of The Korean Astronomical Society / v.36 no.2 / pp.73.1-73.1 / 2011
  • An understanding of the ultraviolet (UV) properties of nearby galaxies is essential for interpreting images of high redshift systems. In this respect, predicting optical-band morphologies at high redshift requires UV images of local galaxies with various morphologies. We present simulated optical images of galaxies at high redshift based on diverse, high-quality UV images of nearby galaxies obtained with the Galaxy Evolution Explorer (GALEX). We measured the CAS (concentration, asymmetry, clumpiness) and Gini/M20 parameters of galaxies in near-ultraviolet (NUV) and simulated optical images to quantify the effects of redshift on the appearance of distant stellar systems. We also discuss how the morphological parameters change with redshift.
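The Gini parameter mentioned above measures how unequally flux is distributed over a galaxy's pixels; a minimal sketch following the usual sorted-flux formulation (an assumption, since the abstract does not spell out the implementation):

```python
def gini(pixel_fluxes):
    """Gini coefficient of a galaxy's pixel flux distribution.

    0 means flux is spread evenly over all pixels;
    1 means all flux is concentrated in a single pixel.
    """
    x = sorted(abs(f) for f in pixel_fluxes)  # absolute fluxes, ascending
    n = len(x)
    mean = sum(x) / n
    return sum((2 * i - n - 1) * xi for i, xi in enumerate(x, start=1)) / (
        mean * n * (n - 1)
    )
```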


How to utilize vegetation survey using drone image and image analysis software

  • Han, Yong-Gu;Jung, Se-Hoon;Kwon, Ohseok
    • Journal of Ecology and Environment / v.41 no.4 / pp.114-119 / 2017
  • This study analyzed the error range and resolution of images from a rotary-wing drone by comparing them with field measurement results, and analyzed stand patterns for actual vegetation map preparation by comparing drone images with aerial images provided by the National Geographic Information Institute of Korea. A total of 11 ground control points (GCPs) were selected in the area, and the coordinates of the points were recorded. In the analysis of the drone images, the error per pixel was 0.284 cm. A digital elevation model (DEM), digital surface model (DSM), and orthomosaic image were also extracted. When the drone images were compared with the GCP coordinates, the root mean square error (RMSE) was 2.36, 1.37, and 5.15 m in the X, Y, and Z directions. Because of this error, there were some differences in location between images edited after field measurement and images edited without field measurement. Drone images taken over the stream and the forest were also compared with 51 cm and 25 cm resolution aerial images from the National Geographic Information Institute of Korea to identify stand patterns. Image analysis software (eCognition) was used to provide a standard for classifying polygons in each aerial image. The drone images yielded more precise polygons than the 51 cm and 25 cm resolution images. Therefore, if drones are utilized appropriately according to the characteristics of the subject, they offer advantages for vegetation change surveys and general monitoring, as they can acquire detailed information and capture images continuously.
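The per-axis RMSE between drone-derived and field-surveyed GCP coordinates reported above can be computed as follows (a generic sketch):

```python
import math

def rmse_per_axis(measured, reference):
    """Per-axis RMSE between drone-derived and surveyed GCP coordinates.

    measured, reference: lists of (x, y, z) tuples, one per GCP.
    Returns (rmse_x, rmse_y, rmse_z).
    """
    n = len(measured)
    return tuple(
        math.sqrt(sum((m[k] - r[k]) ** 2 for m, r in zip(measured, reference)) / n)
        for k in range(3)
    )
```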

A Method of Forensic Authentication via File Structure and Media Log Analysis of Digital Images Captured by iPhone (아이폰으로 촬영된 디지털 이미지의 파일 구조 및 미디어 로그 분석을 통한 법과학적 진본 확인 방법)

  • Park, Nam In;Lee, Ji Woo;Jeon, Oc-Yeub;Kim, Yong Jin;Lee, Jung Hwan
    • Journal of Korea Multimedia Society / v.24 no.4 / pp.558-568 / 2021
  • For a digital image to be accepted as legal evidence, it is important to verify its authenticity. This study proposes a method of authenticating digital images in three steps: comparing the file structure of digital images taken with an iPhone, analyzing their encoding information, and analyzing the media logs of the iPhone storing the digital images. For the experiment, digital image samples were acquired from nine iPhones through the built-in camera application. The file structure and media log characteristics of digital images generated on the iPhone were then compared with those of digital images edited through a variety of image editing tools. Examination of the records registered during image creation confirmed that manipulating digital images on the iPhone produces file structures and media logs that differ from those of original images taken with the iPhone. In this way, the forensic authenticity of an image captured on an iPhone can be proven.
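One file-structure trait of the kind such methods compare is the order of JPEG header segments; a minimal, simplified marker walker (a hypothetical illustration, not the paper's tool):

```python
def jpeg_marker_sequence(data: bytes):
    """Walk a JPEG's header segments and return the marker sequence,
    e.g. ['SOI', 'APP1', 'DQT', ...]. The order of these segments is one
    trait that can differ between camera originals and editor-resaved files.
    """
    names = {0xD8: "SOI", 0xE0: "APP0", 0xE1: "APP1", 0xDB: "DQT",
             0xC0: "SOF0", 0xC4: "DHT", 0xDA: "SOS", 0xD9: "EOI"}
    seq, i = [], 0
    while i + 1 < len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        seq.append(names.get(marker, f"0x{marker:02X}"))
        if marker in (0xD8, 0xD9):      # standalone markers carry no payload
            i += 2
        else:                           # 2-byte length field includes itself
            length = int.from_bytes(data[i + 2:i + 4], "big")
            i += 2 + length
        if marker == 0xDA:              # entropy-coded image data follows
            break
    return seq
```

Comparing the sequence (and segment contents such as maker notes) between a questioned image and known camera originals is the basic file-structure step; the media-log analysis described in the paper is device-specific and not sketched here.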

Synthetic Computed Tomography Generation while Preserving Metallic Markers for Three-Dimensional Intracavitary Radiotherapy: Preliminary Study

  • Jin, Hyeongmin;Kang, Seonghee;Kang, Hyun-Cheol;Choi, Chang Heon
    • Progress in Medical Physics / v.32 no.4 / pp.172-178 / 2021
  • Purpose: This study aimed to develop a deep learning architecture combining two task models to generate synthetic computed tomography (sCT) images from low-tesla magnetic resonance (MR) images while improving metallic marker visibility. Methods: Twenty-three patients with cervical cancer treated with intracavitary radiotherapy (ICR) were retrospectively enrolled, and images were acquired with both a computed tomography (CT) scanner and a low-tesla MR machine. The CT images were aligned to the corresponding MR images using deformable registration, and the metallic dummy source markers were delineated by threshold-based segmentation followed by manual modification. The deformed CT (dCT), MR, and segmentation mask pairs were used for training and testing. The sCT generation model has a cascaded three-dimensional (3D) U-Net-based architecture that converts MR images to CT images and segments the metallic markers. The performance of the model was evaluated with intensity-based comparison metrics. Results: The proposed model with segmentation loss outperformed the 3D U-Net in terms of errors between the sCT and dCT; the difference in structural similarity score was not significant. Conclusions: Our study demonstrates a two-task deep learning model for generating sCT images from low-tesla MR images for 3D ICR. This approach will be useful for the MR-only workflow in high-dose-rate brachytherapy.
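Intensity-based comparison metrics like those used to evaluate the sCT against the dCT can be sketched as generic MAE/MSE over flattened voxel values (the paper's exact metric set is not listed, so this is an assumption):

```python
def mae_and_mse(sct, dct):
    """Voxel-wise mean absolute error and mean squared error between a
    synthetic CT and the deformed reference CT.

    sct, dct: flattened lists of HU values of equal length.
    """
    n = len(sct)
    mae = sum(abs(a - b) for a, b in zip(sct, dct)) / n
    mse = sum((a - b) ** 2 for a, b in zip(sct, dct)) / n
    return mae, mse
```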

Tensile Properties Estimation Method Using Convolutional LSTM Model

  • Choi, Hyeon-Joon;Kang, Dong-Joong
    • Journal of the Korea Society of Computer and Information / v.23 no.11 / pp.43-49 / 2018
  • In this paper, we propose a deep-learning-based displacement measurement method using image data obtained from tensile tests of a material specimen. We focus on the fact that sequential images are generated during tension and that the displacement of the specimen is represented in the image data. We therefore designed a sample generation model that produces sequential images of a specimen; the generated images behave similarly to images of a real specimen under tensile force. Using the generated images, we trained and validated our model. In the deep neural network, the sequential images are assigned to a multi-channel input, with the channels composed of images obtained along the time domain. As a result, the network learns temporal information, since the images are correlated with each other over time. To verify the proposed method, we conducted experiments comparing the network's deformation-measuring performance while changing the displacement range of the images.
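The multi-channel stacking of sequential images described above can be sketched as follows (a hypothetical helper using plain nested lists; the authors' actual input pipeline is not given):

```python
def stack_frames(frames, window):
    """Slide a window over sequential grayscale frames and stack each
    window along a channel axis, so the network sees temporal context.

    frames: list of 2D lists (H x W), ordered in time.
    window: number of consecutive frames per multi-channel sample.
    Returns a list of (window, H, W) stacks.
    """
    return [frames[i:i + window] for i in range(len(frames) - window + 1)]
```

Each stack would then be fed to the network as a single multi-channel input, letting the convolutions correlate frames across the time axis.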