• Title/Summary/Keyword: Image Features


Correlation analysis of periodontal tissue dimensions in the esthetic zone using a non-invasive digital method

  • Kim, Yun-Jeong;Park, Ji-Man;Cho, Hyun-Jae;Ku, Young
    • Journal of Periodontal and Implant Science
    • /
    • v.51 no.2
    • /
    • pp.88-99
    • /
    • 2021
  • Purpose: Direct intraoral scanning and superimposing methods have recently been applied to measure the dimensions of periodontal tissues. The aim of this study was to analyze correlations between labial gingival thickness, underlying alveolar bone thickness, and clinical parameters among 3 tooth types (central incisors, lateral incisors, and canines) using a digital method. Methods: In 20 periodontally healthy subjects, cone-beam computed tomography images and intraoral scanned files were obtained. Measurements of labial alveolar bone and gingival thickness at the central incisors, lateral incisors, and canines were performed at points 0-5 mm from the alveolar crest on the superimposed images. Clinical parameters including the crown width/crown length ratio, keratinized gingival width, gingival scallop, and transparency of the periodontal probe through the gingival sulcus were examined. Results: Gingival thickness at the alveolar crest level was positively correlated with the thickness of the alveolar bone plate (P<0.05). The central incisors showed a strong correlation between labial alveolar bone thickness at 1 and 2 mm inferior to the alveolar crest and gingival thickness at the alveolar crest line (G0), whereas G0 and labial bone thickness at every level were positively correlated in the lateral incisors and canines. No significant correlations were found between clinical parameters and hard or soft tissue thickness. Conclusions: Gingival thickness at the alveolar crest level showed a positive correlation with labial alveolar bone thickness, although this correlation at identical depth levels was not significant. Gingival thickness, at or under the alveolar crest level, was not associated with clinical parameters of the gingival features, such as crown form, gingival scallop, or keratinized gingival width.
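The correlations reported above are standard Pearson coefficients; a minimal numpy sketch with hypothetical paired measurements (the values below are illustrative, not the study's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical paired measurements (mm): gingival thickness at the
# alveolar crest (G0) vs. labial bone thickness 1 mm below the crest.
g0   = [0.8, 1.1, 0.9, 1.4, 1.2, 1.0]
bone = [0.7, 1.0, 0.8, 1.3, 1.1, 1.0]
r = pearson_r(g0, bone)   # close to +1 for near-linearly related samples
```

In practice a significance test (the P<0.05 above) would accompany the coefficient, e.g. via `scipy.stats.pearsonr`.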

Change Attention based Dense Siamese Network for Remote Sensing Change Detection (원격 탐사 변화 탐지를 위한 변화 주목 기반의 덴스 샴 네트워크)

  • Hwang, Gisu;Lee, Woo-Ju;Oh, Seoung-Jun
    • Journal of Broadcast Engineering
    • /
    • v.26 no.1
    • /
    • pp.14-25
    • /
    • 2021
  • Change detection, which finds changes in remote sensing images of the same location captured at different times, is very important because it is used in various applications. However, registration errors, building displacement errors, and shadow errors cause false positives. To solve these problems, we propose a novel deep convolutional network called CADNet (Change Attention Dense Siamese Network). CADNet uses an FPN (Feature Pyramid Network) to detect multi-scale changes, applies a Change Attention Module that attends to changed regions, and uses DenseNet as a feature extractor so that feature maps containing both low-level and high-level features are available for change detection. CADNet achieves precision, recall, and F1 scores of 98.44%, 98.47%, and 98.46% on the WHU dataset and 90.72%, 91.89%, and 91.30% on the LEVIR-CD dataset. These experimental results show that CADNet outperforms conventional change detection methods.
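The core idea of a change-attention step can be illustrated with a toy numpy sketch (this is an illustrative simplification, not CADNet itself): weight fused bi-temporal features by a map derived from their absolute difference, so changed regions are emphasized and unchanged ones suppressed:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def change_attention(feat_t1, feat_t2):
    """Toy change-attention: attention map from the absolute feature
    difference, applied to the concatenated bi-temporal features."""
    diff = np.abs(feat_t1 - feat_t2)            # (C, H, W) per-channel difference
    attn = sigmoid(diff.mean(axis=0))           # (H, W) attention map in (0, 1)
    fused = np.concatenate([feat_t1, feat_t2])  # (2C, H, W) fused features
    return fused * attn, attn

rng = np.random.default_rng(0)
f1 = rng.normal(size=(4, 8, 8))                 # features at time t1
f2 = f1.copy()
f2[:, 2:4, 2:4] += 5.0                          # simulate a changed region
_, attn = change_attention(f1, f2)              # attn is high only where change occurred
```

In the real network the attention map would be learned (convolutions plus nonlinearities) rather than a fixed sigmoid of the difference.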

Development of 3D Crop Segmentation Model in Open-field Based on Supervised Machine Learning Algorithm (지도학습 알고리즘 기반 3D 노지 작물 구분 모델 개발)

  • Jeong, Young-Joon;Lee, Jong-Hyuk;Lee, Sang-Ik;Oh, Bu-Yeong;Ahmed, Fawzy;Seo, Byung-Hun;Kim, Dong-Su;Seo, Ye-Jin;Choi, Won
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.64 no.1
    • /
    • pp.15-26
    • /
    • 2022
  • A 3D open-field farm model developed from UAV (Unmanned Aerial Vehicle) data can make crop monitoring easier and serve as an important dataset for fields such as remote sensing and precision agriculture. Separating crops from non-crop areas automatically is essential, because manual labeling is extremely laborious and unsuitable for continuous monitoring. We therefore built a 3D open-field farm model from UAV images and developed a crop segmentation model using supervised machine learning. We compared the performance of models trained on different data features, such as color and geographic coordinates, with two supervised learning algorithms: SVM (Support Vector Machine) and KNN (K-Nearest Neighbors). The best approach was a KNN model trained on 2-dimensional data, ExGR (Excess Green minus Excess Red) and the z coordinate, achieving accuracy, precision, recall, and F1 scores of 97.85%, 96.51%, 88.54%, and 92.35%, respectively. We also compared our model with similar previous work: our approach showed slightly better accuracy and detected actual crops better, although it also misclassified some non-crop points (e.g., weeds) as crops.
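The winning feature pair and classifier can be sketched in a few lines of numpy. The ExGR index is ExG - ExR with ExG = 2g - r - b and ExR = 1.4r - g on normalized channels; the training points and query below are hypothetical, not the paper's data:

```python
import numpy as np

def exgr(rgb):
    """Excess Green minus Excess Red: ExG = 2g - r - b, ExR = 1.4r - g,
    computed on channel values scaled to [0, 1]. High for green vegetation."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (2 * g - r - b) - (1.4 * r - g)

def knn_predict(train_x, train_y, query, k=3):
    """Plain k-nearest-neighbours vote (Euclidean distance, 0/1 labels)."""
    d = np.linalg.norm(train_x - query, axis=1)
    nearest = train_y[np.argsort(d)[:k]]
    return int(np.round(nearest.mean()))

# Hypothetical 2-D feature vectors [ExGR, z]: green, raised points = crop (1);
# brown, ground-level points = non-crop (0).
train_x = np.array([[0.9, 0.6], [0.8, 0.5], [0.7, 0.7],
                    [-0.3, 0.0], [-0.2, 0.1], [-0.4, 0.05]])
train_y = np.array([1, 1, 1, 0, 0, 0])
pred = knn_predict(train_x, train_y, np.array([0.85, 0.55]))   # classified as crop
```

In practice one would use `sklearn.neighbors.KNeighborsClassifier` and tune k; the point here is only that the feature pair [ExGR, z] already separates the two classes well.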

Radiomics-based Biomarker Validation Study for Region Classification in 2D Prostate Cross-sectional Images (2D 전립선 단면 영상에서 영역 분류를 위한 라디오믹스 기반 바이오마커 검증 연구)

  • Jun Young, Park;Young Jae, Kim;Jisup, Kim;Kwang Gi, Kim
    • Journal of Biomedical Engineering Research
    • /
    • v.44 no.1
    • /
    • pp.25-32
    • /
    • 2023
  • Recognizing the size and location of prostate cancer is critical for prostate cancer diagnosis, treatment, and prognosis prediction. This paper proposes a model to classify tumor regions and normal tissue in cross-sectional images of prostatectomy tissue. We used specimen images of 44 prostate cancer patients who underwent prostatectomy at Gachon University Gil Hospital. The 289 prostate slice images comprised 200 slices containing a tumor region and 89 slices without one. Images were divided based on the presence or absence of tumor, and 93 features were extracted from each slice image using Radiomics: 18 first-order, 24 GLCM, 16 GLRLM, 16 GLSZM, 5 NGTDM, and 14 GLDM features. To find the best-performing model, we compared feature selection techniques (LASSO, ANOVA, SFS, and Ridge) combined with RF, LR, and SVM classifiers, and evaluated performance with the AUC of the ROC curve. The combination of the LASSO and Ridge feature selection techniques with the RF classifier performed best, with an AUC of 0.99±0.005.
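The AUC used for evaluation above has a simple rank-based definition: the probability that a randomly chosen positive is scored above a randomly chosen negative. A minimal numpy implementation, with hypothetical scores for illustration:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) formulation; ties count as half."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive scored above negative
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical tumour-probability scores for 4 tumour (1) / 4 normal (0) slices
auc = roc_auc([1, 1, 1, 1, 0, 0, 0, 0],
              [0.9, 0.8, 0.75, 0.4, 0.5, 0.3, 0.2, 0.1])
```

This agrees with `sklearn.metrics.roc_auc_score` and makes explicit why an AUC of 0.99 means near-perfect ranking of tumor over normal slices.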

Effect of Particle Sphericity on the Rheological Properties of Ti-6Al-4V Powders for Laser Powder Bed Fusion Process (LPBF용 타이타늄 합금 분말의 유변특성에 대한 입자 구형도의 영향)

  • Kim, T.Y.;Kang, M.H.;Kim, J.H.;Hong, J.K.;Yu, J.H.;Lee, J.I.
    • Journal of Powder Materials
    • /
    • v.29 no.2
    • /
    • pp.99-109
    • /
    • 2022
  • Powder flowability is critical in additive manufacturing processes, especially for laser powder bed fusion. Many powder features, such as powder size distribution, particle shape, surface roughness, and chemical composition, simultaneously affect the flow properties of a powder; however, the individual effect of each factor on powder flowability has not been comprehensively evaluated. In this study, the impact of particle shape (sphericity) on the rheological properties of Ti-6Al-4V powder is quantified using an FT4 powder rheometer. Dynamic image analysis is conducted on plasma-atomized (PA) and gas-atomized (GA) powders to evaluate their particle sphericity. PA and GA powders exhibit negligible differences in compressibility and permeability tests, but GA powder shows more cohesive behavior, especially in a dynamic state, because lower particle sphericity facilitates interaction between particles during the powder flow. These results provide guidelines for the manufacturing of advanced metal powders with excellent powder flowability for laser powder bed fusion.
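The sphericity evaluated by dynamic image analysis is a shape descriptor of the 2-D particle projection. One common formulation (an illustrative choice; instruments may use related variants such as perimeter-ratio sphericity) is circularity 4πA/P², which is 1.0 for a perfect circle and smaller for irregular particles:

```python
import math

def circularity(area, perimeter):
    """2-D circularity 4*pi*A / P**2: 1.0 for a perfect circle,
    below 1.0 for less spherical (more irregular) projections."""
    return 4.0 * math.pi * area / perimeter ** 2

circle = circularity(math.pi * 1.0 ** 2, 2.0 * math.pi * 1.0)  # unit-radius circle
square = circularity(1.0, 4.0)                                  # unit square, pi/4
```

By this measure a square projection scores about 0.785, illustrating how deviation from a circular outline (as in the gas-atomized powder) lowers the descriptor and correlates with more cohesive flow.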

Style Synthesis of Speech Videos Through Generative Adversarial Neural Networks (적대적 생성 신경망을 통한 얼굴 비디오 스타일 합성 연구)

  • Choi, Hee Jo;Park, Goo Man
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.11 no.11
    • /
    • pp.465-472
    • /
    • 2022
  • In this paper, a style synthesis network based on StyleGAN and a video synthesis network are trained together to generate style-synthesized videos. To address the problem that gaze and expression do not transfer stably, 3D face reconstruction technology is applied, using 3D face information to control important features such as head pose, gaze, and expression. In addition, by training the discriminators of the Head2head network for dynamics, mouth shape, image, and gaze, a stable style-synthesized video with greater plausibility and consistency can be created. Using the FaceForensic and MetFace datasets, we confirmed improved performance in converting one video into another while maintaining consistent movement of the target face, and in generating natural results through video synthesis using 3D face information from the source video.

Vehicle Detection Algorithm Using Super Resolution Based on Deep Residual Dense Block for Remote Sensing Images (원격 영상에서 심층 잔차 밀집 기반의 초고해상도 기법을 이용한 차량 검출 알고리즘)

  • Oh-Seol Kwon
    • Journal of Broadcast Engineering
    • /
    • v.28 no.1
    • /
    • pp.124-131
    • /
    • 2023
  • Object detection techniques are increasingly used to obtain information on the physical characteristics or situation of a specific area from remote images. Detection accuracy drops on low-resolution remote sensing images because low resolution reduces the amount of detail an image can capture. We propose a single neural network that jointly performs super-resolution and object detection. The proposed method constructs a deep residual-based network to restore object features in low-resolution images, and improves detection performance by combining this network with YOLOv5. The method is experimentally tested on low-resolution images from the VEDAI dataset; vehicle detection performance improved to 81.38% mAP@0.5 on the VISIBLE data.
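The mAP@0.5 metric above counts a detection as a true positive when its intersection-over-union (IoU) with a ground-truth box is at least 0.5. A minimal sketch of that matching criterion, with hypothetical boxes:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

# A detection counts as a true positive at mAP@0.5 when IoU >= 0.5
gt  = (10, 10, 50, 50)       # hypothetical ground-truth vehicle box
det = (15, 12, 55, 52)       # hypothetical detector output
hit = iou(gt, det) >= 0.5
```

Average precision then integrates precision over recall across score thresholds, and mAP averages it over classes; the IoU gate is the piece the 0.5 in mAP@0.5 refers to.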

A Method for Region-Specific Anomaly Detection on Patch-wise Segmented PA Chest Radiograph (PA 흉부 X-선 영상 패치 분할에 의한 지역 특수성 이상 탐지 방법)

  • Hyun-bin Kim;Jun-Chul Chun
    • Journal of Internet Computing and Services
    • /
    • v.24 no.1
    • /
    • pp.49-59
    • /
    • 2023
  • Recently, the pandemic situation represented by COVID-19 has exposed problems caused by an unexpected shortage of medical personnel. In this paper, we present a method for diagnosing the presence or absence of lesional signs in PA chest X-ray images, as a computer vision solution to support diagnostic tasks. Visual anomaly detection based on feature modeling can also be applied to X-ray images: by extracting feature vectors from PA chest X-ray images and dividing them into patch units, region-specific abnormalities can be detected. As a preliminary experiment, we created a simulation dataset containing multiple objects and present the results of comparative experiments on it. We improve both the efficiency and performance of the process through hard masking of patch features on aligned images. By aggregating region-specific and global anomaly detection results, the method improves performance by 0.069 AUROC compared to our previous study.
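The patch-wise idea can be sketched with a toy numpy model (an illustrative simplification, not the paper's feature extractor): model each patch location with statistics of normal images, then score a test image per patch by how far it deviates:

```python
import numpy as np

def patch_scores(train_imgs, test_img, p=4):
    """Per-patch anomaly scores: fit mean/std of patch means over normal
    training images, then return the z-score of the test image's patch
    means. A stand-in for feature-based region-specific detection."""
    H, W = test_img.shape
    def patch_means(img):
        return img.reshape(H // p, p, W // p, p).mean(axis=(1, 3))
    train = np.stack([patch_means(im) for im in train_imgs])
    mu, sigma = train.mean(axis=0), train.std(axis=0) + 1e-6
    return np.abs(patch_means(test_img) - mu) / sigma

rng = np.random.default_rng(1)
normal = [rng.normal(0.0, 0.1, size=(16, 16)) for _ in range(32)]
test = rng.normal(0.0, 0.1, size=(16, 16))
test[4:8, 4:8] += 2.0                  # inject a local "lesion"
scores = patch_scores(normal, test)    # 4x4 grid; the injected patch scores highest
```

Feature-modeling methods replace raw pixel means with deep features and richer distributions (e.g. per-patch Gaussians with full covariance), but the region-specific scoring structure is the same.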

Digital Technology and Fashion Features in the Contents of Korean Virtual Idol Groups (한국 가상 아이돌 그룹의 콘텐츠에 나타난 디지털 기술 및 패션의 특징)

  • JIAYI XUE;Seunghee Suh
    • Journal of Fashion Business
    • /
    • v.27 no.1
    • /
    • pp.110-125
    • /
    • 2023
  • Virtual idol groups are a product of changes in cultural content and the development of digital technology. The purpose of this study is to derive the characteristics of the technical expression and fashion of virtual idol groups from Korean entertainment companies, and its significance lies in providing basic data for creating a new content business model for virtual idol groups. The research method consisted of literature research and case analysis. The Korean virtual idol groups 'K/DA', 'Aespa', and 'Eternity', which show the evolved business model of the entertainment industry enabled by rapid advances in digital technology, were selected as the subjects of case analysis, and newspaper articles retrieved by keyword search were analyzed. As a result, the technical expressions shown in Korean virtual idol groups were derived as 'implementation of realistic content through interaction technology', 'delicate motion expression through motion capture technology', 'convergence of information between the real and virtual worlds through AR technology', 'provision of experience similar to reality by VR technology', and 'formation of cultural contents by Deep Real technology'. In addition, the characteristics of Korean virtual idol groups' fashion were 'marketing strategy through collaboration with fashion items', 'recognition as a digital fashion icon of real existence', 'creation of a sensuous image as a fashion brand ambassador', and 'fashion style expressing the Z generation's sensibility'.

HDR Video Reconstruction via Content-based Alignment Network (내용 기반의 정렬을 통한 HDR 동영상 생성 방법)

  • Haesoo Chung;Nam Ik Cho
    • Journal of Broadcast Engineering
    • /
    • v.28 no.2
    • /
    • pp.185-193
    • /
    • 2023
  • As over-the-top (OTT) services become ubiquitous, demand for high-quality content is increasing. However, high dynamic range (HDR) content, which can provide more realistic scenes, is still scarce. In this regard, we propose a new HDR video reconstruction technique using multi-exposure low dynamic range (LDR) videos. First, we align a reference frame and its neighboring frames to compensate for motion between them. In the alignment stage, we perform content-based alignment to improve accuracy, and we also present a high-resolution (HR) module to enhance details. Then, we merge the aligned features to generate the final HDR frame. Experimental results demonstrate that our method outperforms existing methods.
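The merge step after alignment can be illustrated with a classical exposure-weighted fusion (a generic numpy sketch, not the paper's learned merge network): each aligned LDR frame contributes a radiance estimate (pixel value divided by exposure time), weighted to favor well-exposed pixels:

```python
import numpy as np

def merge_hdr(ldr_frames, exposure_times):
    """Toy HDR merge: hat-function weights peak at mid-gray, so dark or
    clipped pixels contribute little; radiance = pixel value / exposure."""
    acc, wacc = 0.0, 0.0
    for img, t in zip(ldr_frames, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)     # weight in [0, 1], peak at 0.5
        acc += w * img / t                     # weighted radiance estimate
        wacc += w
    return acc / np.maximum(wacc, 1e-6)

short = np.array([[0.1, 0.5]])   # short exposure: dark but unsaturated
long_ = np.array([[0.4, 1.0]])   # long exposure: brighter, partly clipped
hdr = merge_hdr([short, long_], [1.0, 4.0])
```

A learned merge replaces the fixed hat weights with predicted ones and operates on aligned feature maps rather than pixels, but the weighted-fusion structure is the same.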