• Title/Summary/Keyword: Root area in pixels


Assessment of dental age estimation using dentinal translucency in ground sections of single rooted teeth: a digital image analysis

  • Abelene Maria Durand;Madhu Narayan;Raghavendhar Karthik;Rajkumar Krishnan;Narasimhan Srinivasan;Dinesh Kumar
    • Anatomy and Cell Biology / v.57 no.2 / pp.271-277 / 2024
  • Human dentition is unique to each individual and aids identification in forensic odontology. This study examines manually ground sections of single-rooted teeth using digital methods for dental age estimation. Dentinal translucency was assessed from scanned digital images of the ground sections using commercially available image-editing software, and the root dentinal translucency length and the region of interest (ROI) of the translucency zone in pixels (as markers of dental age) were corroborated against the chronological age of the subjects, stratified by age group. Twenty single-rooted extracted teeth from 20 patients were examined in each of six age-based groups. The teeth were manually sectioned and the sections scanned. Root area in pixels and the ROI of the translucency zone were measured, and from the observed values the translucency length percentage (TLP) and the percentage of ROI in pixels (TPP) were calculated and tabulated. Pearson's correlation coefficients were obtained for age against TLP and against TPP; both correlations were positive. From these data, multilinear regression equations for specific age groups at 10-year intervals were derived. Using a step-down analysis, age was estimated with an average error of about ±7.9 years. This study offers a novel age-estimation method applicable in real-world forensic science.
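The percentage measures and the correlation step described in this abstract can be sketched as follows; function and variable names are illustrative, not taken from the paper:

```python
import math

def translucency_percentages(root_len_px, transl_len_px, root_area_px, transl_area_px):
    """TLP (translucency length as % of root length) and TPP
    (translucency ROI as % of root area), both measured in pixels."""
    tlp = 100.0 * transl_len_px / root_len_px
    tpp = 100.0 * transl_area_px / root_area_px
    return tlp, tpp

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples,
    e.g. chronological age vs. TLP or TPP."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

A positive `pearson_r` over the sampled age groups is what motivates the age-group-specific regression equations reported above.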

AN EXPERIMENTAL STUDY ON THE TOOTH ROOT RESORPTION FOR DIGITAL RADIOGRAPHY (디지털 방사선 촬영술을 이용한 치근 흡수 판독에 관한 실험적 연구)

  • Oh Phill-Gyo;Kim Jae-Duk
    • Journal of Korean Academy of Oral and Maxillofacial Radiology / v.25 no.2 / pp.375-387 / 1995
  • The purpose of this study was to quantitatively evaluate experimental tooth root resorption using digital radiography. Three root sites were prepared experimentally, and radiographs were taken with a standardized apparatus. The digital imaging system consisted of an NEC PC-9801 computer, a TRINITRON monitor, and a SONY XC-711 CCD camera; the display monitor had a resolution of 512 × 512 pixels. The results were as follows: 1. Across the four X-ray films after contrast correction, the contrast difference was one gray-scale step at the mean value. 2. On view-box reading of the periapical radiographs, experimental root resorption at the periapical area of the first premolar, at the middle of the mesial surface of the first molar mesial root, and at the middle of the lingual surface of the first molar distal root was recognized as the defect diameter increased. 3. On histogram analysis, resorption at the periapical area of the first premolar and at the middle of the mesial surface of the first molar mesial root was recognized at defect diameters of 5, 6, and 7 pixels and of 2, 4, and 5 pixels, respectively. 4. On histogram analysis, resorption at the middle of the lingual surface of the first molar distal root was recognized at defect diameters of 3 and 6 pixels, but not at the smallest diameter.


Generation of the KOMPSAT-2 Ortho Mosaic Imagery on the Korean Peninsula (아리랑위성 2호 한반도 정사모자이크영상 제작)

  • Lee, Kwang-Jae;Yun, Hee-Cheon;Kim, Youn-Soo
    • Journal of the Korean Association of Geographic Information Studies / v.16 no.3 / pp.103-114 / 2013
  • In this study, we produced an ortho mosaic image of the Korean Peninsula using KOMPSAT-2 images and conducted an accuracy assessment. Rational Polynomial Coefficient (RPC) modeling errors were mostly below 2 pixels, except in mountainous regions where it was difficult to select Ground Control Points (GCPs). A Digital Elevation Model (DEM) generated from 1:5,000-scale digital topographic maps was used to produce each ortho image; for inaccessible areas, the Shuttle Radar Topography Mission (SRTM) DEM was used instead. The peninsula-wide ortho mosaic was then produced by aggregating the individual ortho images and adjusting their colors. The accuracy analysis was conducted on the 1 m color-fused mosaic. To verify geolocation accuracy, 813 check points acquired by field survey in South Korea were used; the maximum error did not exceed 5 m (Root Mean Square Error, RMSE). For inaccessible areas, check points extracted from a reference image were used instead, and approximately 69% of the image showed a positional accuracy better than 3 m (RMSE). Visual inspection showed very high seam-line accuracy between neighboring images, although discrepancies of 1 to 2 pixels remained in some mountainous regions.
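The geolocation check described in this abstract reduces to an RMSE over planimetric residuals at the check points. A minimal sketch, assuming residuals are given as (dx, dy) offsets in metres between image-derived and surveyed coordinates (names are illustrative):

```python
import math

def geolocation_rmse(residuals):
    """RMSE of the planimetric error over check points.

    `residuals` is a list of (dx, dy) offsets in metres; each point
    contributes its squared Euclidean error dx^2 + dy^2.
    """
    squared = [dx * dx + dy * dy for dx, dy in residuals]
    return math.sqrt(sum(squared) / len(squared))
```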

Image Registration of Drone Images through Association Analysis of Linear Features (선형정보의 연관분석을 통한 드론영상의 영상등록)

  • Choi, Han Seung;Kim, Eui Myoung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.35 no.6 / pp.441-452 / 2017
  • Drones are increasingly used to investigate disaster damage because they can quickly capture aerial images. Assessing such damage requires extracting the damaged area by registering drone images to existing ortho-images, which raises the problem of registering two images with different acquisition times and spatial resolutions. To solve this, we propose a new methodology that performs an initial image transformation using line pairs extracted from the images and an association matrix, and then refines the initial result through a final registration based on linear features. The applicability of the proposed methodology was evaluated through experiments on areas containing artificial structures and on natural terrain. The root mean square error was 1.29 pixels for the areas with artificial structures and 4.12 pixels for natural terrain; accuracy was relatively high in the regions with artificial structures, where abundant linear features could be extracted.

Fine-image Registration between Multi-sensor Satellite Images for Global Fusion Application of KOMPSAT-3·3A Imagery (KOMPSAT-3·3A 위성영상 글로벌 융합활용을 위한 다중센서 위성영상과의 정밀영상정합)

  • Kim, Taeheon;Yun, Yerin;Lee, Changhui;Han, Youkyung
    • Korean Journal of Remote Sensing / v.38 no.6_4 / pp.1901-1910 / 2022
  • With the arrival of the new space age, securing technology for the fused application of KOMPSAT-3·3A and global satellite imagery is becoming more important. In general, multi-sensor satellite images carry relative geometric errors caused by various external factors at acquisition time, degrading the quality of satellite-image products. We therefore propose a fine-image registration methodology to minimize the relative geometric error between KOMPSAT-3·3A and global satellite images. After selecting the overlapping area between the KOMPSAT-3·3A and foreign satellite images, the spatial resolutions of the two images are unified. Tie-points are then extracted using a hybrid matching method that combines feature- and area-based matching, and fine registration is performed through iterative registration on pyramid images. To evaluate the performance and accuracy of the proposed method, we used KOMPSAT-3·3A, Sentinel-2A, and PlanetScope images acquired over Daejeon, South Korea. The average RMSE of the proposed method was 1.2 pixels against Sentinel-2A and 3.59 pixels against PlanetScope. These results suggest that fine-image registration between multi-sensor satellite images can be performed effectively with the proposed method.

Estimation of the Lodging Area in Rice Using Deep Learning (딥러닝을 이용한 벼 도복 면적 추정)

  • Ban, Ho-Young;Baek, Jae-Kyeong;Sang, Wan-Gyu;Kim, Jun-Hwan;Seo, Myung-Chul
    • KOREAN JOURNAL OF CROP SCIENCE / v.66 no.2 / pp.105-111 / 2021
  • Rice lodging occurs annually, caused by typhoons with strong winds and heavy rainfall, and leads to damage such as pre-harvest sprouting during the ripening period. Rapid estimation of the lodged area is therefore necessary to enable timely responses. To this end, we obtained drone images of rice lodging in Gimje, Buan, and Gunsan, which were converted into 128 × 128-pixel tiles. A convolutional neural network (CNN), a deep-learning model, was trained on these images to classify lodging versus non-lodging, with the images split 8:2 into training and validation sets. The CNN was layered and trained with three optimizers (Adam, RMSprop, and SGD). The lodged area was then evaluated for three fields using data excluded from the training and validation sets. Whole-field composite images were generated with Metashape and again divided into 128 × 128-pixel tiles; lodging was predicted for each tile with the trained CNN, and the lodged area was calculated by multiplying the ratio of lodging tiles to total field tiles by the area of the entire field. Training and validation accuracy increased as learning progressed, eventually exceeding 0.919. All three optimizers gave high accuracy across the three fields, with Adam the most accurate (normalized root mean square error: 2.73%). These findings suggest that the area of lodged rice can be predicted rapidly using deep learning.
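The final area calculation in this abstract is a simple ratio-based estimate. A minimal sketch, with illustrative names (the tile counts would come from the CNN's per-tile predictions):

```python
def lodged_area(n_lodging_tiles, n_total_tiles, field_area_m2):
    """Estimate the lodged area of a field as
    (lodging tiles / total tiles) * whole-field area."""
    return field_area_m2 * n_lodging_tiles / n_total_tiles
```

For example, if 25 of 100 tiles in a 1-hectare field are classified as lodging, the estimate is 2,500 m².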

Comparison of Multi-angle TerraSAR-X Staring Mode Image Registration Method through Coarse to Fine Step (Coarse to Fine 단계를 통한 TerraSAR-X Staring Mode 다중 관측각 영상 정합기법 비교 분석)

  • Lee, Dongjun;Kim, Sang-Wan
    • Korean Journal of Remote Sensing / v.37 no.3 / pp.475-491 / 2021
  • With the recent increase in available high-resolution (< ~1 m) satellite SAR images, demand for precise SAR image registration is growing in various fields, including change detection. Registration between high-resolution SAR images acquired at different look angles is difficult because of speckle noise and the geometric distortion inherent to SAR imaging. In this study, registration is performed in two stages, coarse and fine, using X-band TerraSAR-X data acquired in staring spotlight mode. For coarse registration, a method combining adaptive sampling and SAR-SIFT (Scale Invariant Feature Transform) is applied. For the fine registration stage, three rigid methods (NCC: Normalized Cross Correlation, Phase Congruency-NCC, and MI: Mutual Information) and one non-rigid method (GeFolki: Geoscience extended Flow Optical flow Lucas-Kanade Iterative) were compared. Results were evaluated using RMSE (Root Mean Square Error) and the FSIM (Feature Similarity) index; all rigid models performed poorly in every image combination, showing large registration errors in rugged terrain. The GeFolki algorithm achieved the best RMSE, at 1-3 pixels, and an FSIM index 0.02-0.03 higher than the rigid methods. This confirms that mis-registration due to terrain effects can be substantially reduced by the GeFolki algorithm.
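Of the rigid similarity measures compared in this abstract, mutual information (MI) can be sketched from joint and marginal histograms of two aligned patches. A minimal version over discretized (binned) pixel lists; the binning and names are illustrative, not the study's implementation:

```python
import math
from collections import Counter

def mutual_information(a, b):
    """Histogram-based mutual information (in bits) between two
    equal-length lists of discretized pixel values. MI is high when the
    joint distribution is far from the product of the marginals, i.e.
    when the patches are well registered."""
    n = len(a)
    pa, pb = Counter(a), Counter(b)
    pab = Counter(zip(a, b))
    mi = 0.0
    for (x, y), c in pab.items():
        pxy = c / n
        # p(x,y) * log2( p(x,y) / (p(x) * p(y)) )
        mi += pxy * math.log2(pxy * n * n / (pa[x] * pb[y]))
    return mi
```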

A study of artificial neural network for in-situ air temperature mapping using satellite data in urban area (위성 정보를 활용한 도심 지역 기온자료 지도화를 위한 인공신경망 적용 연구)

  • Jeon, Hyunho;Jeong, Jaehwan;Cho, Seongkeun;Choi, Minha
    • Journal of Korea Water Resources Association / v.55 no.11 / pp.855-863 / 2022
  • In this study, an Artificial Neural Network (ANN) was used to map air temperature in Seoul, with MODerate resolution Imaging Spectroradiometer (MODIS) data as auxiliary input. To optimize the network topology, scatterplots and statistical analyses were examined, and the input data were classified and combined from variables highly correlated with air temperature: surface temperature, Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), time (satellite observation time, day of year), location (latitude, longitude), and data quality (cloudiness). When the model was trained only on the variables most correlated with air temperature, the average correlation coefficient (r) and Root Mean Squared Error (RMSE) were 0.967 and 2.708°C. Performance improved as additional variables were added; with all variables, the averages were r = 0.984 and RMSE = 1.883°C, the best performance. In the resulting Seoul air temperature map, temperature was calculated appropriately for each pixel's topographic characteristics, and expanding the study area and diversifying the satellite data should enable analysis of air temperature distributions at the city and national levels.

An Estimation of the Composite Sea Surface Temperature using COMS and Polar Orbit Satellites Data in Northwest Pacific Ocean (천리안 위성과 극궤도 위성 자료를 이용한 북서태평양 해역의 합성 해수면온도 산출)

  • Kim, Tae-Myung;Chung, Sung-Rae;Chung, Chu-Yong;Baek, Seonkyun
    • Korean Journal of Remote Sensing / v.33 no.3 / pp.275-285 / 2017
  • The National Meteorological Satellite Center (NMSC) has produced Sea Surface Temperature (SST) from Communication, Ocean, and Meteorological Satellite (COMS) data since April 2011. In this study, we developed a new regional COMS SST algorithm optimized for the Northwest Pacific Ocean, based on the Multi-Channel SST (MCSST) method, and produced a composite SST from polar-orbiting satellites as well as COMS data. To retrieve an optimized SST for the Northwest Pacific, COMS data were collocated with in-situ buoy data to derive the MCSST coefficients, using a new cloud mask that removes contaminated pixels together with quality-control procedures for the buoy data. The composite SST was then estimated with the optimal interpolation method developed by the National Institute of Meteorological Sciences (NIMS), using SST data from four satellites: COMS, NOAA-18/19 (National Oceanic and Atmospheric Administration-18/19), and GCOM-W1 (Global Change Observation Mission-Water 1). The root mean square error of the composite SST for the period July 2012 to June 2013 was 0.95°C in comparison with in-situ buoy data.
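The MCSST method referenced here is conventionally a split-window linear regression on two infrared brightness temperatures. A generic sketch of that functional form; the coefficients below are placeholders, since the paper derives its own via collocation of COMS data with buoy SSTs:

```python
def mcsst(t11, t12, sec_theta, a=1.0, b=2.5, c=0.8, d=-280.0):
    """Generic split-window MCSST form (all temperatures in kelvin):

        SST = a*T11 + b*(T11 - T12) + c*(T11 - T12)*(sec(theta) - 1) + d

    T11, T12 are the ~11 and ~12 micron brightness temperatures and
    theta is the satellite zenith angle. Coefficients a..d are
    illustrative placeholders, fit by regression in practice.
    """
    dt = t11 - t12
    return a * t11 + b * dt + c * dt * (sec_theta - 1.0) + d
```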

Improvement and Validation of Convective Rainfall Rate Retrieved from Visible and Infrared Image Bands of the COMS Satellite (COMS 위성의 가시 및 적외 영상 채널로부터 복원된 대류운의 강우강도 향상과 검증)

  • Moon, Yun Seob;Lee, Kangyeol
    • Journal of the Korean earth science society / v.37 no.7 / pp.420-433 / 2016
  • The purpose of this study is to improve the 2-D and 3-D convective rainfall rate (CRR) calibration matrices using the brightness temperature of the infrared 10.8 μm channel (IR), the brightness temperature difference between the infrared 10.8 μm and water vapor 6.7 μm channels (IR-WV), and the normalized reflectance of the visible channel (VIS) from the COMS satellite, together with weather-radar rainfall rates, for 75 rainy days from April 22 to October 22, 2011 in Korea. Radar rainfall data for 24 rainy days in 2011 were used to validate the new 2-D and 3-D CRR calibration matrices for the Korean peninsula. The 2-D and 3-D calibration matrices provide basic and maximum CRR values (mm h⁻¹): the rain-probability matrix, computed from the numbers of rainy and non-rainy pixels in the associated 2-D (IR, IR-WV) and 3-D (IR, IR-WV, VIS) matrices, is multiplied by the mean and maximum rainfall-rate matrices, respectively, where the mean matrix is the accumulated rainfall rate divided by the number of rainy pixels and the maximum matrix is based on the maximum rain rate over the calibration period and the number of rain occurrences. The new 2-D and 3-D CRR calibration matrices were then obtained experimentally from regression analysis of the basic and maximum rainfall-rate matrices. As a result, the area with rainfall rates above 10 mm/h is enlarged in the new matrices, and CRR appears at lower class ranges of the IR versus IR-WV brightness temperature difference matrices than in the existing ones. Accuracy and categorical statistics were computed for CRR events during the validation period. The mean error (ME), mean absolute error (MAE), and root mean square error (RMSE) of the new 2-D and 3-D CRR calibrations were smaller than those of the existing ones: the false alarm ratio decreased, the probability of detection increased slightly, and critical success index scores improved. To account for strong rainfall in events such as thunderstorms and typhoons, a moisture correction factor was applied, defined as the product of the total precipitable water and the relative humidity (PW·RH), averaged between the surface and the 500 hPa level, obtained from a numerical model or from COMS retrievals. When the IR cloud-top brightness temperature is lower than 210 K and the relative humidity is greater than 40%, the moisture correction factor is empirically scaled from 1.0 to 2.0 according to the PW·RH value. With this factor applied to the new 2-D and 3-D CRR calibrations, the ME, MAE, and RMSE were smaller still.
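The moisture-correction step in this abstract can be sketched as a clamped linear scaling of the CRR multiplier. The 210 K and 40% thresholds come from the abstract; the linear mapping of PW·RH onto [1.0, 2.0] and its endpoints are assumptions for illustration:

```python
def moisture_correction(pw_rh, cloud_top_k, rh_percent,
                        lo=1.0, hi=2.0, pwrh_min=0.0, pwrh_max=40.0):
    """Moisture correction factor for CRR.

    Applied only when the IR cloud-top brightness temperature is below
    210 K and relative humidity exceeds 40%; otherwise the factor is 1.0
    (no correction). Within the active regime the factor is scaled
    linearly with PW*RH and clamped to [lo, hi]. The pwrh_min/pwrh_max
    scaling range is an illustrative assumption.
    """
    if cloud_top_k >= 210.0 or rh_percent <= 40.0:
        return 1.0
    frac = (pw_rh - pwrh_min) / (pwrh_max - pwrh_min)
    frac = min(max(frac, 0.0), 1.0)  # clamp to [0, 1]
    return lo + (hi - lo) * frac
```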