• Title/Summary/Keyword: image processing technique

THE EFFECT OF GOLD ELECTROFORMING PROCEDURE ON GOLD-SILVER-PALLADIUM ALLOY

  • Hwang, Bo-Yeon;Kim, Chang-Whe;Lim, Young-Jun;Kim, Myung-Joo
    • The Journal of Korean Academy of Prosthodontics / v.45 no.3 / pp.303-309 / 2007
  • Statement of problem. The effect of gold electroforming on gold alloy has not been studied. Purpose. This in vitro study investigated the effect of gold electroforming on a gold-silver-palladium alloy. Material and methods. Three gold strips underwent the electroforming procedure on one side, and then half of that side was electroformed again, so that half of each strip carried a double coating and the other half a single coating. The set mode for this study was program 1 (200 µm), and the processing time was 15 min (1/20 of the time needed to form a 200 µm coping). A confocal laser scanning microscope (PASCAL 5, Carl Zeiss, Bernried, Germany) was used to measure the thickness of the pure gold layer electroformed on the strips. The confocal data were processed to obtain the vertical profile of each strip, and the difference in vertical height between the double-coated and single-coated halves was taken as the thickness of one gold coating. The layer thickness for building the 3D confocal image was set to 0.5 µm. After the thickness measurement, a Vickers hardness test was performed three times each on the double-coated surface, the single-coated surface, and the non-coated surface (back side). Results. The mean coating thickness obtained by gold electroforming was 22 µm for sample 1, 23 µm for sample 2, and 21 µm for sample 3; under the same conditions of time, power, and amount of electrolyte, the samples showed no difference. According to the analysis of variance, the differences among the numbers of coatings were statistically insignificant (p>0.05), meaning that applying the electroformed gold coating twice did not change the hardness of the gold-silver-palladium alloy. Conclusion. The thickness test confirmed the consistency of the gold electroforming procedure: when the power, exposed surface area, processing time, and amount of electrolyte are kept the same, the same thickness of gold is deposited. The hardness test showed that the electroformed gold coating did not change the hardness of the gold-silver-palladium alloy when the coating was no more than 45 µm thick.
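
Below is a minimal sketch, not the authors' code, of the two measurements this abstract describes: taking the coating thickness as the height difference between the double-coated and single-coated halves of a confocal vertical profile, and running a one-way ANOVA on Vickers hardness readings. All numeric values are illustrative placeholders.

```python
import numpy as np
from scipy import stats

def coating_thickness(profile_um, split_idx):
    """One coating's thickness = median height of the double-coated half
    minus median height of the single-coated half of the vertical profile."""
    return (np.median(profile_um[:split_idx])
            - np.median(profile_um[split_idx:]))

# Hypothetical confocal vertical profile across one strip (heights in um).
rng = np.random.default_rng(1)
profile = np.concatenate([rng.normal(44.0, 0.5, 200),   # double-coated half
                          rng.normal(22.0, 0.5, 200)])  # single-coated half
print(f"coating thickness ~ {coating_thickness(profile, 200):.1f} um")

# Hypothetical Vickers hardness readings, three per surface as in the study.
hv_double, hv_single, hv_bare = [182, 185, 181], [183, 184, 182], [184, 182, 183]
f_stat, p_value = stats.f_oneway(hv_double, hv_single, hv_bare)
print(f"ANOVA p = {p_value:.3f} (p > 0.05 -> hardness unchanged)")
```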

Multiscale Finite Element Analysis of Needle-Punched C/SiC Composites through Subcell Modeling (서브셀 모델링을 통한 니들 펀치 C/SiC 복합재료의 멀티스케일 유한요소해석)

  • Lim, Hyoung Jun;Choi, Ho-Il;Lee, Min-Jung;Yun, Gun Jin
    • Journal of the Computational Structural Engineering Institute of Korea / v.34 no.1 / pp.51-58 / 2021
  • In this paper, a multi-scale finite element (FE) modeling methodology for three-dimensional (3D) needle-punched (NP) C/SiC with a complex microstructure is presented. The variations in material properties induced by the needle-punching process, together with complex geometrical features, pose challenges when estimating the material behavior. To account for these features, a 3D microscopic FE approach based on micro-CT imaging is introduced to produce a 3D high-fidelity FE model. Image processing techniques are applied to the micro-CT data to generate discrete gray-level images and reconstruct the high-fidelity model. Furthermore, a subcell modeling technique is developed for the 3D NP C/SiC based on the high-fidelity FE model so that the analysis can be extended to macro-scale structural problems. A numerical homogenization approach under periodic boundary conditions (PBCs) is employed to estimate the equivalent behavior of the high-fidelity model and the effective properties of the subcell components, considering geometric continuity effects. For verification, the proposed models agree closely with experimental results for tensile, shear, and bending behavior under static loading conditions.
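
As a rough illustration of the micro-CT image processing step mentioned above, the sketch below thresholds a stack of gray-level CT slices into discrete phase labels (void/matrix/fiber) from which a high-fidelity voxel FE model could be built. File names and threshold values are assumptions, not the paper's.

```python
import numpy as np
import imageio.v3 as iio

VOID, MATRIX, FIBER = 0, 1, 2

def segment_ct_stack(paths, t_void=60, t_matrix=160):
    """Read gray-scale CT slices and label each voxel by gray level."""
    stack = np.stack([iio.imread(p) for p in paths])   # (slices, H, W)
    labels = np.full(stack.shape, MATRIX, dtype=np.uint8)
    labels[stack < t_void] = VOID          # dark voxels -> porosity
    labels[stack >= t_matrix] = FIBER      # bright voxels -> fiber bundles
    return labels

# volume = segment_ct_stack([f"slice_{i:04d}.png" for i in range(300)])
# Each labeled voxel can then be mapped to one hexahedral element whose
# material assignment (void/matrix/fiber) feeds the subcell model.
```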

Comparison of Seismic Data Interpolation Performance using U-Net and cWGAN (U-Net과 cWGAN을 이용한 탄성파 탐사 자료 보간 성능 평가)

  • Yu, Jiyun;Yoon, Daeung
    • Geophysics and Geophysical Exploration / v.25 no.3 / pp.140-161 / 2022
  • Seismic data with missing traces are often acquired regularly or irregularly because of environmental and economic constraints. Seismic data interpolation is therefore an essential step in seismic data processing. Recently, research on machine-learning-based seismic data interpolation has been flourishing. In particular, the convolutional neural network (CNN) and the generative adversarial network (GAN), widely used for super-resolution problems in the image processing field, are also used for seismic data interpolation. In this study, a CNN-based algorithm, U-Net, and a GAN-based algorithm, the conditional Wasserstein GAN (cWGAN), were used as seismic data interpolation methods. The results and performance of the methods were evaluated thoroughly to find an optimal interpolation method that reconstructs missing seismic data with high accuracy. The work process for model training and performance evaluation was divided into two cases (Cases I and II). In Case I, we trained the model using only regularly sampled data with 50% missing traces, and evaluated it by applying the trained model to six test datasets combining regular and irregular sampling patterns with different sampling ratios. In Case II, six models were trained, one on each dataset sampled in the same way as the six test datasets, and applied to the same test datasets used in Case I for comparison. We found that cWGAN showed better prediction performance than U-Net, with higher PSNR and SSIM. However, cWGAN introduced additional noise into the prediction results, so an ensemble technique was applied to remove the noise and improve the accuracy. The cWGAN ensemble model successfully removed the noise and showed improved PSNR and SSIM compared with the individual models.
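
The sketch below shows how the two quality metrics used in this study, PSNR and SSIM, can be computed for an interpolated gather against the original; the regular 50% trace decimation mirrors the Case I setup. The arrays are stand-ins, not the authors' data or network output.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def psnr(reference, estimate):
    """Peak signal-to-noise ratio in dB, using the reference's peak amplitude."""
    mse = np.mean((reference - estimate) ** 2)
    peak = np.max(np.abs(reference))
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
clean = rng.standard_normal((128, 256))            # stand-in for a full gather
decimated = clean.copy()
decimated[::2, :] = 0.0                            # regular 50% missing traces
restored = decimated + 0.5 * (clean - decimated)   # stand-in for a network output

print(f"PSNR = {psnr(clean, restored):.2f} dB")
print(f"SSIM = {ssim(clean, restored, data_range=clean.max() - clean.min()):.3f}")
```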

D4AR - A 4-DIMENSIONAL AUGMENTED REALITY - MODEL FOR AUTOMATION AND VISUALIZATION OF CONSTRUCTION PROGRESS MONITORING

  • Mani Golparvar-Fard;Feniosky Pena-Mora
    • International conference on construction engineering and project management / 2009.05a / pp.30-31 / 2009
  • Early detection of schedule delay in field construction activities is vital to project management. It provides the opportunity to initiate remedial actions and increases the chance of controlling such overruns or minimizing their impacts. This requires project managers to design, implement, and maintain a systematic approach for progress monitoring that promptly identifies, processes, and communicates discrepancies between actual and as-planned performance as early as possible. Despite its importance, systematic implementation of progress monitoring is challenging: (1) current progress monitoring is time-consuming, as it needs extensive as-planned and as-built data collection; (2) the excessive amount of work required may cause human error, reduce the quality of manually collected data, and, since only an approximate visual inspection is usually performed, make the collected data subjective; (3) existing methods of progress monitoring are non-systematic and may create a time lag between when progress is reported and when it is actually accomplished; (4) progress reports are visually complex and do not reflect the spatial aspects of construction; and (5) current reporting methods increase the time required to describe and explain progress in coordination meetings, which in turn can delay the decision-making process. In summary, with current methods it may not be easy to understand the progress situation clearly and quickly. To overcome such inefficiencies, this research explores the application of unsorted daily progress photograph logs, available on any construction site, together with IFC-based 4D models for progress monitoring. Our approach computes, from the images themselves, the photographers' locations and orientations along with a sparse 3D geometric representation of the as-built scene, and superimposes the reconstructed scene over the as-planned 4D model. Within such an environment, progress photographs are registered in the virtual as-planned environment, allowing a large unstructured collection of daily construction images to be explored interactively. In addition, the sparse reconstructed scenes superimposed over 4D models allow site images to be geo-registered with the as-planned components; consequently, a location-based image processing technique can be implemented and progress data extracted automatically. The results of the progress comparison between as-planned and as-built performance can subsequently be visualized in the D4AR (4D Augmented Reality) environment using a traffic-light metaphor. In such an environment, project participants can: 1) use the 4D as-planned model as a baseline for progress monitoring, compare it to daily construction photographs, and study workspace logistics; 2) interactively and remotely explore registered construction photographs in a 3D environment; 3) analyze registered images and quantify as-built progress; 4) measure discrepancies between as-planned and as-built performance; and 5) visually represent progress discrepancies by superimposing 4D as-planned models over progress photographs, make control decisions, and effectively communicate them to project participants.
We present our preliminary results on two ongoing construction projects and discuss implementation, perceived benefits, and potential future enhancements of this new technology in construction, on all fronts of automatic data collection, processing, and communication.
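
As a rough illustration of the traffic-light metaphor described above, the sketch below assigns a display color to one building element from its as-planned and as-built progress; the tolerance thresholds are hypothetical choices, not values from the paper.

```python
def traffic_light(as_planned: float, as_built: float, tol: float = 0.05) -> str:
    """Return a display color for one IFC element.

    as_planned, as_built: fraction complete in [0, 1] at the photo date.
    """
    deviation = as_built - as_planned
    if deviation >= -tol:
        return "green"    # on schedule or ahead
    elif deviation >= -3 * tol:
        return "yellow"   # slightly behind
    return "red"          # significantly behind schedule

# e.g. a column planned at 80% complete but measured at 55% from registered
# site photographs would render red in the superimposed 4D model:
print(traffic_light(0.80, 0.55))   # -> red
```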

Development of real-time defect detection technology for water distribution and sewerage networks (시나리오 기반 상·하수도 관로의 실시간 결함검출 기술 개발)

  • Park, Dong Chae;Choi, Young Hwan
    • Journal of Korea Water Resources Association / v.55 no.spc1 / pp.1177-1185 / 2022
  • The water and sewerage system is an infrastructure that provides safe and clean water to people. Because water and sewer pipelines are buried underground, detecting system defects is very difficult. For this reason, pipeline diagnosis has been limited to after-the-fact defect detection, such as diagnosis based on pictures and videos taken by cameras and drones inside the pipelines; real-time detection technology is therefore required. Recently, pipeline diagnosis technology using advanced equipment and artificial intelligence techniques has been developed, but AI-based defect detection requires diverse training data because the types and numbers of defect samples affect detection performance. In this study, therefore, various defect scenarios are implemented using 3D-printed models to improve detection performance. The collected images then undergo pre-processing, such as classification according to the degree of risk and labeling of objects, and real-time defect detection is performed. The proposed technique can provide real-time feedback during the pipeline defect detection process, minimizing the possibility of missed diagnoses and improving existing water and sewer pipe diagnosis capability.
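
The sketch below illustrates, with a placeholder detector, the kind of real-time loop this abstract describes: frames from an in-pipe inspection video are processed as they arrive and detections are overlaid for immediate operator feedback. The model, file name, and drawing details are assumptions, not the authors' implementation.

```python
import cv2

def detect_defects(frame):
    """Placeholder: return a list of (label, confidence, bbox) detections.
    A trained detector (e.g. a CNN) would be invoked here."""
    return []

cap = cv2.VideoCapture("pipeline_inspection.mp4")   # hypothetical file name
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for label, conf, (x, y, w, h) in detect_defects(frame):
        # Immediate feedback: draw the detection so the operator can react
        # during the inspection run instead of reviewing footage afterwards.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
        cv2.putText(frame, f"{label} {conf:.2f}", (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 255), 1)
    cv2.imshow("inspection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```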

Automatic Detection of Type II Solar Radio Burst by Using 1-D Convolution Neural Network

  • Kyung-Suk Cho;Junyoung Kim;Rok-Soon Kim;Eunsu Park;Yuki Kubo;Kazumasa Iwai
    • Journal of The Korean Astronomical Society / v.56 no.2 / pp.213-224 / 2023
  • Type II solar radio bursts show frequency drifts from high to low over time. They are known as a signature of coronal shocks associated with Coronal Mass Ejections (CMEs) and/or flares, which cause abrupt changes in the space environment near the Earth (space weather). Therefore, early detection of type II bursts is important for space weather forecasting. In this study, we develop a deep-learning (DL) model for the automatic detection of type II bursts. For this purpose, we adopted a 1-D Convolution Neural Network (CNN), as it is well suited for processing the spatiotemporal information in the applied data set. We utilized a total of 286 radio burst spectrum images obtained by the Hiraiso Radio Spectrograph (HiRAS) from 1991 to 2012, along with 231 spectrum images without bursts from 2009 to 2015, to recognize type II bursts. The burst types were labeled manually according to their spectral features in an answer table. Subsequently, we applied the 1-D CNN technique to the spectrum images using two filter windows of different sizes along the time axis. To develop the DL model, we randomly selected 412 spectrum images (80%) for training and validation. The training history shows that both training and validation losses dropped rapidly, while training and validation accuracies increased, within approximately 100 epochs. To evaluate the model's performance, we used 105 test images (20%) and employed a contingency table; the false alarm ratio (FAR) and critical success index (CSI) were 0.14 and 0.83, respectively. Furthermore, we confirmed this result with five-fold cross-validation on five randomly re-sampled groups, for which the estimated mean FAR and CSI were 0.05 and 0.87, respectively. For experimental purposes, we applied the proposed model to 85 HiRAS type II radio bursts listed in the NGDC catalogue from 2009 to 2016 and 184 quiet (no-burst) spectrum images before and after the type II bursts; the model successfully detected 79 (93%) of the type II events. These results demonstrate, for the first time, that the 1-D CNN algorithm is useful for detecting type II bursts.
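
For reference, the contingency-table skill scores quoted above can be computed as in the sketch below; the counts are illustrative values chosen to land near the reported FAR and CSI.

```python
# For event detection with hits (TP), false alarms (FP), and misses (FN):
#   FAR = FP / (TP + FP)          false alarm ratio
#   CSI = TP / (TP + FP + FN)     critical success index
def far_csi(tp: int, fp: int, fn: int) -> tuple[float, float]:
    far = fp / (tp + fp)
    csi = tp / (tp + fp + fn)
    return far, csi

far, csi = far_csi(tp=83, fp=14, fn=3)   # illustrative counts
print(f"FAR = {far:.2f}, CSI = {csi:.2f}")   # ~0.14 and ~0.83
```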

Evaluation of the correlation between the muscle fat ratio of pork belly and pork shoulder butt using computed tomography scan

  • Sheena Kim;Jeongin Choi;Eun Sol Kim;Gi Beom Keum;Hyunok Doo;Jinok Kwak;Sumin Ryu;Yejin Choi;Sriniwas Pandey;Na Rae Lee;Juyoun Kang;Yujung Lee;Dongjun Kim;Kuk-Hwan Seol;Sun Moon Kang;In-Seon Bae;Soo-Hyun Cho;Hyo Jung Kwon;Samooel Jung;Youngwon Lee;Hyeun Bum Kim
    • Korean Journal of Agricultural Science / v.50 no.4 / pp.809-815 / 2023
  • This study was conducted to find the correlation between meat quality and muscle-fat ratio in pork cuts (pork belly and shoulder butt) using computed tomography (CT) imaging. Twenty-four hours after slaughter, pork belly and shoulder butt were individually prepared from the left half-carcasses of 26 pigs for CT measurement. The images obtained from the CT scans were checked through the picture archiving and communications system (PACS). The volumes of muscle and fat in the cross-sectional CT images of the pork belly and shoulder butt were estimated using Vitrea workstation version 7, whose post-processing software automatically calculated the volumes (Fig. 1). The volumes were measured in milliliters (mL). In addition to volume calculation, a three-dimensional reconstruction of the region under consideration was generated. Pearson's correlation coefficient was calculated to evaluate the relationship between the regions (pork belly, pork shoulder butt), and statistical processing was performed using GraphPad Prism 8. The muscle-to-fat ratio of pork belly measured by CT was 1 : 0.86, while that of pork shoulder butt was 1 : 0.37. The correlation coefficient between the muscle-fat ratios of pork belly and shoulder butt was 0.5679 (R² = 0.3295, p < 0.01). CT imaging provided very good estimates of muscle content in the cuts and in the whole carcass.
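
A minimal sketch of the correlation analysis follows, using SciPy's Pearson test in place of GraphPad Prism; the per-pig ratio arrays are illustrative placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats

# Hypothetical fat-to-muscle ratios, one pair per pig.
belly_ratio = np.array([0.80, 0.92, 0.85, 0.78, 0.95, 0.88])
shoulder_ratio = np.array([0.35, 0.41, 0.36, 0.33, 0.42, 0.38])

r, p = stats.pearsonr(belly_ratio, shoulder_ratio)
print(f"r = {r:.4f}, R^2 = {r**2:.4f}, p = {p:.4g}")   # R^2 is simply r squared
```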

Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network (CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식)

  • Guiyoung Son;Soonil Kwon
    • The Transactions of the Korea Information Processing Society / v.13 no.6 / pp.284-290 / 2024
  • Speech emotion recognition (SER) is a technique used to analyze a speaker's voice patterns, including vibration, intensity, and tone, to determine their emotional state. Interest in artificial intelligence (AI) techniques has been growing, and they are now widely used in medicine, education, industry, and the military. Nevertheless, existing research has attained impressive results mostly by using acted speech from skilled actors recorded in controlled environments for various scenarios. There is a mismatch between acted and spontaneous speech, since acted speech contains more explicit emotional expression than spontaneous speech; for this reason, spontaneous speech emotion recognition remains a challenging task. This paper aims to perform emotion recognition and improve performance using spontaneous speech data. To this end, we implement deep-learning-based speech emotion recognition using the VGG (Visual Geometry Group) network after converting 1-dimensional audio signals into 2-dimensional spectrogram images. The experimental evaluations are performed on the Korean spontaneous emotional speech database from AI-Hub, consisting of 7 emotions: joy, love, anger, fear, sadness, surprise, and neutral. As a result, we achieved average accuracies of 83.5% for adults and 73.0% for young people using a time-frequency 2-dimensional spectrogram. In conclusion, our findings demonstrate that the suggested framework outperforms current state-of-the-art techniques for spontaneous speech and shows promising performance despite the difficulty of quantifying spontaneous emotional expression.
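
The sketch below illustrates the front end this abstract describes, converting a 1-D speech signal into a 2-D log-mel spectrogram that a VGG-style CNN can consume; the signal, sampling rate, and mel parameters are assumptions, and the channel replication is one common adaptation rather than the paper's exact pipeline.

```python
import numpy as np
import librosa

# Stand-in signal; a real utterance would come from e.g.
# librosa.load("utterance.wav", sr=16000) (hypothetical path).
sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 220.0 * t)

mel = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=512,
                                     hop_length=160, n_mels=128)
log_mel = librosa.power_to_db(mel, ref=np.max)          # 2-D time-frequency image

# Scale to [0, 1] and replicate to 3 channels so a VGG network expecting RGB
# input can consume the spectrogram.
img = (log_mel - log_mel.min()) / (log_mel.max() - log_mel.min())
vgg_input = np.repeat(img[np.newaxis, ...], 3, axis=0)  # (3, n_mels, frames)
print(vgg_input.shape)
```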

A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems (방출단층촬영 시스템을 위한 GPU 기반 반복적 기댓값 최대화 재구성 알고리즘 연구)

  • Ha, Woo-Seok;Kim, Soo-Mee;Park, Min-Jae;Lee, Dong-Soo;Lee, Jae-Sung
    • Nuclear Medicine and Molecular Imaging / v.43 no.5 / pp.459-467 / 2009
  • Purpose: The maximum likelihood-expectation maximization (ML-EM) algorithm is a statistical reconstruction method derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Materials and Methods: Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), NVIDIA's parallel computing technology, the projection and backprojection steps of the ML-EM algorithm were parallelized. The time spent per iteration on projection, on computing the errors between measured and estimated data, and on backprojection was measured. Total time included the latency of data transmission between RAM and GPU memory. Results: The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case the computing speed improved about 15-fold on the GPU. When the number of iterations increased to 1024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. The improvement was about 135-fold and was caused by growing delays in CPU-based computing after a certain number of iterations. In contrast, the GPU-based computation showed very small variation in time delay per iteration thanks to the use of shared memory. Conclusion: GPU-based parallel computation significantly improved the computing speed and stability of ML-EM. The developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries.
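
For concreteness, the multiplicative ML-EM update this abstract refers to is sketched below in NumPy; the system matrix is a toy example, and a GPU version would swap the array backend (e.g. for CuPy, which mirrors the NumPy API) rather than reproduce the authors' CUDA kernels.

```python
# One ML-EM iteration. With system matrix A (detector bins x image voxels),
# measured counts y, and current estimate lam, the update is
#   lam <- lam / sum_i A_ij * A^T ( y / (A lam) ),
# i.e. forward-project, compare to the measurement, back-project the ratio.
import numpy as np

def ml_em(A, y, n_iter=32, eps=1e-12):
    lam = np.ones(A.shape[1])              # uniform initial image
    sens = A.sum(axis=0) + eps             # per-voxel sensitivity, sum_i A_ij
    for _ in range(n_iter):
        proj = A @ lam + eps               # forward projection
        ratio = y / proj                   # measured / estimated counts
        lam *= (A.T @ ratio) / sens        # back-projected correction
    return lam

# Tiny synthetic check: recover a 2-voxel "image" from a 3-bin measurement.
A = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])
truth = np.array([4.0, 2.0])
print(ml_em(A, A @ truth, n_iter=200))     # -> approximately [4. 2.]
```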

Principal component analysis in C[11]-PIB imaging (주성분분석을 이용한 C[11]-PIB imaging 영상분석)

  • Kim, Nambeom;Shin, Gwi Soon;Ahn, Sung Min
    • The Korean Journal of Nuclear Medicine Technology / v.19 no.1 / pp.12-16 / 2015
  • Purpose Principal component analysis (PCA) is a method often used in neuroimage analysis as a multivariate technique for describing high-dimensional correlated structure in a lower-dimensional space. PCA is a statistical procedure that uses an orthogonal transformation to convert a set of observations of correlated variables into a set of values of linearly uncorrelated variables called principal components. In this study, to investigate the usefulness of PCA in brain PET image analysis, we analyzed C[11]-PIB PET images as a representative case. Materials and Methods Nineteen subjects were included in this study (normal = 9, AD/MCI = 10). C[11]-PIB PET scans were acquired for 20 min starting 40 min after intravenous injection of 9.6 MBq/kg C[11]-PIB. All emission recordings were acquired with the Biograph 6 Hi-Rez (Siemens-CTI, Knoxville, TN) in three-dimensional acquisition mode. The transmission map for attenuation correction was acquired from CT scans (130 kVp, 240 mA). Standardized uptake values (SUVs) of C[11]-PIB were calculated from PET/CT. In normal subjects, 3T MRI T1-weighted images were obtained to create a C[11]-PIB template. Spatial normalization and smoothing were conducted as pre-processing for PCA using SPM8, and PCA was conducted using Matlab2012b. Results Through PCA, we obtained linearly uncorrelated principal component images. The principal component images can reduce the variation of the whole set of C[11]-PIB images to several principal components, including the variation of the neocortex and white matter and the variation of deep brain structures such as the pons. Conclusion PCA is useful for analyzing and extracting the main pattern of C[11]-PIB images. As a multivariate method, PCA might also be useful for pattern recognition in neuroimages such as FDG-PET or fMRI, as well as C[11]-PIB images.
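
A minimal sketch of the PCA step follows, using scikit-learn instead of the study's MATLAB workflow: each subject's spatially normalized volume is flattened to a row, and the fitted components reshape back into principal-component images. Shapes and data are illustrative placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

n_subjects, shape = 19, (40, 48, 40)             # 19 subjects, illustrative grid
volumes = np.random.rand(n_subjects, *shape)     # stand-in for normalized PIB volumes

X = volumes.reshape(n_subjects, -1)              # subjects x voxels
pca = PCA(n_components=5)
scores = pca.fit_transform(X)                    # per-subject component scores

# Each component reshapes back into a 3-D "principal component image" whose
# spatial pattern (e.g. neocortex/white matter vs. pons) can be inspected.
pc_images = pca.components_.reshape(5, *shape)
print(pca.explained_variance_ratio_)
```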
