• Title/Summary/Keyword: Matrix image

Identification of Genes Connected with the Sensitivity to 5-FU and Cisplatin in Squamous Cell Carcinoma Cell Lines (편평세포암 세포주에서 5-FU와 Cisplatin에의 감수성과 관련된 유전자의 동정)

  • Choi, Na-Young;Kim, Ok-Joon;Lee, Geum-Sug;Kim, Byung-Gook;Kim, Jae-Hyeong;Jang, Youn-Young;Lim, Won-Bong;Chong, Min-A;Choi, Hong-Ran
    • Journal of Oral Medicine and Pain
    • /
    • v.30 no.3
    • /
    • pp.287-300
    • /
    • 2005
  • Squamous cell carcinomas (SCC) of the head and neck vary in their response to chemotherapy, even when they present with a similar histological tumor type, grade, and clinical stage. The purpose of the present study was to identify predictive biomarkers for sensitivity or resistance to the conventional chemotherapeutic agents 5-fluorouracil (5-FU) and cisplatin. Oral cancer cell lines were used. An MTT assay was performed to evaluate sensitivity and/or resistance to 5-FU and cisplatin, and RT-PCR was carried out to evaluate the mRNA expression of genes associated with mutation, inflammation (the COX pathway), the cell cycle, senescence, and the extracellular matrix (ECM). The molecules correlated with sensitivity to 5-FU were XPA, XPC, OGG, APEX, COX-2, PPAR, Cyclin E, Cyclin B1, CDC2, hTERT, hTR, TIMP-3, TIMP-4, and HSP47; those correlated with sensitivity to cisplatin were COX-1, iNOS, eNOS, PCNA, collagen 1, and MMP-9. Taken together, when choosing chemotherapeutic agents for cancer patients, considering the molecules that are correlated or inversely correlated with sensitivity can help select a reasonable agent.
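The study's core analysis is a correlation between per-gene mRNA expression and drug sensitivity across cell lines. A minimal sketch of that kind of analysis follows, in Python; the gene panel, expression values, and sensitivity scores are illustrative placeholders, not data from the paper.

```python
# Hypothetical sketch: correlating RT-PCR expression levels with
# MTT-derived drug sensitivity across cell lines. Values are illustrative.
import numpy as np
from scipy.stats import pearsonr

# Columns: oral SCC cell lines; values: normalized mRNA expression (RT-PCR)
expression = {
    "XPA":    np.array([0.8, 1.2, 0.5, 1.6, 0.9]),
    "COX-2":  np.array([1.1, 0.7, 1.4, 0.6, 1.0]),
    "TIMP-3": np.array([0.4, 0.9, 1.3, 0.8, 1.1]),
}
# 5-FU sensitivity per cell line, e.g. 1 - viability from the MTT assay
sensitivity_5fu = np.array([0.35, 0.55, 0.20, 0.70, 0.40])

for gene, expr in expression.items():
    r, p = pearsonr(expr, sensitivity_5fu)
    direction = "correlated" if r > 0 else "inversely correlated"
    print(f"{gene}: r={r:+.2f} (p={p:.3f}) -> {direction} with 5-FU sensitivity")
```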

Improvement and Validation of Convective Rainfall Rate Retrieved from Visible and Infrared Image Bands of the COMS Satellite (COMS 위성의 가시 및 적외 영상 채널로부터 복원된 대류운의 강우강도 향상과 검증)

  • Moon, Yun Seob;Lee, Kangyeol
    • Journal of the Korean earth science society
    • /
    • v.37 no.7
    • /
    • pp.420-433
    • /
    • 2016
  • The purpose of this study is to improve the calibration matrices for 2-D and 3-D convective rainfall rates (CRR) using the brightness temperature of the infrared $10.8{\mu}m$ channel (IR), the difference in brightness temperature between the infrared $10.8{\mu}m$ and water vapor $6.7{\mu}m$ channels (IR-WV), and the normalized reflectance of the visible channel (VIS) from the COMS satellite, together with rainfall rates from weather radar, for 75 rainy days from April 22, 2011 to October 22, 2011 in Korea. In particular, the weather radar rainfall data are used to validate the new 2-D and 3-D CRR calibration matrices, tuned for the Korean peninsula, over 24 rainy days in 2011. The 2-D and 3-D calibration matrices provide the basic and maximum CRR values ($mm\;h^{-1}$) by multiplying the rain probability matrix, computed from the numbers of rainy and no-rain pixels in the associated 2-D (IR, IR-WV) and 3-D (IR, IR-WV, VIS) matrices, by the mean and maximum rainfall rate matrices, respectively; the mean matrix is obtained by dividing the accumulated rainfall rate by the number of rainy pixels, and the maximum matrix from the product of the maximum rain rate for the calibration period and the number of rain occurrences. Finally, the new 2-D and 3-D CRR calibration matrices are obtained experimentally from regression analysis of both the basic and maximum rainfall rate matrices. As a result, the area with rainfall rates above 10 mm/h is enlarged in the new matrices, and CRR appears in lower class ranges of the IR brightness temperature versus IR-WV brightness temperature difference matrices than in the existing ones. Accuracy and categorical statistics were computed for the CRR events that occurred during the validation period. The mean error (ME), mean absolute error (MAE), and root mean square error (RMSE) of the new 2-D and 3-D CRR calibrations were smaller than those of the existing ones; the false alarm ratio decreased, the probability of detection increased slightly, and the critical success index improved. To account for the strong rainfall rates in weather events such as thunderstorms and typhoons, a moisture correction factor is applied. This factor is defined as the product of the total precipitable water and the relative humidity (PW RH), averaged between the surface and the 500 hPa level, obtained from a numerical model or COMS retrieval data. In this study, when the IR cloud-top brightness temperature is lower than 210 K and the relative humidity is greater than 40%, the moisture correction factor is empirically scaled from 1.0 to 2.0 based on the PW RH value. Consequently, when this factor is applied to the new 2-D and 3-D CRR calibrations, the ME, MAE, and RMSE become smaller still.
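A minimal sketch of the 2-D calibration step described above, under assumed bin counts: the rain probability matrix is formed from rainy and no-rain pixel counts per (IR, IR-WV) bin and multiplied by the mean rainfall rate matrix to give the basic CRR, optionally scaled by the moisture correction factor. The normalization constant in the correction is an assumption, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ir, n_irwv = 40, 30                        # bins of IR Tb and (IR - WV) Tb difference
rainy = rng.poisson(3.0, (n_ir, n_irwv)).astype(float)     # rainy-pixel counts per bin
no_rain = rng.poisson(10.0, (n_ir, n_irwv)).astype(float)  # no-rain counts per bin
accum_rr = rainy * rng.uniform(0.5, 8.0, (n_ir, n_irwv))   # accumulated rain rate (mm/h)

rain_prob = rainy / np.maximum(rainy + no_rain, 1.0)       # rain probability matrix
mean_rr = accum_rr / np.maximum(rainy, 1.0)                # mean rainfall rate matrix
basic_crr = rain_prob * mean_rr                            # basic CRR matrix (mm/h)

def moisture_factor(pw_rh, tb_ir_k, rh_pct, pw_rh_max=50.0):
    """Empirical correction scaled 1.0-2.0 for cold, moist cloud tops.
    pw_rh_max is an assumed normalization constant, not from the paper."""
    if tb_ir_k < 210.0 and rh_pct > 40.0:
        return 1.0 + min(pw_rh / pw_rh_max, 1.0)
    return 1.0

corrected = basic_crr * moisture_factor(pw_rh=35.0, tb_ir_k=205.0, rh_pct=60.0)
```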

Airborne Hyperspectral Imagery availability to estimate inland water quality parameter (수질 매개변수 추정에 있어서 항공 초분광영상의 가용성 고찰)

  • Kim, Tae-Woo;Shin, Han-Sup;Suh, Yong-Cheol
    • Korean Journal of Remote Sensing
    • /
    • v.30 no.1
    • /
    • pp.61-73
    • /
    • 2014
  • This study reviewed the use of Airborne Hyperspectral Imagery (A-HSI) for water quality estimation and tested the estimation of a water quality parameter (suspended solids) in a section of the Han River against available in-situ data. Water quality was estimated in two ways. One uses observation data, namely the downwelling radiance at the water surface and the scattering and reflectance within the water body; the other uses linear regression between in-situ water quality measurements and upwelling data, i.e., at-sensor radiance (or reflectance). Both methods yield meaningful remote sensing estimates, but they depend strongly on the auxiliary datasets of in-situ water quality and water-body scattering measurements. The test covered the section of the Han River downstream of Paldang Dam. We applied linear regression between AISA Eagle hyperspectral sensor data and in-situ water quality measurements. For the most meaningful band combination, the regression was $-24.847+0.013L_{560}$, with $L_{560}$ the radiance at 560 nm, with an R-square of 0.985. For comparison with Multispectral Imagery (MSI), we simulated Landsat TM by spectral resampling. The regression using MSI was -55.932 + 33.881 (TM1/TM3) in radiance, with an R-square of 0.968. The suspended solid (SS) concentration was about 3.75 mg/l in the in-situ data; the SS concentration estimated by A-HSI was about 3.65 mg/l, and by MSI about 5.85 mg/l at the same location, showing a tendency to overestimate when using MSI. To improve the practical value and precision of the estimation, sun glint over the whole image should be minimized, flight plans should account for the solar altitude angle, and a good pre-processing and calibration system is needed. Through the literature review and this test of generally adopted methods, we found some limitations and restrictions, including precise atmospheric correction, the number of water quality samples, the retrieval of spectral bands from A-HSI, selection of an adequate linear regression model, and quantitative calibration/validation methods.
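The two fitted regressions reported above can be applied directly. The sketch below encodes the coefficients from the abstract; the sample radiance inputs are illustrative.

```python
# Regressions reported in the abstract, applied to sample radiances.
def ss_from_hsi(L560):
    """Suspended solids (mg/l) from the 560 nm radiance band (R^2 = 0.985)."""
    return -24.847 + 0.013 * L560

def ss_from_msi(tm1, tm3):
    """Suspended solids (mg/l) from the simulated Landsat TM band ratio (R^2 = 0.968)."""
    return -55.932 + 33.881 * (tm1 / tm3)

print(f"{ss_from_hsi(2193.0):.2f} mg/l")       # ~3.66, near the in-situ 3.75 mg/l
print(f"{ss_from_msi(120.0, 65.6):.2f} mg/l")  # band ratio ~1.83 -> overestimates
```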

Alteration Analysis of Normal Human Brain Metabolites with Variation of SENSE and NEX in 3T Multi Voxel Spectroscopy (3T Multi Voxel Spectroscopy에서 SENSE와 NEX 변화에 따른 정상인 뇌 대사물질 변화 분석)

  • Seong, Yeol-Hun;Rhim, Jae-Dong;Lee, Jae-Hyun;Cho, Sung-Bong;Woo, Dong-Chul;Choe, Bo-Young
    • Progress in Medical Physics
    • /
    • v.19 no.4
    • /
    • pp.256-262
    • /
    • 2008
  • To evaluate the metabolic changes in normal adult brains due to alterations of SENSE and NEX (number of excitations) using multi-voxel MR spectroscopy at 3.0 Tesla. The study group consisted of normal volunteers (5 men and 8 women) with a mean ($\pm$ standard deviation) age of 41 (${\pm}11.65$) years, ranging from 28 to 61 years. MR spectroscopy was performed on a 3.0T Achieva Release Version 2.0 (Philips Medical Systems, Netherlands) with an 8-channel head coil. The 13 volunteers underwent multi-voxel spectroscopy (MVS) and single-voxel spectroscopy (SVS) over the thalamus, an area of normal gray matter. Spectral parameters were as follows: 15 mm slice thickness; 230 mm field of view (FOV); 2,000 ms repetition time (TR); 288 ms echo time (TE); $110{\times}110$ mm volume of interest (VOI); $15{\times}15{\times}15$ mm voxel size. The multi-voxel acquisitions varied the SENSE factor from 1 to 3 and NEX from 1 to 2. All MRS data were processed with jMRUI Version 3.0. There was no significant difference in the NAA/Cr and Cho/Cr ratios between MVS and SVS, consistent with the earlier results of Ross and coworkers in 1994. In addition, despite the alterations of SENSE factor and NEX in MVS, the metabolite ratios did not change (F-value: 1.37, d.f.: 3, P-value: 0.262). However, the line width of the NAA peak in MVS was three times larger than that in SVS. In the present study, we demonstrated that alterations of SENSE factor and NEX did not critically affect the metabolite ratios in normal brain tissue.
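The statistical comparison above is a one-way ANOVA over metabolite ratios acquired at different SENSE/NEX settings. A minimal sketch follows; the ratio values are synthetic stand-ins, and only the test structure (four groups, df = 3) mirrors the abstract, which reports F = 1.37, p = 0.262.

```python
# One-way ANOVA on NAA/Cr ratios across four acquisition settings.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
# Four hypothetical (SENSE, NEX) settings, 13 volunteers each
settings = {s: rng.normal(loc=1.45, scale=0.12, size=13) for s in
            ["SENSE1_NEX1", "SENSE2_NEX1", "SENSE3_NEX1", "SENSE2_NEX2"]}

f_stat, p_value = f_oneway(*settings.values())
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")  # p > 0.05 -> ratios unchanged
```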

The Accuracy Evaluation according to Dose Delivery Interruption and Restart for Volumetric Modulated Arc Therapy (용적변조회전 방사선치료에서 선량전달의 중단 및 재시작에 따른 정확성 평가)

  • Lee, Dong Hyung;Bae, Sun Myung;Kwak, Jung Won;Kang, Tae Young;Back, Geum Mun
    • The Journal of Korean Society for Radiation Therapy
    • /
    • v.25 no.1
    • /
    • pp.77-85
    • /
    • 2013
  • Purpose: Accurate gantry rotation, collimator movement, and correct application of the dose rate are very important for the successful delivery of Volumetric Modulated Arc Therapy (VMAT), because they are tightly interlocked with a complex treatment plan. The interruption and restart of dose delivery, however, can occur during treatment due to various factors related to the treatment machine and the treatment plan. If an unexpected problem with the treatment machine or the patient interrupts the VMAT, the machine must restart to deliver the remaining dose from the point of interruption. In this investigation, we examined the effect of interruption and restart on dose delivery in VMAT. Materials and Methods: Treatment plans of 10 patients who had been treated at our center were used to measure and compare the dose distribution of each VMAT delivery, after conversion to the Digital Imaging and Communications in Medicine (DICOM) format with the treatment planning system (Eclipse V 10.0, Varian, USA). We used the 6 MV photon energy of a Trilogy machine (Varian, USA) and the OmniPro I'mRT system (V 1.7b, IBA Dosimetry, Germany) to analyze the data, which were acquired with two types of interruption, four times for each case. A door interlock and a beam-off were used to stop and then restart the VMAT dose delivery. The gamma index in the OmniPro I'mRT system and the t-test in Microsoft Excel 2007 were used to evaluate the results. Results: The average gamma index deviations for the cases with door interlock, beam-off, and no interruption were 0.141, 0.128, and 0.1, respectively; the standard deviations of the acquired gamma values were 0.099, 0.091, and 0.071; and the maximum gamma values were 0.413, 0.379, and 0.286. The analysis has a 95-percent confidence level, and the P-value of the t-test is under 0.05. The gamma pass rate (3%, 3 mm) was acceptable in all measurements. Conclusion: From the measured data, we confirmed that the interruptions examined in this investigation do not seriously affect VMAT dose delivery. However, this investigation did not cover all possible interruptions and errors involving gantry rotation, collimator movement, and the patient, so the treatment machine and software should be continuously maintained to deliver an accurate dose when performing VMAT for the many kinds of cancer patients.
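The comparison above relies on the gamma index, which combines a dose-difference criterion (3%) and a distance-to-agreement criterion (3 mm). Below is a minimal 1-D global-gamma sketch in Python on synthetic profiles; the real analysis in the paper was done in 2-D with OmniPro I'mRT.

```python
import numpy as np

def gamma_1d(ref, eval_, dx_mm, dose_tol=0.03, dist_tol_mm=3.0):
    """Global gamma at each reference point against an evaluated profile."""
    x = np.arange(len(ref)) * dx_mm
    gammas = np.empty(len(ref))
    for i, (xi, di) in enumerate(zip(x, ref)):
        dose_term = (eval_ - di) / (dose_tol * ref.max())  # dose difference term
        dist_term = (x - xi) / dist_tol_mm                 # distance term
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gammas

ref = np.exp(-((np.arange(100) - 50) / 20.0) ** 2)   # reference dose profile
ev = ref * 1.01                                      # e.g. interrupted delivery
g = gamma_1d(ref, ev, dx_mm=1.0)
print(f"mean gamma = {g.mean():.3f}, pass rate = {(g <= 1).mean():.1%}")
```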

Validation of Extreme Rainfall Estimation in an Urban Area derived from Satellite Data : A Case Study on the Heavy Rainfall Event in July, 2011 (위성 자료를 이용한 도시지역 극치강우 모니터링: 2011년 7월 집중호우를 중심으로)

  • Yoon, Sun-Kwon;Park, Kyung-Won;Kim, Jong Pil;Jung, Il-Won
    • Journal of Korea Water Resources Association
    • /
    • v.47 no.4
    • /
    • pp.371-384
    • /
    • 2014
  • This study developed a new algorithm for extreme rainfall extraction based on Communication, Ocean and Meteorological Satellite (COMS) and Tropical Rainfall Measuring Mission (TRMM) satellite image data and evaluated its applicability for the heavy rainfall event of July 2011 in Seoul, South Korea. A power-series-regression-based Z-R relationship was employed to account for empirical relationships among TRMM/PR, TRMM/VIRS, COMS, and Automatic Weather System (AWS) data at each elevation. The estimated Z-R relationship ($Z=303R^{0.72}$) agreed well with AWS observations (correlation coefficient = 0.57). The 10-minute rainfall intensities estimated from the COMS satellite using the Z-R relationship were underestimated overall, while for small rainfall events the relationship tended to overestimate rainfall intensities. However, the overall patterns of the estimated rainfall were very comparable with the observed data. The correlation coefficient and Root Mean Square Error (RMSE) of the 10-minute rainfall series from COMS against AWS were 0.517 and 3.146, respectively. In addition, the averaged error of the spatial correlation matrix ranged from -0.530 to -0.228, indicating negative correlation. To reduce the error in extreme rainfall estimation from satellite datasets, more extreme factors should be taken into account and the algorithm improved in further studies. This study showed the potential utility of multi-geostationary satellite data for building sub-daily rainfall records and establishing real-time flood alert systems in ungauged watersheds.
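The reported relationship $Z=303R^{0.72}$ can be inverted to retrieve rain rate from radar reflectivity. A small sketch, using the standard dBZ-to-Z conversion; the sample input is illustrative.

```python
# Invert Z = a * R**b to retrieve rain rate R (mm/h) from reflectivity.
def rain_rate_from_dbz(dbz, a=303.0, b=0.72):
    z = 10.0 ** (dbz / 10.0)          # reflectivity factor (mm^6 m^-3)
    return (z / a) ** (1.0 / b)       # coefficients from the abstract

print(f"{rain_rate_from_dbz(30.0):.1f} mm/h")  # 30 dBZ -> ~5 mm/h
```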

An Estimation of Concentration of Asian Dust (PM10) Using WRF-SMOKE-CMAQ (MADRID) During Springtime in the Korean Peninsula (WRF-SMOKE-CMAQ(MADRID)을 이용한 한반도 봄철 황사(PM10)의 농도 추정)

  • Moon, Yun-Seob;Lim, Yun-Kyu;Lee, Kang-Yeol
    • Journal of the Korean earth science society
    • /
    • v.32 no.3
    • /
    • pp.276-293
    • /
    • 2011
  • In this study, a modeling system consisting of the Weather Research and Forecasting (WRF) model, Sparse Matrix Operator Kernel Emissions (SMOKE), the Community Multiscale Air Quality (CMAQ) model, and the CMAQ-Model of Aerosol Dynamics, Reaction, Ionization, and Dissolution (MADRID) was applied to estimate enhancements of $PM_{10}$ during Asian dust events in Korea. In particular, five experimental formulas were applied in the WRF-SMOKE-CMAQ (MADRID) model to estimate Asian dust emissions from the source regions of major Asian dust events in China and Mongolia: the US Environmental Protection Agency (EPA) model, the Goddard Global Ozone Chemistry Aerosol Radiation and Transport (GOCART) model, and the Dust Entrainment and Deposition (DEAD) model, as well as the formulas of Park and In (2003) and Wang et al. (2000). According to the weather map, backward trajectory, and satellite image analyses, Asian dust is generated by a strong downwind associated with the upper trough of a stagnation wave caused by development of the upper jet stream, and the transport of Asian dust to Korea shows up behind a surface front related to the cut-off low (the comma-shaped cloud in satellite images). In the WRF-SMOKE-CMAQ modeling of $PM_{10}$ concentration, Wang et al.'s experimental formula reproduced the temporal and spatial distribution of Asian dust well, and the GOCART model had low mean bias errors and root mean square errors. In the vertical profile analysis of Asian dust using Wang et al.'s formula, strong Asian dust with a concentration of more than $800\;{\mu}g/m^3$ during the period of March 31 to April 1, 2007 was transported under the boundary layer (about 1 km high), and weak Asian dust with a concentration of less than $400\;{\mu}g/m^3$ during the period of March 16-17, 2009 was transported above the boundary layer (about 1-3 km high). Furthermore, the difference in $PM_{10}$ concentration between the CMAQ model and the CMAQ-MADRID model for March 31 to April 1, 2007 was large over East Asia: the CMAQ-MADRID model showed concentrations about $25\;{\mu}g/m^3$ higher than the CMAQ model. In addition, the $PM_{10}$ concentration removed by the cloud liquid-phase mechanism within the CMAQ-MADRID model reached a maximum of $15\;{\mu}g/m^3$ over East Asia.
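Ranking the five emission formulas above comes down to simple verification statistics against observed $PM_{10}$. A sketch of the two statistics used here, with illustrative arrays standing in for modeled and observed series:

```python
# Mean bias error and RMSE of a modeled PM10 series against observations.
import numpy as np

def mean_bias_error(model, obs):
    return np.mean(model - obs)

def rmse(model, obs):
    return np.sqrt(np.mean((model - obs) ** 2))

obs = np.array([420.0, 650.0, 810.0, 380.0])     # observed PM10 (ug/m^3), illustrative
gocart = np.array([450.0, 600.0, 790.0, 410.0])  # e.g. a GOCART-driven run
print(mean_bias_error(gocart, obs), rmse(gocart, obs))
```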

Time Resolution Improvement of MRI Temperature Monitoring Using Keyhole Method (Keyhole 방법을 이용한 MR 온도감시영상의 시간해상도 향상기법)

  • Han, Yong-Hee;Kim, Tae-Hyung;Chun, Song-I;Kim, Dong-Hyeuk;Lee, Kwang-Sig;Eun, Choong-Ki;Jun, Jae-Ryang;Mun, Chi-Woong
    • Investigative Magnetic Resonance Imaging
    • /
    • v.13 no.1
    • /
    • pp.31-39
    • /
    • 2009
  • Purpose: This study proposes the keyhole method to improve the time resolution of proton resonance frequency (PRF) MR temperature monitoring. The root mean square (RMS) errors of the measured temperature values and the signal-to-noise ratios (SNR) obtained from keyhole and fully phase-encoded temperature images were compared. Materials and Methods: The PRF method combined with a GRE sequence was used to acquire MR temperature images on a clinical 1.5T MR scanner, using a tissue-mimicking 2% agarose gel phantom and swine hock tissue. An MR-compatible coaxial slot antenna driven by a microwave power generator at 2.45 GHz heated the object in the magnet bore for 5 minutes, followed by sequential acquisition of MR raw data during a 10-minute cooling period. The acquired raw data were transferred to a PC, and keyhole images were then reconstructed by taking the central part of the k-space data with 128, 64, 32, and 16 phase-encoding lines while the remaining peripheral parts were taken from the first reference raw data. The RMS errors were computed against the 256-line fully encoded self-reference temperature image, while the SNR values were compared with those of zero-filled images. Results: As the number of phase-encoding lines in the center of the keyhole temperature images decreased through 128, 64, 32, and 16, the RMS error of the measured temperature increased to 0.538, 0.712, 0.768, and 0.845$^{\circ}C$, while the SNR values were maintained as the keyhole size was reduced. Conclusion: This study shows that the keyhole technique can be successfully applied to temperature monitoring to increase the temporal resolution while keeping the matrix size standardized and maintaining SNR. In the future, we expect to implement real-time MR thermal imaging using the keyhole method, which can reduce the scan time with minimal variation in the measured temperature.
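A minimal sketch of the keyhole reconstruction described above: the central phase-encode lines of k-space are refreshed from the dynamic acquisition while the periphery is copied from the 256-line reference, and the image is recovered by an inverse FFT. The data here are synthetic; the array layout and the PRF constant in the closing comment are assumptions.

```python
import numpy as np

def keyhole_reconstruct(ref_kspace, dynamic_center, n_keyhole):
    """ref_kspace: (256, 256) reference; dynamic_center: (n_keyhole, 256)."""
    ny = ref_kspace.shape[0]
    lo = ny // 2 - n_keyhole // 2
    k = ref_kspace.copy()
    k[lo:lo + n_keyhole, :] = dynamic_center   # swap in the fresh center lines
    return np.fft.ifft2(np.fft.ifftshift(k))   # complex image

rng = np.random.default_rng(2)
ref = rng.standard_normal((256, 256)) + 1j * rng.standard_normal((256, 256))
center = ref[112:144, :] * np.exp(1j * 0.1)    # heating shows up as a phase shift
img = keyhole_reconstruct(ref, center, n_keyhole=32)
# PRF thermometry then maps the phase difference between dynamics to temperature:
# dT = dphi / (2*pi * gamma * alpha * B0 * TE), alpha ~ -0.01 ppm/degC (assumed here).
```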

Development of Dose Planning System for Brachytherapy with High Dose Rate Using Ir-192 Source (고선량률 강내조사선원을 이용한 근접조사선량계획전산화 개발)

  • Choi Tae Jin;Yei Ji Won;Kim Jin Hee;Kim OK;Lee Ho Joon;Han Hyun Soo
    • Radiation Oncology Journal
    • /
    • v.20 no.3
    • /
    • pp.283-293
    • /
    • 2002
  • Purpose: A PC-based brachytherapy planning system was developed to display dose distributions on simulation images as 2D isodose curves, together with dose profiles, dose-volume histograms, and 3D dose distributions. Materials and Methods: The brachytherapy dose planning software was developed specifically for the Ir-192 source that had been developed by KAERI as a substitute for the Co-60 source. The dose computation was achieved by looking up a pre-computed dose matrix tabulated as a function of radial and axial distance from the source. The computation included the effects of the tissue scattering correction factor and the anisotropic dose distribution. The computed dose distributions were displayed on 2D film images, including dose profiles, 3D isodose curves in wireframe form, and the dose-volume histogram. Results: The brachytherapy dose plan was initiated by obtaining the source positions on the principal plane of the source axis. The dose distributions in tissue were computed on a $200\times200\;(mm^2)$ plane with the source axis located at the center. The point doses along the longitudinal axis of the source were $4.5\~9.0\%$ smaller than those on the radial axis of the plane, due to the anisotropy created by the cylindrical shape of the source. Compared to manual calculation, the point doses showed $1\~5\%$ discrepancies from the benchmark plan. The 2D dose distributions on different planes were matched to the same administered isodose level in order to analyze the shape of the optimized dose level. The accumulated dose-volume histogram, displayed as a function of the percentage volume of the administered minimum dose level, was used to guide the volume analysis. Conclusion: This study evaluated the developed computerized brachytherapy dose planning system. The dose distribution was displayed on the coronal, sagittal, and axial planes together with the dose histogram. The accumulated DVH and 3D dose distributions provided by the developed system may be useful tools for dose analysis in comparison with orthogonal dose planning.
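The dose engine above is a table lookup: dose rate pre-computed over radial and axial distances from the source, then interpolated at each calculation point. A minimal sketch, with a placeholder inverse-square table rather than real Ir-192 data:

```python
# Table-lookup dose engine: pre-computed dose matrix over (radial, axial)
# distances, bilinearly interpolated. Table values are placeholders.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

radial_mm = np.linspace(1, 100, 100)      # distance from source axis
axial_mm = np.linspace(-100, 100, 201)    # distance along source axis
R, A = np.meshgrid(radial_mm, axial_mm, indexing="ij")
dose_table = 1.0 / (R**2 + A**2)          # placeholder inverse-square falloff;
                                          # a real table would fold in scatter
                                          # correction and source anisotropy
lookup = RegularGridInterpolator((radial_mm, axial_mm), dose_table)
print(lookup([[10.0, 0.0], [1.0, 50.0]]))  # relative dose rate at two points
```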

Label Embedding for Improving Classification Accuracy Using AutoEncoder with Skip-Connections (다중 레이블 분류의 정확도 향상을 위한 스킵 연결 오토인코더 기반 레이블 임베딩 방법론)

  • Kim, Museong;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.3
    • /
    • pp.175-197
    • /
    • 2021
  • Recently, with the development of deep learning technology, research on unstructured data analysis has been actively conducted, showing remarkable results in fields such as classification, summarization, and generation. Among text analysis tasks, text classification is the most widely used in academia and industry. Text classification includes binary classification with one label from two classes, multi-class classification with one label from several classes, and multi-label classification with multiple labels from several classes. Multi-label classification in particular requires a different training method from binary and multi-class classification because of its characteristic of having multiple labels. In addition, since the number of labels to be predicted grows with the number of labels and classes, performance improvement is difficult due to the increased prediction difficulty. To overcome these limitations, research on label embedding has been actively conducted: (i) the initially given high-dimensional label space is compressed into a low-dimensional latent label space, (ii) a model is trained to predict the compressed labels, and (iii) the predicted labels are restored to the high-dimensional original label space. Typical label embedding techniques include Principal Label Space Transformation (PLST), Multi-Label Classification via Boolean Matrix Decomposition (MLC-BMaD), and Bayesian Multi-Label Compressed Sensing (BML-CS). However, since these techniques consider only linear relationships between labels or compress the labels by random transformation, they struggle to capture non-linear relationships between labels, and thus cannot create a latent label space that sufficiently preserves the information of the original labels. Recently, there have been increasing attempts to improve performance by applying deep learning to label embedding. Label embedding using an autoencoder, a deep learning model effective for data compression and restoration, is representative. However, traditional autoencoder-based label embedding suffers large information loss when compressing a high-dimensional label space with a myriad of classes into a low-dimensional latent label space. This is related to the vanishing gradient problem that occurs during backpropagation. The skip connection was devised to solve this problem: by adding a layer's input to its output, gradients are preserved during backpropagation, so efficient learning is possible even in deep networks. Skip connections are mainly used for image feature extraction in convolutional neural networks, but studies using them in autoencoders or in the label embedding process are still lacking. Therefore, in this study, we propose an autoencoder-based label embedding methodology in which skip connections are added to both the encoder and the decoder to form a low-dimensional latent label space that reflects the information of the high-dimensional label space well. The proposed methodology was applied to actual paper keywords to derive a high-dimensional keyword label space and a low-dimensional latent label space. Using these, we conducted an experiment to predict the compressed keyword vector in the latent label space from the paper abstract and evaluated the multi-label classification by restoring the predicted keyword vector to the original label space. As a result, the accuracy, precision, recall, and F1 score used as performance indicators showed far superior performance for multi-label classification based on the proposed methodology compared to traditional multi-label classification methods. This suggests that the low-dimensional latent label space derived through the proposed methodology reflected the information of the high-dimensional label space well, which ultimately improved the performance of multi-label classification itself. In addition, the utility of the proposed methodology was confirmed by comparing its performance across domain characteristics and across different numbers of dimensions of the latent label space.
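A minimal PyTorch sketch of an autoencoder-based label embedding with an encoder-to-decoder skip connection, of the kind proposed above. The layer sizes, latent dimension, and label-space size are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class SkipLabelAutoencoder(nn.Module):
    def __init__(self, n_labels=1000, latent=64, hidden=256):  # assumed sizes
        super().__init__()
        self.enc1 = nn.Linear(n_labels, hidden)
        self.enc2 = nn.Linear(hidden, latent)
        self.dec1 = nn.Linear(latent, hidden)
        self.dec2 = nn.Linear(hidden, n_labels)
        self.act = nn.ReLU()

    def forward(self, y):
        h1 = self.act(self.enc1(y))
        z = self.enc2(h1)                       # low-dimensional latent labels
        h2 = self.act(self.dec1(z)) + h1        # skip connection: reuse encoder features
        return torch.sigmoid(self.dec2(h2)), z  # reconstructed multi-hot labels

model = SkipLabelAutoencoder()
y = (torch.rand(8, 1000) < 0.01).float()        # sparse multi-hot label vectors
y_hat, z = model(y)
loss = nn.functional.binary_cross_entropy(y_hat, y)
```

In this arrangement, the latent vector z would serve as the compressed target for the abstract-to-label predictor, and at inference the decoder restores predicted latent vectors to the original keyword label space.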