• Title/Summary/Keyword: Weighted Function


Pharmacological Functional Magnetic Resonance Imaging of Clopidogrel on Motor Task (운동과제에 대한 클로피도그렐의 약리적 뇌자기공명영상)

  • Chang, Yong-Min
    • Investigative Magnetic Resonance Imaging
    • /
    • v.16 no.2
    • /
    • pp.136-141
    • /
    • 2012
  • Purpose: To investigate the pharmacologic modulation of motor task-dependent physiologic responses by the antiplatelet agent clopidogrel during hand motor tasks in healthy subjects. Materials and Methods: Ten healthy, right-handed subjects underwent three functional magnetic resonance imaging (fMRI) sessions: one before drug administration, one after high-dose drug administration, and one after the drug reached steady state. For the motor task fMRI, finger flexion-extension movements were performed. Blood oxygenation level dependent (BOLD) contrast was collected for each subject using a 3.0 T VHi scanner (GE Healthcare, Milwaukee, USA). T2*-weighted echo planar imaging was used for fMRI acquisition. The fMRI data processing and statistical analyses were carried out using SPM2. Results: Second-level analysis revealed significant increases in the extent of activation in the contralateral motor cortex, including the primary motor area (M1), after drug administration. The number of activated voxels in the motor cortex was 173 without drug administration, increasing to 1049 under the high-dose condition and 673 under the steady-state condition. However, there was no significant difference in the magnitude of BOLD signal change in terms of peak T value. Conclusion: The current results suggest that cerebral motor activity can be modulated by clopidogrel in healthy subjects and that fMRI is highly sensitive in detecting such changes.

Assessment of uncertainty associated with parameter of Gumbel probability density function in rainfall frequency analysis (강우빈도해석에서 Bayesian 기법을 이용한 Gumbel 확률분포 매개변수의 불확실성 평가)

  • Moon, Jang-Won;Moon, Young-Il;Kwon, Hyun-Han
    • Journal of Korea Water Resources Association
    • /
    • v.49 no.5
    • /
    • pp.411-422
    • /
    • 2016
  • Rainfall-runoff modeling in conjunction with rainfall frequency analysis has been widely used for estimating design floods in South Korea. However, uncertainties associated with the underlying distribution and sampling error have not been properly addressed. This study applied a Bayesian method to quantify the uncertainties in rainfall frequency analysis with the Gumbel distribution. For comparison, the probability weighted moment (PWM) method was employed to estimate confidence intervals. The uncertainties associated with design rainfalls were quantitatively assessed using both the Bayesian and PWM methods. The results showed that the uncertainty ranges with PWM are larger than those with the Bayesian approach. In addition, the Bayesian approach was able to effectively represent the asymmetric feature of the underlying distribution, whereas the PWM resulted in symmetric confidence intervals due to the normal approximation. The use of longer records reduced the uncertainty in both methods, and the Bayesian approach showed better performance in terms of uncertainty reduction.
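
Below is a minimal Python sketch of the probability-weighted-moment side of this comparison: a Gumbel fit from unbiased PWMs and a symmetric, normal-approximation confidence interval for a design quantile obtained via a parametric bootstrap. The synthetic data, return period, and bootstrap setup are illustrative assumptions, not the study's actual procedure or results.

```python
# Minimal sketch: PWM fit of a Gumbel distribution to an annual-maximum
# rainfall series, plus a symmetric normal-approximation confidence interval
# for a design quantile. Sample data and return period are illustrative.
import numpy as np

EULER_GAMMA = 0.5772156649


def gumbel_pwm(x):
    """Return (location, scale) of a Gumbel fit via unbiased PWMs."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    b0 = x.mean()
    b1 = np.sum((np.arange(n) / (n - 1)) * x) / n   # unbiased estimate of E[X F(X)]
    scale = (2.0 * b1 - b0) / np.log(2.0)
    loc = b0 - EULER_GAMMA * scale
    return loc, scale


def gumbel_quantile(loc, scale, T):
    """Design value for return period T (years)."""
    return loc - scale * np.log(-np.log(1.0 - 1.0 / T))


rng = np.random.default_rng(0)
sample = rng.gumbel(loc=150.0, scale=40.0, size=38)   # synthetic annual maxima (mm)

loc, scale = gumbel_pwm(sample)
T = 100
x_T = gumbel_quantile(loc, scale, T)

# Parametric bootstrap for the sampling variance, then a symmetric
# normal-approximation interval (the behaviour contrasted with Bayesian CIs).
boot = []
for _ in range(2000):
    res = rng.gumbel(loc=loc, scale=scale, size=sample.size)
    l_b, s_b = gumbel_pwm(res)
    boot.append(gumbel_quantile(l_b, s_b, T))
se = np.std(boot)
print(f"{T}-yr design rainfall ≈ {x_T:.1f} mm, 95% CI ≈ ±{1.96 * se:.1f} mm")
```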

Comparative Hepatotoxicity Assessment of Cadmium and Nickel with Isolated Perfused Rat Liver (IPRL) (적출간 관류법을 이용한 카드뮴과 니켈의 간독성 비교)

  • Cha, Bong-Suk;Chang, Sei-Jin;Lee, Jung-Woo;Wang, Seung-Jun
    • Journal of Preventive Medicine and Public Health
    • /
    • v.33 no.1
    • /
    • pp.117-124
    • /
    • 2000
  • Objectives: The objective of this study was to compare the hepatotoxicity of nickel chloride and cadmium chloride using the isolated perfused rat liver (IPRL) method. Methods: Biochemical indicators of hepatic function, such as AST (aspartate aminotransferase), ALT (alanine aminotransferase), LDH (lactate dehydrogenase), and perfusion flow rate, were used as indicators of hepatotoxicity, and oxygen consumption rate was used as a viability indicator. Rats weighing 300 (±50) g were allocated randomly, five per group, to a control group (0 µM) and to 50 µM and 200 µM NiCl2 and CdCl2 exposure groups, 25 in total. Krebs-Ringer bicarbonate buffer solution flowed into the portal vein, passed through the liver, and flowed out of the vena cava. Each liver was administered NiCl2 or CdCl2 at the given concentration, and the outflowing buffer was sampled at set times for measurement of the biochemical indicators of hepatotoxicity. Results: AST, ALT, and LDH in the buffer increased with sampling time considerably more in the CdCl2 exposure groups than in the NiCl2 exposure groups at both 50 and 200 µM, and statistical significance was verified with a two-way repeated-measures ANOVA. Viability decreased over time for all substances. Conclusions: CdCl2 is inferred to have stronger hepatotoxicity than NiCl2. Considering its benefits, the IPRL method could be widely used for assessing acute hepatotoxicity.
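
As a loose illustration of the statistical comparison mentioned above, the sketch below fits a mixed-effects model with a random intercept per liver as a stand-in for the two-way repeated-measures ANOVA on enzyme leakage over sampling time; the data, time points, and effect sizes are synthetic placeholders, not the study's measurements.

```python
# Minimal sketch (not the study's code): testing whether enzyme leakage differs
# between CdCl2 and NiCl2 exposure across perfusate sampling times, using a
# mixed-effects model in place of the repeated-measures ANOVA. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
times = [15, 30, 45, 60]                      # minutes, illustrative
rows = []
for metal, slope in [("CdCl2", 8.0), ("NiCl2", 3.0)]:
    for liver in range(5):                    # 5 perfused livers per group
        base = rng.normal(20, 3)
        for t in times:
            ast = base + slope * (t / 15) + rng.normal(0, 2)
            rows.append({"metal": metal, "liver": f"{metal}_{liver}",
                         "time": t, "AST": ast})
df = pd.DataFrame(rows)

# Fixed effects: metal, time, and their interaction; random intercept per liver
model = smf.mixedlm("AST ~ C(metal) * time", df, groups=df["liver"]).fit()
print(model.summary())
```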


Document classification using a deep neural network in text mining (텍스트 마이닝에서 심층 신경망을 이용한 문서 분류)

  • Lee, Bo-Hui;Lee, Su-Jin;Choi, Yong-Seok
    • The Korean Journal of Applied Statistics
    • /
    • v.33 no.5
    • /
    • pp.615-625
    • /
    • 2020
  • In text mining, a document-term frequency matrix is built from terms extracted from documents whose group information is known. In this study, we generated a document-term frequency matrix for document classification by research field. We applied the traditional term weighting function term frequency-inverse document frequency (TF-IDF) to the generated document-term frequency matrix, and additionally applied term frequency-inverse gravity moment (TF-IGM). We also generated a document-keyword weighted matrix by extracting keywords to improve document classification accuracy. Based on the extracted keyword matrix, we classified documents using a deep neural network. To find the optimal model for the deep neural network, the accuracy of document classification was verified while changing the number of hidden layers and hidden nodes. Consequently, the model with eight hidden layers showed the highest accuracy, and document classification accuracy with TF-IGM was higher than with TF-IDF across all parameter settings. In addition, the deep neural network was confirmed to have better accuracy than the support vector machine. Therefore, we propose a method that applies TF-IGM and a deep neural network to document classification.
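
A minimal sketch of the two term-weighting schemes being compared, TF-IDF and TF-IGM, is given below. The tiny corpus, class labels, λ value, and sub-linear tf transform are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of TF-IDF and TF-IGM (term frequency - inverse gravity moment)
# weighting of a document-term frequency matrix. Corpus, labels, and lambda
# are illustrative only.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

docs = ["deep neural network for text classification",
        "support vector machine text classification",
        "gumbel distribution for rainfall frequency",
        "bayesian uncertainty in rainfall frequency analysis"]
labels = np.array([0, 0, 1, 1])               # two research fields

counts = CountVectorizer().fit_transform(docs).toarray()   # document-term counts

# --- TF-IDF ---
tfidf = TfidfTransformer().fit_transform(counts).toarray()

# --- TF-IGM ---
def igm_weights(counts, labels, lam=7.0):
    """Per-term inverse gravity moment factor 1 + lam * f1 / sum_r(f_r * r)."""
    classes = np.unique(labels)
    # class-conditional term frequencies, one row per class
    class_freq = np.vstack([counts[labels == c].sum(axis=0) for c in classes])
    ranked = -np.sort(-class_freq, axis=0)     # descending per term
    ranks = np.arange(1, len(classes) + 1)[:, None]
    denom = (ranked * ranks).sum(axis=0)
    igm = np.divide(ranked[0], denom, out=np.zeros_like(denom, dtype=float),
                    where=denom > 0)
    return 1.0 + lam * igm

tf = np.log1p(counts)                          # sub-linear tf, one common choice
tfigm = tf * igm_weights(counts, labels)
print(tfidf.shape, tfigm.shape)                # document-by-term weighted matrices
```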

Prediction of Potential Habitat of Japanese evergreen oak (Quercus acuta Thunb.) Considering Dispersal Ability Under Climate Change (분산 능력을 고려한 기후변화에 따른 붉가시나무의 잠재서식지 분포변화 예측연구)

  • Shin, Man-Seok;Seo, Changwan;Park, Seon-Uk;Hong, Seung-Bum;Kim, Jin-Yong;Jeon, Ja-Young;Lee, Myungwoo
    • Journal of Environmental Impact Assessment
    • /
    • v.27 no.3
    • /
    • pp.291-306
    • /
    • 2018
  • This study was designed to predict the potential habitat of Japanese evergreen oak (Quercus acuta Thunb.) on the Korean Peninsula under climate change, considering its dispersal ability. We used a species distribution model (SDM) based on the current species distribution and climatic variables. To reduce the uncertainty of the SDM, we applied nine single-model algorithms and the pre-evaluation weighted ensemble method. Two representative concentration pathways (RCP 4.5 and 8.5) were used to simulate the distribution of Japanese evergreen oak in 2050 and 2070. The final future potential habitat was determined by considering whether it can be reached by dispersal from the current habitat. Dispersal ability was modeled with MigClim by applying three coefficient values (θ = -0.005, θ = -0.001, and θ = -0.0005) to the dispersal-limited function, plus an unlimited-dispersal case. All projections except RCP 4.5 in 2050 indicated that the potential habitat of Japanese evergreen oak will increase on the Korean Peninsula. However, the future potential habitat was found to be limited once the dispersal ability of this species was taken into account. Therefore, estimation of dispersal ability is required to understand the effect of climate change on the habitat distribution of the species.
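
The sketch below illustrates how a distance-limited dispersal kernel of the assumed form P(d) = exp(θd), with the three θ values quoted above, restricts which newly suitable cells can actually be colonized. The kernel form and distances are assumptions for illustration, not MigClim's exact implementation.

```python
# Illustrative sketch: a negative-exponential dispersal kernel limiting
# colonization of climatically suitable cells. The kernel form
# P(colonize at distance d) = exp(theta * d), theta < 0, is an assumption
# consistent with the coefficients quoted in the abstract; distances are toy.
import numpy as np

def colonization_probability(distance_m, theta):
    """Probability that a seed source colonizes a cell `distance_m` away."""
    return np.exp(theta * distance_m)

distances = np.array([100, 500, 1000, 2000, 5000])     # metres, illustrative
for theta in (-0.005, -0.001, -0.0005):                # values from the abstract
    p = colonization_probability(distances, theta)
    print(theta, np.round(p, 3))
# Smaller |theta| means a fatter-tailed kernel, so more of the suitable area
# projected by the ensemble SDM is reachable from current stands.
```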

Seasonal Trend of Elevation Effect on Daily Air Temperature in Korea (일별 국지기온 결정에 미치는 관측지점 표고영향의 계절변동)

  • 윤진일;최재연;안재훈
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.3 no.2
    • /
    • pp.96-104
    • /
    • 2001
  • Usage of ecosystem models has been extended to landscape scales for understanding the effects of environmental factors on natural and agro-ecosystems and for serving as management decision tools. Accurate prediction of spatial variation in daily temperature is required for most ecosystem models to be applied at landscape scales, yet there are relatively few empirical evaluations of landscape-scale temperature prediction techniques in mountainous terrain such as the Korean Peninsula. We derived a periodic function for seasonal lapse-rate fluctuation from an analysis of elevation effects on daily temperatures. Observed daily maximum and minimum temperature data at 63 standard stations in 1999 were regressed on the latitude, longitude, distance from the nearest coastline, and altitude of the stations, and the optimum models with r² of 0.65 or above were selected. Partial regression coefficients for the altitude variable were plotted against day of year, and a numerical formula was determined for simulating the seasonal trend of the daily lapse rate, i.e., the partial regression coefficients. The formula, in conjunction with an inverse distance weighted interpolation scheme, was applied to predict daily temperatures at 267 sites where observation data are available, on randomly selected dates in winter, spring, and summer of 2000. The estimation errors were smaller and more consistent than those of the inverse distance weighting plus mean annual lapse rate scheme. We conclude that this method is simple and accurate enough to be used as an operational temperature interpolation scheme at landscape scale in Korea and should be applicable elsewhere.
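
A minimal sketch of the core idea, fitting a first-harmonic periodic function to day-of-year lapse-rate coefficients and then using it for an elevation correction on a given date, is shown below. The synthetic coefficient series, harmonic choice, and toy correction values are assumptions, not the authors' fitted formula.

```python
# Minimal sketch: fit a periodic (first-harmonic) function to day-of-year
# lapse-rate coefficients so that any date's lapse rate can be recovered for
# elevation correction. The coefficient series here is synthetic.
import numpy as np

doy = np.arange(1, 366)
rng = np.random.default_rng(2)
# synthetic daily lapse rates (deg C per 100 m) with a seasonal cycle plus noise
lapse = -0.65 + 0.15 * np.cos(2 * np.pi * (doy - 30) / 365) + rng.normal(0, 0.05, doy.size)

# least-squares fit of a0 + a1*cos(wt) + b1*sin(wt), w = 2*pi/365
w = 2 * np.pi / 365
X = np.column_stack([np.ones_like(doy, dtype=float), np.cos(w * doy), np.sin(w * doy)])
coef, *_ = np.linalg.lstsq(X, lapse, rcond=None)

def seasonal_lapse_rate(day_of_year):
    """Lapse rate (deg C / 100 m) predicted by the fitted periodic function."""
    return coef[0] + coef[1] * np.cos(w * day_of_year) + coef[2] * np.sin(w * day_of_year)

# elevation correction of an inverse-distance-weighted estimate at a grid cell
t_idw, dz_m = 12.3, 240.0          # IDW estimate and elevation difference (toy values)
t_corrected = t_idw + seasonal_lapse_rate(135) * dz_m / 100.0
print(round(t_corrected, 2))
```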


Study on the Dextran and the Inner Structure of Jeung-Pyun (Korea Rice Cake) on Adding Oligosaccharide (올리고당 첨가 증편 발효 중 Dextran 형성과 증편의 내부구조에 관한 연구)

  • 이은아;우경자
    • Journal of the East Asian Society of Dietary Life
    • /
    • v.12 no.1
    • /
    • pp.38-46
    • /
    • 2002
  • This study was carried out to investigate dextran formation and internal structure during fermentation of Jeung-Pyun made with oligosaccharides. The dextran and reducing sugar contents of the Jeung-Pyun batter, and the specific volume and internal structure of the Jeung-Pyun, were analyzed as a function of fermentation time. The specific volume of Jeung-Pyun peaked at the 7th hour of fermentation. The dextran content of the batters peaked between the 7th and 13th hours of fermentation, with the fructooligosaccharide Jeung-Pyun showing the lowest peak value. The reducing sugar content of the batters slowly decreased as fermentation progressed. From the air pore size and distribution observed by SEM, the sucrose Jeung-Pyun fermented for 3 to 7 hours and the oligosaccharide Jeung-Pyun fermented for 7 hours were judged the best. It was concluded that dextran may be formed by fermentation of oligosaccharides as well as sucrose, and that dextran plays a significant role in the volume expansion of Jeung-Pyun.


The Study on Speaker Change Verification Using SNR based weighted KL distance (SNR 기반 가중 KL 거리를 활용한 화자 변화 검증에 관한 연구)

  • Cho, Joon-Beom;Lee, Ji-eun;Lee, Kyong-Rok
    • Journal of Convergence for Information Technology
    • /
    • v.7 no.6
    • /
    • pp.159-166
    • /
    • 2017
  • In this paper, we experimented with improving the verification performance of speaker change detection on broadcast news. The approach is to enhance the noisy input speech and to apply the KL distance D_s with the SNR-based weighting function w_m. The baseline is a speaker change verification system using the GMM-UBM based KL distance D (Experiment 0). Experiment 1 adds enhancement of the noisy input speech using MMSE Log-STSA. Experiment 2 applies the new KL distance D_s to the system of Experiment 1. Experiments were conducted under the condition of 0% MDR in order to prevent missing speaker change information. The FAR of Experiment 0 was 71.5%. The FAR of Experiment 1 was 67.3%, 4.2 percentage points lower than Experiment 0. The FAR of Experiment 2 was 60.7%, 10.8 percentage points lower than Experiment 0.
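
Below is an illustrative sketch of a symmetric KL distance between two diagonal-Gaussian segment models with per-dimension weights derived from an estimated SNR. The exact definitions of D_s and w_m in the paper are not reproduced here; the weighting scheme shown is an assumption.

```python
# Illustrative sketch only: weighted symmetric KL distance between diagonal
# Gaussian models of two adjacent analysis windows, with SNR-derived weights.
import numpy as np

def symmetric_kl_diag(mu1, var1, mu2, var2, weights=None):
    """Weighted symmetric KL distance between diagonal Gaussians."""
    d = 0.5 * ((var1 + (mu1 - mu2) ** 2) / var2
               + (var2 + (mu1 - mu2) ** 2) / var1 - 2.0)
    if weights is None:
        weights = np.ones_like(d)
    weights = weights / weights.sum()
    return float(np.sum(weights * d))

rng = np.random.default_rng(3)
left = rng.normal(0.0, 1.0, (200, 12))        # MFCC-like features, left window
right = rng.normal(0.3, 1.1, (200, 12))       # right window (possible new speaker)

snr_db = rng.uniform(5, 25, 12)               # per-dimension SNR estimate (toy)
w = 1.0 / (1.0 + 10 ** (-snr_db / 10.0))      # emphasize reliable (high-SNR) dims

d_plain = symmetric_kl_diag(left.mean(0), left.var(0), right.mean(0), right.var(0))
d_snr = symmetric_kl_diag(left.mean(0), left.var(0), right.mean(0), right.var(0), w)
print(round(d_plain, 3), round(d_snr, 3))     # compare against a decision threshold
```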

A Spatial Interpolation Model for Daily Minimum Temperature over Mountainous Regions (산악지대의 일 최저기온 공간내삽모형)

  • Yun Jin-Il;Choi Jae-Yeon;Yoon Young-Kwan;Chung Uran
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.2 no.4
    • /
    • pp.175-182
    • /
    • 2000
  • Spatial interpolation of daily temperature forecasts and observations issued by public weather services is frequently required to make them applicable to agricultural activities and modeling tasks. In contrast to long-term averages such as monthly normals, terrain effects are not considered in most spatial interpolations of short-term temperatures. This may cause erroneous results in mountainous regions where the observation network hardly covers the full features of the complicated terrain. We developed a spatial interpolation model for daily minimum temperature which combines inverse distance squared weighting and an elevation difference correction. This model uses a time-dependent function for the 'mountain slope lapse rate', which can be derived from regression analyses of station observations with respect to the geographical and topographical features of the surroundings, including station elevation. We applied this model to the interpolation of daily minimum temperature over the mountainous Korean Peninsula using data from 63 standard weather stations. First, a primitive temperature surface was interpolated by inverse distance squared weighting of the 63 point data. Next, a virtual elevation surface was reconstructed by spatially interpolating the 63 station elevations and subtracted from the elevation surface of a digital elevation model with 1 km grid spacing to obtain the elevation difference at each grid cell. Final estimates of daily minimum temperature at all grid cells were obtained by applying the calculated daily lapse rate to the elevation difference and adjusting the inverse distance weighted estimates. Independent measured data sets from 267 automated weather station locations were used to calculate the estimation errors on 12 dates, one randomly selected for each month of 1999. Analysis of three error measures (mean error, mean absolute error, and root mean squared error) indicates a substantial improvement over inverse distance squared weighting alone.
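
The interpolation steps described above are sketched below: inverse distance squared weighting of station temperatures, a virtual elevation surface interpolated the same way, and a lapse-rate correction applied to the difference between the DEM and that virtual surface. All arrays and the lapse-rate value are toy stand-ins, not the study's data.

```python
# Minimal sketch of IDW-squared interpolation with an elevation difference
# correction, following the steps described in the abstract. Data are toy.
import numpy as np

def idw2(xy_stations, values, xy_grid, eps=1e-6):
    """Inverse distance squared weighting from stations to grid points."""
    d2 = ((xy_grid[:, None, :] - xy_stations[None, :, :]) ** 2).sum(-1) + eps
    w = 1.0 / d2
    return (w * values).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(4)
st_xy = rng.uniform(0, 100, (63, 2))          # station coordinates (km)
st_elev = rng.uniform(5, 900, 63)             # station elevations (m)
st_tmin = 2.0 - 0.006 * st_elev + rng.normal(0, 0.4, 63)   # synthetic daily Tmin

grid_xy = rng.uniform(0, 100, (1000, 2))      # 1-km grid cell centres (toy)
dem_elev = rng.uniform(0, 1500, 1000)         # DEM elevation at each cell

t_primitive = idw2(st_xy, st_tmin, grid_xy)       # step 1: primitive surface
elev_virtual = idw2(st_xy, st_elev, grid_xy)      # step 2: virtual elevation surface
dz = dem_elev - elev_virtual                      # step 3: elevation difference
lapse = -0.6 / 100.0                              # daily lapse rate (deg C per m), toy
t_min = t_primitive + lapse * dz                  # step 4: corrected estimate
print(t_min[:5].round(2))
```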


Doubly-robust Q-estimation in observational studies with high-dimensional covariates (고차원 관측자료에서의 Q-학습 모형에 대한 이중강건성 연구)

  • Lee, Hyobeen;Kim, Yeji;Cho, Hyungjun;Choi, Sangbum
    • The Korean Journal of Applied Statistics
    • /
    • v.34 no.3
    • /
    • pp.309-327
    • /
    • 2021
  • Dynamic treatment regimes (DTRs) are decision-making rules designed to provide personalized treatment to individuals in multi-stage randomized trials. Unlike classical methods, in which all individuals are prescribed the same type of treatment, DTRs prescribe patient-tailored treatments that take into account individual characteristics which may change over time. The Q-learning method, one of the regression-based algorithms for finding optimal treatment rules, has become popular because it can be easily implemented. However, the performance of the Q-learning algorithm relies heavily on correct specification of the Q-function for the response, especially in observational studies. In this article, we examine a number of doubly-robust weighted least-squares estimating methods for Q-learning in high-dimensional settings, where treatment models for the propensity score and penalization for sparse estimation are also investigated. We further consider flexible ensemble machine learning methods for the treatment model to achieve double robustness, so that the optimal decision rule is correctly estimated as long as at least one of the outcome model or the treatment model is correct. Extensive simulation studies show that the proposed methods work well with practical sample sizes. The practical utility of the proposed methods is demonstrated with a real data example.
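
As a concrete, single-stage reading of the doubly-robust weighted least-squares idea, the sketch below uses dWOLS-style weights w = |A - pihat(X)| from a logistic propensity model and then fits the Q-function by weighted least squares. The simulated data, the particular weight choice, and the omission of penalization, ensembles, and multiple stages are all simplifications relative to the paper.

```python
# Single-stage sketch of a doubly-robust weighted least-squares Q-fit:
# a logistic propensity model supplies dWOLS-style weights, and the Q-function
# is fit by weighted least squares. Data are simulated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(5)
n, p = 500, 5
X = rng.normal(size=(n, p))
# treatment depends on covariates (confounding); outcome has an A * X[:, 0] blip
propensity = 1.0 / (1.0 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
A = rng.binomial(1, propensity)
Y = X @ np.array([1.0, -0.5, 0.3, 0.0, 0.0]) + A * (0.5 + 1.2 * X[:, 0]) + rng.normal(size=n)

# 1) propensity model and balancing weights
pihat = LogisticRegression().fit(X, A).predict_proba(X)[:, 1]
w = np.abs(A - pihat)

# 2) weighted least-squares Q-function with a treatment-by-covariate interaction
design = np.column_stack([X, A, A * X[:, 0]])
q_model = LinearRegression().fit(design, Y, sample_weight=w)

# 3) estimated decision rule: treat when the estimated blip is positive
psi0, psi1 = q_model.coef_[-2], q_model.coef_[-1]
rule = (psi0 + psi1 * X[:, 0] > 0).astype(int)
print("blip coefficients:", round(psi0, 2), round(psi1, 2))
print("share recommended treatment:", rule.mean())
```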