• Title/Summary/Keyword: Phase Distribution

Effect of Pt as a Promoter in Decomposition of CH4 to Hydrogen over Pt(1)-Fe(30)/MCM-41 Catalyst (Pt(1)-Fe(30)/MCM-41 촉매상에서 수소 제조를 위한 메탄의 분해 반응에서 조촉매 Pt의 효과)

  • Ho Joon Seo
    • Applied Chemistry for Engineering
    • /
    • v.34 no.6
    • /
    • pp.674-678
    • /
    • 2023
  • The effect of Pt as a promoter on the catalytic decomposition of CH4 to H2 was investigated over Pt(1)-Fe(30)/MCM-41 and Fe(30)/MCM-41 in a fixed-bed flow reactor at atmospheric pressure. The Fe2O3 and Pt crystal phases of fresh Pt(1)-Fe(30)/MCM-41 were identified by XRD analysis. SEM, EDS analysis, and elemental mapping showed a uniform distribution of nanoparticles such as Fe, Pt, Si, and O on the catalyst surface. XPS results revealed O2- and O- species and metal ions such as Pt0, Pt2+, Pt4+, Fe0, Fe2+, and Fe3+. When 1 wt% Pt was added to Fe(30)/MCM-41, the atomic percentage of Fe 2p increased from 13.39% to 16.14%, and that of Pt 4f was 1.51%. The hydrogen yield over Pt(1)-Fe(30)/MCM-41 was 3.2 times higher than that over Fe(30)/MCM-41. Spillover of H2 from Pt to Fe enhanced the reduction of the Fe particles, and the moderate interaction among Fe, Pt, and MCM-41 promoted uniform dispersion of fine nanoparticles on the catalyst surface, improving the hydrogen yield.

Prediction of Decompensation and Death in Advanced Chronic Liver Disease Using Deep Learning Analysis of Gadoxetic Acid-Enhanced MRI

  • Subin Heo;Seung Soo Lee;So Yeon Kim;Young-Suk Lim;Hyo Jung Park;Jee Seok Yoon;Heung-Il Suk;Yu Sub Sung;Bumwoo Park;Ji Sung Lee
    • Korean Journal of Radiology
    • /
    • v.23 no.12
    • /
    • pp.1269-1280
    • /
    • 2022
  • Objective: This study aimed to evaluate the usefulness of quantitative indices obtained from deep learning analysis of gadoxetic acid-enhanced hepatobiliary phase (HBP) MRI and their longitudinal changes in predicting decompensation and death in patients with advanced chronic liver disease (ACLD). Materials and Methods: We included patients who underwent baseline and 1-year follow-up MRI from a prospective cohort that underwent gadoxetic acid-enhanced MRI for hepatocellular carcinoma surveillance between November 2011 and August 2012 at a tertiary medical center. Baseline liver condition was categorized as non-ACLD, compensated ACLD, and decompensated ACLD. The liver-to-spleen signal intensity ratio (LS-SIR) and liver-to-spleen volume ratio (LS-VR) were automatically measured on the HBP images using a deep learning algorithm, and their percentage changes at the 1-year follow-up (ΔLS-SIR and ΔLS-VR) were calculated. The associations of the MRI indices with hepatic decompensation and a composite endpoint of liver-related death or transplantation were evaluated using a competing risk analysis with multivariable Fine and Gray regression models, including baseline parameters alone and both baseline and follow-up parameters. Results: Our study included 280 patients (153 male; mean age ± standard deviation, 57 ± 7.95 years) with non-ACLD, compensated ACLD, and decompensated ACLD in 32, 186, and 62 patients, respectively. Patients were followed for 11-117 months (median, 104 months). In patients with compensated ACLD, baseline LS-SIR (sub-distribution hazard ratio [sHR], 0.81; p = 0.034) and LS-VR (sHR, 0.71; p = 0.01) were independently associated with hepatic decompensation. The ΔLS-VR (sHR, 0.54; p = 0.002) was predictive of hepatic decompensation after adjusting for baseline variables. ΔLS-VR was an independent predictor of liver-related death or transplantation in patients with compensated ACLD (sHR, 0.46; p = 0.026) and decompensated ACLD (sHR, 0.61; p = 0.023). Conclusion: MRI indices automatically derived from the deep learning analysis of gadoxetic acid-enhanced HBP MRI can be used as prognostic markers in patients with ACLD.
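As context for the sHR values reported above: the Fine and Gray model regresses covariates on the sub-distribution hazard of the event of interest, and each reported sHR is the exponentiated coefficient of one covariate. A standard textbook statement of the model (not reproduced from the paper itself) is:

```latex
% Sub-distribution hazard for the event of interest (event type 1), defined on
% a risk set that retains subjects who already experienced a competing event.
\lambda_1(t \mid X) = \lim_{\Delta t \to 0} \frac{1}{\Delta t}
  \Pr\bigl(t \le T < t + \Delta t,\ \epsilon = 1 \,\big|\,
           \{T \ge t\} \cup \{T < t,\ \epsilon \ne 1\},\ X\bigr)
  = \lambda_{1,0}(t)\,\exp(\beta^{\top} X),
\qquad \mathrm{sHR}_j = \exp(\beta_j).
```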

Functional beamforming for high-resolution ultrasound imaging in the air with random sparse array transducer (고해상도 공기중 초음파 영상을 위한 기능성 빔형성법 적용)

  • Choon-Su Park
    • The Journal of the Acoustical Society of Korea
    • /
    • v.43 no.3
    • /
    • pp.361-367
    • /
    • 2024
  • Airborne ultrasound is widely used in industry as a measurement technique for detecting abnormalities in machinery. Recently, airborne ultrasound imaging techniques that locate such abnormalities with an array transducer have been increasingly adopted. A beamforming method that uses the phase differences between sensors is used to visualize the location of the ultrasonic source. We employ a random sparse ultrasonic array and obtain the beamforming power distribution for a source at a given distance from the array. Conventional beamforming methods inevitably have limited spatial resolution depending on the number of sensors and the aperture size. A high-resolution ultrasound imaging technique was implemented by applying functional beamforming to overcome the geometric constraints of the array. Functional beamforming can be expressed mathematically as a generalized beamforming method and has the advantage of yielding high-resolution images by reducing the main-lobe width and the side lobes. Computer simulations verified that functional beamforming with the ultrasonic sparse array successfully increased the resolution of the airborne ultrasonic source image.
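To illustrate the functional beamforming idea mentioned above (exponentiation of the cross-spectral matrix, in the style of Dougherty), the minimal sketch below raises the cross-spectral matrix to the power 1/γ, forms the usual quadratic beamforming map, and raises the result back to the power γ; γ = 1 recovers conventional delay-and-sum beamforming. The array geometry, steering vectors, and γ value are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def functional_beamforming(csm, steering, gamma=16):
    """Hedged sketch of functional beamforming.

    csm      : (M, M) cross-spectral matrix of the M-channel array
    steering : (G, M) unit-normalized steering vectors for G grid points
    gamma    : exponent; gamma = 1 reduces to conventional beamforming
    """
    # Matrix power C^(1/gamma) via eigendecomposition (csm is Hermitian).
    vals, vecs = np.linalg.eigh(csm)
    vals = np.clip(vals, 0.0, None)            # discard tiny negative eigenvalues
    csm_root = (vecs * vals ** (1.0 / gamma)) @ vecs.conj().T

    # Quadratic form per grid point, then raise back to the gamma power.
    quad = np.einsum('gm,mn,gn->g', steering.conj(), csm_root, steering).real
    return np.clip(quad, 0.0, None) ** gamma
```

Compared with conventional beamforming, the exponentiation sharpens the main lobe and suppresses side lobes, which is the high-resolution effect the abstract reports.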

Subject-Balanced Intelligent Text Summarization Scheme (주제 균형 지능형 텍스트 요약 기법)

  • Yun, Yeoil;Ko, Eunjung;Kim, Namgyu
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.2
    • /
    • pp.141-166
    • /
    • 2019
  • Recently, channels such as social media and SNS generate enormous amounts of data, and the portion represented as unstructured text has grown geometrically. Because it is impractical to read all of this text, it is important to access it rapidly and grasp its key points. To meet this need for efficient understanding, many text summarization studies for handling and using tremendous amounts of text data have been proposed. In particular, many summarization methods based on machine learning and artificial intelligence algorithms, so-called "automatic summarization", have recently been proposed to generate summaries objectively and effectively. However, most text summarization methods proposed to date build the summary around the most frequent contents of the original documents. Such summaries tend to omit low-weight subjects that are mentioned less often in the original text. If a summary contains only the major subjects, bias occurs and information is lost, so it becomes difficult to identify every subject the documents cover. To avoid this bias, one can summarize with a balance between the topics of the document so that all subjects can be identified, but an imbalance in the distribution of those subjects still remains. To retain subject balance in the summary, it is necessary to consider the proportion of every subject in the original documents and to allocate space to subjects evenly, so that even sentences on minor subjects are sufficiently included. In this study, we propose a "subject-balanced" text summarization method that preserves the balance between all subjects and minimizes the omission of low-frequency subjects. For subject-balanced summarization, we use two summary evaluation criteria, "completeness" and "succinctness". Completeness means that the summary should fully cover the contents of the original documents, and succinctness means that the summary should contain minimal duplication. The proposed method has three phases. The first phase constructs subject term dictionaries. Topic modeling is used to calculate topic-term weights, which indicate how strongly each term is related to each topic. From these weights, the terms most related to every topic can be identified, and the subjects of the documents can be found from topics composed of terms with similar meanings. A few terms that represent each subject well, called "seed terms", are then selected. Because these terms alone are too few to describe each subject, additional terms similar to the seed terms are needed for a well-constructed subject dictionary. Word2Vec is used for this word expansion: after training, the similarity between all terms can be derived from the word vectors using cosine similarity, where a higher cosine similarity indicates a stronger relationship. Terms with high similarity to the seed terms of each subject are selected and, after filtering, these expanded terms form the final subject dictionary. The second phase allocates a subject to every sentence of the original documents. To grasp the content of each sentence, frequency analysis is conducted with the terms in the subject dictionaries. TF-IDF weights for each subject are then calculated, indicating how much each sentence describes each subject. Because TF-IDF weights can grow without bound, the weights for every subject in each sentence are normalized to values between 0 and 1. Each sentence is then assigned to the subject with the maximum normalized TF-IDF weight, producing a sentence group for each subject. The last phase is summary generation. Sen2Vec is used to compute similarities between the sentences of each subject, forming a similarity matrix. By iteratively selecting sentences, a summary can be generated that fully covers the contents of the original documents while minimizing internal duplication. For evaluation, 50,000 TripAdvisor reviews were used to construct the subject dictionaries and 23,087 reviews were used to generate summaries. A comparison between the proposed method and a frequency-based summary verified that the summary produced by the proposed method better retains the balance of all subjects originally present in the documents.
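To make the three-phase procedure above concrete, here is a minimal sketch using gensim's Word2Vec and scikit-learn's TfidfVectorizer. The toy corpus, seed terms, and hyperparameters are illustrative assumptions, not the paper's TripAdvisor setup, and the final Sen2Vec selection step is only outlined in a comment.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer

tokenized_docs = [["room", "bed", "clean"], ["staff", "friendly", "helpful"]]
seed_terms = {"facility": ["room"], "service": ["staff"]}   # hypothetical seeds

# Phase 1: expand each subject's seed terms with Word2Vec cosine similarity.
w2v = Word2Vec(tokenized_docs, vector_size=50, window=3, min_count=1, seed=0)
subject_dict = {
    subj: set(seeds) | {w for s in seeds if s in w2v.wv
                        for w, _ in w2v.wv.most_similar(s, topn=5)}
    for subj, seeds in seed_terms.items()
}

# Phase 2: score each sentence against each subject dictionary with TF-IDF,
# normalize to [0, 1], and assign the sentence to its maximum-weight subject.
sentences = ["the room and bed were clean", "staff were friendly and helpful"]
vec = TfidfVectorizer()
tfidf = vec.fit_transform(sentences).toarray()
vocab = vec.vocabulary_

def subject_scores(row):
    scores = {s: sum(row[vocab[t]] for t in terms if t in vocab)
              for s, terms in subject_dict.items()}
    top = max(scores.values()) or 1.0
    return {s: v / top for s, v in scores.items()}        # normalize to 0..1

assignments = [max(subject_scores(r), key=subject_scores(r).get) for r in tfidf]

# Phase 3 (Sen2Vec-style selection) would, within each subject group, pick
# sentences that maximize coverage while penalizing pairwise similarity;
# it is omitted here for brevity.
print(assignments)
```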

An Estimation of Concentration of Asian Dust (PM10) Using WRF-SMOKE-CMAQ (MADRID) During Springtime in the Korean Peninsula (WRF-SMOKE-CMAQ(MADRID)을 이용한 한반도 봄철 황사(PM10)의 농도 추정)

  • Moon, Yun-Seob;Lim, Yun-Kyu;Lee, Kang-Yeol
    • Journal of the Korean earth science society
    • /
    • v.32 no.3
    • /
    • pp.276-293
    • /
    • 2011
  • In this study, a modeling system consisting of Weather Research and Forecasting (WRF), Sparse Matrix Operator Kernel Emissions (SMOKE), the Community Multiscale Air Quality (CMAQ) model, and the CMAQ-Model of Aerosol Dynamics, Reaction, Ionization, and Dissolution (MADRID) was applied to estimate enhancements of $PM_{10}$ during Asian dust events in Korea. In particular, five experimental formulas were applied within the WRF-SMOKE-CMAQ (MADRID) system to estimate Asian dust emissions from the major source regions in China and Mongolia: the US Environmental Protection Agency (EPA) model, the Goddard Global Ozone Chemistry Aerosol Radiation and Transport (GOCART) model, and the Dust Entrainment and Deposition (DEAD) model, as well as the formulas of Park and In (2003) and Wang et al. (2000). According to the weather map, backward trajectory, and satellite image analyses, Asian dust is generated by a strong downward wind associated with the upper trough of a stagnation wave that develops with the upper-level jet stream, and the transport of Asian dust to Korea shows up behind a surface front related to the cut-off low (seen as a comma-type cloud in satellite images). In the WRF-SMOKE-CMAQ modeling of $PM_{10}$ concentrations, the experimental formula of Wang et al. reproduced the temporal and spatial distribution of Asian dust well, and the GOCART model gave low mean bias errors and root mean square errors. In the vertical profile analysis of Asian dust using the formula of Wang et al., strong Asian dust with a concentration of more than $800\;{\mu}g/m^3$ during March 31 to April 1, 2007 was transported below the boundary layer (about 1 km high), and weak Asian dust with a concentration of less than $400\;{\mu}g/m^3$ during 16-17 March 2009 was transported above the boundary layer (about 1-3 km high). Furthermore, the difference in $PM_{10}$ concentration between the CMAQ and CMAQ-MADRID models for March 31 to April 1, 2007 was large over East Asia: the CMAQ-MADRID model gave concentrations about $25\;{\mu}g/m^3$ higher than the CMAQ model. In addition, the $PM_{10}$ concentration removed by the cloud liquid-phase mechanism within the CMAQ-MADRID model reached a maximum of about $15\;{\mu}g/m^3$ over East Asia.
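For reference, the GOCART dust scheme cited above is commonly written as a simple threshold-wind emission formula, F_p = C · S · s_p · u10² · (u10 − u_t) for u10 > u_t. The sketch below evaluates this form under assumed parameter values; the constant, source-function value, size-class fraction, and threshold velocity are illustrative and not the values used in the paper's WRF-SMOKE-CMAQ runs.

```python
def gocart_dust_flux(u10, u_thresh, source_fn, size_frac, c=1.0e-6):
    """Dust emission flux (kg m^-2 s^-1) for one particle-size class.

    u10       : 10 m wind speed (m/s)
    u_thresh  : threshold wind speed for this size class and soil wetness (m/s)
    source_fn : dimensionless erodibility (source function), 0..1
    size_frac : fraction of emitted mass in this size class, 0..1
    c         : dimensional constant, here 1e-6 kg s^2 m^-5 (= 1 ug s^2 m^-5)
    """
    if u10 <= u_thresh:
        return 0.0                      # no emission below the threshold wind
    return c * source_fn * size_frac * u10**2 * (u10 - u_thresh)

# Example: a strong 12 m/s wind over a highly erodible source grid cell.
print(gocart_dust_flux(u10=12.0, u_thresh=6.5, source_fn=0.8, size_frac=0.25))
```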

Agricultural Policies and Geographical Specialization of Farming in England (영국의 농업정책이 지리적 전문화에 미친 영향 연구)

  • Kim, Ki-Hyuk
    • Journal of the Korean association of regional geographers
    • /
    • v.5 no.1
    • /
    • pp.101-120
    • /
    • 1999
  • The purpose of this study is to analyze the impact of agricultural policies on the change of regional structure based on specialization during the productivist period. The analysis compares the distributions of farming in the 1950s and in 1997. Since the 1950s, government policy has played a leading role in shaping the pattern of farming in Great Britain. A range of British measures has also been employed in an attempt to improve the efficiency of agriculture and raise farm incomes. Three fairly distinct phases can be identified in the developing relationship between government policies and British agriculture in the postwar period. In the first phase, the Agriculture Act of 1947 laid the foundations for agricultural productivism in Great Britain until membership of the EC. This was to be achieved through a system of price support and guaranteed prices and by means of a series of grants and subsidies. Guaranteed prices encouraged farmers to intensify production and to specialize in either cereal farming or milk-beef enterprises. The former favoured eastern areas, whereas the latter favoured western areas. Various grants and subsidies were made available to farmers during this period, again as a way of increasing efficiency and farm incomes. Many schemes, such as the Calf Subsidy and the Ploughing Grant, the Hill Cow and Hill Sheep Schemes, and the Hill Farming and Livestock Rearing Grant, were provided. Some of these policies favoured the western uplands, whilst others were biased towards the Lake District. Concentration of farms occurred especially near the London Metropolitan Area and in the southern part of Scotland. In the second phase, after membership of the EC, very high guaranteed prices created a relatively risk-free environment, so farmers intensified production and levels of self-sufficiency for most agricultural products rose considerably. As farmers were paid high prices for as much as they could produce, the policy favoured areas of larger-scale farming in eastern Britain. As a result of increasing regional disparities in agriculture, the CAP became more geographically sensitive in 1975 with the setting up of the Less Favoured Areas (LFAs). But these are biased towards the larger farms, because such farms have more crops and/or livestock, whereas small farms with low incomes are in most need of support. Specialization in cereals such as wheat and barley occurred, but these two cereal crops have experienced rather different trends since the 1950s. Under the CAP, farmers have been paid higher guaranteed prices for wheat than for barley because of the relative shortage of wheat in the EC, and more barley has been cultivated as feedstuff for livestock from home-grown cereals. In the 1950s dairying was already declining in what was to become the arable areas of southern and eastern England. By the mid-1980s, the pastoral core had maintained its dominance, but the pastoral periphery had easily surpassed arable England as the second most important dairying district. Pig farming had become increasingly concentrated in intensive units in the main cereal areas of eastern England. These results show that the measures of agricultural policy implicitly induced concentration and specialization. Measures for increasing demand, reducing supply, or raising farm incomes favoured large-scale farming, and price support induced the specialization of farming. Technology for specialization diffused and induced geographical specialization. This is the process by which the regional structure changed through specialization.

Clinical Study of Acute and Chronic Pain by the Application of Magnetic Resonance Analyser $I_{TM}$ (자기공명분석기를 이용한 통증관리)

  • Park, Wook;Jin, Hee-Cheol;Cho, Myun-Hyun;Yoon, Suk-Jun;Lee, Jin-Seung;Lee, Jeong-Seok;Choi, Surk-Hwan;Kim, Sung-Yell
    • The Korean Journal of Pain
    • /
    • v.6 no.2
    • /
    • pp.192-198
    • /
    • 1993
  • In 1984, a magnetic resonance spectrometer (magnetic resonance analyser, MRA $I_{TM}$) was developed by Sigrid Lipsett and Ronald J. Weinstock in the USA. Biomedical applications of the spectrometer have been examined by Dr. Hoang Van Duc (pathologist, USC) and by Nakamura et al. (Japan). From their theoretical views, the biophysical functions of this machine are to analyse and synthesize a healthy tissue and organ resonance pattern, and to detect and correct an abnormal tissue and organ resonance pattern. All of the above functions are based on quantum physics. The healthy tissue and organ resonance patterns are predetermined as standard magnetic resonance patterns by digitizing values based on peak resonance emissions (response levels, or high-pitched echo-sounds amplified via the human body). In clinical practice, a counter or neutralizing resonance pattern calculated by the spectrometer can correct a phase-shifted resonance pattern (response levels, or low-pitched echo-sounds) of a diseased tissue and organ. By administering the counter resonance pattern into the site of pain and the trigger point, it is possible to readjust the phase-shifted resonance pattern and thereby alleviate pain through regulation of the neurotransmitter function of the nervous system. To assess the clinical effectiveness of pain relief with MRA $I_{TM}$, this study was designed to estimate pain intensity by the patient's subjective verbal rating scale (VRS, graded as no pain, mild, moderate, and severe) before its application, to evaluate the amount of pain relief after applying the spectrometer by the patient's subjective pain relief scale (visual analogue scale, VAS, 0~100%), and then to observe the continuation of pain relief following its application in the management of acute and chronic pain in 102 patients during an 8-month period beginning in March 1993. The application time of the spectrometer ranged from 15 to 30 minutes daily in each patient, at or near the site of pain and the trigger point, when the patient wanted to be treated. The subjects consisted of 54 males and 48 females, with an age distribution of 23~40 years in 29 cases, 41~60 years in 48 cases, and 61~76 years in 25 cases (Table 1). The kinds of diagnosis and the main site of pain, the duration of pain before the application, and the frequency of application are recorded in Tables 2, 3, and 4. The distinction between acute and chronic pain was defined by whether the pain had lasted less or more than 3 months. The results of application of the spectrometer were as follows. In the 51 cases of acute pain, the pain intensities before application were rated mild in 10 cases, moderate in 15 cases, and severe in 26 cases. The amounts of pain relief were 30~50% in 9 cases, 51~70% in 13 cases, and 71~95% in 29 cases. The continuation of pain relief was 6~24 hours in two cases, 2~5 days in 10 cases, 6~14 days in 4 cases, and 15 days in one case, and 34 cases were completely relieved of pain (Tables 5~7). In the 51 cases of chronic pain, the pain intensities before application were rated mild in 12 cases, moderate in 18 cases, and severe in 21 cases. The amounts of pain relief were 0~50% in 10 cases, 51~70% in 27 cases, and 71~90% in 14 cases. The treatment had no effect in two cases. The effective duration was 6~12 hours in two cases, 2~5 days in 11 cases, 6~14 days in 14 cases, and 15~60 days in 9 cases, and 13 patients were completely relieved of pain (Tables 5~7). There were no complications except mild reddening and a tingling sensation of the skin while the spectrometer was applied. The total amounts of pain relief in all of the subjects were rated poor or fair in 19 (18.6%) cases, good in 40 (39.2%) cases, and excellent in 43 (42.2%) cases. The clinical effectiveness of MRA $I_{TM}$ showed variable distributions from no improvement to complete relief of pain by the patients' assessment. In conclusion, we suggest that MRA $I_{TM}$ may be successful in immediate and continued pain relief, but it still requires several treatments for continued relief and may be gradually effective when applied repeatedly.

Non-astronomical Tides and Monthly Mean Sea Level Variations due to Differing Hydrographic Conditions and Atmospheric Pressure along the Korean Coast from 1999 to 2017 (한국 연안에서 1999년부터 2017년까지 해수물성과 대기압 변화에 따른 계절 비천문조와 월평균 해수면 변화)

  • BYUN, DO-SEONG;CHOI, BYOUNG-JU;KIM, HYOWON
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.26 no.1
    • /
    • pp.11-36
    • /
    • 2021
  • The solar annual (Sa) and semiannual (Ssa) tides account for much of the non-uniform annual and seasonal variability observed in sea levels. These non-equilibrium tides depend on atmospheric variations, forced by changes in the Sun's distance and declination, as well as on hydrographic conditions. Here we employ tidal harmonic analyses to calculate Sa and Ssa harmonic constants for 21 Korean coastal tidal stations (TS) operated by the Korea Hydrographic and Oceanographic Agency. We used 19-year-long (1999 to 2017), 1-hr-interval sea level records from each site and two conventional harmonic analysis (HA) programs (Task2K and UTide). The stability of the Sa harmonic constants was estimated with respect to the starting date and record length of the data, and we examined the spatial distribution of the calculated Sa and Ssa harmonic constants. HA was performed on Incheon TS (ITS) records using 369-day subsets; the first start date was January 1, 1999, each subsequent subset started 24 hours later, and so on, up to a final start date of December 27, 2017. Variations in the Sa constants produced by the two HA packages had similar magnitudes and start-date sensitivity. Results from the two HA packages showed a large difference in phase lag (about 78°) but a relatively small amplitude difference (<1 cm). The phase lag difference occurred largely because Task2K excludes the perihelion astronomical variable. Sensitivity of the ITS Sa constants to data record length (i.e., 1, 2, 3, 5, 9, and 19 years) was also tested to determine the data length needed to yield stable Sa results. The HA results revealed that 5- to 9-year sea level records can estimate Sa harmonic constants with relatively small error, while the best results are produced using 19-year-long records. As noted earlier, Sa amplitudes vary with regional hydrographic and atmospheric conditions. Sa amplitudes at the twenty-one TS ranged from 15.0 to 18.6 cm, 10.7 to 17.5 cm, and 10.5 to 13.0 cm along the west coast, the south coast including Jejudo, and the east coast including Ulleungdo, respectively. Except at Ulleungdo, the Ssa constituent was found to contribute to asymmetric seasonal sea level variation, delaying the highest and hastening the lowest sea levels. Comparisons between monthly mean, air-pressure-adjusted, and steric sea level variations revealed that year-to-year and asymmetric seasonal variations in sea levels were largely produced by steric sea level variation and the inverted barometer effect.
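The harmonic analysis step described above (estimating Sa and Ssa amplitudes and phase lags from long sea level records) can be illustrated with an ordinary least-squares fit. The sketch below uses a synthetic hourly record and simplified frequencies rather than the Task2K/UTide packages and tide-gauge data actually used in the paper.

```python
import numpy as np

hours_per_year = 8766.0                      # approximate length of a year, hours
t = np.arange(0, 19 * 8766)                  # hourly time stamps over ~19 years
eta = (0.16 * np.cos(2 * np.pi * t / hours_per_year - 1.0)       # Sa, ~16 cm
       + 0.04 * np.cos(4 * np.pi * t / hours_per_year - 0.5)     # Ssa, ~4 cm
       + 0.01 * np.random.default_rng(0).standard_normal(t.size))  # noise (m)

# Design matrix: mean level plus cos/sin pairs at the Sa and Ssa frequencies.
w_sa = 2 * np.pi / hours_per_year
A = np.column_stack([np.ones_like(t),
                     np.cos(w_sa * t), np.sin(w_sa * t),
                     np.cos(2 * w_sa * t), np.sin(2 * w_sa * t)])
coef, *_ = np.linalg.lstsq(A, eta, rcond=None)

def amp_phase(c_cos, c_sin):
    """Convert cos/sin coefficients into amplitude and phase lag (degrees)."""
    return np.hypot(c_cos, c_sin), np.degrees(np.arctan2(c_sin, c_cos)) % 360

print("Sa :", amp_phase(coef[1], coef[2]))
print("Ssa:", amp_phase(coef[3], coef[4]))
```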

Export Prediction Using Separated Learning Method and Recommendation of Potential Export Countries (분리학습 모델을 이용한 수출액 예측 및 수출 유망국가 추천)

  • Jang, Yeongjin;Won, Jongkwan;Lee, Chaerok
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.69-88
    • /
    • 2022
  • One of the characteristics of South Korea's economic structure is that it is highly dependent on exports, so many businesses are closely tied to the global economy and the diplomatic situation. In addition, small and medium-sized enterprises (SMEs) specialized in exporting have been struggling due to the spread of COVID-19. Therefore, this study aimed to develop a model to forecast next year's exports to support SMEs' export strategy and decision making. This study also proposed a strategy to recommend promising export countries for each item based on the forecasting model. We analyzed important variables used in previous studies, such as country-specific, item-specific, and macro-economic variables, and collected them to train our prediction model. Next, through exploratory data analysis (EDA), it was found that exports, the target variable, have a highly skewed distribution. To deal with this issue and improve predictive performance, we suggest a separated learning method, in which the whole dataset is divided into homogeneous subgroups and a prediction algorithm is applied to each group. In this way, the characteristics of each group can be trained more precisely using different input variables and algorithms. In this study, we divided the dataset into five subgroups based on export volume to decrease the skewness of the target variable. After the separation, we found that each group has different characteristics in countries and goods. For example, in Group 1, most of the exporting countries are developing countries and the majority of exported goods are low-value products such as glass and prints. On the other hand, South Korea's major export destinations such as China, the USA, and Vietnam are included in Group 4 and Group 5, and most exported goods in these groups are high-value products. We then used LightGBM (LGBM) and the Exponential Moving Average (EMA) for prediction. Considering the characteristics of each group, models were built using LGBM for Groups 1 to 4 and EMA for Group 5. To evaluate the performance of the model, we compared different model structures and algorithms; the separated learning model showed the best performance. After the model was built, we also provided the variable importance of each group using SHAP values to add explainability to our model. Based on the prediction model, we proposed a two-stage recommendation strategy for potential export countries. In the first phase, a BCG matrix was used to find Star and Question Mark markets that are expected to grow rapidly. In the second phase, we calculated scores for each country and made recommendations according to ranking. Using this recommendation framework, potential export countries were selected and information about those countries for each item was presented. There are several implications of this study. First of all, most of the preceding studies have focused on a specific situation or country, whereas this study uses various variables and develops a machine learning model for a wide range of countries and items. Second, to our knowledge, it is the first attempt to adopt a separated learning method for export prediction. By separating the dataset into five homogeneous subgroups, we could enhance the predictive performance of the model, and a more detailed explanation of the models by group is provided using SHAP values. Lastly, this study has several practical implications. There are platforms that provide trade information, including KOTRA, but most of them are based on past data, so it is not easy for companies to predict future trends. By utilizing the model and recommendation strategy in this research, trade-related services on each platform can be improved so that companies, including SMEs, can fully utilize them when making export strategies and decisions.
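A minimal sketch of the separated learning setup described above, assuming synthetic data, quintile-based splits, and arbitrary hyperparameters (none of which are taken from the paper):

```python
import numpy as np
import pandas as pd
import lightgbm as lgb

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "gdp_growth": rng.normal(2.0, 1.0, 1000),
    "tariff": rng.uniform(0.0, 0.2, 1000),
    "past_exports": rng.lognormal(10.0, 2.0, 1000),
})
# Heavily skewed target, mimicking the skewness noted in the abstract.
df["exports"] = df["past_exports"] * (1 + 0.1 * df["gdp_growth"]) \
                * rng.lognormal(0.0, 0.3, 1000)

# Split into five homogeneous subgroups by export volume (quintiles).
df["group"] = pd.qcut(df["exports"], q=5, labels=False)

models = {}
for g, part in df.groupby("group"):
    X, y = part[["gdp_growth", "tariff", "past_exports"]], part["exports"]
    if g < 4:
        # Groups 1-4: gradient-boosted trees on country/item/macro features.
        models[g] = lgb.LGBMRegressor(n_estimators=200, learning_rate=0.05)
        models[g].fit(X, y)
    else:
        # Group 5 (largest exporters): EMA over the group's export series
        # (a stand-in here for a per-country time series).
        models[g] = part["exports"].ewm(span=3).mean().iloc[-1]
```

Training a separate model per subgroup lets each model see a narrower, more homogeneous target range, which is the core of the reported performance gain.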

A study of relationship between stomach cancer and selenoproteins in Korean human blood serum (한국인 혈청에서의 셀레노 단백질과 위암과의 상관관계 연구)

  • Park, Myungsun;Pak, Yong-Nam
    • Analytical Science and Technology
    • /
    • v.28 no.6
    • /
    • pp.417-424
    • /
    • 2015
  • In this study, the relationship between selenoprotein concentrations in blood and stomach cancer was investigated for Koreans. The concentration of each selenoprotein in blood serum was analyzed, and the correlation between those concentrations and stomach cancer was studied to assess the potential of selenium as a biomarker. For the concentration determination, a simple calibration curve method monitoring m/z 78 was used without solid phase extraction (SPE). This is much simpler than the method using SPE with post-column isotope dilution. The result obtained from the analysis of CRM BCR-637, 72.20±3.35 ng·g⁻¹, was close to the reference value (81±7 ng·g⁻¹). The total Se concentration for the control group (cardiovascular patients) was 105.70±21.20 ng·g⁻¹, the same level as reported earlier for healthy persons. The concentrations of the selenoproteins GPx, SelP, and SeAlb were 26.12±7.84, 65.15±14.50, and 14.43±6.99 ng·g⁻¹, respectively, and their distribution was 24.7%, 61.6%, and 13.7%, similar to that of healthy persons. For the stomach cancer patients, the total Se concentration was 76.11±28.12 ng·g⁻¹, and the concentrations of GPx, SelP, and SeAlb were 15.41±9.01, 50.83±17.91, and 9.87±5.21 ng·g⁻¹, respectively. Both the total and the individual selenoprotein concentrations showed a significant decrease for the stomach cancer patients: 41.0% for GPx, 22.0% for SelP, and 31.6% for SeAlb. However, the distribution among the selenoproteins was not much different. Either total selenium or each individual selenoprotein could therefore be used as a possible index for the diagnosis of cancer. In the age-group analysis, however, the younger age group (30s-40s) did not show much difference.
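The external calibration described above (monitoring m/z 78 without SPE) amounts to a simple linear calibration curve. A minimal sketch with made-up standard concentrations and intensities (not the paper's data) is:

```python
import numpy as np

std_conc = np.array([0.0, 25.0, 50.0, 100.0, 200.0])          # ng/g Se standards
std_signal = np.array([120.0, 5.1e3, 1.02e4, 2.05e4, 4.1e4])  # counts at m/z 78

# Fit signal = slope * concentration + intercept.
slope, intercept = np.polyfit(std_conc, std_signal, deg=1)

def se_concentration(sample_signal):
    """Back-calculate the Se concentration (ng/g) from an m/z 78 intensity."""
    return (sample_signal - intercept) / slope

print(se_concentration(1.5e4))   # e.g., a serum digest reading of roughly 73 ng/g
```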