• Title/Summary/Keyword: Complex Parameter


About Short-stacking Effect of Illite-smectite Mixed Layers (일라이트-스멕타이트 혼합층광물의 단범위적층효과에 대한 고찰)

  • Kang, Il-Mo
    • Economic and Environmental Geology
    • /
    • v.45 no.2
    • /
    • pp.71-78
    • /
    • 2012
  • Illite-smectite mixed layers (I-S) occurring authigenically in diagenetic and hydrothermal environments react toward more illite-rich phases as temperature and potassium-ion concentration increase. For that reason, I-S is often used as a geothermometer and/or geochronometer in hydrocarbon and ore-mineral exploration. Generally, I-S shows X-ray powder diffraction (XRD) patterns of ultra-thin lamellar structures, which consist of a restricted number of silicate layers (normally 5~15) stacked parallel to the a-b plane. This ultra-thinness is known to decrease the I-S expandability (%S) below the theoretically expected value (the short-stacking effect). We attempt here to quantify the short-stacking effect of I-S using the difference between two types of expandability: the maximum expandability ($\%S_{Max}$) of infinite stacks of fundamental particles (the physically inseparable smallest units), and the expandability of finite particle stacks normally measured by X-ray powder diffraction ($\%S_{XRD}$). Eleven I-S samples from the Geumseongsan volcanic complex, Uiseong, Gyeongbuk, were analyzed to measure $\%S_{XRD}$ and the average coherent scattering thickness (CST) after size separation under 1 ${\mu}m$. The average fundamental particle thickness ($N_f$) and $\%S_{Max}$ were determined from $\%S_{XRD}$ and CST using inter-parameter relationships of I-S layer structures. The discrepancy between $\%S_{Max}$ and $\%S_{XRD}$ ($\Delta\%S$) suggests that the maximum short-stacking effect occurs at approximately 20 $\%S_{XRD}$, a point which represents I-S layer structures consisting of, on average, ca. 3-layered fundamental particles ($N_f \approx 3$). By inferring the $\%S_{XRD}$ range of each Reichweite using the $\%S_{XRD}$ vs. $N_f$ diagram of Kang et al. (2002), we confirm that fundamental particle thickness is a determinant factor for I-S Reichweite, and also that the short-stacking effect shifts the $\%S_{XRD}$ range of each Reichweite toward smaller values than those theoretically predicted using junction probability.
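
The key quantity in this abstract can be stated compactly. A minimal formulation using only the symbols defined above; the numerical values simply restate the abstract's reported result:

```latex
% Short-stacking effect expressed as an expandability deficit
\Delta\%S = \%S_{Max} - \%S_{XRD},
\qquad \text{maximum reported near } \%S_{XRD} \approx 20 \text{ with } N_f \approx 3 .
```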

Characteristics of Vertical Ozone Distributions in the Pohang Area, Korea (포항지역 오존의 수직분포 특성)

  • Kim, Ji-Young;Youn, Yong-Hoon;Song, Ki-Bum;Kim, Ki-Hyun
    • Journal of the Korean earth science society
    • /
    • v.21 no.3
    • /
    • pp.287-301
    • /
    • 2000
  • In order to investigate the factors and processes affecting the vertical distribution of ozone, we analyzed ozone profile data measured with ozonesondes from 1995 to 1997 at Pohang, Korea. In the course of our study, we analyzed the temporal and spatial distribution characteristics of ozone at four different heights: the surface (100 m), the troposphere (10 km), the lower stratosphere (20 km), and the middle stratosphere (30 km). Despite its proximity to a local but major industrial complex, Pohang Iron and Steel Co. (POSCO), the concentrations of surface ozone in the study area were comparable to those typically observed in rural and/or unpolluted areas. In addition, the relative enhancement of ozone at this height, especially between spring and summer, may be accounted for by the prevalence of photochemical reactions during that period of the year. The temporal distribution patterns at the 10 and 20 km heights were quite similar despite the large difference in altitude, both showing spring maxima and summer minima. Explanations for these phenomena may be sought in the mixed effects of various processes, including ozone transport across the two heights, photochemical reactions, and the formation of inversion layers. However, the temporal distribution pattern for the middle stratosphere (30 km) was rather comparable to that of the surface. We also evaluated the total ozone concentration of the study area using a Brewer spectrophotometer. The total ozone data were compared with those derived by combining the data representing the stratospheric layers via the Umkehr method. The results of correlation analysis showed that total ozone is negatively correlated with cloud cover but not with such parameters as UV-B. Based on our study, we conclude that the areal characteristics of Pohang, which represents a typical coastal area, may be quite important in explaining the distribution patterns of ozone not only at the surface but also in the upper atmosphere.


A Study on groundwater and pollutant recharge in urban area: use of hydrochemical data

  • Lee, Ju-Hee;Kwon, Jang-Soon;Yun, Seong-Taek;Chae, Gi-Tak;Park, Seong-Sook
    • Proceedings of the Korean Society of Soil and Groundwater Environment Conference
    • /
    • 2004.09a
    • /
    • pp.119-120
    • /
    • 2004
  • Urban groundwater has a unique hydrologic system because of complex surface and subsurface infrastructures such as the deep foundations of high buildings, subway systems, sewers, and public water supply systems. It has generally been considered that increased surface impermeability reduces the amount of groundwater recharge. On the other hand, leaks from sewers and public water supply systems may generate large amounts of recharge. All of these urban facilities may also change the groundwater quality through the recharge of a myriad of contaminants. This study was performed to determine the factors controlling the recharge of deep groundwater in an urban area, based on hydrogeochemical characteristics. The term 'contamination' in this study means any kind of inflow of shallow groundwater, regardless of whether that water is clean or contaminated. For this study, urban groundwater samples were collected from a total of 310 preexisting wells with depths over 100 m, selected by random sampling. Major cations together with Si, Al, Fe, Pb, Hg and Mn were analyzed by ICP-AES, and Cl, $NO_3$, $NH_4$, F, Br, $SO_4$ and $PO_4$ were analyzed by IC. Two groups of groundwater were identified on the basis of hydrochemical characteristics. The first group is distributed broadly from the Ca-$HCO_3$ type to the Ca-Cl+$NO_3$ type; the other group is the Na+K-$HCO_3$ type. The latter group is considered to represent the baseline quality of deep groundwater in the study area. Using the major-ion data for the Na+K-$HCO_3$ type water, we evaluated the extent of groundwater contamination, assuming that if we subtract the baseline composition from the data acquired for a specific water, the remaining concentrations indicate the degree of contamination. The remainder of each solute for each sample was simply averaged. The results showed that Ca and $HCO_3$ are the typical solutes that are quite enriched in urban groundwater. In particular, the $P_{CO_2}$ values calculated using PHREEQC (version 2.8) showed a correlation with the concentrations of major inorganic components (Na, Mg, Ca, $NO_3$, $SO_4$, etc.). The $P_{CO_2}$ values for the first group of waters ranged widely between about $10^{-3.0}$ atm and $10^{-1.0}$ atm and differed from those of the background water samples belonging to the Na+K-$HCO_3$ type (< $10^{-3.5}$ atm). Considering that the $P_{CO_2}$ of soil water is near $10^{-1.5}$ atm, this indicates that the inflow of shallow water is very significant in the deep groundwaters of the study area. Furthermore, the $P_{CO_2}$ values can be used as an effective parameter to estimate the relative recharge of shallow water and thus the contamination susceptibility. The results of our present study suggest that, down to considerable depth, urban groundwater in a crystalline aquifer may be considerably affected by the recharge of shallow water (and pollutants) from adjacent areas. We also suggest that, for such an evaluation, careful examination of systematically collected hydrochemical data is requisite as an effective tool, in addition to hydrologic and hydrogeologic interpretation.
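
The baseline-subtraction step described in this abstract is simple arithmetic. A minimal sketch under stated assumptions; the solute list, baseline values, and sample concentrations below are placeholders, not the study's data:

```python
# Hedged sketch: estimate "excess" solute concentrations by subtracting an
# assumed baseline (Na+K-HCO3 type) composition from each sample, then
# average the remainders, as described in the abstract. Values are placeholders.
baseline = {"Ca": 5.0, "HCO3": 60.0, "NO3": 1.0, "SO4": 8.0}   # mg/L (assumed)

samples = [
    {"Ca": 42.0, "HCO3": 150.0, "NO3": 12.0, "SO4": 25.0},
    {"Ca": 18.0, "HCO3": 95.0,  "NO3": 3.5,  "SO4": 11.0},
]

def excess_over_baseline(sample, baseline):
    """Per-solute remainder above baseline; negative remainders clipped to zero."""
    return {ion: max(sample[ion] - baseline[ion], 0.0) for ion in baseline}

for s in samples:
    remainder = excess_over_baseline(s, baseline)
    mean_excess = sum(remainder.values()) / len(remainder)
    print(remainder, f"mean excess = {mean_excess:.1f} mg/L")
```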


Development of a Dose Calibration Program for Various Dosimetry Protocols in High Energy Photon Beams (고 에너지 광자선의 표준측정법에 대한 선량 교정 프로그램 개발)

  • Shin Dong Oh;Park Sung Yong;Ji Young Hoon;Lee Chang Geon;Suh Tae Suk;Kwon Soo IL;Ahn Hee Kyung;Kang Jin Oh;Hong Seong Eon
    • Radiation Oncology Journal
    • /
    • v.20 no.4
    • /
    • pp.381-390
    • /
    • 2002
  • Purpose: To develop dose calibration programs for the IAEA TRS-277 and AAPM TG-21 protocols, based on the air kerma calibration factor (or the cavity-gas calibration factor), as well as for the IAEA TRS-398 and AAPM TG-51 protocols, based on the absorbed-dose-to-water calibration factor, so as to avoid the errors associated with these calculation procedures. Materials and Methods: Currently, the most widely used dosimetry protocols for high-energy photon beams are based on the air kerma calibration factor of the IAEA TRS-277 and the AAPM TG-21. However, this approach has a somewhat complex formalism and limited potential for improved accuracy due to uncertainties in the physical quantities. Recently, the IAEA and the AAPM published the IAEA TRS-398 and the AAPM TG-51, based on the absorbed-dose-to-water calibration factor. The formalism and physical parameters were strictly applied in these four dose calibration programs. The tables and graphs of physical data and the information on ion chambers were digitized for incorporation into a database. The programs were developed to be user friendly, written in Visual $C^{++}$ for ease of use in a Windows environment, according to the recommendations of each protocol. Results: The dose calibration programs for high-energy photon beams, developed for the four protocols, allow the input of information about the dosimetry system, the beam-quality characteristics, the measurement conditions, and the dosimetry results, minimizing inter-user variations and errors during the calculation procedure. It was also possible to compare the absorbed-dose-to-water results of the four different protocols at a single reference point. Conclusion: Since the program stores the physical parameter tables, graphs, and ion chamber information in numerical, database form, errors associated with the procedures and with different users can be avoided. It was possible to analyze and compare the major differences between the dosimetry protocols, since the program was designed to be user friendly and to accurately calculate the correction factors and absorbed dose. It is expected that users can make accurate dose calculations in high-energy photon beams by selecting and performing the appropriate dosimetry protocol.
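
As a concrete illustration of the absorbed-dose-to-water formalism shared by TRS-398 and TG-51, here is a sketch in the TG-51 style (not the authors' program; all numerical values are placeholders):

```python
# Minimal TG-51-style calculation: D_w = M * k_Q * N_D,w, where M is the
# electrometer reading corrected for recombination, temperature/pressure,
# electrometer calibration, and polarity. Numbers below are placeholders.

def corrected_reading(m_raw, p_ion, p_tp, p_elec, p_pol):
    """Fully corrected chamber reading (nC)."""
    return m_raw * p_ion * p_tp * p_elec * p_pol

def absorbed_dose_to_water(m_corr, k_q, n_dw):
    """Absorbed dose to water (Gy) at the reference depth."""
    return m_corr * k_q * n_dw

m = corrected_reading(m_raw=20.15, p_ion=1.003, p_tp=1.012, p_elec=1.000, p_pol=0.999)
d_w = absorbed_dose_to_water(m, k_q=0.992, n_dw=0.0536)   # N_D,w in Gy/nC (placeholder)
print(f"D_w = {d_w:.3f} Gy")
```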

The PRISM-based Rainfall Mapping at an Enhanced Grid Cell Resolution in Complex Terrain (복잡지형 고해상도 격자망에서의 PRISM 기반 강수추정법)

  • Chung, U-Ran;Yun, Kyung-Dahm;Cho, Kyung-Sook;Yi, Jae-Hyun;Yun, Jin-I.
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.11 no.2
    • /
    • pp.72-78
    • /
    • 2009
  • The demand for rainfall data in gridded digital formats has increased in recent years due to the close linkage between hydrological models and decision support systems using geographic information systems. One of the most widely used tools for digital rainfall mapping is PRISM (parameter-elevation regressions on independent slopes model), which uses point data (rain gauge stations), a digital elevation model (DEM), and other spatial datasets to generate repeatable estimates of monthly and annual precipitation. In PRISM, rain gauge stations are assigned weights that account for climatically important factors besides elevation, and aspect and topographic exposure are simulated by dividing the terrain into topographic facets. The facet size, or grid cell resolution, is determined by the density of rain gauge stations, and a $5{\times}5$ km grid cell is considered the lower limit under the station density available in Korea. The PRISM algorithms using a 270 m DEM for South Korea were implemented in a script language environment (Python), and relevant weights for each 270 m grid cell were derived from the monthly data of 432 official rain gauge stations. Weighted monthly precipitation data from at least 5 nearby stations for each grid cell were regressed against elevation, and the selected linear regression equations, together with the 270 m DEM, were used to generate a digital precipitation map of South Korea at 270 m resolution. Among the 1.25 million grid cells, precipitation estimates at 166 cells, where measurements were made by the Korea Water Corporation rain gauge network, were extracted and the monthly estimation errors were evaluated. An average 10% reduction in root mean square error (RMSE) was found for months with more than 100 mm of monthly precipitation, compared to the RMSE of the original 5 km PRISM estimates. This modified PRISM may be used for rainfall mapping in the rainy season (May to September) at a much higher spatial resolution than the original PRISM without losing accuracy.
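
The per-cell regression step described above can be sketched in a few lines. This is not the authors' implementation; the station elevations, precipitation values, and weights are placeholders:

```python
# Hedged sketch of one PRISM-style grid-cell estimate: weight nearby rain
# gauge stations, regress monthly precipitation on station elevation, and
# evaluate the fitted line at the cell's DEM elevation.
import numpy as np

station_elev   = np.array([ 45., 120., 260., 410., 530.])    # m (placeholder)
station_precip = np.array([182., 195., 221., 240., 263.])    # mm/month (placeholder)
weights        = np.array([0.9, 1.0, 0.8, 0.6, 0.4])          # distance/facet weights (assumed)

slope, intercept = np.polyfit(station_elev, station_precip, deg=1, w=weights)

cell_dem_elev = 335.0                        # m, elevation of the grid cell from the DEM
cell_precip = slope * cell_dem_elev + intercept
print(f"estimated monthly precipitation: {cell_precip:.1f} mm")
```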

A Sensitivity Analysis of JULES Land Surface Model for Two Major Ecosystems in Korea: Influence of Biophysical Parameters on the Simulation of Gross Primary Productivity and Ecosystem Respiration (한국의 두 주요 생태계에 대한 JULES 지면 모형의 민감도 분석: 일차생산량과 생태계 호흡의 모사에 미치는 생물리모수의 영향)

  • Jang, Ji-Hyeon;Hong, Jin-Kyu;Byun, Young-Hwa;Kwon, Hyo-Jung;Chae, Nam-Yi;Lim, Jong-Hwan;Kim, Joon
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.12 no.2
    • /
    • pp.107-121
    • /
    • 2010
  • We conducted a sensitivity test of the Joint UK Land Environment Simulator (JULES), in which the influence of biophysical parameters on the simulation of gross primary productivity (GPP) and ecosystem respiration (RE) was investigated for two typical ecosystems in Korea. For this test, we employed the whole-year eddy-covariance flux observations measured in 2006 at two KoFlux sites: (1) a deciduous forest in complex terrain in Gwangneung and (2) farmland with heterogeneous mosaic patches in Haenam. Our analysis showed that the simulated GPP was most sensitive to the maximum rate of RuBP carboxylation and to leaf nitrogen concentration for both ecosystems. RE was sensitive to the wood biomass parameter for the deciduous forest in Gwangneung. For the mixed farmland in Haenam, however, RE was most sensitive to the maximum rate of RuBP carboxylation and leaf nitrogen concentration, like the simulated GPP. For both sites, the JULES model overestimated both GPP and RE when the default values of the input parameters were adopted. Considering that the leaf nitrogen concentration observed at the deciduous forest site was only about 60% of its default value, a significant portion of the model's overestimation can be attributed to such discrepancies in the input parameters. Our findings demonstrate that the abovementioned key biophysical parameters of the two ecosystems should be evaluated carefully prior to any simulation and interpretation of ecosystem carbon exchange in Korea.
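
A one-at-a-time perturbation loop of the kind used in such sensitivity tests can be sketched as follows. The surrogate model and parameter names are illustrative assumptions, not JULES itself:

```python
# Hedged sketch of a one-at-a-time sensitivity sweep: perturb each biophysical
# parameter by +/-20 % and record the relative change in simulated GPP.
def run_model(params):
    # Placeholder surrogate for a JULES run returning annual GPP (hypothetical).
    return 0.8 * params["vcmax_25"] + 25.0 * params["leaf_n"] + 0.1 * params["wood_biomass"]

defaults = {"vcmax_25": 60.0, "leaf_n": 2.0, "wood_biomass": 8.0}
base = run_model(defaults)

for name in defaults:
    for factor in (0.8, 1.2):
        perturbed = dict(defaults, **{name: defaults[name] * factor})
        change = 100 * (run_model(perturbed) - base) / base
        print(f"{name} x{factor}: GPP change {change:+.1f} %")
```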

Evaluating Global Container Ports' Performance Considering the Port Calls' Attractiveness (기항 매력도를 고려한 세계 컨테이너 항만의 성과 평가)

  • Park, Byungin
    • Journal of Korea Port Economic Association
    • /
    • v.38 no.3
    • /
    • pp.105-131
    • /
    • 2022
  • Even after its improvement in 2019, UNCTAD's Liner Shipping Connectivity Index (LSCI), which evaluates the performance of the global container port market, has limited use. In particular, since the LSCI evaluates performance based only on the distance of relationships, a performance index that also incorporates the attractiveness of port calls would be more useful. This study applied the modified Huff model, the hub-authority algorithm and eigenvector centrality from social network analysis, and correlation analysis to the 2007, 2017, and 2019 data of Ocean-Commerce, Japan. The findings are as follows. Firstly, the attractiveness of port calls and the overall performance of a port did not always match; according to the attractiveness analysis, Busan remained within the top 10, while the attractiveness of the other Korean ports improved only slowly from a low level during the study period. Secondly, global container ports are generally specialized over the long term as inbound or outbound ports by route and grow while maintaining that specialization throughout the entire period, whereas the Korean ports kept changing roles from period to period. Lastly, the cargo volume by period and the extended port connectivity index (EPCI) presented in this study showed correlations from 0.77 to 0.85; even though the Atlantic data were excluded from the analysis and ships' operable capacity was used instead of port throughput volume, the correlation is high. These results should help in evaluating and analyzing global ports. According to the study, Korean ports need a long-term strategy to improve performance while maintaining their specialization. In order to maintain and develop a port's desirable role, it is necessary to make use of cooperation and partnerships with complementary ports and to attract shipping companies' services calling at those ports. Although this study carried out a complex analysis using a large amount of data and several methodologies over an extended period, further studies covering ports around the world, a long-term panel analysis, and a rigorous parameter estimation study for the attractiveness analysis are still needed.
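
The network measures named above (hub-authority scores and eigenvector centrality) can be computed with standard tools. A toy sketch on an illustrative directed graph of liner-service calls, not the Ocean-Commerce data:

```python
# Hedged sketch: hub/authority scores and eigenvector centrality on a small
# directed port-call graph. Edges are illustrative placeholders.
import networkx as nx

calls = [("Busan", "Shanghai"), ("Shanghai", "Singapore"),
         ("Singapore", "Busan"), ("Busan", "Los Angeles"),
         ("Shanghai", "Los Angeles")]
G = nx.DiGraph(calls)

hubs, authorities = nx.hits(G, max_iter=1000)
eig = nx.eigenvector_centrality(G.to_undirected(), max_iter=1000)

for port in G.nodes:
    print(f"{port:12s} hub={hubs[port]:.3f} auth={authorities[port]:.3f} eig={eig[port]:.3f}")
```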

The Impact of Market Environments on Optimal Channel Strategy Involving an Internet Channel: A Game Theoretic Approach (시장 환경이 인터넷 경로를 포함한 다중 경로 관리에 미치는 영향에 관한 연구: 게임 이론적 접근방법)

  • Yoo, Weon-Sang
    • Journal of Distribution Research
    • /
    • v.16 no.2
    • /
    • pp.119-138
    • /
    • 2011
  • Internet commerce has been growing at a rapid pace for the last decade. Many firms try to reach wider consumer markets by adding the Internet channel to their existing traditional channels. Despite the various benefits of the Internet channel, a significant number of firms have failed in managing this new type of channel. Previous studies could not clearly explain these conflicting results associated with the Internet channel. One of the major reasons is that most previous studies conducted their analyses under a specific market condition and presented the outcome as the general impact of Internet channel introduction; their results are therefore strongly influenced by the specific market settings. However, firms face various market conditions in the real world. The purpose of this study is to investigate the impact of various market environments on a firm's optimal channel strategy by employing a flexible game theory model. We capture various market conditions with consumer density and the disutility of using the Internet.

    The channel structures analyzed in this study are as follows. Before the Internet channel is introduced, a monopoly manufacturer sells its products through an independent physical store. From this structure, the manufacturer could introduce its own Internet channel (MI). The independent physical store could also introduce its own Internet channel and coordinate it with the existing physical store (RI). Alternatively, an independent Internet retailer such as Amazon could enter the market (II); in this case, two types of independent retailers compete with each other. In the model, consumers are uniformly distributed on a two-dimensional space. Consumer heterogeneity is captured by a consumer's geographical location ($c_i$) and his disutility of using the Internet channel (${\delta}_{N_i}$).
    Various market conditions are captured by these two consumer heterogeneities. Case (a) illustrates a market with symmetric consumer distributions. The model also captures explicitly the asymmetric distributions of consumer disutility in a market. In a market like the one represented in case (c), the average consumer disutility of using an Internet store is relatively smaller than that of using a physical store. For example, this case represents a market in which 1) the product is suitable for Internet transactions (e.g., books), or 2) the level of E-Commerce readiness is high, such as in Denmark or Finland. On the other hand, in a market like case (b), the average consumer disutility of using an Internet store is relatively greater than that of using a physical store; countries like Ukraine and Bulgaria, or the market for "experience goods" such as shoes, could be examples of this market condition. The scenarios of consumer distributions analyzed in this study are as follows: the range of disutility of using the Internet (${\delta}_{N_i}$) is held constant, while the range of the consumer distribution (${\chi}_i$) varies from -25 to 25, from -50 to 50, from -100 to 100, from -150 to 150, and from -200 to 200.
    The analysis results can be summarized as follows. As the average travel cost in a market decreases while the average disutility of Internet use remains the same, the average retail price, total quantity sold, physical store profit, monopoly manufacturer profit, and thus total channel profit increase. On the other hand, the quantity sold through the Internet and the profit of the Internet store decrease with decreasing average travel cost relative to the average disutility of Internet use. We find that the channel that has an advantage over the other kind of channel serves a larger portion of the market. In a market with a high average travel cost, in which the Internet store has a relative advantage over the physical store, for example, the Internet store becomes a mass retailer serving a larger portion of the market. This result implies that the Internet becomes a more significant distribution channel in markets characterized by greater geographical dispersion of buyers, or as consumers become more proficient in Internet usage. The results also indicate that the degree of price discrimination varies depending on the distribution of consumer disutility in a market. The manufacturer in a market in which the average travel cost is higher than the average disutility of using the Internet has a stronger incentive for price discrimination than the manufacturer in a market where the average travel cost is relatively lower. We also find that the manufacturer has a stronger incentive to maintain a high price level when the average travel cost in a market is relatively low. Additionally, the retail competition effect due to Internet channel introduction strengthens as the average travel cost in a market decreases. This result indicates that a manufacturer's channel power relative to that of the independent physical retailer becomes stronger with decreasing average travel cost. This implication is counter-intuitive, because it is widely believed that the negative impact of Internet channel introduction on a competing physical retailer is more significant in a market like Russia, where consumers are more geographically dispersed, than in a market like Hong Kong, which has a condensed geographic distribution of consumers.
    When managers consider the overall impact of the Internet channel, however, they should consider not only channel power but also sales volume. When both are considered, the introduction of the Internet channel is revealed as more harmful to a physical retailer in Russia than to one in Hong Kong, because the sales-volume decrease for a physical store due to Internet channel competition is much greater in Russia than in Hong Kong. The results show that the manufacturer is always better off with any type of Internet store introduction. The independent physical store benefits from opening its own Internet store when the average travel cost is high relative to the disutility of using the Internet. Under the opposite market condition, however, the independent physical retailer can be worse off when it opens its own Internet outlet and coordinates both outlets (RI). This is because the low average travel cost significantly reduces the channel power of the independent physical retailer, further aggravating the already weak channel power caused by myopic inter-channel price coordination. The results imply that channel members and policy makers should explicitly consider the factors determining the relative distributions of both kinds of consumer disutility when they make a channel decision involving an Internet channel. These factors include the suitability of a product for Internet shopping, the level of E-Commerce readiness of a market, and the degree of geographic dispersion of consumers in a market. Despite the academic contributions and managerial implications, this study is limited in the following ways. First, a series of numerical analyses was conducted to derive equilibrium solutions due to the complex forms of the demand functions. In the process, we set V=100, ${\lambda}$=1, and ${\beta}$=0.01; future research may change this parameter set to check the generalizability of this study. Second, five different scenarios for market conditions were analyzed; future research could try different sets of parameter ranges. Finally, the model setting allows only one monopoly manufacturer in the market. Accommodating multiple competing manufacturers (brands) would generate more realistic results.
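
A stylized numerical illustration of the consumer heterogeneity described above (not the paper's actual demand model; the utility forms and every parameter value here are assumptions for illustration only):

```python
# Consumers differ in geographic location and in disutility of using the
# Internet; each chooses the channel giving the higher non-negative net utility.
import numpy as np

V, travel_cost = 100.0, 1.0          # reservation utility, per-unit travel cost (assumed)
p_store, p_net = 60.0, 58.0          # physical-store and Internet prices (assumed)

locations  = np.linspace(-50, 50, 201)     # consumer positions relative to the store
disutility = np.linspace(0, 40, 101)       # disutility of using the Internet
x, d = np.meshgrid(locations, disutility)

u_store = V - p_store - travel_cost * np.abs(x)
u_net   = V - p_net   - d

choice = np.where((u_store < 0) & (u_net < 0), "none",
                  np.where(u_store >= u_net, "store", "internet"))
shares = {c: float(np.mean(choice == c)) for c in ("store", "internet", "none")}
print(shares)   # market shares under this hypothetical parameterization
```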

A Refined Method for Quantification of Myocardial Blood Flow using N-13 Ammonia and Dynamic PET (N-13 암모니아와 양전자방출단층촬영 동적영상을 이용하여 심근혈류량을 정량화하는 새로운 방법 개발에 관한 연구)

  • Kim, Joon-Young;Lee, Kyung-Han;Kim, Sang-Eun;Choe, Yearn-Seong;Ju, Hee-Kyung;Kim, Yong-Jin;Kim, Byung-Tae;Choi, Yong
    • The Korean Journal of Nuclear Medicine
    • /
    • v.31 no.1
    • /
    • pp.73-82
    • /
    • 1997
  • Regional myocardial blood flow (rMBF) can be noninvasively quantified using N-13 ammonia and dynamic positron emission tomography (PET). The quantitative accuracy of the rMBF values, however, is affected by the distortion of myocardial PET images caused by finite PET image resolution and cardiac motion. Although different methods have been developed to correct this distortion, typically classified as partial volume effect and spillover, those methods are too complex to employ in a routine clinical environment. We have developed a refined method incorporating a geometric model of the volume representation of a region of interest (ROI) into the two-compartment N-13 ammonia model. In the refined model, partial volume effect and spillover are conveniently corrected by an additional parameter in the mathematical model. To examine the accuracy of this approach, studies were performed in 9 coronary artery disease patients. Dynamic transaxial images (16 frames) were acquired with a GE $Advance^{TM}$ PET scanner simultaneously with intravenous injection of 20 mCi N-13 ammonia. rMBF was examined at rest and during pharmacologically (dipyridamole) induced coronary hyperemia. Three sectorial myocardial (septum, anterior wall, and lateral wall) and blood-pool time-activity curves were generated from manually drawn ROIs on the dynamic images. The accuracy of the rMBF values estimated by the refined method was examined by comparison with the values estimated using the conventional two-compartment model without partial volume effect correction. rMBF values obtained by the refined method linearly correlated with rMBF values obtained by the conventional method (108 myocardial segments, correlation coefficient (r)=0.88). Additionally, rMBF values underestimated by the conventional method due to the partial volume effect were corrected by the theoretically predicted amount in the refined method (slope (m)=1.57). Spillover fractions estimated by the two methods agreed well (r=1.00, m=0.98). In conclusion, accurate rMBF values can be efficiently quantified by the refined method, which incorporates myocardium geometric information into the two-compartment model, using N-13 ammonia and PET.
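
A schematic version of such a fit (not the authors' refined model) can be written as a two-compartment tissue curve plus a single parameter lumping partial-volume recovery and blood spillover; the input function and "measured" data below are synthetic placeholders:

```python
# Hedged sketch: fit K1, k2, and a spillover/partial-volume term to a ROI
# time-activity curve using a simple two-compartment N-13 ammonia model.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0, 4, 16)                      # minutes, 16 dynamic frames
c_blood = 100 * t * np.exp(-2.0 * t)           # synthetic arterial input (placeholder)

def roi_curve(t, K1, k2, fv):
    """ROI activity: (1-fv)*tissue + fv*blood, tissue curve by Euler integration."""
    dt = t[1] - t[0]
    c_tis = np.zeros_like(t)
    for i in range(1, len(t)):
        c_tis[i] = c_tis[i-1] + dt * (K1 * c_blood[i-1] - k2 * c_tis[i-1])
    return (1 - fv) * c_tis + fv * c_blood

rng = np.random.default_rng(0)
measured = roi_curve(t, 0.7, 0.2, 0.3) + rng.normal(0, 0.5, t.size)

(K1, k2, fv), _ = curve_fit(roi_curve, t, measured, p0=[0.5, 0.1, 0.2],
                            bounds=([0, 0, 0], [5, 5, 1]))
print(f"K1={K1:.2f} (proportional to rMBF), k2={k2:.2f}, spillover/PV term={fv:.2f}")
```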

