• Title/Summary/Keyword: Modeling Methods


Numerical Analysis of Coupled Thermo-Hydro-Mechanical (THM) Behavior at Korean Reference Disposal System (KRS) Using TOUGH2-MP/FLAC3D Simulator (TOUGH2-MP/FLAC3D를 이용한 한국형 기준 처분시스템에서의 열-수리-역학적 복합거동 특성 평가)

  • Lee, Changsoo;Cho, Won-Jin;Lee, Jaewon;Kim, Geon Young
    • Journal of Nuclear Fuel Cycle and Waste Technology(JNFCWT)
    • /
    • v.17 no.2
    • /
    • pp.183-202
    • /
    • 2019
  • For design and performance assessment of a high-level radioactive waste (HLW) disposal system, it is necessary to understand the characteristics of coupled thermo-hydro-mechanical (THM) behavior. However, previous studies of the Korean Reference HLW Disposal System (KRS) performed only thermal analysis to determine the spacing of disposal tunnels and deposition holes, without considering the coupled THM behavior. Therefore, in this study, TOUGH2-MP/FLAC3D is used to conduct THM modeling for performance assessment of the KRS. The peak temperature remains below the temperature limit of 100°C for the whole period. A rapid rise of temperature caused by decay heat occurs in the early years, and the temperature then begins to decrease as the decay heat from the waste decreases. The peak temperature in the bentonite buffer is around 96.2°C after about 3 years, and the peak temperature in the rock mass is 68.2°C after about 17 years. Saturation of the bentonite block near the canister decreases in the early stage because water evaporates as the temperature rises. Saturation of the bentonite buffer and backfill then increases because of water intake from the rock mass, and both are fully saturated after about 266 years. The stress is calculated to investigate the effect of thermal stress and swelling pressure on the mechanical behavior of the rock mass. The calculated stress is compared with a spalling criterion and the Mohr-Coulomb criterion to assess potential failure. The stress in the rock mass remains below the spalling strength and the Mohr-Coulomb criterion for the whole period. 
The methodology of using the TOUGH2-MP/FLAC3D simulator can be applied to predict the long-term behavior of the KRS under various conditions; these methods will be useful for the design and performance assessment of alternative concepts such as multi-layer and multi-canister concepts for geological spent fuel repositories.
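As a minimal illustration of the failure check described above, the Mohr-Coulomb criterion can be evaluated in principal-stress form; the cohesion, friction angle, and stress values below are illustrative placeholders, not values from the paper.

```python
import math

def mohr_coulomb_margin(sigma1, sigma3, cohesion, phi_deg):
    """Return sigma1 at failure minus the actual sigma1 (positive => stable).

    Mohr-Coulomb in principal-stress form:
        sigma1_f = sigma3 * N_phi + 2 * c * sqrt(N_phi),
        N_phi = tan^2(45 + phi/2).
    """
    n_phi = math.tan(math.radians(45.0 + phi_deg / 2.0)) ** 2
    sigma1_f = sigma3 * n_phi + 2.0 * cohesion * math.sqrt(n_phi)
    return sigma1_f - sigma1

# Illustrative granite-like parameters (MPa, degrees) -- not from the paper.
margin = mohr_comb = mohr_coulomb_margin(sigma1=45.0, sigma3=10.0,
                                         cohesion=15.0, phi_deg=35.0)
print(margin > 0)  # → True: this stress state lies below the failure envelope
```

A spalling check works the same way but compares the maximum tangential stress against a fraction of the uniaxial compressive strength.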

Predicting Forest Gross Primary Production Using Machine Learning Algorithms (머신러닝 기법의 산림 총일차생산성 예측 모델 비교)

  • Lee, Bora;Jang, Keunchang;Kim, Eunsook;Kang, Minseok;Chun, Jung-Hwa;Lim, Jong-Hwan
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.21 no.1
    • /
    • pp.29-41
    • /
    • 2019
  • Terrestrial Gross Primary Production (GPP) is the largest global carbon flux, and forest ecosystems are important because of their ability to store significantly more carbon than other terrestrial ecosystems. There have been several attempts to estimate GPP using mechanism-based models. However, mechanism-based models, which include biological, chemical, and physical processes, are limited by a lack of flexibility in predicting non-stationary ecological processes caused by local and global change. Instead, mechanism-free methods are strongly recommended for estimating nonlinear dynamics that occur in nature, such as GPP. Therefore, we used mechanism-free machine learning techniques to estimate daily GPP. In this study, support vector machine (SVM), random forest (RF), and artificial neural network (ANN) models were used and compared with a traditional multiple linear regression model (LM). MODIS products and meteorological parameters from eddy covariance data were employed to train the machine learning and LM models from 2006 to 2013. The GPP prediction models were compared with daily GPP from eddy covariance measurements in a deciduous forest in South Korea in 2014 and 2015. Statistical measures, including the correlation coefficient (R), root mean square error (RMSE), and mean squared error (MSE), were used to evaluate the performance of the models. In general, the models from machine-learning algorithms (R = 0.85 - 0.93, MSE = 1.00 - 2.05, p < 0.001) showed better performance than the linear regression model (R = 0.82 - 0.92, MSE = 1.24 - 2.45, p < 0.001). These results provide insight into the high predictability and the possibility of expansion through the use of mechanism-free machine-learning models and remote sensing for predicting non-stationary ecological processes such as seasonal GPP.
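The model comparison above rests on R, MSE, and RMSE; the sketch below shows how these scores are computed from paired daily GPP values. The numbers are made up for illustration, not tower data from the study.

```python
import math

def evaluate(obs, pred):
    """Correlation coefficient (R), MSE, and RMSE for model comparison."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    mse = sum((o - p) ** 2 for o, p in zip(obs, pred)) / n
    return {"R": cov / (so * sp), "MSE": mse, "RMSE": math.sqrt(mse)}

# Hypothetical daily GPP (g C m-2 d-1): tower observations vs. one model.
obs  = [1.2, 2.5, 4.1, 6.3, 7.8, 6.9, 4.4, 2.0]
pred = [1.0, 2.9, 3.8, 6.0, 8.1, 6.5, 4.9, 1.7]
print(evaluate(obs, pred))
```

Running the same function over each model's predictions gives directly comparable scores, which is all the ranking in the abstract requires.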

Comparison on Patterns of Conflicts in the South China Sea and the East China Sea through Analysis on Mechanism of Chinese Gray Zone Strategy (중국의 회색지대전략 메커니즘 분석을 통한 남중국해 및 동중국해 분쟁 양상 비교: 시계열 데이터에 근거한 경험적 연구를 중심으로)

  • Cho, Yongsu
    • Maritime Security
    • /
    • v.1 no.1
    • /
    • pp.273-310
    • /
    • 2020
  • This study empirically analyzes the overall mechanism of the "gray zone strategy", which China has used as one of its major maritime security strategies in conflicts surrounding the South China Sea and East China Sea since the early 2010s, and compares the resulting conflict patterns in those regions. To this end, I advanced two hypotheses about the Chinese gray zone strategy: first, "The maritime gray zone strategy used by China shows different structures of implementation in the South China Sea and the East China Sea, the major conflict areas"; second, "Therefore, the patterns of disputes in the South China Sea and the East China Sea also differ." To examine these, I classified the mechanisms of the Chinese gray zone strategy along four dimensions: 1) conflict trends and frequency of strategy execution, 2) types and strengths of the strategy, 3) actors executing the strategy, and 4) response methods of counterparts. I then collected data on these dimensions and built a quantitative model to test the hypotheses. About 10 years of data pertaining to this topic were processed, and a research model was designed with a new categorization and operational definition of gray zone strategies. On this basis, both hypotheses were tested by comparing the comprehensive mechanisms of the gray zone strategy used by China and the conflict patterns between the South China Sea and the East China Sea. The conclusion restates the verified results and emphasizes the need to overcome the security vulnerabilities in East Asia that could be caused by China's maritime gray zone strategy. 
This study, the first of its kind, is significant in that it clarifies through empirical case studies the intrinsic structure by which China's gray zone strategy has been implemented and investigates its correlation with maritime conflict patterns.


Structural relationship among justice of non-face-to-face exam, trust, and satisfaction with university (치위생(학)과 학생이 지각한 비대면 시험의 공정성, 시험 불안 및 학교 신뢰 간의 구조적 관계)

  • Hyeong-Mi Kim;Chang-Hee Kim;Jeong-Hee Kim
    • Journal of Korean Dental Hygiene Science
    • /
    • v.6 no.1
    • /
    • pp.37-50
    • /
    • 2023
  • Background: This study investigated the structural relationships among justice, test anxiety, and school reliability regarding the non-face-to-face tests of dental hygiene students. Methods: A survey was conducted with 267 dental hygiene students. The survey items included general characteristics, opinions on evaluation, the fairness of non-face-to-face tests (distributive, procedural, and interactional justice), school satisfaction, and school reliability. For statistical analysis, independent-sample t-tests, one-way ANOVA, and structural modeling analyses were performed. Results: Among the factors that directly affected distributive justice and reliability towards non-face-to-face tests, the higher the interactional justice (β=0.401, p<0.001) and distributive justice (β=0.232, p=0.002) levels, the higher the school satisfaction. The higher the school satisfaction (β=0.606, p<0.001) and procedural justice (β=0.299, p<0.001) levels, the higher the perceived reliability of the school. Factors that indirectly affected school reliability included interactional justice (β=0.243, p=0.010) and distributive justice (β=0.141, p=0.010). Interactional justice (β=0.592, p=0.010) and distributive justice (β=0.208, p=0.010) were the factors affecting school satisfaction. Moreover, the factors that influenced school reliability were distributive justice (β=0.56, p=0.010), interactional justice (β=0.332, p=0.010), procedural justice (β=0.229, p=0.010), and distributive justice (β=0.116, p=0.010). Conclusions: Students will trust and be satisfied with their schools when schools and professors sufficiently provide information on non-face-to-face tests and ensure proper procedures so that reasonable grades reward the time and effort exerted. Furthermore, this study provides a reference base for developing a variety of content for fair non-face-to-face tests, thereby allowing students to trust their schools.

MDP(Markov Decision Process) Model for Prediction of Survivor Behavior based on Topographic Information (지형정보 기반 조난자 행동예측을 위한 마코프 의사결정과정 모형)

  • Jinho Son;Suhwan Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.2
    • /
    • pp.101-114
    • /
    • 2023
  • In wartime, aircraft carrying out strike missions deep in enemy territory are exposed to the risk of being shot down. As a key combat force in modern warfare, military flight personnel who operate high-tech weapon systems take a great deal of time, effort, and national budget to train. Therefore, this study addresses the path problem of predicting an emergency escape route from enemy territory to a target point while avoiding obstacles, thereby increasing the possibility of safely recovering downed military flight personnel. Previous work has treated such problems as network-based problems, transforming them into TSP or VRP formulations or applying the Dijkstra algorithm, and approaching them with optimization techniques. However, a network formulation makes it difficult to reflect the dynamic factors and uncertainties of the battlefield environment that military flight personnel in distress will face. Therefore, an MDP, which is suitable for modeling dynamic environments, was applied. In addition, GIS was used to obtain topographic information data, and in designing the reward structure of the MDP, topographic information was reflected in more detail so that the model could be more realistic than in previous studies. In this study, a value iteration algorithm and a deterministic method were used to derive a path that allows military flight personnel in distress to move the shortest distance while making the most of topographical advantages. Realism was further added by incorporating actual topographic information and the obstacles that personnel may encounter during evasion and escape. Through this, it was possible to predict the route by which military flight personnel would evade and escape in an actual situation. The model presented in this study can be applied to various operational situations through redesign of the reward structure. 
In actual situations, decision support based on scientific techniques that reflect various factors will be possible when predicting the escape route of military flight personnel in distress and conducting combat search and rescue operations.
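The value-iteration step described above can be sketched on a toy terrain grid. The grid, step rewards, discount factor, and goal cell below are invented for illustration and are far simpler than the paper's GIS-based reward structure.

```python
def value_iteration(rewards, gamma=0.95, tol=1e-6):
    """Deterministic value iteration on a grid: 4-neighbour moves,
    reward = cell value of the destination; the goal cell absorbs."""
    rows, cols = len(rewards), len(rewards[0])
    goal = (rows - 1, cols - 1)
    V = [[0.0] * cols for _ in range(rows)]
    while True:
        delta = 0.0
        for r in range(rows):
            for c in range(cols):
                if (r, c) == goal:
                    continue
                best = max(
                    rewards[nr][nc] + gamma * V[nr][nc]
                    for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                    if 0 <= nr < rows and 0 <= nc < cols
                )
                delta = max(delta, abs(best - V[r][c]))
                V[r][c] = best
        if delta < tol:
            return V

# Toy terrain: negative step costs, a large positive reward at the goal.
terrain = [
    [-1, -1, -5],   # -5: obstacle-like high-cost cell
    [-1, -1, -1],
    [-5, -1, 10],   # 10: recovery point (goal)
]
V = value_iteration(terrain)
print(round(V[0][0], 2))  # value of the start cell
```

A greedy walk over the converged values then yields the predicted route, and changing the `terrain` rewards plays the role of the reward-structure redesign mentioned in the abstract.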

Modeling of Estimating Soil Moisture, Evapotranspiration and Yield of Chinese Cabbages from Meteorological Data at Different Growth Stages (기상자료(氣象資料)에 의(依)한 배추 생육시기별(生育時期別) 토양수분(土壤水分), 증발산량(蒸發散量) 및 수량(收量)의 추정모형(推定模型))

  • Im, Jeong-Nam;Yoo, Soon-Ho
    • Korean Journal of Soil Science and Fertilizer
    • /
    • v.21 no.4
    • /
    • pp.386-408
    • /
    • 1988
  • A study was conducted from 1981 to 1986 in Suweon, Korea, to develop a model for estimating the evapotranspiration and yield of Chinese cabbages from meteorological factors. Lysimeters with the water table maintained at 50 cm depth were used to measure the potential evapotranspiration and the maximum evapotranspiration in situ. The actual evapotranspiration and the yield were measured in field plots irrigated at different soil moisture regimes of -0.2, -0.5, and -1.0 bars, respectively. The soil water content throughout the profile was monitored by a neutron moisture depth gauge, and the soil water potentials were measured using gypsum blocks and tensiometers. The fresh weight of Chinese cabbages at harvest was measured as yield. The data collected in situ were analyzed to obtain parameters related to the modeling. The results were summarized as follows: 1. The 5-year mean of potential evapotranspiration (PET) gradually increased from 2.38 mm/day in early April to 3.98 mm/day in mid-June, and thereafter decreased to 1.06 mm/day in mid-November. The PET estimated by the Penman, Radiation, or Blaney-Criddle methods was overestimated in comparison with the measured PET, while that by the pan-evaporation method was underestimated. The correlation between the estimated and the measured PET, however, was highly significant except for July and August with the Blaney-Criddle method, which implied that the coefficients should be adjusted to Korean conditions. 2. The meteorological factors which showed high correlation with the measured PET were temperature, vapour pressure deficit, sunshine hours, solar radiation, and pan-evaporation. Several multiple regression equations using meteorological factors were formulated to estimate PET. The equation with pan-evaporation (Eo) was the simplest but highly accurate: PET = 0.712 + 0.705Eo. 3. 
The crop coefficient of Chinese cabbages (Kc), the ratio of the maximum evapotranspiration (ETm) to PET, ranged from 0.5 to 0.7 at the early growth stage and from 0.9 to 1.2 at the mid and late growth stages. The regression equations with respect to the growth progress degree (G), ranging from 0.0 on the transplanting day to 1.0 on the harvesting day, were: $$Kc=0.598+0.959G-0.501G^2$$ for spring cabbages and $$Kc=0.402+1.887G-1.432G^2$$ for autumn cabbages. 4. The soil factor (Kf), the ratio of the actual evapotranspiration to the maximum evapotranspiration, was 1.0 when the available soil water fraction (f) was higher than a threshold value (fp) and decreased linearly with decreasing f below fp. The relationships were: Kf=1.0 for $$f{\geq}fp$$ and Kf=a+bf for $$f<fp$$. 5. The soil evaporation (Es) was bounded by the maximum soil evaporation (Esm): Es followed the supplied water (I) for $$I{\leq}Esm$$, and Es = Esm for I > Esm. 6. The model for estimating actual evapotranspiration (ETa) was based on the water balance, neglecting capillary rise: ETa = PET · Kc · Kf + Es. 7. The model for estimating relative yield (Y/Ym) was selected among the regression equations with the measured ETa as: Y/Ym = a + b·ln(ETa). The coefficients a and b were 0.07 and 0.73 for spring Chinese cabbages and 0.37 and 0.66 for autumn Chinese cabbages, respectively. 8. The estimated ETa and Y/Ym were compared with the measured values to verify the model established above. The estimated ETa showed disparities within 0.29 mm/day for spring Chinese cabbages and 0.19 mm/day for autumn Chinese cabbages. The average deviations of the estimated relative yield were 0.14 and 0.09, respectively. 9. The deviations between the values estimated by the model and the actual values obtained from three cropping field experiments after completion of the model calibration were within a reasonable confidence range. Therefore, this model was validated for practical use.
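The regression components above (Kc, Kf, and the ETa water balance) combine directly into an estimator. The Kc coefficients are those reported in the abstract; fp, a, b, and the soil evaporation term Es are illustrative placeholder values, since the fitted values are not given here.

```python
def crop_coefficient(G, spring=True):
    """Kc as a quadratic in growth progress G (0 at transplanting,
    1 at harvest), using the regression coefficients reported above."""
    if spring:
        return 0.598 + 0.959 * G - 0.501 * G ** 2
    return 0.402 + 1.887 * G - 1.432 * G ** 2

def soil_factor(f, fp, a, b):
    """Kf = 1 when available soil water fraction f >= threshold fp,
    otherwise linear in f (a and b are fitted; placeholders below)."""
    return 1.0 if f >= fp else a + b * f

def actual_et(pet, G, f, fp=0.6, a=0.1, b=1.5, Es=0.3, spring=True):
    """ETa = PET * Kc * Kf + Es (water balance, capillary rise neglected)."""
    return pet * crop_coefficient(G, spring) * soil_factor(f, fp, a, b) + Es

# Mid-season spring cabbage in well-watered soil; PET in mm/day.
print(round(actual_et(pet=3.98, G=0.5, f=0.8), 2))  # → 4.09
```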


A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami;Kim, Jaeseok;Kim, Gi-Nam;Heo, Jong-Uk;On, Byung-Won;Kang, Mijung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.1-23
    • /
    • 2013
  • To discover significant social issues such as unemployment, economic crisis, and social welfare that urgently need to be solved in modern society, researchers in the existing approach usually collect opinions from professional experts and scholars through online or offline surveys. However, such a method is not always effective. Due to the expense, a large number of survey replies are seldom gathered. In some cases, it is also hard to find professionals dealing with specific social issues. Thus, the sample set is often small and may be biased. Furthermore, regarding a social issue, several experts may reach totally different conclusions because each expert has a subjective point of view and a different background. In this case, it is considerably hard to figure out what the current social issues are and which of them are really important. To surmount these shortcomings, in this paper we develop a prototype system that semi-automatically detects social issue keywords representing social issues and problems from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 until July 2012. Our proposed system consists of (1) collecting and extracting texts from the collected news articles, (2) identifying only news articles related to social issues, (3) analyzing the lexical items of Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models, whose goal is to best match paragraphs to each topic. 
Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA yields a set of topic clusters, and each topic cluster is then labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic (e.g., Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)}) and a human annotator labels Topic1 "Unemployment Problem". In this example, it is non-trivial to understand what happened to the unemployment problem in our society. In other words, looking only at social keywords, we have no idea of the detailed events occurring in our society. To tackle this matter, we develop a matching algorithm that computes the probability value of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. For instance, given a set of text documents, we segment each text document into paragraphs. In the meantime, using LDA, we extract a set of topics from the text documents. Based on our matching process, each paragraph is assigned to the topic it best matches. Finally, each topic has several best matched paragraphs. Furthermore, suppose there is a topic (e.g., Unemployment Problem) and its best matched paragraph (e.g., "Up to 300 workers lost their jobs in XXX company at Seoul"). In this case, we can grasp the detailed information behind the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. Therefore, through our matching process and keyword visualization, most researchers will be able to detect social issues easily and quickly. 
Through this prototype system, we have detected various social issues appearing in our society and shown the effectiveness of our proposed methods in experimental results. Note that our proof-of-concept system is available at http://dslab.snu.ac.kr/demo.html.
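A minimal sketch of the paragraph-to-topic matching idea: score a paragraph by the topic's term probabilities, with smoothing for unseen words. The topic distribution and paragraphs below are toy examples in the spirit of the Topic1 illustration above, not output from the actual system.

```python
import math

def paragraph_topic_score(paragraph, topic, epsilon=1e-6):
    """Mean log-probability of a paragraph's words under a topic's term
    distribution; words not in the topic get a small smoothing mass."""
    words = paragraph.lower().split()
    return sum(math.log(topic.get(w, epsilon)) for w in words) / len(words)

# A hypothetical LDA topic an annotator might label "Unemployment Problem".
topic1 = {"unemployment": 0.4, "layoff": 0.3, "business": 0.3}

paragraphs = [
    "layoff wave hits business sector",
    "heavy rain floods the river basin",
]
best = max(paragraphs, key=lambda p: paragraph_topic_score(p, topic1))
print(best)  # the labour-related paragraph wins
```

In the real system the same ranking runs over every paragraph of every article, so each topic accumulates its best matched paragraphs.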

Clickstream Big Data Mining for Demographics based Digital Marketing (인구통계특성 기반 디지털 마케팅을 위한 클릭스트림 빅데이터 마이닝)

  • Park, Jiae;Cho, Yoonho
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.143-163
    • /
    • 2016
  • The demographics of Internet users are the most basic and important sources for target marketing or personalized advertisements on digital marketing channels, which include email, mobile, and social media. However, it has gradually become difficult to collect the demographics of Internet users because their activities are anonymous in many cases. Although a marketing department is able to get demographics using online or offline surveys, these approaches are expensive, slow, and likely to include false statements. Clickstream data is the record an Internet user leaves behind while visiting websites. As the user clicks anywhere in a webpage, the activity is logged in semi-structured website log files. Such data allows us to see what pages users visited, how long they stayed there, how often they visited, when they usually visited, which site they prefer, what keywords they used to find the site, whether they purchased anything, and so forth. For this reason, some researchers have tried to infer the demographics of Internet users from their clickstream data. They derived various independent variables likely to be correlated with the demographics, including search keywords; frequency and intensity by time, day, and month; variety of websites visited; text information from web pages visited; etc. The demographic attributes to predict also vary across studies, covering gender, age, job, location, income, education, marital status, and presence of children. A variety of data mining methods, such as LSA, SVM, decision tree, neural network, logistic regression, and k-nearest neighbors, were used for prediction model building. However, previous research has not yet identified which data mining method is appropriate for predicting each demographic variable. Moreover, it is necessary to review the independent variables studied so far, combine them as needed, and evaluate them for building the best prediction model. 
The objective of this study is to choose the clickstream attributes most likely to be correlated with the demographics from the results of previous research, and then to identify which data mining method is fitting to predict each demographic attribute. Among the demographic attributes, this paper focuses on predicting gender, age, marital status, residence, and job, and from the results of previous research, 64 clickstream attributes are applied to predict them. The overall process of predictive model building is composed of four steps. In the first step, we create user profiles which include the 64 clickstream attributes and 5 demographic attributes. The second step performs dimension reduction of the clickstream variables to address the curse of dimensionality and the overfitting problem; we utilize three approaches based on decision trees, PCA, and cluster analysis. We build alternative predictive models for each demographic variable in the third step, using SVM, neural networks, and logistic regression for modeling. The last step evaluates the alternative models in view of model accuracy and selects the best model. For the experiments, we used clickstream data representing 5 demographics and 16,962,705 online activities for 5,000 Internet users. IBM SPSS Modeler 17.0 was used for our prediction process, and 5-fold cross validation was conducted to enhance the reliability of our experiments. As the experimental results, we can verify that there is a specific data mining method well suited to each demographic variable. For example, age prediction is best performed when using decision tree based dimension reduction and a neural network, whereas the prediction of gender and marital status is most accurate when applying SVM without dimension reduction. 
We conclude that the online behaviors of Internet users, captured from clickstream data analysis, can be used to predict their demographics and thus be utilized in digital marketing.
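A bare-bones version of the evaluation loop described above: k-fold cross-validation around an interchangeable fit/predict pair. The 1-nearest-neighbour classifier and the two-feature "clickstream" points are stand-ins for the SVM/neural-network models and the 64 attributes used in the study.

```python
import random

def k_fold_indices(n, k=5, seed=42):
    """Shuffle 0..n-1 and deal the indices into k nearly equal folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(X, y, fit, predict, k=5):
    """Mean accuracy over k folds for any fit/predict pair."""
    folds = k_fold_indices(len(X), k)
    scores = []
    for i, test_idx in enumerate(folds):
        train_idx = [j for m, f in enumerate(folds) for j in f if m != i]
        model = fit([X[j] for j in train_idx], [y[j] for j in train_idx])
        preds = predict(model, [X[j] for j in test_idx])
        correct = sum(p == y[j] for p, j in zip(preds, test_idx))
        scores.append(correct / len(test_idx))
    return sum(scores) / k

def fit_1nn(X, y):
    """'Training' a 1-nearest-neighbour model is just storing the data."""
    return list(zip(X, y))

def predict_1nn(model, X):
    """Label each point with the label of its closest stored point."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return [min(model, key=lambda t: dist2(t[0], x))[1] for x in X]

# Toy features per user: (night visit ratio, shopping page ratio).
X = [(0.1, 0.9), (0.2, 0.8), (0.9, 0.1), (0.8, 0.2)] * 5
y = ["F", "F", "M", "M"] * 5
print(cross_validate(X, y, fit_1nn, predict_1nn))  # → 1.0
```

Swapping `fit_1nn`/`predict_1nn` for another model is all that is needed to reproduce the per-demographic method comparison.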

Cooperative Sales Promotion in Manufacturer-Retailer Channel under Unplanned Buying Potential (비계획구매를 고려한 제조업체와 유통업체의 판매촉진 비용 분담)

  • Kim, Hyun Sik
    • Journal of Distribution Research
    • /
    • v.17 no.4
    • /
    • pp.29-53
    • /
    • 2012
  • As many marketers use diverse sales promotion methods, the manufacturer and retailer in a channel often use them too, and diverse issues in sales promotion management arise in this context. One of them is the issue of unplanned buying. Consumers' unplanned buying clearly benefits the retailer but not the manufacturer. This asymmetric influence of unplanned buying should be dealt with prudently because it can provoke channel conflict. However, there have been few studies on sales promotion management strategy that consider unplanned buying and its asymmetric effect on retailer and manufacturer. In this paper, we try to find a better way for a manufacturer in a channel to promote performance through the retailer's sales promotion efforts when there is potential for an unplanned buying effect. We investigate via game-theoretic modeling the optimal cost sharing level between the manufacturer and retailer when there is an unplanned buying effect. We investigated the following issues: (1) What structure of cost sharing mechanism should the manufacturer and retailer in a channel choose when the unplanned buying effect is strong (or weak)? (2) How much payoff could the manufacturer and retailer in a channel get when the unplanned buying effect is strong (or weak)? We focus on the impact of the unplanned buying effect on the optimal cost sharing mechanism for sales promotions between a manufacturer and a retailer in the same channel, so we consider two players in the game, a manufacturer and a retailer, interacting in the same distribution channel. The model is a complete information game in which the manufacturer is the Stackelberg leader and the retailer is the follower. Variables in the model are defined in a table in the original paper. The manufacturer's objective function in the basic game is as follows: ${\Pi}={\Pi}_1+{\Pi}_2$, where ${\Pi}_1=w_1(1+L-p_1)-{\psi}^2$ and ${\Pi}_2=w_2(1-{\epsilon}L-p_2)$. 
The retailer's objective function is: ${\pi}={\pi}_1+{\pi}_2$, where ${\pi}_1=(p_1-w_1)(1+L-p_1)-L(L-{\psi})+p_u(b+L-p_u)$ and ${\pi}_2=(p_2-w_2)(1-{\epsilon}L-p_2)$. The model has four stages over two periods: (Stage 1) The manufacturer sets the wholesale price of the first period ($w_1$) and the cost sharing level of channel sales promotion (${\Psi}$). (Stage 2) The retailer sets the retail price of the focal brand ($p_1$), that of the unplanned buying item ($p_u$), and the sales promotion level (L). (Stage 3) The manufacturer sets the wholesale price of the second period ($w_2$). (Stage 4) The retailer sets the retail price of the second period ($p_2$). Since the model is a dynamic game, we derive a subgame perfect equilibrium to obtain theoretical and managerial implications. To find the subgame perfect equilibrium, we use the backward induction method, solving the problem backward from stage 4 to stage 1. By completely knowing the follower's optimal reaction to the leader's potential actions, we can fold the game tree backward. The equilibrium value of each variable in the basic game is reported in a table in the original paper. We also analyzed an additional game in which the retailer incurs a procurement cost (c) for the unplanned buying item. The manufacturer's objective function in the additional game is the same as in the basic game: ${\Pi}={\Pi}_1+{\Pi}_2$, where ${\Pi}_1=w_1(1+L-p_1)-{\psi}^2$ and ${\Pi}_2=w_2(1-{\epsilon}L-p_2)$. But the retailer's objective function differs from that of the basic game: ${\pi}={\pi}_1+{\pi}_2$, where ${\pi}_1=(p_1-w_1)(1+L-p_1)-L(L-{\psi})+(p_u-c)(b+L-p_u)$ and ${\pi}_2=(p_2-w_2)(1-{\epsilon}L-p_2)$. The equilibrium value of each variable in this additional game is likewise reported in a table. Major findings of the current study are as follows: (1) As the unplanned buying effect gets stronger, the manufacturer and retailer had better increase the cost for sales promotion. 
(2) As the unplanned buying effect gets stronger, the manufacturer had better decrease its sharing portion of the total sales promotion cost. (3) The manufacturer's profit is an increasing function of the unplanned buying effect. (4) All the results of (1)-(3) are alleviated by an increase in the retailer's procurement cost to acquire unplanned buying items. The authors discuss the implications of these results for marketers in manufacturers and retailers. The current study is the first to suggest managerial implications for how a manufacturer should share the sales promotion cost with the retailer in a channel according to the high or low level of consumers' unplanned buying potential.
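The backward-induction logic can be illustrated on a stripped-down one-period Stackelberg pricing game with linear demand q = 1 − p and no promotion terms; this is a textbook simplification for intuition, not the paper's full two-period model.

```python
def retailer_best_price(w):
    """Stage 2 (follower): retailer maximizes (p - w)(1 - p);
    the closed-form best response is p* = (1 + w) / 2."""
    return (1 + w) / 2

def manufacturer_profit(w):
    """Stage 1 (leader): manufacturer's profit w * q, folding in the
    retailer's best response (backward induction)."""
    p = retailer_best_price(w)
    return w * (1 - p)  # demand q = 1 - p

# Solve the leader's stage by grid search over the wholesale price.
ws = [i / 1000 for i in range(1001)]
w_star = max(ws, key=manufacturer_profit)
p_star = retailer_best_price(w_star)
print(w_star, p_star)  # → 0.5 0.75
```

The result reproduces the classic double-marginalization equilibrium (w* = 1/2, p* = 3/4); the paper's four-stage game folds the tree back the same way, one stage at a time.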


DC Resistivity method to image the underground structure beneath river or lake bottom (하저 지반특성 규명을 위한 전기비저항 탐사)

  • Kim Jung-Ho;Yi Myeong-Jong;Song Yoonho;Cho Seong-Jun;Lee Seong-Kon;Son Jeongsul
    • Korean Society of Exploration Geophysicists: Conference Proceedings
    • /
    • 2002.09a
    • /
    • pp.139-162
    • /
    • 2002
  • Since weak zones or geological lineaments are likely to be eroded, weak zones may develop beneath rivers, and a careful evaluation of ground conditions is important when constructing structures passing under a river. DC resistivity surveys, however, have seldom been applied to the investigation of water-covered areas, possibly because of difficulties in data acquisition and interpretation. Acquiring high-quality data may be the most important factor, and it is more difficult than in a land survey because of the water layer overlying the underground structure to be imaged. Through numerical modeling and the analysis of case histories, we studied the method of resistivity surveying in water-covered areas, from the characteristics of the measured data, through data acquisition methods, to interpretation methods. We organize the discussion according to the installed locations of the electrodes, i.e., floating them on the water surface or installing them at the water bottom, since the methods of data acquisition and interpretation vary depending on the electrode location. Through this study, we confirmed that the DC resistivity method can provide fairly reasonable subsurface images. It was also shown that installing electrodes at the water bottom gives a subsurface image with much higher resolution than floating them on the water surface. Since data acquired in water-covered areas have much lower sensitivity to the underground structure than those acquired on land, and can be contaminated by higher noise, such as streaming potential, it is very important to select an acquisition method and electrode array that provide data with a higher signal-to-noise ratio as well as high resolving power. 
The method of installing electrodes at the water bottom is suitable for detailed surveys because of its much higher resolving power, whereas the floating method, especially the streamer DC resistivity survey, suits reconnaissance surveys owing to its very high speed of field work.
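For reference, the apparent-resistivity conversion underlying any DC resistivity measurement is a geometric factor applied to the measured voltage-to-current ratio. The collinear-array helper below assumes a homogeneous half-space with all electrodes on the surface; that assumption no longer holds exactly for bottom-installed electrodes under a water layer, which is part of what makes the interpretation discussed above harder.

```python
import math

def geometric_factor(ax, bx, mx, nx):
    """Geometric factor K for a collinear surface array with current
    electrodes A, B and potential electrodes M, N at positions (m):
        K = 2*pi / (1/AM - 1/BM - 1/AN + 1/BN)."""
    am, bm = abs(mx - ax), abs(mx - bx)
    an, bn = abs(nx - ax), abs(nx - bx)
    return 2 * math.pi / (1 / am - 1 / bm - 1 / an + 1 / bn)

def apparent_resistivity(K, dV, I):
    """rho_a = K * dV / I (ohm-m), from measured voltage dV and current I."""
    return K * dV / I

# Wenner array with spacing a = 10 m: A-M-N-B at 0, 10, 20, 30 m,
# so K reduces to 2*pi*a.
K = geometric_factor(0, 30, 10, 20)
print(round(K, 2))  # → 62.83
print(round(apparent_resistivity(K, dV=0.05, I=0.1), 1))  # ohm-m
```

Over a layered or water-covered section, rho_a computed this way is only an effective value, which is why the inversion/interpretation step discussed in the paper is needed to recover the true subsurface image.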
