• Title/Summary/Keyword: simulation function

Search Result 5,659, Processing Time 0.03 seconds

Non-Parametric Low-Flow Frequency Analysis Using RCPs Scenario Data : A Case Study of the Gwangdong Storage Reservoir, Korea (RCPs 시나리오 자료를 이용한 비매개변수적 갈수빈도 해석: 광동댐 유역을 중심으로)

  • Yoon, Sun Kwon;Cho, Jae Pil;Moon, Young Il
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.34 no.4
    • /
    • pp.1125-1138
    • /
    • 2014
  • In this study, we applied an advanced non-parametric low-flow frequency analysis using a boundary kernel to Representative Concentration Pathways (RCPs) climate change scenarios, with long-term runoff simulated by the Arc-SWAT model for the Gwangdong storage reservoir located in Taebaek, Gangwon-do. The results show that drought frequency under the RCP scenarios is expected to increase because of reduced runoff in the near future, and that the variability of the low-flow time series is greatest under the RCP8.5 scenario. Comparing the drought frequency of the median flow in the near future (2030s) with the historic period, the 30-year low-flow frequency quantile increases (+22.4% under RCP4.5 and +40.4% under RCP8.5), whereas in the distant future (2080s) drought frequency is expected to increase because of reduced low flows (-4.7% under RCP4.5 and -52.9% under RCP8.5). For the 25% quantile flow series, severe drought frequency is also expected to increase in the distant future as low flows decrease (-20.8% to -60.0% under RCP4.5 and -30.4% to -96.0% under RCP8.5). These non-parametric low-flow frequency analysis results under the RCP scenarios are expected to serve as basic data for water resources management and climate change countermeasures in the mid-sized watersheds of the Korean Peninsula.
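
As an illustration of the kind of kernel-based frequency analysis described above, the sketch below estimates a T-year low flow from a series of annual minimum flows by inverting a kernel-smoothed CDF. It is a minimal analogue only: it uses a plain Gaussian kernel with Silverman's bandwidth rather than the boundary kernel of the paper, and the flow values are hypothetical.

```python
# Minimal sketch of non-parametric low-flow frequency analysis via a
# kernel-smoothed CDF (plain Gaussian kernel; the paper uses a boundary kernel).
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def kernel_low_flow_quantile(annual_min_flows, return_period, bandwidth=None):
    """Estimate the T-year low flow (non-exceedance probability 1/T) from annual minima."""
    x = np.asarray(annual_min_flows, dtype=float)
    n = x.size
    if bandwidth is None:
        bandwidth = 1.06 * x.std(ddof=1) * n ** (-1 / 5)  # Silverman's rule of thumb
    p = 1.0 / return_period  # non-exceedance probability of the design low flow

    def cdf(q):
        return norm.cdf((q - x) / bandwidth).mean()  # kernel-smoothed empirical CDF

    lo, hi = x.min() - 5 * bandwidth, x.max() + 5 * bandwidth
    return brentq(lambda q: cdf(q) - p, lo, hi)

# Example with hypothetical annual minimum flows (m^3/s):
flows = [2.1, 1.8, 2.4, 1.5, 1.9, 2.2, 1.7, 2.0, 1.6, 2.3]
print(kernel_low_flow_quantile(flows, return_period=30))
```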

Bayesian parameter estimation of Clark unit hydrograph using multiple rainfall-runoff data (다중 강우유출자료를 이용한 Clark 단위도의 Bayesian 매개변수 추정)

  • Kim, Jin-Young;Kwon, Duk-Soon;Bae, Deg-Hyo;Kwon, Hyun-Han
    • Journal of Korea Water Resources Association
    • /
    • v.53 no.5
    • /
    • pp.383-393
    • /
    • 2020
  • The main objective of this study is to provide a robust model for estimating the parameters of the Clark unit hydrograph (UH) from observed rainfall-runoff data in the Soyangang dam basin. In general, the HEC-1 and HEC-HMS models, developed by the Hydrologic Engineering Center, have been widely used in Korea to optimize these parameters. However, such models are heavily dependent on the objective function and the sample size used during optimization. Moreover, the optimization is carried out on the basis of a single rainfall-runoff event and is repeated for other events, and the parameter values averaged over the different event-specific sets are usually adopted in practice, which makes accurate simulation of discharge difficult. In this context, this paper proposes a hierarchical Bayesian model for estimating the parameters of the Clark UH model. The proposed model clearly showed better performance in terms of the Bayesian information criterion (BIC). Furthermore, the results reveal that the proposed model can also be applied to other hydrologic problems such as dam design and design flood estimation, including parameter estimation for the probable maximum flood (PMF).
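
The hierarchical model itself cannot be reproduced from the abstract, but the sketch below shows the basic ingredients it builds on: a Clark UH forward model (a uniform time-area curve routed through a linear reservoir) and a random-walk Metropolis sampler whose likelihood pools several rainfall-runoff events instead of fitting each event separately. The priors, error model, and event data layout are assumptions made for illustration.

```python
# Pooled Bayesian estimation of Clark UH parameters (Tc, R): an illustrative
# sketch, not the paper's hierarchical model.
import numpy as np

def clark_uh(tc, r, dt, n_ord):
    """Clark UH ordinates: uniform time-area curve routed through a linear reservoir."""
    n_trans = max(1, int(round(tc / dt)))
    inflow = np.zeros(n_ord)
    inflow[:n_trans] = 1.0 / n_trans            # unit volume translated over Tc
    c = dt / (r + 0.5 * dt)                     # storage-routing coefficient
    uh, out = np.zeros(n_ord), 0.0
    for i in range(n_ord):
        out = c * inflow[i] + (1.0 - c) * out
        uh[i] = out
    return uh / uh.sum()                        # normalise to unit volume

def log_posterior(theta, events, dt, sigma=1.0):
    log_tc, log_r = theta
    tc, r = np.exp(log_tc), np.exp(log_r)
    lp = -0.5 * (log_tc ** 2 + log_r ** 2)      # weak lognormal priors (assumed)
    for rain, q_obs in events:                  # pool all events in one likelihood
        uh = clark_uh(tc, r, dt, len(q_obs))
        q_sim = np.convolve(rain, uh)[: len(q_obs)]
        lp += -0.5 * np.sum((q_obs - q_sim) ** 2) / sigma ** 2
    return lp

def metropolis(events, dt, n_iter=5000, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    theta = np.zeros(2)
    lp = log_posterior(theta, events, dt)
    samples = []
    for _ in range(n_iter):
        cand = theta + step * rng.standard_normal(2)
        lp_cand = log_posterior(cand, events, dt)
        if np.log(rng.uniform()) < lp_cand - lp:
            theta, lp = cand, lp_cand
        samples.append(np.exp(theta))           # back-transform to (Tc, R)
    return np.array(samples)

# usage: events = [(rain_1, q_obs_1), (rain_2, q_obs_2), ...]   # arrays at step dt
#        samples = metropolis(events, dt=1.0)
```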

Design of Adaptive DCF algorithm for TCP Performance Enhancement in IEEE 802.11 based Mobile Ad-hoc Networks (IEEE 802.11 기반 이동 ad-hoc 망에서 TCP 성능 향상을 위한 적응적 DCF 알고리즘 설계)

  • Kim, Han-Jib;Lee, Gi-Ra;Lee, Jae-Yong;Kim, Byung-Chul
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.43 no.10 s.352
    • /
    • pp.79-89
    • /
    • 2006
  • TCP is the most widely used transport protocol for Internet applications that require reliable data transfer. In wireless multi-hop networks, however, TCP performance degrades because the protocol was designed for wired networks. The main causes of this degradation are contention for the wireless medium at the MAC layer, the hidden and exposed terminal problems, packet losses in the link layer, unfairness, packet reordering caused by path disconnection, and bandwidth waste caused by the exponential backoff of the retransmission timer under node mobility. In particular, in mobile ad-hoc networks, the discrepancy between a station's transmission range and its interference range produces the hidden terminal problem, which greatly decreases TCP performance by limiting the number of simultaneous transmissions. In this paper, we propose a new MAC algorithm for mobile ad-hoc networks that addresses the problem of a node that cannot transmit because of a hidden terminal and therefore keeps increasing its contention window (CW). In the IEEE 802.11 MAC DCF, a node increases its CW exponentially whenever a transmission fails; the proposed algorithm instead adapts the CW according to the cause of the failure, yielding a TCP performance improvement. We show by ns-2 simulation that the proposed algorithm enhances TCP performance by fairly distributing transmission opportunities to nodes whose transmissions fail because of hidden terminals.
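
The abstract does not give the exact adaptation rule, so the sketch below only illustrates the general idea of a cause-dependent contention-window update: exponential backoff is kept for ordinary collisions, while a failure attributed to a hidden terminal does not inflate the window, so the affected node is not starved of transmission opportunities. The constants and the "keep CW unchanged" choice are assumptions.

```python
# Cause-dependent CW update: an illustrative assumption, not the paper's exact rule.
CW_MIN, CW_MAX = 32, 1024

def next_contention_window(cw, outcome):
    """Return the next contention window given the outcome of a transmission attempt."""
    if outcome == "success":
        return CW_MIN                    # reset, as in standard IEEE 802.11 DCF
    if outcome == "collision":
        return min(2 * cw, CW_MAX)       # binary exponential backoff
    if outcome == "hidden_terminal":
        return cw                        # assumed: do not punish the blocked node
    raise ValueError(f"unknown outcome: {outcome}")

# Example: two failures caused by a hidden terminal keep CW at 32,
# whereas two genuine collisions would have pushed it to 128.
cw = CW_MIN
for outcome in ("hidden_terminal", "hidden_terminal"):
    cw = next_contention_window(cw, outcome)
print(cw)
```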

A Deblurring Algorithm Combined with Edge Directional Color Demosaicing for Reducing Interpolation Artifacts (컬러 보간 에러 감소를 위한 에지 방향성 컬러 보간 방법과 결합된 디블러링 알고리즘)

  • Yoo, Du Sic;Song, Ki Sun;Kang, Moon Gi
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.7
    • /
    • pp.205-215
    • /
    • 2013
  • In digital imaging systems, the Bayer pattern is widely used, and the observed image is degraded by optical blur during the acquisition process. Generally, demosaicing and deblurring are performed separately in order to convert a blurred Bayer image into a high-resolution color image. However, the demosaicing process often generates visible artifacts such as the zipper effect and Moire artifacts when interpolation is performed across the edge direction in the Bayer pattern image, and these artifacts are then emphasized by the deblurring process. To solve this problem, this paper proposes a deblurring algorithm combined with an edge-directional color demosaicing method. The proposed method consists of an interpolation step and a region classification step. In the interpolation step, interpolation and deblurring are performed simultaneously along the horizontal and vertical directions. In the region classification step, the characteristics of local regions are determined at each pixel position and the directionally obtained values are fused region-adaptively. The proposed method also uses a blur model based on wave optics, and the deblurring filter is calculated from the estimated characteristics of the local regions. The simulation results show that the proposed deblurring algorithm prevents artifacts from being amplified and outperforms conventional approaches in both objective and subjective terms.
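
The full pipeline (Bayer data, wave-optics blur model, combined deblurring) is beyond a short sketch, but the toy single-channel function below illustrates the underlying idea of edge-directional interpolation with gradient-based fusion: the estimate is weighted toward the direction with the smaller gradient, so interpolation follows edges rather than crossing them. All details are simplifications, not the paper's algorithm.

```python
# Toy single-channel analogue of edge-directional interpolation with
# region-adaptive fusion; assumed weights, not the paper's method.
import numpy as np

def edge_directional_estimate(img, y, x):
    """Estimate img[y, x] from its four neighbours, weighting the horizontal and
    vertical estimates by the inverse of the local gradient in each direction."""
    h_est = 0.5 * (img[y, x - 1] + img[y, x + 1])
    v_est = 0.5 * (img[y - 1, x] + img[y + 1, x])
    h_grad = abs(img[y, x - 1] - img[y, x + 1]) + 1e-6
    v_grad = abs(img[y - 1, x] - img[y + 1, x]) + 1e-6
    w_h, w_v = 1.0 / h_grad, 1.0 / v_grad
    return (w_h * h_est + w_v * v_est) / (w_h + w_v)

# Example: near a vertical edge the vertical estimate dominates.
img = np.array([[0., 0., 10.],
                [0., 0., 10.],
                [0., 0., 10.]])
print(edge_directional_estimate(img, 1, 1))   # close to 0, not 5
```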

Gameplay Experience as A Problem Solving - Towards The New Rule Spaces - (문제해결로서의 게임플레이 경험 - 새로운 법칙공간을 중심으로 -)

  • Song, Seung-Keun
    • Journal of Korea Game Society
    • /
    • v.9 no.5
    • /
    • pp.25-41
    • /
    • 2009
  • The objective of this study is to develop an analytic framework for systematically coding gamers' behaviour in MMO (Massively Multi-player Online) gameplay and to explore gameplay empirically as a problem-solving procedure. Previous studies on the Model Human Processor and on content-based and procedure-based protocol analysis are reviewed to outline an analytic framework for MMO gameplay. Specific gameplay actions and contents were derived with the concurrent protocol analysis method in an empirical MMORPG gameplay experiment. Gameplay was consequently divided into six kinds of action: kinematics, perception, function, representation, simulation, and rule (heuristics, following, and transcendence), and an analytic framework suitable for MMO gameplay was built. As a result, we found three rule spaces in the problem-solving domain of gameplay: heuristics, following the rule, and transcending the rule. 'Heuristics' denotes rule actions that discover the rules of the game through trial and error. 'Following' denotes rule actions that follow the rules embedded in the game by its designers. 'Transcendence' denotes rule actions that go beyond those designed rules. The newly discovered rule spaces in which 'Following' and 'Transcendence' actions occur, together with the gameplay patterns observed in them, provide a key basis for determining MMO level design elements such as terrain features, monster attributes, items, and skills. The study therefore concludes with implications for game design aimed at improving the quality of MMO games.

Partial Path Selection Method in Each Subregion for Routing Path Optimization in SEF Based Sensor Networks (통계적 여과 기법 기반 센서 네트워크에서 라우팅 경로 최적화를 위한 영역별 부분 경로 선택 방법)

  • Park, Hyuk;Cho, Tae-Ho
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.22 no.1
    • /
    • pp.108-113
    • /
    • 2012
  • Routing paths are critically important for network security in WSNs, and sustained path re-selection and path management are needed to maintain them. The region-segmentation-based path selection method (RSPSM) divides the sensor network into several subregions so that path selection and path management can be performed per region, which reduces the energy consumed by the path re-selection process. However, RSPSM can hardly guarantee an optimized secure routing path at all times, since the information used in the path re-selection process is limited in scope. In this paper, we propose a partial path selection method for each subregion that uses the partial paths preselected by RSPSM to optimize the routing paths in SEF-based sensor networks. In the proposed method, the base station collects the information on all partial paths from every subregion and then evaluates, with an evaluation function, every candidate that could serve as the optimized routing path for each node. After the evaluation, the result is sent to each super DN in a global routing path information (GPI) message, and each super DN then provides the optimized secure routing paths using the GPI. We show the effectiveness of the proposed method via simulation results and expect it to be useful for improving RSPSM.
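
The paper's evaluation function is not specified in the abstract, so the weighted score below (energy cost versus filtering capability) is an assumption used only to show the per-subregion selection structure that the method describes.

```python
# Per-subregion selection of an optimal partial path: illustrative sketch with an
# assumed evaluation function, not the paper's.
from dataclasses import dataclass

@dataclass
class PartialPath:
    node_ids: tuple        # nodes forming the partial path within the subregion
    energy_cost: float     # expected transmission energy along the path
    filtering_power: int   # e.g. number of distinct SEF key partitions en route

def select_optimal_paths(candidates_by_subregion, w_energy=1.0, w_filter=2.0):
    """For each subregion, pick the partial path maximising the evaluation score."""
    def score(p):
        return w_filter * p.filtering_power - w_energy * p.energy_cost
    return {region: max(paths, key=score)
            for region, paths in candidates_by_subregion.items()}

# Hypothetical candidates for two subregions:
candidates = {
    "R1": [PartialPath((1, 4, 7), energy_cost=3.2, filtering_power=4),
           PartialPath((1, 5, 7), energy_cost=2.5, filtering_power=2)],
    "R2": [PartialPath((9, 12), energy_cost=1.8, filtering_power=3)],
}
print(select_optimal_paths(candidates))
```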

Development of Smart Multi-function Ground Resistivity Measuring Device using Arduino in Wind Farm (풍력 발전단지내 아두이노를 활용한 스마트 다기능 대지 고유 저항 측정 장치 개발)

  • Kim, Hong-Yong;Yoon, Dong-Gi;Shin, Seung-Jung
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.19 no.6
    • /
    • pp.65-71
    • /
    • 2019
  • Conventional field methods for measuring ground resistivity install measurement electrodes at constant intervals, inject a current, and measure the voltage drop corresponding to the resistance of the site. If the stratified structure of the site is unusual, errors in the boundary conditions arise during the inverse calculation, and the critical ground resistance analyzed for the grounding design differs considerably from the simulation. This study uses an Arduino module and smart ground measurement technology in a converged information and communication environment to develop a reliable smart ground resistivity measuring device that works even when the top layer of the ground is unusual, and to analyze the ground resistivity and accumulate data so that changes in the ground over time can be predicted. Considering the topographical characteristics of the site, we propose a ground resistivity measuring device and a measurement method that allow the auxiliary electrodes to be installed at the correct angle and distance. The electrodes installed in this way not only yield accurate ground resistivity values but also provide useful reference data for installing electrical facilities in similar areas. Moreover, by using reliable data and analyzing large sections of the site, the precise site analysis that matters for grounding design and construction cost is expected to be widely used in grounding facility design, for example in evaluating ground potential rise.
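
As a host-side illustration only (not the device firmware, which the abstract does not detail), the sketch below computes apparent soil resistivity for the equally spaced four-electrode arrangement the abstract describes, using the standard Wenner relation rho = 2*pi*a*V/I, and averages repeated readings as a simple stand-in for the device's data accumulation. The spacing, voltage, and current values are hypothetical.

```python
# Apparent soil resistivity for an equally spaced four-electrode (Wenner) setup.
import math

def wenner_resistivity(spacing_m, voltage_v, current_a):
    """Apparent resistivity (ohm-m) from electrode spacing, measured V and injected I."""
    return 2.0 * math.pi * spacing_m * voltage_v / current_a

# Averaging repeated readings as a simple stand-in for the device's data logging
# (all values hypothetical):
readings = [(5.0, 0.82, 0.010), (5.0, 0.79, 0.010), (5.0, 0.85, 0.010)]
rho = [wenner_resistivity(a, v, i) for a, v, i in readings]
print(sum(rho) / len(rho))
```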

Performance Analysis of a Packet Voice Multiplexer Using the Overload Control Strategy by Bit Dropping (Bit-dropping에 의한 Overload Control 방식을 채용한 Packet Voice Multiplexer의 성능 분석에 관한 연구)

  • 우준석;은종관
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.18 no.1
    • /
    • pp.110-122
    • /
    • 1993
  • When voice is transmitted through a packet switching network, an overload control is needed, that is, a control for congestion that lasts for short periods and occurs locally. In this paper, we analyze the performance of a statistical packet voice multiplexer that uses an overload control strategy based on bit dropping. We assume that the voice is coded with (4,2) embedded ADPCM and that voice packets are generated and transmitted according to the procedures of CCITT Recommendation G.764. For the performance analysis, the superposed packet arrival process at the multiplexer must be modeled as accurately as possible. It is well known that packet interarrival times are highly correlated, and for this reason the Markov-modulated Poisson process (MMPP) is better suited for the modeling from the viewpoint of accuracy. Hence, the packet arrival process is modeled as an MMPP and the matrix-geometric method is used for the performance analysis. The analysis is similar to that of the MMPP/G/1 queueing system, but the overload control makes the service time distribution dependent on the system status, that is, on the queue length in the multiplexer. Through the analysis we derive the probability generating function of the queue length and, from it, the mean and standard deviation of the queue length and of the waiting time. The numerical results are verified through simulation, which shows that the values embedded at departure epochs and those at arbitrary epochs are almost the same. The results also show that bit dropping reduces the mean and the variation of both the queue length and the waiting time.
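
The matrix-geometric analysis itself cannot be reproduced from the abstract, but the deliberately simplified slotted simulation below illustrates the mechanism being analyzed: a two-state modulated arrival stream feeding a single server whose service time shrinks when the queue exceeds a threshold (dropping the ADPCM enhancement bits). The arrival intensities, service times, and threshold are assumed values.

```python
# Much-simplified slotted simulation of overload control by bit dropping;
# a rough stand-in for the paper's MMPP/G/1-type analysis, with assumed parameters.
import random

def simulate(n_slots=200_000, threshold=20, seed=1):
    rng = random.Random(seed)
    rates = {0: 0.3, 1: 0.9}        # packets per slot in the two arrival states (assumed)
    switch_p = 0.01                 # state-switch probability per slot (assumed)
    state, queue, remaining = 0, 0, 0.0
    q_samples = []
    for _ in range(n_slots):
        if rng.random() < switch_p:
            state = 1 - state
        if rng.random() < rates[state]:        # at most one arrival per slot
            queue += 1
        if remaining <= 0 and queue > 0:
            queue -= 1
            # a full packet takes 2 slots; when the queue exceeds the threshold,
            # the enhancement bits are dropped and service shortens to 1 slot
            remaining = 1.0 if queue > threshold else 2.0
        if remaining > 0:
            remaining -= 1.0
        q_samples.append(queue)
    return sum(q_samples) / len(q_samples)

print("mean queue length:", simulate())
```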

Theoretical Analysis of Critical Chloride Content in (Non)Carbonated Concrete Based on Characteristics of Hydration of Cement (시멘트 수화 특성 및 탄산화를 고려한 콘크리트의 임계 염소이온량에 대한 해석 기법)

  • Yoon, In-Seok
    • Journal of the Korea Concrete Institute
    • /
    • v.19 no.3
    • /
    • pp.367-375
    • /
    • 2007
  • The critical chloride content for corrosion initiation is a crucial parameter in determining the durability and integrity of reinforced concrete structures; however, its value is still ambiguous. Most studies reporting a critical chloride content have involved experimental measurement of the average total chloride content at an arbitrary time, and the majority have not considered the issue in combination with carbonation of concrete, although carbonation can significantly affect the critical chloride content. Furthermore, such studies have tried to define the critical chloride content only within the scope of their own experimental mix proportions. The critical chloride content for corrosion initiation is, however, known to be affected by many factors, including cement content, type of binder, chloride binding, and the concentration of hydroxyl ions, so a unified formulation is needed to express it for various concrete mix proportions. The purpose of this study is to establish such an analytical formulation of the critical chloride content of concrete, taking into account influencing factors such as the mix proportion, the environment, the chemical evolution of the pore solution with elapsed time, and carbonation of the concrete. Based on Gouda's experimental results, the critical chloride content is defined as a function of $[Cl^-]$ versus $[OH^-]$ in the pore solution. It is expressed as a free chloride content in mass units, with the time evolution of the $[OH^-]$ concentration in the pore solution obtained from HYMOSTRUC, a numerical simulation program for cementitious materials. The result was compared with other experimental studies and various codes. The approach suggested in this study is believed to provide a good basis for determining a reasonable critical chloride content according to the original source of the chloride ions, for example sea sand at the initial time and seawater penetration later on.
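
To make the unit conversion in such a formulation concrete, the sketch below turns a $[Cl^-]/[OH^-]$ threshold in the pore solution into a free chloride content by mass of binder. The 0.6 ratio is only a commonly cited placeholder, not the relation the paper derives from Gouda's data, and the pore-solution volume per kilogram of binder is an assumed input (the paper obtains the $[OH^-]$ evolution from HYMOSTRUC).

```python
# Illustrative conversion of a pore-solution [Cl-]/[OH-] threshold into a free
# chloride content by binder mass; the ratio and pore-solution volume are assumptions.
M_CL = 35.45  # g/mol, molar mass of chloride

def critical_free_chloride(oh_mol_per_l, pore_solution_l_per_kg_binder,
                           cl_oh_ratio=0.6):
    """Critical free chloride content, % by mass of binder."""
    cl_mol_per_l = cl_oh_ratio * oh_mol_per_l          # critical [Cl-] in pore solution
    g_cl_per_kg_binder = cl_mol_per_l * M_CL * pore_solution_l_per_kg_binder
    return 100.0 * g_cl_per_kg_binder / 1000.0         # percent of binder mass

# Example: [OH-] = 0.5 mol/L, 0.12 L of pore solution per kg of binder
print(critical_free_chloride(0.5, 0.12))   # about 0.13 % by binder mass
```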

Bayesian Cognizance of RFID Tags (Bayes 풍의 RFID Tag 인식)

  • Park, Jin-Kyung;Ha, Jun;Choi, Cheon-Won
    • Journal of the Institute of Electronics Engineers of Korea TC
    • /
    • v.46 no.5
    • /
    • pp.70-77
    • /
    • 2009
  • In an RFID network consisting of a single reader and many tags, framed and slotted ALOHA, which provides a number of slots for the tags to respond in, was introduced to arbitrate collisions among the tags' responses. In framed and slotted ALOHA, the number of slots in each frame should be optimized to attain maximal efficiency in tag cognizance. Such an optimization requires knowledge of the number of tags, which the reader hardly ever has. In this paper, we propose a tag cognizance scheme based on framed and slotted ALOHA that directly takes a Bayes action on the number of slots without separately estimating the number of tags. Specifically, the Bayes action is obtained by solving a decision problem that incorporates the prior distribution of the number of tags, the observation of the number of slots in which no tag responds, and a loss function reflecting the cognizance rate. The Bayes action in each frame is also supported by an evolution of the prior distribution of the number of tags. From the simulation results, we observe that the pair of evolving prior distribution and Bayes action forms a robust scheme that attains a certain level of cognizance rate despite a large discrepancy between the true and initially believed numbers of tags. The proposed scheme is also confirmed to achieve a higher cognizance completion probability than a scheme that separately uses a classical estimate of the number of tags.
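
The sketch below illustrates the general shape of such a scheme: a prior over the tag count, a Bayes action that picks the next frame size, and a posterior update from the observed number of empty slots. The independent-slot (binomial) likelihood and the negative-expected-efficiency loss are approximations assumed for illustration; the abstract does not specify the paper's exact likelihood or loss function.

```python
# Bayes-action frame sizing for framed slotted ALOHA: illustrative sketch with an
# assumed likelihood approximation and loss function.
import numpy as np
from scipy.stats import binom

N_MAX = 500
prior = np.full(N_MAX + 1, 1.0 / (N_MAX + 1))     # uniform initial belief over tag count

def expected_efficiency(n, frame_size):
    """Expected fraction of slots holding exactly one tag response."""
    if n == 0:
        return 0.0
    return n / frame_size * (1.0 - 1.0 / frame_size) ** (n - 1)

def bayes_frame_size(prior, candidates=(16, 32, 64, 128, 256)):
    """Bayes action: frame size maximising the posterior-expected efficiency."""
    ns = np.arange(prior.size)
    def gain(L):
        return np.sum(prior * np.array([expected_efficiency(n, L) for n in ns]))
    return max(candidates, key=gain)

def update_prior(prior, frame_size, empty_slots):
    """Posterior over the tag count given the observed number of empty slots."""
    ns = np.arange(prior.size)
    p_empty = (1.0 - 1.0 / frame_size) ** ns        # prob. a given slot stays empty
    like = binom.pmf(empty_slots, frame_size, p_empty)  # independent-slot approximation
    post = prior * like
    return post / post.sum()

# One reader round (the observed empty-slot count is hypothetical):
L = bayes_frame_size(prior)
prior = update_prior(prior, L, empty_slots=40)
```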