• Title/Abstract/Keyword: Numerical schemes

Search results: 757 items (processing time: 0.034 seconds)

Cluster-based P2P scheme considering node mobility in MANET (MANET에서 장치의 이동성을 고려한 클러스터 기반 P2P 알고리즘)

  • Wu, Hyuk;Lee, Dong-Jun
    • Journal of Advanced Navigation Technology / Vol. 15, No. 6 / pp.1015-1024 / 2011
  • Mobile P2P protocols for ad-hoc networks have gained considerable attention recently. Although there has been much research on P2P algorithms for wired networks, existing P2P protocols are not suitable for mobile ad-hoc networks because they do not consider the mobility of peers. This study proposes a new cluster-based P2P protocol for ad-hoc networks that exploits peer mobility. In typical cluster-based P2P algorithms, each cluster has a super peer and the other peers of the cluster register their file lists with the super peer. High-mobility peers would generate a large amount of file-list registration traffic because they hand off between clusters frequently. In the proposed scheme, peers with low mobility behave in the same way as in typical cluster-based P2P schemes, whereas peers with high mobility behave differently: they inform the super peer of their entrance into the cluster region but do not register their file lists with it. When a peer wishes to find a file, it first searches the file lists registered at the super peer and, if this fails, broadcasts a query message. We perform mathematical modeling, analysis, and optimization of the proposed scheme with respect to P2P traffic and the associated routing traffic. Numerical results show that the proposed scheme performs much better than, or comparably to, the typical cluster-based P2P scheme and flooding-based Gnutella. A hedged sketch of this mobility-dependent registration policy is given below.
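
The abstract above separates peer behavior by mobility. The following is a minimal Python sketch of that idea only; the mobility threshold, the SuperPeer class, and the function names are illustrative assumptions, not the protocol defined in the paper.

```python
# Hypothetical sketch of the mobility-dependent registration policy described above.
# MOBILITY_THRESHOLD, SuperPeer, and all names are illustrative assumptions.

MOBILITY_THRESHOLD = 5.0  # handoffs per hour; not a value from the paper


class SuperPeer:
    def __init__(self):
        self.file_index = {}        # file name -> set of peer ids that registered it
        self.present_peers = set()  # high-mobility peers that only announced presence

    def register_files(self, peer_id, files):
        for f in files:
            self.file_index.setdefault(f, set()).add(peer_id)

    def announce_presence(self, peer_id):
        self.present_peers.add(peer_id)

    def lookup(self, file_name):
        return self.file_index.get(file_name, set())


def join_cluster(peer_id, mobility, files, super_peer):
    """Low-mobility peers register their file list; high-mobility peers only
    announce their presence, avoiding repeated registration traffic on handoff."""
    if mobility < MOBILITY_THRESHOLD:
        super_peer.register_files(peer_id, files)
    else:
        super_peer.announce_presence(peer_id)


def find_file(file_name, super_peer, broadcast_query):
    """Search the super peer's index first; fall back to a broadcast query."""
    holders = super_peer.lookup(file_name)
    if holders:
        return holders
    return broadcast_query(file_name)  # e.g. flooding within the cluster region


# Usage example (hypothetical): one low-mobility and one high-mobility peer.
sp = SuperPeer()
join_cluster("peer_A", mobility=1.2, files=["song.mp3"], super_peer=sp)
join_cluster("peer_B", mobility=9.0, files=["video.avi"], super_peer=sp)
print(find_file("song.mp3", sp, broadcast_query=lambda name: set()))        # {'peer_A'}
print(find_file("video.avi", sp, broadcast_query=lambda name: {"peer_B"}))  # via broadcast
```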

Numerical studies of information about elastic parameter sets in non-linear elastic wavefield inversion schemes (비선형 탄성파 파동장 역산 방법에서 탄성파 변수 세트에 관한 정보의 수치적 연구)

  • Sakai, Akio
    • Geophysics and Geophysical Exploration / Vol. 10, No. 1 / pp.1-18 / 2007
  • Non-linear elastic wavefield inversion is a powerful method for estimating the elastic parameters that physically constrain subsurface rock properties. Here, I introduce six elastic-wave velocity models obtained by reconstructing elastic-wave velocity variations from real data and a 2D elastic-wave velocity model. Reflection seismic data information is often decoupled into short- and long-wavelength components. The local search method has difficulty estimating the longer-wavelength velocity if the starting model is far from the true model, so source frequencies are changed from lower to higher bands (as in the 'frequency-cascade scheme') to estimate the model elastic parameters. Elastic parameters are inverted at each inversion step ('simultaneous mode') with a starting model of linear P- and S-wave velocity trends with depth. Elastic parameters are also derived by inversion in three other modes: a P- and S-wave velocity basis ('$V_P$-$V_S$ mode'), a P-impedance and Poisson's ratio basis ('$I_P$-Poisson mode'), and a P- and S-impedance basis ('$I_P$-$I_S$ mode'). Density values are updated at each elastic inversion step under three assumptions in each mode. By evaluating the accuracy of the inversion for each parameter set for the elastic models, it can be concluded that there is no specific difference between the inversion results for the $V_P$-$V_S$ mode and the $I_P$-Poisson mode; the same conclusion is expected for the $I_P$-$I_S$ mode. This gives us a sound basis for full-wavelength elastic wavefield inversion. The standard relations between these parameter sets are sketched below.
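
For orientation, the three parameter bases named in the abstract are connected by standard elasticity relations. The Python sketch below is illustrative only: the depth trend, the assumed $V_P/V_S$ ratio, and the use of Gardner's relation for density are assumptions, not values taken from the paper.

```python
# Standard conversions between the parameter bases named in the abstract.
# The example model (trends, Vp/Vs ratio, Gardner density) is an assumption.
import numpy as np

def parameter_sets(vp, vs, rho):
    """Given P-wave velocity vp [m/s], S-wave velocity vs [m/s], and density rho [kg/m^3],
    return the equivalent (Vp, Vs), (Ip, Poisson) and (Ip, Is) parameter sets."""
    ip = rho * vp                                                # P-impedance
    is_ = rho * vs                                               # S-impedance
    poisson = (vp**2 - 2.0 * vs**2) / (2.0 * (vp**2 - vs**2))    # Poisson's ratio
    return {
        "Vp Vs mode": (vp, vs),
        "Ip Poisson mode": (ip, poisson),
        "Ip Is mode": (ip, is_),
    }

# Example: a linear velocity trend with depth, as used for the starting model above.
depth = np.linspace(0.0, 2000.0, 5)       # m
vp = 1800.0 + 0.6 * depth                 # illustrative gradient
vs = vp / 1.8                             # illustrative Vp/Vs ratio
rho = 310.0 * vp**0.25                    # Gardner's relation in kg/m^3 (an assumption)
print(parameter_sets(vp, vs, rho)["Ip Poisson mode"])
```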

Modeling of Magnetotelluric Data Based on Finite Element Method: Calculation of Auxiliary Fields (유한요소법을 이용한 MT 탐사 자료의 모델링: 보조장 계산의 고찰)

  • Nam, Myung-Jin;Han, Nu-Ree;Kim, Hee-Joon;Song, Yoon-Ho
    • Geophysics and Geophysical Exploration / Vol. 14, No. 2 / pp.164-175 / 2011
  • Using natural electromagnetic (EM) fields at low frequencies, magnetotelluric (MT) surveys can investigate conductivity structures of the deep subsurface and are therefore used to explore geothermal energy resources and to investigate suitable sites not only for geological $CO_2$ sequestration but also for enhanced geothermal systems (EGS). Moreover, marine MT data can be used for better interpretation of marine controlled-source EM data. In the interpretation of MT data, MT modeling schemes are important. This study improves a three-dimensional (3D) MT modeling algorithm that uses edge finite elements. The original algorithm computes magnetic fields by solving an integral form of Faraday's law of induction based on a finite-difference (FD) strategy. However, the FD strategy limits the algorithm when computing vertical magnetic fields for a topographic model. The improved algorithm solves the differential form of Faraday's law of induction by differentiating the electric fields, which are represented as a sum of basis functions multiplied by corresponding weightings. In numerical tests, vertical magnetic fields computed for topographic models with the improved algorithm overcome the limitation of the old algorithm. This study also recomputes the induction vectors and tippers for a 3D hill and valley model whose responses had previously been computed with the old algorithm. The auxiliary-field relation is written out below.
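
As a hedged illustration of the auxiliary-field computation described above (assuming an $e^{i\omega t}$ time dependence; the paper's sign convention may differ), the magnetic field follows from the edge-element representation of the electric field as

$$\nabla \times \mathbf{E} = -i\omega\mu\mathbf{H} \quad\Longrightarrow\quad \mathbf{H} = -\frac{1}{i\omega\mu}\,\nabla\times\mathbf{E} \;\approx\; -\frac{1}{i\omega\mu}\sum_{j} e_j\,\nabla\times\mathbf{N}_j(\mathbf{r}),$$

where $\mathbf{N}_j$ are the edge basis functions and $e_j$ the computed weightings, so the curl is evaluated analytically on the basis functions rather than by finite differences.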

QoS and Multi-Class Service Provisioning with Distributed Call Admission Control in Wireless ATM Networks (무선 ATM망에서 QoS와 다중 서비스를 지원하는 분산된 호 수락 제어 알고리즘과 성능 분석)

  • Jeong, Da-Ip;Jo, Yeong-Jong
    • Journal of the Institute of Electronics Engineers of Korea TC / Vol. 37, No. 2 / pp.45-53 / 2000
  • In a wireless environment, the limited capacity of the radio channels makes it difficult to guarantee QoS to mobile users. One of the key problems in supporting broadband multimedia multi-services in wireless ATM networks is therefore the design of an effective call admission control (CAC). The purpose of this paper is to propose a distributed CAC scheme that guarantees multiple QoS levels and multi-class service. The control parameters of the proposed scheme are a QoS threshold and the channel overload probability. With these control parameters, we show that the scheme can guarantee the requested QoS to both new and handover calls. In the scheme, channels are allocated dynamically and QoS measurements are made in a distributed manner. We show that, by providing a variable data rate to calls, the scheme can effectively prevent QoS degradation even under severe fluctuations of network traffic. We compare the proposed CAC scheme with well-known schemes such as the guard-band call admission control scheme. Through numerical examples and simulations, the proposed scheme is shown to improve performance by lowering the probability of handover call dropping. A hedged sketch of a threshold-based admission decision appears below.
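
The following Python sketch illustrates a threshold-based admission decision in the spirit of the control parameters named above (QoS threshold and channel overload probability). The acceptance rule, the utilization-based overload estimate, and the guard margin for handover calls are assumptions for illustration, not the paper's scheme.

```python
# Hypothetical admission decision; thresholds and the overload estimate are assumptions.

def admit_call(current_load, requested_rate, capacity,
               overload_threshold, handover=False, guard_margin=0.05):
    """Admit a call if the channel overload probability (approximated here by the
    projected utilization) stays below the threshold. Handover calls get a slightly
    relaxed threshold so they are dropped less often than new calls."""
    projected_utilization = (current_load + requested_rate) / capacity
    threshold = overload_threshold + (guard_margin if handover else 0.0)
    return projected_utilization <= threshold

# Example: a handover call is admitted while an identical new call is rejected.
print(admit_call(0.90, 0.05, 1.0, overload_threshold=0.92, handover=True))   # True
print(admit_call(0.90, 0.05, 1.0, overload_threshold=0.92, handover=False))  # False
```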


Design of User Clustering and Robust Beam in 5G MIMO-NOMA System Multicell (5G MIMO-NOMA 시스템 멀티 셀에서의 사용자 클러스터링 및 강력한 빔 설계)

  • Kim, Jeong-Su;Lee, Moon-Ho
    • The Journal of the Institute of Internet, Broadcasting and Communication / Vol. 18, No. 1 / pp.59-69 / 2018
  • In this paper, we present a robust beamforming design to tackle the weighted sum-rate maximization (WSRM) problem in a multicell multiple-input multiple-output (MIMO) non-orthogonal multiple access (NOMA) downlink system for 5G wireless communications. This work considers imperfect channel state information (CSI) at the base station (BS) by adding uncertainties to the channel estimation matrices as a worst-case model, i.e., the singular value uncertainty model (SVUM). With this observation, the WSRM problem is formulated subject to the transmit power constraints at the BS. The objective problem is known to be a non-deterministic polynomial (NP) problem, which is difficult to solve. We propose a robust beamforming design based on the majorization-minimization (MM) technique to find the optimal transmit beamforming matrix and to solve the objective problem efficiently. In addition, we propose a joint user clustering and power allocation (JUCPA) algorithm in which the best user pair is selected as a cluster to attain a higher sum-rate. Extensive numerical results are provided to show that the proposed robust beamforming design, together with the proposed JUCPA algorithm, significantly increases the performance in terms of sum-rate compared with existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme. A hedged sketch of channel-gain-based user pairing is given below.
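
The abstract does not spell out the JUCPA pairing rule, so the Python sketch below uses a common NOMA clustering heuristic (pair the strongest user with the weakest) purely as an illustrative assumption.

```python
# Hypothetical user-pairing sketch in the spirit of NOMA clustering.
# Strongest-with-weakest pairing is a common heuristic and an assumption here;
# the abstract does not specify the authors' exact JUCPA rule.
import numpy as np

def pair_users_by_gain(channel_gains):
    """Sort users by channel gain and pair the strongest with the weakest,
    the second strongest with the second weakest, and so on."""
    order = np.argsort(channel_gains)                 # weakest ... strongest
    n = len(order)
    return [(int(order[-1 - i]), int(order[i])) for i in range(n // 2)]

gains = np.array([0.2, 1.5, 0.7, 3.1, 0.05, 2.2])
print(pair_users_by_gain(gains))   # [(3, 4), (5, 0), (1, 2)]
```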

Spectral Efficiency of MC-CDMA (MC-CDMA 방식의 주파수 효율)

  • Han Hee-Goo;Oh Seong-Keun
    • Journal of the Institute of Electronics Engineers of Korea TC / Vol. 43, No. 3 / pp.39-48 / 2006
  • In this paper, we analyze the spectral efficiency of the multicarrier code division multiple access (MC-CDMA) scheme. First, we derive a generalized formula for the spectral efficiency according to the number of subcarriers involved in code division multiplexing and the number of codes used (i.e., the loading factor), under a given set of channel coefficients. We also derive a generalized formula for the spectral efficiency of various reduced-complexity systems that divide the full set of subcarriers into several groups of subcarriers for code division multiplexing. Through these derivations, we establish the relationship between the frequency selectivity and the diversity order according to the number of multipaths. From the results, we choose the smallest code length that maximizes the diversity effect, provide an optimum subcarrier allocation strategy, and finally suggest a capacity-maximizing system structure under the smallest code length. Through numerical analyses in simulated environments, we examine the spectral-efficiency properties of various reduced-complexity systems and identify the major contributing factors for system design and a better design methodology. Finally, we compare the spectral efficiency of the MC-CDMA scheme and the orthogonal frequency division multiplexing (OFDM) scheme to establish a relationship between the two schemes. A simplified numerical illustration of the diversity/loading trade-off is given below.
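
The sketch below is not the paper's generalized formula; it is a simplified illustration of the diversity/loading trade-off, assuming one spreading code per subcarrier group with maximum-ratio combining (so there is no multi-code interference term) and illustrative channel and SNR values.

```python
# Simplified illustration of the diversity/loading trade-off discussed above.
# Single spreading code per group with MRC; all parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 64                      # total subcarriers
snr = 10.0                  # linear per-subcarrier SNR
h = (rng.normal(size=N) + 1j * rng.normal(size=N)) / np.sqrt(2)  # Rayleigh taps

def ofdm_spectral_efficiency(h, snr):
    """One symbol per subcarrier: average of per-subcarrier capacities [bit/s/Hz]."""
    return np.mean(np.log2(1.0 + snr * np.abs(h) ** 2))

def single_code_group_efficiency(h, snr, group_size):
    """One spread symbol per group of `group_size` subcarriers with MRC combining:
    full diversity within the group, but only 1/group_size symbols per subcarrier."""
    groups = h.reshape(-1, group_size)
    gains = np.sum(np.abs(groups) ** 2, axis=1)
    return np.mean(np.log2(1.0 + snr * gains)) / group_size

print(ofdm_spectral_efficiency(h, snr))
for g in (2, 4, 8):
    print(g, single_code_group_efficiency(h, snr, g))
```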

Simulation Techniques for Mid-Frequency Vibro-Acoustics: Virtual Tools for Real Problems

  • Desmet, Wim;Pluymers, Bert;Atak, Onur;Bergen, Bart;Deckers, Elke;Huijssen, Koos;Van Genechten, Bert;Vergote, Karel;Vandepitte, Dirk
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference / Proceedings of the 2010 Spring Conference of the Korean Society for Noise and Vibration Engineering / pp.49-49 / 2010
  • The most commonly used numerical modelling techniques for acoustics and vibration are element based techniques, such as the finite element and boundary element methods. Due to the huge computational efforts involved, the use of these deterministic techniques is practically restricted to low-frequency applications. For high-frequency modelling, probabilistic techniques such as SEA are well established. However, there is still a wide mid-frequency range for which no adequate and mature prediction techniques are available. In this frequency range, the computational efforts of conventional element based techniques become prohibitively large, while the basic assumptions of the probabilistic techniques are not yet valid. In recent years, a vast amount of research has been initiated in a quest for an adequate solution to the current mid-frequency problem. One family of research methods focuses on novel deterministic approaches with an enhanced convergence rate and computational efficiency compared to the conventional element based methods, in order to shift the practical frequency limitation towards the mid-frequency range. Amongst those techniques, a wave based prediction technique using an indirect Trefftz approach is being developed at the K.U.Leuven Noise and Vibration Research group. This paper starts with an outline of the major features of the mid-frequency modelling challenge and provides a short overview of the current research activities in response to this challenge. Next, the basic concepts of the wave based technique and its hybrid coupling with finite element schemes are described. Various validations on two- and three-dimensional acoustic, elastic, poro-elastic and vibro-acoustic examples are given to illustrate the potential of the method and its beneficial performance compared to conventional element based methods. A closing part shares some views on the open issues and future research directions.


Investigating the Impact of Random and Systematic Errors on GPS Precise Point Positioning Ambiguity Resolution

  • Han, Joong-Hee;Liu, Zhizhao;Kwon, Jay Hyoun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / Vol. 32, No. 3 / pp.233-244 / 2014
  • Precise Point Positioning (PPP) is an increasingly recognized precise GPS/GNSS positioning technique. In order to improve the accuracy of PPP, the error sources in PPP measurements should be reduced as much as possible and the ambiguities should be correctly resolved. Correct ambiguity resolution requires careful control of the residual errors, which are normally categorized into random and systematic errors. To understand the effects of these two categories of error on PPP ambiguity resolution, two GPS datasets are simulated for locations in South Korea (denoted SUWN) and Hong Kong (PolyU). Two simulation cases are studied for each dataset: in the first case all satellites are affected by systematic and random errors, and in the second case only a few satellites are affected. In the first case with random errors only, as the magnitude of the random errors increases, the L1 ambiguities have a much higher chance of being incorrectly fixed. However, the size of the ambiguity error is not exactly proportional to the magnitude of the random error; the satellite geometry has more impact on the L1 ambiguity resolution than the magnitude of the random errors. In the first case, when all satellites have both random and systematic errors, the accuracy of the fixed ambiguities is considerably affected by the systematic error: a pseudorange systematic error of 5 cm is much more detrimental to ambiguity resolution than a carrier-phase systematic error of 2 mm. In the second case, when only a portion of the satellites have systematic and random errors, the L1 ambiguities in PPP can still be correctly resolved; the number of allowable affected satellites varies from station to station, depending on the satellite geometry. Through extensive simulation tests under different schemes, this paper sheds light on how PPP ambiguity resolution (more precisely, L1 ambiguity resolution) is affected by the characteristics of the residual errors in PPP observations. The numerical examples remind PPP data analysts how accurate the error-correction models must be in order to resolve all the ambiguities correctly. A hedged sketch of how such errors could be injected into simulated observations is given below.
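
The Python sketch below shows one way such simulated observations could be generated. The 5 cm pseudorange and 2 mm carrier-phase biases follow the abstract; the noise levels, epoch count, and choice of affected satellites are illustrative assumptions.

```python
# Hypothetical injection of random and systematic errors into simulated observations.
# Bias magnitudes follow the abstract; everything else is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(42)

def simulate_observations(true_range, n_epochs, affected,
                          code_bias=0.05, phase_bias=0.002,
                          code_sigma=0.3, phase_sigma=0.003):
    """Return simulated pseudorange and carrier-phase observations [m] for one
    satellite over n_epochs. `affected` toggles the systematic (bias) errors."""
    bias_code = code_bias if affected else 0.0
    bias_phase = phase_bias if affected else 0.0
    pseudorange = true_range + bias_code + rng.normal(0.0, code_sigma, n_epochs)
    carrier = true_range + bias_phase + rng.normal(0.0, phase_sigma, n_epochs)
    return pseudorange, carrier

# Second case of the abstract: only a subset of satellites carries systematic errors.
affected_sats = {0, 3}                                     # illustrative choice
observations = {sat: simulate_observations(2.2e7, 50, sat in affected_sats)
                for sat in range(8)}
print(observations[0][0][:3])   # first pseudoranges of an affected satellite
```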

Seismic Traveltime Tomography in Inhomogeneous Tilted Transversely Isotropic Media (불균질 횡등방성 매질에서의 탄성파 주시토모그래피)

  • Jeong, Chang-Ho;Suh, Jung-Hee
    • Geophysics and Geophysical Exploration / Vol. 10, No. 4 / pp.229-240 / 2007
  • In this study, a seismic anisotropic tomography algorithm was developed for imaging the seismic velocity anisotropy of the subsurface. The algorithm includes several inversion schemes that make the inversion process stable and robust. First of all, the set of inversion parameters is limited to one slowness, two slowness ratios, and one direction of the anisotropy symmetry axis. The ranges of the inversion parameters are localized by the pseudo-beta transform to obtain reasonable inversion results, and the inversion constraints are controlled efficiently by the ACB (Active Constraint Balancing) method. In particular, inversion using the Fresnel volume is applied to the anisotropic tomography; it makes the anisotropic tomography more stable than ray tomography because it widens the propagation-angle coverage. The anisotropic tomography algorithm is verified through numerical experiments. It is then applied to real field data measured in a limestone region, and the results are discussed together with the drill log and geological survey data. The anisotropic tomography algorithm should provide a useful tool for evaluating and understanding the geological structure of the subsurface more reasonably, including its anisotropic characteristics. A hedged sketch of a bounding transform of the kind used to localize parameter ranges is given below.
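
The sketch below shows a logistic-style bounding transform in the spirit of the pseudo-beta transform mentioned above; the exact functional form used in the paper may differ, and the bounds are illustrative.

```python
# Hedged sketch of a bounding transform: it maps an unbounded inversion variable x
# to a physical parameter confined to [m_min, m_max]. The paper's pseudo-beta
# transform may use a different functional form.
import numpy as np

def bounded(x, m_min, m_max):
    """Logistic-style mapping from an unbounded variable to a bounded parameter."""
    return m_min + (m_max - m_min) / (1.0 + np.exp(-x))

def unbounded(m, m_min, m_max):
    """Inverse mapping, used to initialize the unbounded inversion variable."""
    t = (m - m_min) / (m_max - m_min)
    return np.log(t / (1.0 - t))

# Example: keep a slowness ratio (an anisotropy parameter) between 0.8 and 1.2.
x0 = unbounded(1.0, 0.8, 1.2)          # starting value in the unbounded domain
print(bounded(x0 + 5.0, 0.8, 1.2))     # even a large update stays inside the bounds
```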

Design and Analysis of a High Speed Switching System with Two Priorities (두개의 우선 순위를 가지는 고속 스윗칭 시스템의 설계 및 성능 분석)

  • Hong, Yo-Hun;Choe, Jin-Sik;Jeon, Mun-Seok
    • The KIPS Transactions: Part C / Vol. 8C, No. 6 / pp.793-805 / 2001
  • In a typical priority system, high-priority packets are served first, and low-priority packets are served only when there is no high-priority packet in the system. However, even a high-priority packet can be blocked by HOL (head-of-line) contention in an input-queueing system. Therefore, the overall switching performance can be improved by serving low-priority packets even when a high-priority packet is blocked. In this paper, we study the performance of preemptive priority in an input-queueing switch for high-speed switching systems. The analysis of this switching system takes into account the influence of priority scheduling and of the window scheme for head-of-line contention. We derive the queue-length distribution, delay, and maximum throughput of the switching system under these control schemes. Because of the service dependencies between inputs, an exact analysis of this switching system is intractable. Consequently, we provide an approximate analysis based on an independence assumption and the flow conservation rule, using an equivalent queueing system to estimate the service capability seen by each input. For the preemptive priority policy without a window scheme, we extend the approximation technique of Chen and Guerin [1] to obtain more accurate results. Moreover, we newly propose a window scheme that is appropriate for the preemptive priority switching system from the viewpoint of implementation and operation; it improves the total system throughput and the delay performance of low-priority packets. We also analyze this window scheme using an equivalent queueing system and compare the performance results with those obtained without the window scheme. Numerical results are compared with simulations. A hedged one-slot scheduling sketch with a look-ahead window is given below.
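
The Python sketch below plays out one time slot of an input-queued switch with preemptive priority and a look-ahead window. The matching rule (high-priority HOL packets first, then a window search for any packet destined to a still-free output) and the random tie-breaking are illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothetical one-slot scheduler for an input-queued switch with two priorities
# and a look-ahead window; the matching rule is an illustrative assumption.
import random
from collections import deque

HIGH, LOW = 0, 1   # two priority classes

def schedule_slot(queues, window):
    """One scheduling slot. queues: list of per-input deques of (priority, output)
    packets. Returns a dict mapping each served input to the queue position served."""
    served, busy_outputs = {}, set()
    # Phase 1: resolve HOL contention, giving high-priority packets precedence.
    for prio in (HIGH, LOW):
        contenders = [i for i, q in enumerate(queues)
                      if i not in served and q and q[0][0] == prio]
        random.shuffle(contenders)                      # random tie-breaking
        for i in contenders:
            out = queues[i][0][1]
            if out not in busy_outputs:
                busy_outputs.add(out)
                served[i] = 0
    # Phase 2 (window scheme): a blocked input looks up to `window` packets deep for
    # any packet whose output is still free, so low-priority packets can be served
    # even when a high-priority HOL packet is blocked.
    for i, q in enumerate(queues):
        if i in served:
            continue
        for pos in range(1, min(window, len(q))):
            out = q[pos][1]
            if out not in busy_outputs:
                busy_outputs.add(out)
                served[i] = pos
                break
    return served

# Three inputs; inputs 0 and 1 both have a high-priority HOL packet for output 0.
queues = [deque([(HIGH, 0), (LOW, 1)]),
          deque([(HIGH, 0), (LOW, 2)]),
          deque([(LOW, 1)])]
print(schedule_slot(queues, window=2))
```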
