• Title/Abstract/Keyword: Loss Allocation

Search results: 247 items

Transmitter Beamforming and Artificial Noise with Delayed Feedback: Secrecy Rate and Power Allocation

  • Yang, Yunchuan;Wang, Wenbo;Zhao, Hui;Zhao, Long
    • Journal of Communications and Networks / Vol. 14, No. 4 / pp. 374-384 / 2012
  • Utilizing artificial noise (AN) is a good means to guarantee security against eavesdropping in a multi-input multi-output system, where the AN is designed to lie in the null space of the legitimate receiver's channel direction information (CDI). However, imperfect CDI will lead to noise leakage at the legitimate receiver and cause significant loss in the achievable secrecy rate. In this paper, we consider a delayed feedback system, and investigate the impact of delayed CDI on security by using a transmit beamforming and AN scheme. By exploiting the Gauss-Markov fading spectrum to model the feedback delay, we derive a closed-form expression of the upper bound on the secrecy rate loss for the case $N_t = 2$. For a moderate number of antennas, $N_t > 2$, two special cases, based on the first-order statistics of the noise leakage and large number theory, are explored to approximate the respective upper bounds. In addition, to maintain a constant signal-to-interference-plus-noise ratio degradation, we analyze the corresponding delay constraint. Furthermore, based on the obtained closed-form expression of the lower bound on the achievable secrecy rate, we investigate an optimal power allocation strategy between the information signal and the AN. The analytical and numerical results obtained based on first-order statistics can be regarded as a good approximation of the capacity that can be achieved at the legitimate receiver with a certain number of antennas, $N_t$. In addition, for a given delay, we show that optimal power allocation is not sensitive to the number of antennas in a high signal-to-noise ratio regime. The simulation results further indicate that the achievable secrecy rate with optimal power allocation can be improved significantly as compared to that with fixed power allocation. In addition, as the delay increases, the ratio of power allocated to the AN should be decreased to reduce the secrecy rate degradation.
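The power-allocation step described in this abstract amounts to a one-dimensional search over the ratio of transmit power given to the information signal versus the AN. The sketch below is a minimal, hypothetical illustration of that search: the SINR expressions, the leakage gain modeling delayed CDI, and all names are assumptions, not the paper's closed-form bounds.

```python
import numpy as np

def secrecy_rate_lower_bound(phi, P, g_s, g_leak, g_eve, g_an_eve, noise=1.0):
    """Generic secrecy-rate lower bound for a split of total power P into
    phi*P for the information beam and (1-phi)*P for the AN. g_leak is an
    illustrative gain for AN leaking into the legitimate receiver because of
    imperfect (delayed) CDI; all gains are placeholders."""
    sinr_legit = phi * P * g_s / (noise + (1 - phi) * P * g_leak)
    sinr_eve = phi * P * g_eve / (noise + (1 - phi) * P * g_an_eve)
    return max(np.log2(1 + sinr_legit) - np.log2(1 + sinr_eve), 0.0)

def optimal_power_split(P, **gains):
    """Grid search over the signal/AN power-splitting ratio."""
    grid = np.linspace(0.01, 0.99, 99)
    rates = [secrecy_rate_lower_bound(phi, P, **gains) for phi in grid]
    best = int(np.argmax(rates))
    return grid[best], rates[best]

# Example: as the feedback delay grows, the leakage gain g_leak grows, and the
# search tends to shift power away from the AN, consistent with the abstract.
print(optimal_power_split(P=10.0, g_s=1.0, g_leak=0.1, g_eve=0.3, g_an_eve=1.0))
```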

Channel Allocation Method for OFDMA-Based Contiguous Resource Units with H-ARQ to Enhance Channel Throughput

  • 김상현;정영호
    • 한국항행학회논문지 / Vol. 15, No. 3 / pp. 386-391 / 2011
  • Contiguous resource allocation, in which adjacent OFDMA subcarriers are allocated to a user as a group, is used in various recent mobile communication systems, including IEEE 802.16e/m. When the scheduler allocates two or more contiguous resource units with different signal-to-noise ratios to one user, and that user transmits multiple independent packet streams with H-ARQ over the allocated channels, the throughput depends on which channels the retransmitted and newly transmitted packets are assigned to. This paper examines the optimal channel allocation for this problem and presents a suboptimal allocation method that lowers the complexity of the optimal scheme. An experimental performance analysis shows that the suboptimal method, which assigns the better channel to the initial transmission first, greatly reduces the complexity of the optimal scheme while achieving performance close to it.
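The suboptimal rule in the abstract, handing the better channel to the initial transmission before retransmissions, can be pictured as a simple greedy assignment. The packet structure, names, and ordering below are illustrative assumptions, not the paper's algorithm.

```python
def assign_channels(channel_snrs, packets):
    """Greedy sketch: sort channels from best to worst SNR and assign them to
    new (initial) transmissions before retransmissions. Assumes one packet per
    channel; extra packets are simply left unassigned."""
    # Channel indices ordered from best to worst SNR.
    order = sorted(range(len(channel_snrs)), key=lambda i: channel_snrs[i], reverse=True)
    # New packets take priority over retransmissions.
    queue = [p for p in packets if p["type"] == "new"] + \
            [p for p in packets if p["type"] == "retx"]
    return {p["id"]: ch for p, ch in zip(queue, order)}

packets = [{"id": "A", "type": "retx"}, {"id": "B", "type": "new"}]
print(assign_channels([12.0, 7.5], packets))  # B -> channel 0 (better), A -> channel 1
```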

LDBAS: Location-aware Data Block Allocation Strategy for HDFS-based Applications in the Cloud

  • Xu, Hua;Liu, Weiqing;Shu, Guansheng;Li, Jing
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 12, No. 1 / pp. 204-226 / 2018
  • Big data processing applications have gradually been migrated into the cloud due to the advantages of cloud computing. Hadoop Distributed File System (HDFS) is one of the fundamental support systems for big data processing on MapReduce-like frameworks, such as Hadoop and Spark. Since HDFS is not aware of the co-location of virtual machines in the cloud, the default block allocation scheme in HDFS does not fit cloud environments well, which manifests in two aspects: data reliability loss and performance degradation. In this paper, we present a novel location-aware data block allocation strategy (LDBAS). LDBAS jointly optimizes data reliability and performance for upper-layer applications by allocating data blocks according to the locations and different processing capacities of virtual nodes in the cloud. We apply LDBAS to two stages of data allocation of HDFS in the cloud (the initial data allocation and data recovery) and design the corresponding algorithms. Finally, we implement LDBAS in an actual Hadoop cluster and evaluate the performance with the benchmark suite BigDataBench. The experimental results show that LDBAS can guarantee the designed data reliability while reducing the job execution time of I/O-intensive applications in Hadoop by 8.9% on average, and up to 11.2%, compared with the original Hadoop in the cloud.
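A location-aware placement rule of this general kind can be sketched in a few lines. The node fields, the capacity-first ordering, and the host-spreading heuristic below are assumptions for illustration only; they are not the LDBAS algorithms themselves.

```python
def place_replicas(nodes, num_replicas=3):
    """Hypothetical sketch in the spirit of location-aware allocation: spread
    replicas across distinct physical hosts first (reliability), preferring
    virtual nodes with higher processing capacity (performance)."""
    chosen, used_hosts = [], set()
    # Prefer high-capacity nodes, but never place two replicas on one host
    # while an unused host is still available.
    for node in sorted(nodes, key=lambda n: n["capacity"], reverse=True):
        if node["host"] not in used_hosts:
            chosen.append(node["name"])
            used_hosts.add(node["host"])
        if len(chosen) == num_replicas:
            return chosen
    # Fall back to reusing hosts if there are fewer hosts than replicas.
    for node in sorted(nodes, key=lambda n: n["capacity"], reverse=True):
        if node["name"] not in chosen:
            chosen.append(node["name"])
        if len(chosen) == num_replicas:
            break
    return chosen

nodes = [
    {"name": "vm1", "host": "h1", "capacity": 8},
    {"name": "vm2", "host": "h1", "capacity": 6},
    {"name": "vm3", "host": "h2", "capacity": 4},
    {"name": "vm4", "host": "h3", "capacity": 2},
]
print(place_replicas(nodes))  # ['vm1', 'vm3', 'vm4']
```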

Resource Allocation Schemes for Legacy OFDMA Systems with Two-Way DF Relay

  • 서종필;한철희;박성호;정재학
    • 한국통신학회논문지 / Vol. 39A, No. 10 / pp. 593-600 / 2014
  • An OFDMA (orthogonal frequency division multiple access) system efficiently mitigates frequency-selective fading and achieves improved performance by optimally allocating subcarriers and transmit power. A two-way relay provides better spectral efficiency than a conventional half-duplex relay through bidirectional communication. When a two-way DF (decode-and-forward) relay is applied to a legacy OFDMA system such as WiBro, a pilot rearrangement problem arises, and channel estimation and received-signal decoding become difficult due to self-interference. This paper proposes a two-way DF relay resource allocation scheme suitable for legacy OFDMA systems such as WiBro. The proposed scheme allocates subcarriers by treating a node connected to the relay as a node communicating directly with the base station, and it compensates for the bandwidth loss caused by orthogonal allocation by overlap-assigning subcarriers that other nodes do not use. Computer simulations verify the performance improvement of the proposed resource allocation scheme.
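The two ideas in the abstract, scheduling relay-served nodes as if they communicated directly with the base station and then overlap-assigning otherwise unused subcarriers, can be roughly illustrated as follows. The quota-based greedy rule, the rate tables, and all names are assumptions, not the paper's scheme.

```python
def allocate(rates, quota, num_sc, relay_nodes):
    """Stage 1: orthogonal allocation in which relay-served nodes are treated
    exactly like direct nodes, each node receiving `quota` of its best
    subcarriers. Stage 2: subcarriers that no node is using are additionally
    (overlap-)assigned to the relay-served nodes to recover bandwidth lost to
    purely orthogonal allocation."""
    allocation = {n: [] for n in rates}
    taken = set()
    # Stage 1: orthogonal greedy allocation, relay-served nodes included as-is.
    for node in rates:
        best = sorted((sc for sc in range(num_sc) if sc not in taken),
                      key=lambda sc: rates[node][sc], reverse=True)[:quota]
        allocation[node].extend(best)
        taken.update(best)
    # Stage 2: overlap allocation of subcarriers left unused by every node.
    unused = [sc for sc in range(num_sc) if sc not in taken]
    for node in relay_nodes:
        allocation[node].extend(unused)
    return allocation

rates = {"ue1": [3, 1, 0, 2], "relay_ue": [1, 2, 3, 0]}
print(allocate(rates, quota=1, num_sc=4, relay_nodes=["relay_ue"]))
# {'ue1': [0], 'relay_ue': [2, 1, 3]}
```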

Threshold-based Filtering Buffer Management Scheme in a Shared Buffer Packet Switch

  • Yang, Jui-Pin;Liang, Ming-Cheng;Chu, Yuan-Sun
    • Journal of Communications and Networks / Vol. 5, No. 1 / pp. 82-89 / 2003
  • In this paper, an efficient threshold-based filtering (TF) buffer management scheme is proposed. The TF scheme is capable of minimizing the overall loss performance and improving the fairness of buffer usage in a shared buffer packet switch. TF consists of two mechanisms. One mechanism classifies the output ports as active or inactive by comparing their queue lengths with a dedicated buffer allocation factor. The other mechanism filters the arriving packets of inactive output ports when the total queue length exceeds a threshold value. A theoretical queueing model of TF is formulated and solved for the overall packet loss probability. Computer simulations are used to compare the overall loss performance of TF, dynamic threshold (DT), static threshold (ST), and pushout (PO). We find that the TF scheme is more robust against dynamic traffic variations than DT and ST. Also, although the overall loss performance of TF and PO is close, the implementation of TF is much simpler than that of PO.
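The two TF mechanisms lend themselves to a very small admission-control sketch. The rule below is an illustration under assumed parameter names (per-port dedicated allocation factor, global threshold), not the exact formulation analyzed in the paper.

```python
def admit_packet(port, queues, buffer_alloc_factor, threshold, buffer_size):
    """A port is 'active' while its queue length is below its dedicated buffer
    allocation factor; once the total queue length exceeds the threshold,
    arrivals destined for inactive ports are filtered (dropped)."""
    total = sum(queues.values())
    if total >= buffer_size:
        return False                       # shared buffer completely full
    active = queues[port] < buffer_alloc_factor
    if total > threshold and not active:
        return False                       # filter arrivals of inactive ports
    queues[port] += 1
    return True

# Example: port "p2" already exceeds its dedicated allocation, so its arrivals
# are filtered once the shared buffer passes the threshold.
queues = {"p1": 10, "p2": 45}
print(admit_packet("p2", queues, buffer_alloc_factor=32, threshold=50, buffer_size=64))  # False
print(admit_packet("p1", queues, buffer_alloc_factor=32, threshold=50, buffer_size=64))  # True
```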

Adaptive Importance Channel Selection for Perceptual Image Compression

  • He, Yifan;Li, Feng;Bai, Huihui;Zhao, Yao
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 14, No. 9 / pp. 3823-3840 / 2020
  • Recently, the auto-encoder has emerged as the most popular method in convolutional neural network (CNN) based image compression and has achieved impressive performance. In the traditional auto-encoder based image compression model, the encoder simply sends the features of the last layer to the decoder, which cannot allocate bits over different spatial regions in an efficient way. Besides, these methods do not fully exploit the contextual information under different receptive fields for better reconstruction performance. In this paper, to solve these issues, a novel auto-encoder model is designed for image compression, which can effectively transmit the hierarchical features of the encoder to the decoder. Specifically, we first propose an adaptive bit-allocation strategy, which can adaptively select an importance channel. Then, we multiply the generated importance mask with the features of the last layer in our proposed encoder to achieve efficient bit allocation. Moreover, we present an additional novel perceptual loss function for more accurate image details. Extensive experiments demonstrate that the proposed model achieves significant superiority over JPEG and JPEG2000 in both subjective and objective quality. Besides, our model shows better performance than state-of-the-art CNN-based image compression methods in terms of PSNR.
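The importance-mask multiplication described above can be illustrated with a short NumPy sketch. The choice of importance channel, the sigmoid squashing, and the tensor shapes are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def apply_importance_mask(features, importance_channel_idx):
    """Interpret one channel of the encoder output as an importance map,
    squash it to [0, 1], and multiply it element-wise onto the remaining
    feature channels so that more bits survive in important spatial regions."""
    importance = 1.0 / (1.0 + np.exp(-features[importance_channel_idx]))  # (H, W) in [0, 1]
    coded = np.delete(features, importance_channel_idx, axis=0)           # (C-1, H, W)
    return coded * importance[None, :, :]                                 # broadcast over channels

features = np.random.randn(8, 16, 16)   # toy encoder output: 8 channels, 16x16 spatial map
masked = apply_importance_mask(features, importance_channel_idx=0)
print(masked.shape)                      # (7, 16, 16)
```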

Compatibility between LTE Cellular Systems and WLAN

  • 조한신
    • 한국전자파학회논문지 / Vol. 26, No. 2 / pp. 171-178 / 2015
  • Among the long-term evolution (LTE) cellular service candidate bands defined by 3GPP, the international radio access standards group, the 2.3~2.4 GHz band is adjacent to the WLAN band (2.4~2.5 GHz), so interference analysis is required to assess the compatibility of the two systems. This study proposes an interference analysis method based on dynamic system simulation to accurately analyze the effect of WLAN interference on the LTE system. Dynamic system simulation has the advantage of predicting results close to the real environment by modeling system variables that change over space, time, and frequency. Using the proposed simulator, the LTE downlink throughput degradation was calculated as a function of the frequency separation between the two systems. With a guard band of 11 MHz (co-channel allocation of the WLAN AP) or 10 MHz (channel-3 allocation of the WLAN AP) between the two systems, a throughput degradation of less than 1% was achieved.

Modeling of Propagation Interference and Channel Application Solution Suggestion in the UHF Band RFID Propagation Path

  • 문영주;여선미;전부원;노형환;정명섭;오하령;성영락;박준석
    • 전기학회논문지 / Vol. 57, No. 11 / pp. 2047-2053 / 2008
  • Auto-ID industries and their services have been improving for decades, and radio frequency identification (RFID) has contributed to many applications, with product management as the foremost example. In our industrial experience, RFID in the ultra high frequency (UHF) band provides much longer interrogation ranges than 13.56 MHz RFID, and thereby enables many more applications. There are several interesting and useful ideas for UHF RFID; however, those ideas can be limited by environmental circumstances that shorten the interrogation range. This paper discusses the propagation interference among different types of readers (e.g., mobile RFID readers in a stationary reader zone) in a dense-reader environment. In most cases, UHF RFID in Korea will depend on UHF mobile RFID, so UHF mobile users may accidentally move into a stationary reader's interrogation zone, which is a serious problem. In this paper, we analyze the propagation loss and propose an effective channel allocation scheme that can contribute to developing less-invasive UHF RFID networks. The simulation and practical measurement process, using commercial CAD tools and measurement equipment, is presented.

Modeling and Analysis of Burst Switching for Wireless Packet Data

  • 박경인;이채영
    • 대한산업공학회지 / Vol. 28, No. 2 / pp. 139-146 / 2002
  • Third-generation mobile communication needs to provide multimedia services at increased data rates, so efficient allocation of radio and network resources is very important. This paper models burst switching as an efficient radio resource allocation scheme, and its performance is compared with circuit and packet switching. In burst switching, a radio resource is allocated to a call for the duration of a data burst, rather than for an entire session or a single packet as in circuit and packet switching. After a data burst, if a packet does not arrive within the timer-2 value ($\tau_{2}$), the physical-layer channel is released and the call stays in a suspended state. If a packet still does not arrive within the timer-1 value ($\tau_{1}$) in the suspended state, the upper layer is also released. The two timer values must therefore be chosen to minimize the sum of access delay and queueing delay. In this paper, we focus on determining the $\tau_{2}$ that minimizes the access and queueing delay under the assumption that traffic arrivals follow a Poisson process. The simulation, however, is performed with a Pareto distribution, which describes bursty traffic well. The computational results show that the delay and the packet loss probability of burst switching are dramatically reduced compared with packet switching.
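The two-timer release logic is concrete enough to sketch directly. The toy event loop below measures both timers from the last packet arrival and uses illustrative values; it is not the queueing model analyzed in the paper.

```python
def simulate_burst_switching(arrival_times, tau2, tau1):
    """After a burst, the physical channel is released if no packet arrives
    within tau2; in the suspended state, the upper layer is also released
    after a further tau1 of silence. State names are illustrative."""
    releases = []
    last = arrival_times[0]
    for t in arrival_times[1:]:
        gap = t - last
        if gap > tau2 + tau1:
            releases.append((last, "physical channel + upper layer released"))
        elif gap > tau2:
            releases.append((last, "physical channel released (call suspended)"))
        last = t
    return releases

# Packets at t = 0, 1, 5, 20: the 1->5 gap releases only the channel,
# while the 5->20 gap also releases the upper layer (tau2 = 2, tau1 = 10).
print(simulate_burst_switching([0, 1, 5, 20], tau2=2, tau1=10))
```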

Fault-Tolerant Tasking and Guidance of an Airborne Location Sensor Network

  • Wu, N.Eva;Guo, Yan;Huang, Kun;Ruschmann, Matthew C.;Fowler, Mark L.
    • International Journal of Control, Automation, and Systems / Vol. 6, No. 3 / pp. 351-363 / 2008
  • This paper is concerned with tasking and guidance of networked airborne sensors to achieve fault-tolerant sensing. The sensors are coordinated to locate hostile transmitters by intercepting and processing their signals. Faults occur when some sensor-carrying vehicles engaged in target location missions are lost. Faults effectively change the network architecture and therefore degrade the network performance. The first objective of the paper is to optimally allocate a finite number of sensors to targets to maximize the network life and availability. To that end allocation policies are solved from relevant Markov decision problems. The sensors allocated to a target must continue to adjust their trajectories until the estimate of the target location reaches a prescribed accuracy. The second objective of the paper is to establish a criterion for vehicle guidance for which fault-tolerant sensing is achieved by incorporating the knowledge of vehicle loss probability, and by allowing network reconfiguration in the event of loss of vehicles. Superior sensing performance in terms of location accuracy is demonstrated under the established criterion.