
Datacenter-Oriented Elastic Optical Networks: Architecture, Operation, and Solutions

  • Peng, Limei (Department of Industrial Engineering, Ajou University) ;
  • Sun, Yantao (School of Computer and Information Technology, Beijing Jiaotong University) ;
  • Chen, Min (School of Computer Science and Technology, Huazhong University) ;
  • Park, Kiejin (Department of Industrial Engineering, Ajou University)
  • Received : 2014.06.21
  • Accepted : 2014.09.16
  • Published : 2014.11.30

Abstract

With exponentially increasing Internet traffic and the emergence of more versatile and heterogeneous applications, the design of datacenter networks (DCNs) is subject to unprecedented requirements for larger capacity and more flexible switching granularities. Envisioning Optical Orthogonal Frequency Division Multiplexing (O-OFDM) as a promising candidate for such a scenario, we motivate the use of O-OFDM as the underlying switching technology to provide sufficient switching capacity and elastic bandwidth allocation. For this purpose, this article reviews recent progress in DCN deployment and assesses the scenario where O-OFDM transmission and switching technology is employed in the underlying transport plane. We discuss the key issues of datacenter-oriented O-OFDM optical networks and, in particular, elaborate on a number of open issues and solutions, including the system interconnection architecture, routing and resource assignment, survivability, and energy efficiency.


1. Introduction

The Internet has become indispensable in our daily life and demonstrates two important trends. Firstly, the number of clients is doubling every year. Secondly, the types of data traffic are becoming more diverse, ranging from bandwidth-intensive video services to datacenter-oriented cloud-computing services. A typical datacenter consists of hundreds of pods, each connected to as many as thousands of servers through 1-GE/10-GE ports. Thus, a datacenter can contain as many as one hundred thousand servers, and the required peak communication bandwidth can reach 100 terabits per second (Tbps) or even higher [2]. The interconnection among servers is supported by a datacenter network (DCN), whose design is subject to a number of stringent requirements that cannot be properly met by the legacy Internet infrastructure [1]. Firstly, a DCN should be equipped with a huge transmission capacity to accommodate fluctuating traffic volumes and scale with increasing traffic demand. Secondly, the switching granularity in a DCN should be dynamically and flexibly reconfigurable to cater to various service bandwidth requirements and make the best use of the available network capacity. These two design goals, nonetheless, should be met with minimal construction cost, satisfactory reliability/survivability, and high energy efficiency [3].

The supporting technology for DCN interconnection can generally be classified into three categories: electrical switching [1], hybrid electrical/optical switching [4,5], and optical switching [6-9]. Since the number of switching ports supported by an electrical switch is limited, an electrical DCN usually employs a server-oriented multi-tier interconnection architecture [1] to provide sufficient ports for non-blocking all-to-all communications among a huge number of servers. In [1], an electrical DCN architecture with a three-tier Fat-Tree topology was proposed, whose tiers comprise the edge, aggregation, and core switch layers, each deployed with a large number of electrical switches. Generally, electrical-switching-based DCNs may perform reasonably well under a high oversubscription ratio. However, such systems suffer a serious scalability problem in terms of switching capacity and power consumption as the number of servers and/or pods increases.

Hybrid electrical/optical switching architectures, named Helios and HyPaC, which partially employ optical circuit switching (OCS) technology in the core switching layer, were reported in [4,5]. Their core switches employ Micro-Electro-Mechanical Systems (MEMS) to support OCS for long-lasting traffic flows via wavelength circuits. Although such a hybrid approach can alleviate the scalability problem of the electrical-switching architecture to some extent, it is still considered a transition toward an all-optical switching paradigm, because a large number of electrical switches remain deployed in, and dominate, the core switching layer, which could lead to traffic bottlenecks. Moreover, the OCS technology, with its static and coarse switching granularity, can neither agilely handle the bursty DCN traffic nor flexibly switch a traffic flow at a desired bandwidth/rate.

Accordingly, researchers have turned to all-optical switching deployments for DCNs as a long-term solution [6-9], so as to meet the high switching and transmission capacity requirements within DCNs, as well as to reduce construction cost and power consumption by minimizing and flattening the DCN intermediate layers. All-optical switching techniques can be mainly divided into two categories: optical circuit switching (OCS) and optical packet switching (OPS) [10]. OCS is relatively mature, but its coarse switching granularity at the level of a wavelength channel prevents it from efficiently switching the bursty, fine-granularity DCN traffic. OPS, on the other hand, offers fine and adaptive switching capability but suffers from serious problems in technological maturity owing to the lack of high-speed optical switching fabrics and all-optical buffers.

Envisioning the demand for an all-optical switching technology that can extend the switching capacity with flexible granularity for DCN interconnection, we consider the O-OFDM technology a promising candidate, thanks to its spectral efficiency, transmission capacity, and dynamic bandwidth allocation at different granularities [11]. A few research efforts have been reported on applying the O-OFDM switching and transmission technique to DCNs [8], which yielded only a limited understanding of how to exploit its huge and elastic switching and transmission capability to construct cost-effective, reliable, and energy-efficient all-optical DCNs. We regard this as a missing piece of state-of-the-art DCN development, and in this paper we discuss these open issues and investigate possible solutions, based on the requirements of future DCNs, by incorporating the advantages of the O-OFDM-based all-optical switching technique.

 

2. All-Optical Datacenter Networks

The task of constructing an all-optical DCN can be simply described as replacing all the existing electrical switches in the core with an all-optical switching network, such that data (e.g., Ethernet frames) from a source pod are converted into optical packets/bursts and switched in the optical domain to the desired destination pod. This task, however, involves serious engineering complexity and design challenges owing to the lack of mature technology for building a fast optical switching fabric that is large enough (in terms of input/output ports) in the core to achieve sufficient switching capacity and fine granularity for the bursty traffic and huge demands in DCNs.

Several efforts targeting large and fast optical switch designs under various DCN interconnection architectures have been reported. In [6], a new datacenter optical switch (DOS) architecture was proposed, featuring a single arrayed-waveguide grating router (AWGR), an array of tunable wavelength convertors (TWCs), an array of label extractors (LEs), and a loopback shared electronic buffer with additional O/E and E/O conversion circuits, jointly coordinated under a centralized controller. In [7], a three-stage Clos-based architecture was proposed, employing multiple AWGRs at each stage and an array of TWCs between each pair of adjacent stages, in order to improve scalability over the single-AWGR design of [6]. Similar to [6], global scheduling of all packets among the pods is required to determine the wavelength and transmission time of each packet, which is realized by coordinating multiple schedulers through an iterative algorithm. In [8], an all-optical switching architecture for intra-DCNs was proposed using a single N × N passive cyclic arrayed-waveguide grating (CAWG) and N pairs of O-OFDM transceivers; all O-OFDM subcarriers comply with a centralized allocation scheme to avoid subcarrier contention. A few other all-optical DCN architectures have also been proposed, such as the IRIS architecture based on an AWGR-Clos network incorporating time switches [9]. In all these architectures, the top-of-rack (ToR) switch of each pod is connected to the others in a point-to-point manner to enable direct communication via large-scale optical switches. Such direct communication among ToR switches offers low latency but scales poorly because of the required large-scale optical switches.

 

3. O-OFDM Optical Networks

The O-OFDM transmission technology offers the following advantages. Firstly, on the transmitter side, by launching an optical pulse in the time domain, the impact of chromatic dispersion (CD) and polarization mode dispersion (PMD) in fiber transmission systems can be effectively alleviated, thereby mitigating the scalability issue of DCNs through significantly increased transmission capacity and distance. Secondly, the O-OFDM technology allows neighboring subcarriers to overlap in spectrum, which greatly improves fiber spectrum utilization compared with conventional OCS. Thirdly, by dynamically manipulating the number of O-OFDM subcarriers of an optical channel as well as the modulation format of each subcarrier, the transmission capacity of an optical channel can be elastically changed to provide a proper bandwidth for each data request, so as to efficiently adapt to bursty traffic demands purely through a dynamic optical-layer resource-allocation process.

A general block diagram of an O-OFDM transmission system is shown in Fig. 1 [12]. At the transmitter side, the input serial data stream is first converted into multiple parallel data streams through a serial-to-parallel convertor, each of which is then mapped onto the corresponding information symbols for the subcarriers by the modulation/symbol-mapping block. The parallel data streams are modulated onto orthogonal subcarriers and converted to time-domain signals by applying the inverse discrete Fourier transform (IDFT). The output signal is then converted into analog form by digital-to-analog (D/A) convertors and filtered with a low-pass filter (LPF). Finally, the analog baseband signal is up-converted onto an optical carrier with an in-phase/quadrature (I/Q) modulator at the local optical transmitter. At the receiver side, coherent detection is performed to demodulate the O-OFDM signal from the optical carrier with an optical I/Q demodulator. The demodulated O-OFDM signal is converted to digital form via an A/D convertor and then sequentially passes through a DFT and a data-symbol decision module, which carry out synchronization, channel estimation, and compensation before making a symbol decision. Finally, the multiple bit streams are converted back to a single data stream by a parallel-to-serial convertor.

Fig. 1. Block diagram of an O-OFDM transmission system: (a) O-OFDM transmitter; (b) O-OFDM receiver [12]
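The transmitter/receiver chain in Fig. 1 can be sketched as a purely digital round trip, assuming QPSK symbol mapping, 64 subcarriers, and an ideal (noiseless, distortionless) channel; the optical up/down-conversion stages are omitted, and all names are illustrative rather than from any transceiver API.

```python
import numpy as np

N_SC = 64                      # number of orthogonal subcarriers
rng = np.random.default_rng(0)

# Transmitter: serial bits -> parallel QPSK symbols -> IDFT (time domain)
bits = rng.integers(0, 2, size=2 * N_SC)
symbols = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])   # QPSK map
tx_time = np.fft.ifft(symbols)                               # IDFT block

# Receiver: DFT -> symbol decision -> parallel-to-serial bits
rx_symbols = np.fft.fft(tx_time)                             # DFT block
rx_bits = np.empty_like(bits)
rx_bits[0::2] = (rx_symbols.real < 0).astype(int)            # decide I rail
rx_bits[1::2] = (rx_symbols.imag < 0).astype(int)            # decide Q rail

assert np.array_equal(bits, rx_bits)   # lossless over the ideal channel
```

A real receiver inserts synchronization, channel estimation, and compensation between the DFT and the symbol decision, as the text describes; they are no-ops in this idealized sketch.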

Fig. 2 shows the architecture of an elastic optical network using the O-OFDM switching technology. Each of the four bandwidth-variable wavelength cross-connect (BV-WXC) nodes [13] adopts a bandwidth-variable wavelength selective switch (BV-WSS), such that each BV-WXC node can cross-connect a flexibly selected spectrum bandwidth by grouping multiple adjacent spectrum granularities.

Fig. 2. Architecture of an elastic O-OFDM optical network [13]

 

4. O-OFDM DCNs: Operation and Solutions

Given the numerous merits of adopting the O-OFDM switching technology to build the core of DCNs, we provide more in-depth descriptions of datacenter-oriented O-OFDM optical networks in the following four aspects: (1) interconnection architecture, (2) routing and resource assignment, (3) network survivability, and (4) energy efficiency.

4.1 Interconnection Architecture

As a cost-effective and scalable alternative to the all-optical DCN architectures using large-scale optical switches, we can employ multiple small-and-fast optical switching fabrics to build a large-scale DCN where the pods are interconnected via multi-hop communications. With this, a flattened two-layer architecture is formed, where the pods and the core optical switches are treated in separate layers. Each pod is directly connected to one or more optical core switches via short-reach interfaces, and the core switches are interconnected with each other in a more complex manner (e.g., meshed topology), such that each pod can reach another via multiple hops of core switches.

Fig. 3 shows an example of the two-layer flattened DCN architecture, composed of an upper O-OFDM optical network layer and a lower pod layer. The numbers of O-OFDM switch nodes and pods are the same, and a typical O-OFDM switch node consists of a BV-WXC-based (as shown in Fig. 2) reconfigurable optical add/drop multiplexer (ROADM), a pair of coherent transceivers, and n pairs of input and output fibers (where n is the node degree). In the O-OFDM network layer, the core switches are interconnected with each other via optical fibers in the shape of a 3-cube, while the pods are directly connected to the O-OFDM switch nodes via short-reach optical links and to their associated servers via Gigabit Ethernet (GE) ports. For a larger number of core nodes/pods, the O-OFDM core switches can be interconnected in a scaled n-cube (n > 3) manner. To initiate a communication between two servers in different pods, the pod of the source server issues a request to its directly connected O-OFDM core switch, and the request is forwarded through the O-OFDM network layer to the destination pod, where it eventually reaches the destination server. Such a two-layer architecture gains in scalability over those using one or multiple large-scale optical switches, because its total throughput depends only on the switching capability of the core switching layer and the interconnection architecture of the small-and-fast optical switches, rather than on the maximal number of switch ports as in [6-9].

Fig. 3. Datacenter-oriented O-OFDM optical networks

To date, most interconnection architectures for DCNs are based on regular topologies owing to their simplicity, self-similarity, and superb scalability. Many well-developed graph theories and analytical models for scheduling/resource allocation exist for regular topologies such as the hypercube, butterfly, and shuffle networks, which could serve this purpose [19]. In addition to the 3-cube topology in Fig. 3, O-OFDM core switches can also be logically interconnected in the other regular topologies mentioned above.
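The n-cube interconnection can be captured with a few lines of bit arithmetic: assuming core switches are labeled 0..2^n-1 and two switches are linked exactly when their labels differ in one bit, the shortest hop count between two switches is simply the Hamming distance of their labels (a minimal illustration; the labeling convention is assumed).

```python
def ncube_neighbors(node: int, n: int) -> list[int]:
    """Switches directly linked to `node` in an n-cube: flip each bit."""
    return [node ^ (1 << i) for i in range(n)]

def ncube_hops(src: int, dst: int) -> int:
    """Shortest hop count = Hamming distance of the binary labels."""
    return bin(src ^ dst).count("1")

# In the 3-cube of Fig. 3, node 0 has degree 3 and reaches node 7 in 3 hops.
assert ncube_neighbors(0, 3) == [1, 2, 4]
assert ncube_hops(0, 7) == 3
```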

The performance of the O-OFDM DCN architecture can be evaluated from different perspectives, such as network construction cost, network power consumption, and architecture robustness. When focusing on network cost, we may quantitatively model the cost as a function of the total numbers of switches, transceivers, and optical fiber/electrical links. Likewise, the energy cost of a DCN can be analyzed as a function of the per-bit energy consumption of the switches and transceivers and the total bit rates transmitted over them. For DCN robustness, end-to-end service availability could be an interesting metric to measure and model jointly with the network connectivity of the DCN architecture, since networks with stronger connectivity generally tend to be more robust.
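The cost and energy functions just described can be sketched as weighted sums; the unit costs and per-bit energy below are placeholder parameters for illustration, not measured values from any vendor.

```python
def network_cost(n_switches: int, n_transceivers: int, n_links: int,
                 c_switch: float = 1.0, c_trx: float = 0.2,
                 c_link: float = 0.05) -> float:
    """Construction cost as a weighted sum of the device counts."""
    return n_switches * c_switch + n_transceivers * c_trx + n_links * c_link

def network_energy(device_gbps: list[float], j_per_gbit: float = 0.5) -> float:
    """Energy rate as per-bit energy times total transmitted bit rate."""
    return sum(device_gbps) * j_per_gbit

# A 3-cube core: 8 switches, 8 transceiver pairs, 12 inter-switch links
cost = network_cost(n_switches=8, n_transceivers=16, n_links=12)
```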

4.2 Routing and Resource Assignment

The problem of routing and spectrum assignment (RSA) in the O-OFDM-based Internet backbone has been extensively studied by dividing it into two sub-problems that are solved one after the other, i.e., lightpath routing and spectrum assignment [14]. Nonetheless, few works on RSA have been conducted for O-OFDM-based DCNs, and none has been reported on supporting O-OFDM DCNs with regular network topologies.

Regular topologies are advantageous for developing efficient static routing algorithms. For example, in an n-cube topology, a spanning balanced tree (SBT) can be built at each node to reach all the other nodes via fixed, shortest routing paths [2]; by allocating an SBT to each node, the lightpath-routing sub-problem can be easily solved. Fig. 4 shows an SBT established for node 0 (i.e., rooted at node 0) in a 5-cube DCN, where node 0 (i.e., 00000) can reach all other nodes (represented by binary numbers) within 5 hops. Nonetheless, although such a static routing scheme exploits the regularity of the topology and is simple to manage, its simplicity comes at the expense of a possibly unbalanced load distribution and potential bottlenecks under bursty traffic, which may lead to congestion on high-load links. A more efficient utilization of the network resources can be achieved by employing dynamic routing mechanisms with adaptive route selection and real-time traffic engineering, where a pod of the DCN serves as the controller that actively collects dynamic link metrics, such as queue length, average delay, and link utilization.

Fig. 4. A spanning balanced tree (SBT) for node 0 (00000) in the 5-cube (with 32 core switches). ST: sub-tree [2]
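The fixed shortest-path routing along such a tree can be sketched as dimension-order routing, which flips the differing label bits from the lowest dimension upward; this is a minimal illustration assuming one particular SBT ordering, not the exact tree of [2].

```python
def sbt_route(src: int, dst: int, n: int) -> list[int]:
    """Fixed shortest path in an n-cube: flip differing bits in order."""
    path, cur = [src], src
    for i in range(n):
        if (cur ^ dst) & (1 << i):      # labels differ in dimension i
            cur ^= (1 << i)             # traverse that cube edge
            path.append(cur)
    return path

# Node 0 (00000) reaches node 21 (10101) in 3 hops in the 5-cube.
assert sbt_route(0b00000, 0b10101, 5) == [0b00000, 0b00001, 0b00101, 0b10101]
```

The path length always equals the Hamming distance of the endpoint labels, so every route is shortest, at the cost of the load imbalance noted above.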

For the spectrum resource assignment, two spectrum operational modes can be considered, namely, mini-grid and gridless [15]. Under the mini-grid mode, the frequency spacing can be reduced to as little as 6.25 GHz, compared with the 100-GHz, 50-GHz, or 25-GHz spacings of the traditional ITU-T grid. Under the gridless mode, the concept of frequency spacing essentially disappears, and spectrum bands can be assigned in an on-demand manner with a guard band between two adjacent bands. In addition, considering the time-varying characteristics of DCN traffic, the spectrum resources can be shared time-dependently by two neighboring optical channels at different time slots, as described in [16].
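A first-fit assignment under the mini-grid mode can be sketched as below, assuming 6.25-GHz slots and a one-slot guard band between neighboring channels; the boolean slot model and helper names are illustrative, not from any standard API.

```python
import math

SLOT_GHZ = 6.25                         # mini-grid slot width

def first_fit(free: list[bool], demand_ghz: float, guard: int = 1):
    """Return the start index of the first run of free slots wide
    enough for the demand plus the guard band, or None if none exists."""
    need = math.ceil(demand_ghz / SLOT_GHZ) + guard
    run = 0
    for i, is_free in enumerate(free):
        run = run + 1 if is_free else 0
        if run == need:
            return i - need + 1
    return None

# A 25-GHz demand occupies 4 slots plus 1 guard slot: 5 contiguous slots.
assert first_fit([True] * 10, 25.0) == 0
```

Under the gridless mode the same search would operate on continuous frequency intervals rather than discrete slot indices.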

Unlike in WANs, most packets in a DCN are fragments of large files that are distributed to other nodes for storage and computing [2], which provides operational flexibility in packet scheduling and spectrum assignment. More specifically, in addition to dynamically adjusting the modulation format of each subcarrier to best satisfy the bandwidth requirement of each packet, files can be elastically segmented into flexible numbers of pieces according to the available spectrum resources on each optical channel, so as to maximize spectrum utilization.

Finally, spectrum continuity is an important constraint on the routing of each packet: a lightpath must use the same spectrum on all the fiber links it traverses. Studies on wavelength converters have shown that wavelength conversion capability can significantly improve network performance [17]; for an O-OFDM-based DCN, spectrum converters are expected to yield a similar benefit [18]. To investigate such an impact, ROADM node architectures with/without spectrum convertors and the related analytical models of lightpath blocking performance should be developed.
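The continuity constraint itself is easy to state in code: without spectrum converters, a lightpath needs a slot range that is free on every link it traverses, i.e., the element-wise intersection of the per-link free-slot maps (the boolean-map representation is an assumption for illustration).

```python
def continuous_slots(path_links: list[list[bool]]) -> list[bool]:
    """AND together the free-slot maps of all links on the path;
    a slot is usable end-to-end only if it is free on every hop."""
    avail = path_links[0][:]
    for link in path_links[1:]:
        avail = [a and b for a, b in zip(avail, link)]
    return avail

link_a = [True, True, False, True]    # free-slot map of the first hop
link_b = [True, False, False, True]   # free-slot map of the second hop
assert continuous_slots([link_a, link_b]) == [True, False, False, True]
```

Spectrum converters relax exactly this constraint: each hop could then pick slots from its own map independently.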

4.3 DCN Survivability

A DCN may suffer one or multiple faults of its network entities, such as a node (i.e., a switch) or a fiber link, which can severely degrade the expected performance of the DCN. The following discusses the development of a fast failure-recovery scheme in the O-OFDM optical layer to cope with such circumstances.

For node failure, a datacenter-oriented recovery scheme called "node migration" can be implemented. Since most packets in DCNs are fragmented from large files and distributed to nearby DC nodes for storage and computing, a node (or a set of nodes) is defined as the backup for a specific node-failure event if it has sufficient free storage and computing resources to recover the event through predefined protection lightpaths. Fig. 5 shows an example of node migration in a 3-cube DCN. In the normal state, node 0 functions as a working node for nodes 1, 2, and 4, which push or pull their needed resources from node 0 via the bidirectional links 1-0, 2-0, and 4-0. When node 0 fails, node 3 or node 7 can provide computing and storage resources to recover the failure, subject to two conditions: the backup computing and storage resources at the chosen node, Σi=1,2,4 BCi and Σi=1,2,4 BSi, must be no smaller than the Σi=1,2,4 WCi and Σi=1,2,4 WSi required at node 0, where WCi/BCi and WSi/BSi denote the working/backup computing and storage resources for node i, respectively; and there must be sufficient link capacity for nodes 1, 2, and 4 to transfer their respective requests (i.e., BLi ≥ WLi, i = 1, 2, 4).

Fig. 5. (a) Normal state without node failure; (b) recovery via node 3 when node 0 fails; (c) recovery via node 7 when node 0 fails

There are two important tasks in the design of a node-migration scheme. The first is the selection of an appropriate backup node (or nodes) that can provide sufficient backup storage and computing resources while minimizing the impact on existing services. The second is ensuring that the links connected to the backup nodes can provide sufficient bandwidth for data transfer. For efficient resource utilization, these backup resources can be shared by all the protected nodes; how to efficiently share these protection resources is an important research problem that merits further exploration.
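The feasibility condition for node migration can be sketched directly in the text's notation: pod i that depended on the failed node needs working compute/storage (WC_i, WS_i) over a link of working bandwidth WL_i, recovered over a backup link of bandwidth BL_i. The tuple layout and function name are illustrative, not an actual DCN-controller API.

```python
def can_migrate(affected, backup_compute, backup_storage):
    """affected: list of (WC_i, WS_i, WL_i, BL_i). Migration to the
    backup node is feasible iff its aggregate compute and storage cover
    the demand and each backup link is at least as wide as the working
    link it replaces (BL_i >= WL_i)."""
    need_c = sum(wc for wc, _, _, _ in affected)
    need_s = sum(ws for _, ws, _, _ in affected)
    links_ok = all(bl >= wl for _, _, wl, bl in affected)
    return backup_compute >= need_c and backup_storage >= need_s and links_ok

# Nodes 1, 2, 4 each need 10 compute / 20 storage over a 40-Gbps link.
assert can_migrate([(10, 20, 40, 40)] * 3, backup_compute=30, backup_storage=60)
```

Selecting among several candidate backup nodes would then amount to running this check per candidate and breaking ties by the impact on existing services.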

Link-failure protection schemes, such as 1+1/1:1 protection, ring protection, and shared backup path protection (SBPP), have been extensively investigated over the past decades in SDH/SONET and WDM networks [20]. The O-OFDM switching technology has the additional advantage of supporting bandwidth squeezing [21], a unique feature that can greatly improve protection flexibility and efficiency. Fig. 6 shows an example of recovering multiple failures in a 3-cube DCN. Only some links, i.e., 0-1, 0-2, 1-3, 2-3, 6-7, etc., are shown for clarity, and the capacity of each link is assumed to be 100 Gbps. With bandwidth squeezing, both failed working paths (i.e., paths 0-1 and 6-7) can be partially recovered by their respective backup paths, 0-2-3-1 and 6-2-3-7. More specifically, X Gbps (X < 100) of affected traffic on path 0-1 and (100-X) Gbps of affected traffic on path 6-7 are recovered simultaneously by their respective protection paths, which share link 2-3.

Fig. 6. Protection and restoration for dual failures in a 3-cube DCN (only a partial set of links is shown)

In the above example, when the total required bandwidth of the two failed working lightpaths is less than the capacity of their shared backup link, both can be fully recovered; otherwise, mission-critical services are recovered first owing to their higher priority. In addition, when a single backup lightpath cannot provide sufficient bandwidth for a failed working lightpath that requires full recovery, the affected bandwidth can be distributed onto two or more bandwidth-squeezed backup paths whose total backup bandwidth accommodates all the affected traffic.
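The bandwidth-squeezed splitting just described can be sketched as a greedy allocation over the residual capacities of the candidate backup paths; this is an illustration of the principle, not the restoration algorithm of [21].

```python
def squeeze_restore(demand_gbps: float, backup_caps: list[float]):
    """Spread the affected demand over backup paths, each carrying at
    most its residual capacity. Returns (per-path allocations, total
    recovered); the total may fall short, i.e., partial recovery."""
    alloc, remaining = [], demand_gbps
    for cap in backup_caps:
        take = min(cap, remaining)
        alloc.append(take)
        remaining -= take
    return alloc, demand_gbps - remaining

# A 100-Gbps demand over two squeezed backup paths of 60 and 30 Gbps:
# 90 Gbps is recovered, so the service is partially restored.
alloc, recovered = squeeze_restore(100.0, [60.0, 30.0])
assert alloc == [60.0, 30.0] and recovered == 90.0
```

A priority-aware variant would simply run this allocation for mission-critical lightpaths first.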

4.4 Energy-Efficient Strategy

Several components of a datacenter contribute to its total power consumption, such as servers, infrastructure, power delivery, and networks. Energy proportionality and energy-efficiency improvements for each part have been investigated [22]. From the network point of view, the energy consumption can be simply associated with the total number of active network devices and the duration for which they stay in the active mode. To reduce the energy consumption of a datacenter network, it is important to minimize the number of active network devices, provided that putting the idle ones to sleep does not affect the QoS of data communications in the DCN. Research on energy-efficient DCNs with an electronic backbone has been extensively reported and mainly falls into two design dimensions: (1) putting idle network devices to sleep without affecting the supported services, and (2) applying multi-rate link transmission (i.e., rate adaptation). However, studies on energy efficiency in all-optical DCNs, and specifically O-OFDM DCNs, are absent.

With the O-OFDM switching technology, one can dynamically change the number of subcarriers on each optical channel and/or adjust the modulation format of each subcarrier while still meeting the total link capacity requirement. This brings tremendous flexibility to network operation and considerable room for achieving better energy efficiency. For example, one can reduce the number of subcarriers and/or choose low-power modulation formats for the O-OFDM optical channels when the required bandwidth is low. In other words, energy can be saved by reducing the number of subcarriers used, by choosing a lower-level modulation format, or both. Thus, an effective energy-efficient scheme should consider the network status, traffic distribution, and service requirements, such that the network operator can take the best strategy for minimizing energy consumption without violating service requirements (e.g., delay).
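The subcarrier-count/modulation-format trade-off can be sketched as a small search for the lowest-power configuration whose capacity covers the demand; the per-format power figures and symbol rate below are placeholders for illustration, not measurements.

```python
BAUD_G = 10.0                                  # Gbaud per subcarrier (assumed)
POWER_W = {1: 1.0, 2: 1.4, 3: 2.0, 4: 2.9}    # watts/subcarrier by bits/symbol

def min_energy_config(demand_gbps: float, max_sc: int = 8):
    """Return (power, subcarriers, bits/symbol) of the cheapest channel
    configuration whose capacity n_sc * BAUD_G * bits meets the demand."""
    best = None
    for bits, p in POWER_W.items():
        for n_sc in range(1, max_sc + 1):
            if n_sc * BAUD_G * bits >= demand_gbps:
                cfg = (n_sc * p, n_sc, bits)
                best = cfg if best is None or cfg < best else best
                break                          # the fewest subcarriers suffice
    return best

# For a 40-Gbps demand, two QPSK-like subcarriers (2 bits/symbol) win here.
power, n_sc, bits = min_energy_config(40.0)
```

With real power models the same search would also weigh service constraints such as delay, as the text notes.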

 

5. Conclusions

To meet the exponentially increasing and versatile traffic requirements of future datacenter networks (DCNs), a huge transmission and switching capacity as well as flexible switching capability should be in place. This paper looked into the design and operation of DCNs that employ the O-OFDM technology in the core. The O-OFDM technology can not only significantly improve the utilization of fiber spectrum, but also bring extreme flexibility to bandwidth allocation. These unique features were exploited in constructing future-proof DCN interconnection architectures. Several key issues associated with the architecture, design, and operation of O-OFDM DCNs were discussed and exemplified, including the interconnection architecture, routing and resource assignment, network reliability, and energy efficiency.

References

  1. M. Al-Fares, A. Loukissas, and A. Vahdat, "A Scalable, Commodity Data Center Network Architecture," in Proc. of SIGCOMM'2008, Washington, USA.
  2. L. Peng, C. Youn, W. Tang, and C. Qiao, "A Novel Approach to Optical Switching for Intra-Datacenter Networking," Journal of Lightwave Technology, vol. 30, no. 2, 2012.
  3. "Worldwide Enterprise Communications and Datacenter Networks 2014 Top 10 Predictions," in Proc. of IDC' 2014.
  4. N. Farrington, G. Porter, S. Radhakrishnan, H. Bazzaz, V. Subramanya, Y. Fainman, G. Papen, and A. Vahdat, "Helios: A Hybrid Electrical/Optical Switch Architecture for Modular Data Centers," in Proc. of SIGCOMM'2010, New Delhi, India.
  5. G. Wang, D. Andersen, M. Kaminsky, K. Papagiannaki, T. Eugene Ng, M. Kozuch, and M. Ryan, "c-Through: Part-Time Optics in Data Centers," in Proc. of SIGCOMM'2010, New Delhi, India.
  6. X. Ye, Y. Yin, S. Yoo, P. Mejia, R. Proietti, and V. Akella, "DOS-A Scalable Optical Switch for Datacenters," in Proc. of ANCS'2010, La Jolla, CA, USA.
  7. K. Xi, Y. Kao, M. Yang, and H. Chao, "Petabit Optical Switch for Data Center Networks," Technical Report, http://eeweb.poly.edu/chao/publications/TechReports.html.
  8. C. Kachris and I. Tomkos, "Optical OFDM-based Data Center Networks," Journal of Networks, vol. 8, no. 7, July, 2013
  9. J. Gripp, J. Simsarian, D. LeGrange, P.Bernasconi, and T. Neilson, "Photonic Terabit Routers: The Iris Project," in Proc. of OFC/NFOEC'2010, San Diego, CA, USA.
  10. K. Kitayama, et al., "OPS/OCS Intra-Data Center Network with Intelligent Flow Control," in Proc. of PS'2014.
  11. W. Shieh and C. Athaudage, "Coherent Optical Orthogonal Frequency Division Multiplexing," IEE Electronics Letters, vol. 42, no. 10, 2006.
  12. G. Shen and M. Zukerman, "Spectrum-Efficient and Agile CO-OFDM Optical Transport Networks: Architecture, Design, and Operation," IEEE Communications Magazine, vol. 50, no. 5, 2012.
  13. M. Jinno, H. Takara, B. Kozicki, Y. Tsukishima, Y. Sone, and S. Matsuoka, "Spectrum-Efficient and Scalable Elastic Optical Path Network: Architecture, Benefits, and Enabling Technologies," IEEE Communications Magazine, vol. 47, no. 11, 2009.
  14. S. Talebi, F. Alam, I. Katib, M. Khamis, R. Salama, G. Rouskas, "Spectrum Management Techniques for Elastic Optical Networks: A Survey," Journal of Optical Switching and Networks, vol. 13, pp. 34-48, Feb. 2014. https://doi.org/10.1016/j.osn.2014.02.003
  15. G. Shen and Q. Yang, "From Coarse Grid to Mini-Grid to Gridless: How Much can Gridless Help Contentionless?" in Proc. of OFC/NFOEC'2011, Los Angeles, USA.
  16. G. Shen, Q. Yang, S. You, and W. Shao, "Maximizing Time-Dependent Spectrum Sharing between Neighboring Channels in CO-OFDM Optical Networks," in Proc. of ICTON'2011, Stockholm, Sweden.
  17. C. Politi, C. Matrakidis, and A. Stavdas, "Optical Wavelength and Waveband Converters," in Proc. of ICTON' 2006.
  18. L. Peng, C. Youn, and C. Qiao, "Theoretical Analyses of Lightpath Blocking Performance in CO-OFDM Optical Networks with/without Spectrum Conversion," IEEE Communications Letters, vol. 17, no.4, April 2013.
  19. L. Wittie, "Communication Structures for Large Networks of Microcomputers," IEEE Transactions on Computers, vol. C-30, no. 4, 1981.
  20. P. Ho, "State-of-the-Art Progress in Developing Survivable Routing Schemes in Mesh WDM Networks," IEEE Communications Surveys and Tutorials, vol. 6, no. 4, 2004.
  21. Y. Sone, A. Watanabe, W. Imajuku, Y. Tsukishima, B. Kozicki, H. Takara, and M. Jinno, "Bandwidth Squeezed Restoration in Spectrum-Sliced Elastic Optical Path Networks (SLICE)," Journal of Optical Communications and Networking, vol. 3, no. 3, 2011.
  22. F. T. Chong, A. Saleh, P. Ranganathan, H. Wassel, and M. Heck, "Improving Energy Efficiency in Data Centers Beyond Technology Scaling," IEEE Design and Test, vol. 31, no. 1, Feb. 2014.