If you’ve used your home BT broadband in the UK over the last two weeks, you’ve very probably been an unwitting pioneer of the next generation of Internet transport technology! Earlier this week, the London Internet Exchange (LINX) announced that they had successfully deployed their first 100Gbps production ports to members, and the first user is BT, taking the new capacity to significantly upgrade its intra-UK ISP connectivity ahead of the Olympic Games – which makes good sense.
The LINX – formed as a mutual co-operative in the mid-1990s by like-minded ISPs keen to avoid transatlantic latencies in inter-ISP transit – is a crucial piece of Internet infrastructure in the UK, and it is responsible for a significant amount of UK-to-UK traffic. Despite the enormous growth in international private-wire telecommunications, it also carries a great deal of international traffic, and the 195.66.224/5 IPv4 address prefix in use on the LINX LAN is widely recognised by seasoned ISP operators without the need for DNS lookups.
One of the linchpin technologies behind LINX’s pioneering 100Gbps investment is Juniper’s PTX platform, in what I understand may be its first production deployment. I am quite sure that while Juniper are no doubt pleased to have such a prestigious customer for the PTX’s maiden voyage, they are well aware that they are performing on a major stage and the stakes are high. It’s very hard to top the phenomenal success and customer reception of the MX-960, but they’ll be hoping to match it.
The PTX is a very interesting beast in Juniper’s service provider portfolio. I’ve followed its development and roadmapped features regularly over the last twelve months, and the thing to note is Juniper’s decision to keep the network interfaces on the PTX platform deliberately light on features and functionality, in order to keep complexity low, density high and thus price per port low.
10G and 100G port pricing in the service provider sector is desperately fierce. While consumers on broadband access technology expect Moore’s-Law-like increases in bandwidth, and content providers increasingly need to deliver bandwidth-hungry content such as HD video, the service providers tasked with carrying these 1s and 0s are faced with moving ever-growing amounts of data for the same revenue as before. Until now, 100G pricing has remained unappealing and, in some cases, downright unprofitable when compared with equivalent Nx10G pricing. Consequently, service providers have often chosen to run their backbones on link aggregation technology and endure the drawbacks and complexity it brings.
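To make that pricing pressure concrete, here is a toy break-even calculation in Python. The prices are entirely invented for illustration (not real vendor figures); the point is simply that as long as one 100G port costs more than ten 10G ports, Nx10G aggregation wins on raw port cost at every capacity:

```python
# Hypothetical list prices, purely illustrative – not real vendor pricing.
PRICE_10G = 10_000    # per 10G port
PRICE_100G = 130_000  # per 100G port

def cost_nx10g(capacity_gbps):
    """Port cost of an Nx10G aggregated link (ports at both ends)."""
    ports = -(-capacity_gbps // 10)  # ceiling division
    return 2 * ports * PRICE_10G

def cost_100g(capacity_gbps):
    """Port cost of the same capacity built from 100G interfaces."""
    ports = -(-capacity_gbps // 100)
    return 2 * ports * PRICE_100G

for cap in (40, 100, 160):
    print(cap, cost_nx10g(cap), cost_100g(cap))
```

At these made-up prices a 100G port costs 1.3x its ten-port 10G equivalent, so aggregation stays cheaper on paper – exactly the trade-off providers have been enduring.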
Juniper will be hoping to coax a change in thinking with aggressive pricing on their PTX 100G interfaces, and hopefully burst a dam of service provider sales. They are likely betting on service providers adopting the PTX in a two-tier network: MX or equivalent at the edge for feature-rich customer interfaces (MPLS VPN, QoS, VPLS, BGP), and PTX providing a much simpler label-switched core at higher aggregated bandwidths. While this makes sense topologically, introducing a new layer also requires a 10G/100G price-point “bump” before the efficiencies become real.
I confidently predict, however, that Juniper will have to revisit their decision to sacrifice features for density and cost on PTX network interfaces. They have absolutely hit the right spot on interface density: the PTX-5000 claims support for 384 10Gbps ports, or 32 100Gbps ports, a step-change over today’s MX. But in many cases service providers need these densities for today’s features, let alone tomorrow’s.
There is plenty of precedent in the short history of global networking to support my prediction:
- the ATM/Frame Relay fashions of the 1990s aimed to put ATM in the driving seat of WAN networks mixing premium data, Internet, voice, and other leased-line/constant-bit-rate services.
- Cisco’s GSR portfolio, initially touted as a core box, with the later establishment of its IP Service Edge card.
In addition, on today’s large networks, deploying 100G technology doesn’t completely eliminate link aggregation. Some networks routinely run sixteen or more 10G links in parallel – 160Gbps or more – so even in the new 100G world they will still need to aggregate at least two physical interfaces. And where there is link aggregation, there is a need for traffic load distribution, so a “slim core” based on PTX may need to look beyond simple label switching and into the actual payload if it is to be useful.
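The load-distribution problem above can be sketched in a few lines of Python. This is a toy model – MD5 stands in for a hardware hash, and the label and address values are invented: if the core hashes only on a single transport label, every flow between one pair of edge routers lands on the same aggregate member, whereas hashing on the inner IP/TCP 5-tuple spreads flows across members.

```python
import hashlib

def flow_hash(fields):
    """Map a tuple of header fields to an integer (MD5 stands in for
    whatever proprietary hash a real forwarding ASIC would use)."""
    key = "|".join(str(f) for f in fields).encode()
    return int(hashlib.md5(key).hexdigest(), 16)

def pick_member(fields, n_members):
    """Choose a LAG member link for a flow; per-flow hashing keeps
    packets of one flow in order on one link."""
    return flow_hash(fields) % n_members

N_MEMBERS = 2  # e.g. a 2 x 100G bundle

# Label-only hashing: all traffic between one pair of edge routers
# shares a single transport label (value invented for illustration),
# so every flow picks the same member link.
TRANSPORT_LABEL = 30042
label_members = {pick_member((TRANSPORT_LABEL,), N_MEMBERS) for _ in range(1000)}

# Payload-aware hashing: the inner IP/TCP 5-tuple differs per flow,
# so flows spread across both member links.
flows = [("10.0.%d.1" % (i % 256), "192.0.2.7", 6, 10000 + i, 80)
         for i in range(1000)]
payload_members = {pick_member(f, N_MEMBERS) for f in flows}

print(len(label_members), len(payload_members))  # members used: 1 vs 2
```

The design point is the one the paragraph makes: a core that sees only labels has almost no entropy to hash on, so a useful “slim core” must reach into the payload headers.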
So, in summary, service providers are going to be very tempted by the PTX, especially if they are looking to consolidate private-wire and packet services onto one network. And if Juniper have managed to overcome some of the software quality and stability issues that have dogged them recently, we can expect to see a lot more of the PTX. But service providers must be careful. Specifically, they must:
- fully understand the essential features required of the underlying transport network for the services they offer.
- be sure to factor in capabilities as well as speeds when considering the cost of network and comparing technologies.
I’ll continue to watch this exciting area of large-scale networking with interest.