For a long time, I’ve wanted to post an article about the options available for those seeking to establish a high-availability, geographically diverse web presence.
The flexibility with which we can turn up and down Internet-connected compute power using virtualisation is one thing, but what considerations must we take into account in order to achieve the resilience and high-availability expected in today’s web sites and services?
I got the kick in the backside I needed in December, when F5 Networks refreshed the pricing schedule for the virtual version of their LTM/GTM suite and specifically included a lab version.
With a hefty travel schedule ahead, I leapt at the opportunity of an upcoming Friday afternoon session (forget static project analysis: all the best IT projects start off as JFDI Friday-afternoon projects!) to trial the capabilities of this kit, with a view to drawing up recommendations for customers. I hastily put together the mandatory proof-of-concept design document.

Low-level design document
Unfortunately, while we can turn Internet-connected compute power up and down almost as easily as water, the spanner in the works comes in the form of software licensing. Although there appears to be at least one US-based reseller able to deliver licences for the F5 virtual images automatically online, getting the same in Europe proved problematic, and in my case the experience started to look more than a little reminiscent of ordering electronics in the 80s and sitting out the “up to 28 days for delivery” wait.
Delivery of software licences or activation keys is a real obstacle to smooth running for IaaS/cloud providers, because while cloud provisioning systems are well versed in setting up virtual machine parameters, attaching virtual disks and networks and such, the deployment of licence-enabling software is much less standardised.
My Friday afternoon was starting to look bleak. Without the F5 licences, then, I was forced to reconsider my options (or worse: get on with that dreadful management report I’d been avoiding!). How could I possibly implement a highly-available, geographically diverse web site without using some sort of intelligent load balancing device? Impossible, surely? Well, perhaps not.
To understand how, let’s revisit the major perceived problems that the load balancer solves and the methods it uses to solve them, and then ask whether that’s the only way, whether the problem really is a problem, or whether technology has moved on.
The load balancer typically tackles the challenge in two ways:
- Globally, mapping a DNS name to a group of location-specific IP addresses that host the service,
- Within a site, mapping the DNS-resolved IP address to a locally significant worker server on a flow-by-flow basis.
The first function, DNS, is generally useful to mitigate the limitations of a remote client that uses the traditional gethostbyname() call to map a name to an IP address. Such a client typically obtains only a single address for the server, and that information is cached for a certain period. Even if a server host publishes multiple addresses, the client is likely to stay stuck on a single address until its DNS cache expires. The load balancer addresses this by acting as the DNS server and conditionally publishing addresses with a deliberately low expiry time.
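To make the contrast concrete, here’s a quick Node.js sketch (the hostname is just a placeholder): the classic lookup path hands back a single address, while a direct query exposes every published A record together with the TTL that tells resolvers how long they may cache it.

```javascript
// Sketch only: compare the gethostbyname-style single answer with the full record set.
const dns = require('dns').promises;

async function compare(hostname) {
  const single = await dns.lookup(hostname);                // one address, like gethostbyname()
  const all = await dns.resolve4(hostname, { ttl: true });  // every A record, with its TTL
  console.log('lookup():  ', single.address);
  console.log('resolve4():', all);                          // e.g. [{ address, ttl }, ...]
}

compare('example.com').catch(console.error);
```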
The second function is useful for abstracting a group of working servers into a single point of contact and IP address, so that the traffic demands imposed can exceed the capacity of a single server alone. The load balancer sits in the traffic flow and routes requests to working servers based upon:
- Simple network- and transport-layer criteria, such as IP addresses and TCP ports, which delineate individual conversations
- More complex application-layer criteria, such as the contents of HTTP requests
Both functions are generally enriched with a health-checking mechanism that ensures the server being selected is operable and functional at the IP and application layer.
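To make both ideas concrete – flow-by-flow distribution plus a health check – below is a deliberately naive Node.js sketch of the local traffic manager role. The pool addresses, ports and polling interval are invented for illustration; it round-robins incoming HTTP requests across a pool and skips any member that fails a periodic check.

```javascript
// Naive local traffic manager sketch: round-robin with a trivial health check.
const http = require('http');

const pool = [
  { host: '10.0.0.11', port: 8080, healthy: true },   // hypothetical worker A
  { host: '10.0.0.12', port: 8080, healthy: true },   // hypothetical worker B
];
let next = 0;

// Health check: any HTTP answer counts as alive, any socket error as dead.
setInterval(() => {
  for (const member of pool) {
    http.get({ host: member.host, port: member.port, path: '/' }, (res) => {
      member.healthy = true;
      res.resume();                                    // drain the response
    }).on('error', () => { member.healthy = false; });
  }
}, 5000);

http.createServer((req, res) => {
  const candidates = pool.filter((m) => m.healthy);
  if (candidates.length === 0) { res.writeHead(503); return res.end(); }
  const target = candidates[next++ % candidates.length];
  const upstream = http.request(
    { host: target.host, port: target.port, path: req.url, method: req.method, headers: req.headers },
    (upstreamRes) => {
      res.writeHead(upstreamRes.statusCode, upstreamRes.headers);
      upstreamRes.pipe(res);
    }
  );
  upstream.on('error', () => {
    if (!res.headersSent) res.writeHead(502);
    res.end();
  });
  req.pipe(upstream);
}).listen(8080);
```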
The first function is effective, but it demands that everyone respect DNS record expiry timers, since the DNS server reserves the right to change the list of working servers as they are enabled or disabled. While this respect is largely commonplace on the Internet, it isn’t without cost: essentially every access ISP has to retain up-to-date location state for content servers on behalf of its customers. But it’s an accepted norm today.
The second function is transparent to the end user and the access ISP but, because it sits in the traffic path, it carries a cost that relates very closely to the aggregate capacity of all the worker servers, so it can quickly become significant. Additionally, interface ports on load balancer equipment do not come at the same price as switch or router ports.
In fact the load balancer is significantly more complex than either a router or a switch: it requires network capacity of the same order as routers and switches, but capability more in line with servers built around general-purpose CPUs, since it needs to disassemble application PDUs into meaningful requests and determine which can be treated as parallel flows. For example, if the general path of navigation through a site runs from a landing page to a session ID, authentication and association with a user account, it’s probably very important to sequence the raw transactions that form this exchange and pin them to a specific worker server, to minimise unnecessary inter-server communication. But different clients from different locations, with no need to share data – why should user A be able to see user B’s shopping cart? – can be partitioned off (to shard, in database-speak) and serviced by a different worker server in the pool.
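As a toy illustration of that pinning idea (the worker addresses below are hypothetical), a stable mapping can be as simple as hashing the session identifier so that a given session always lands on the same worker:

```javascript
// Sketch: pin a session to one worker by hashing its session ID.
const crypto = require('crypto');

const workers = ['10.0.0.11', '10.0.0.12', '10.0.0.13'];   // hypothetical pool

function workerFor(sessionId) {
  const digest = crypto.createHash('sha1').update(sessionId).digest();
  return workers[digest.readUInt32BE(0) % workers.length]; // same session, same worker
}

console.log(workerFor('abc123'));   // always returns the same pool member for 'abc123'
```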
How does all this affect my beermat-grade trial of highly-available, geographically-diverse web services, and the rather relaxed pace of swivel-chair automation happening in the F5 software licensing department, you might ask?
Well, what if we didn’t need the load balancer at all? What if the client could simply find a working server, connect to it, do business, and walk away? No burden on the global DNS system, no extra cost or complexity for the content provider.
Clearly, if one were writing a client/server network application today, one would modify the usual gethostbyname()/connect() pattern and instead adopt an algorithm along these lines: “Try the available servers until you find one that works, and then do your business with it. If you fail to do your business with it, move on to the next one.”
Provided the server list were suitably randomised and the list of clients large, we would manage to deal with all of the following problems:
- distributing load amongst capable servers,
- distinguishing a working server from a dead one,
- maintaining session affinity for the client/server relationship.
It turns out that we don’t need to wish for very long. Most modern operating systems have a successor to the much-loved traditional gethostbyname() resolver call, usually in the form of getaddrinfo(). The behaviour is similar, but instead of returning a single address associated with a name, it returns ALL of them, allowing an application to implement the pseudo-logic described above. Indeed, Apple’s Darwin reference manual even goes as far as citing an example. Furthermore, a whole host of modern applications, including most browsers, adopt this strategy when selecting an endpoint address (and transport – we need to think about IPv6 as well!) to which to connect.
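A rough Node.js equivalent of that logic might look like the sketch below; the hostname and port are placeholders and the error handling is deliberately minimal. It resolves every address for a name and then walks the list until a connection succeeds.

```javascript
// Sketch: resolve all addresses for a name, then try each until one answers.
const dns = require('dns').promises;
const net = require('net');

function tryConnect(address, port, family) {
  return new Promise((resolve, reject) => {
    const socket = net.connect({ host: address, port, family });
    socket.once('connect', () => resolve(socket));
    socket.once('error', reject);
  });
}

async function connectToAny(hostname, port) {
  // all: true returns every A and AAAA record, not just the first one
  const addresses = await dns.lookup(hostname, { all: true });
  for (const { address, family } of addresses) {
    try {
      return await tryConnect(address, port, family);  // first working server wins
    } catch (err) {
      // dead or unreachable server: move on to the next address
    }
  }
  throw new Error(`no reachable server for ${hostname}:${port}`);
}

connectToAny('example.com', 80).then((s) => s.end()).catch(console.error);
```

A browser’s address-selection logic does much the same thing internally, which is what makes the no-load-balancer approach workable for plain web traffic.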
With this glint of light at the end of the tunnel, it was looking like that beermat might make it home before midnight after all, and we could get something workable and repeatable. I hastily set to work, with pen at hand to scribe the endeavour for reference, critique, improvement and general comment.
Basic Server Infrastructure
To kick-start things I peeled off two virtual machine images of OpenBSD and used Interoute VDC’s Control Centre to boot them in London and Geneva respectively. OpenBSD is a fantastic choice if you have a good grasp of the software you’re going to need to run, because it is capable but minimal. I don’t want to ignore security aspects, but I certainly don’t want to waste time disabling software features that I don’t want to expose to the outside world either.
After locking the root account and configuring sudo to give a known user account the necessary administrative access, I can focus on the traffic the box needs to handle. I simply need vanilla HTTP, which OpenBSD’s built-in packet filter, pf, can easily be adjusted to allow as the only protocol inbound while remaining more permissive outbound. Additionally, pf allows us to bend incoming tcp/80 to a port number that doesn’t require super-user privileges. This is a nice alternative to the oft-seen start/bind/listen/drop-privileges pattern ordinarily used to sandbox applications within the UNIX permissions system.
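For the record, the redirect can be expressed in a couple of lines of pf.conf. The ruleset below is only a sketch of the idea – the choice of port 8080 and the use of the egress interface group are assumptions, not the exact rules used:

```
# Sketch only: default-deny inbound, permissive outbound, and bend
# incoming tcp/80 to an unprivileged port (8080 is assumed).
block in all
pass out all
pass in on egress inet proto tcp to port 80 rdr-to 127.0.0.1 port 8080
```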
Software
With a basic system established, we can focus on the main client/server application. We want something quick and basic to demonstrate the principle, but it has to be visible as well or it will be mind-numbingly dull. And because we’re trying to show high availability here, we need to make the server flakey, in a predictable kind of way, so that we can get to see some occasional failover events. Flakey – well I can do that with my eyes shut, but we’ll have to work on predictable!
Standing on the shoulders of giants, I settled on the current darlings of the real-time web world, Node.JS and socket.io, to produce a very simple web site that delivers an HTML page and some Javascript instructing browsers to connect to a real-time WebSocket and display some essential server performance data, such as memory and CPU usage.
WebSockets are a relatively new technology in the web arena but they’ve been through a punishing standards draft and re-draft exercise, so socket.io is quite invaluable for abstracting this pain away from us. The takeaway here is that WebSockets are not mandatory for the high-availability functionality to work. They simply help illustrate the mechanics within the browser demonstration.
To get the flakiness and to stimulate some fail-over events, we map a mouse-click event in the browser pane to a pseudo-malloc() on the server, relayed via the WebSocket. The responding performance graphs show instantly how much “memory” the server is consuming and when it will crash; when it does, the user can also witness the real-time comms switch over to the alternate site. The same logic applies to a brute-force, user-initiated Ctrl-R refresh.
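A hedged sketch of what that relay might look like on the server side follows; the event names, the one-megabyte chunk size and the port are assumptions for illustration, and it uses the current socket.io API rather than the 2014-era one.

```javascript
// Sketch: clicks from the browser "allocate" memory; stats stream back for the graphs.
const http = require('http');
const os = require('os');
const { Server } = require('socket.io');

const httpServer = http.createServer();
const io = new Server(httpServer);

const hoard = [];                                   // the pseudo-malloc'd "memory"

io.on('connection', (socket) => {
  // every browser click costs the server another chunk of memory
  socket.on('click', () => hoard.push(Buffer.alloc(1024 * 1024)));

  // push some basic performance data for the browser-side graphs
  const timer = setInterval(() => {
    socket.emit('stats', {
      freemem: os.freemem(),
      loadavg: os.loadavg()[0],
      hoarded: hoard.length,
    });
  }, 1000);

  socket.on('disconnect', () => clearInterval(timer));
});

httpServer.listen(8080);
```

When the hoard finally exhausts the virtual machine’s memory the process dies, the WebSocket drops, and the browser moves its real-time comms over to the alternate site – which is exactly the failover the demo sets out to show.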
A hundred lines of HTML/Javascript later and there’s something that works. Node.JS handles the HTTP, elegantly multiplexing incoming requests between the main app and the ancillary socket.io infrastructure that provides the browser’s real-time comms. Art is certainly not my forte, as we established with the beermat, so the help of RGraph is enlisted to save us from ASCII art wrapped in pre tags.

Early graph prototype, IETF-stylee
The finished result, available here, is a practical example of how a simple web application can be made highly available by serving it from geographically diverse nodes, with very little specialist or complex configuration required. There’s no load balancer in sight, in either the local traffic manager role or the DNS-cum-global traffic manager role – traffic direction is handled between the client and the server. We also manage to ward off the obligatory stretched LAN running Poor Man’s Routing Protocol between the two cities.

Look, Ma! No load balancer!
Conclusion
It is fair to say that traffic direction has had a difficult time in Internet history, and there’s good reason for the established base of specialist load balancing vendors today. But with modern advances in Internet software it is not unreasonable to expect client/server applications to handle the traffic direction and server location functions implicitly.
What is shown here is that, for browser-based web apps, there is a zero-admin “just works” compromise between the single-site availability model and the full-on global traffic management model. Provided one has no requirement to localise content to regions or to nominate primary and secondary servers, the out-of-the-box functionality generally does the right thing.
Where does this leave the specialist network equipment vendors? While the likes of F5 have built enviable reputations on the load balancing problem, their dedicated hardware combines both capacity and capability. That makes it a powerful edge function, with load balancing only one weapon in its arsenal. Perhaps of increasing relevance are capabilities such as:
- translation capabilities such as IPv6/IPv4 session brokering to allow content servers to enjoy the relative simplicity of being single-stacked
- next-generation web protocol security based upon policy written in terms of URLs and requests rather than TCP and UDP ports.
References
Source code for the test
Really Great Graphing tools for Javascript. Richard Heyes.
node.js: Server-side Javascript using Google’s V8 JS engine
socket.io: Cross-platform network I/O for the browser
Experiences on Round Robin DNS from the Bureau of Economic Research
Pete Tenereillo on Global Server Load Balancing
Update Jan 22 2014: Diligent and conscientious staff at F5 and ComputerLinks have managed to furnish me with the licences originally sought for the lab edition of the F5 BIG-IP Virtual Edition. Thanks all for the efforts; there’s much more potential for exploration. If you’re keen to get your hands on the F5 BIG-IP Virtual Edition, see the details here.