
The HiSSS of Infrastructure - Part 1

Over the course of my career, I've come to specialize in an area of Information Technology called infrastructure: the underlying support systems that allow all of the cool internet-based services we know and love to flourish and operate without a second thought. These support systems consist not only of physical hardware, such as servers, switches, routers, and storage arrays, but also of the support software that drives those physical systems. That often includes application servers, proxy servers, network device operating systems, and various shared applications such as e-mail, messaging, and workflow management. Although for most shared software a team outside of infrastructure manages the application from a user perspective, infrastructure often takes the lead in managing upgrades, software patches, and physical implementation design.

The methods used to manage these types of systems are varied, and depend greatly on the situation as well as on personal philosophy. As a liberal arts technologist, the 'philosophy' behind how you do something matters greatly to me, so I'm going to spend some blog posts outlining my philosophy of infrastructure management. In that same liberal arts vein, I've come up with an acronym for my philosophy, which I call the HiSSS of infrastructure.
  • Highly Available
  • Stable
  • Scalable
  • Secure
In this first installment, we're going to talk about High Availability. Simply put, in non-technical terms, a system is highly available if it is always available when it is expected to be. High availability doesn't just apply to large infrastructures, but to things in our everyday life. We expect our cars, alarm clocks, refrigerators, air conditioners, and so on to all be highly available. We want them to be running when we expect them to be running, without question. Just as when our air conditioner suddenly refuses to fire up at the start of summer, we get upset when our computer systems, such as Facebook, e-mail, or Google, suddenly disappear. We have a high expectation of when we want these systems available for our use, so lots of smart people spend a lot of time and money making sure that these infrastructures are highly available.

So how are systems made highly available? One of the most common methods in infrastructure management is redundancy. Very simply, you never have just one piece of hardware performing a single function. You always duplicate things, so that if one piece of hardware or software malfunctions, you can seamlessly switch over to another system. Unlike our houses, where we don't have multiple washing machines or multiple furnaces, most infrastructures are built on the basic premise that redundancy will be built into every single facet of the system. If at all possible, you never want a single point of failure. Redundancy is such a basic fact of infrastructure management that it gets applied down to the level of multiple power supplies, multiple network interfaces, and so on, inside a single piece of server hardware.
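The failover idea behind redundancy can be sketched in a few lines of code. This is a minimal illustration, not production code: the backend names are hypothetical, and a real system would use health checks and actual network calls rather than the simulated failure shown here.

```python
# Hypothetical pool of redundant backends; the names are illustrative only.
BACKENDS = ["app-server-1", "app-server-2", "app-server-3"]

def send_request(backend, payload):
    """Stand-in for a real network call. We simulate one dead backend
    so the failover path below actually gets exercised."""
    if backend == "app-server-2":
        raise ConnectionError(f"{backend} is down")
    return f"{backend} handled {payload!r}"

def request_with_failover(payload):
    """Try each redundant backend in turn; the caller only sees an
    error if *every* backend in the pool is unavailable."""
    errors = []
    for backend in BACKENDS:
        try:
            return send_request(backend, payload)
        except ConnectionError as exc:
            errors.append(str(exc))
    raise RuntimeError("all backends failed: " + "; ".join(errors))

print(request_with_failover("GET /index.html"))
```

The key property is that a single failure is absorbed silently: as long as any one member of the pool is healthy, the user never knows anything went wrong.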

Although having perfect redundancy is great, there are times when systems have to be brought down for various reasons. Hardware maintenance and software upgrades are examples of situations where a system might be removed from a highly available pool. Another aspect of infrastructure management and high availability goes beyond physical hardware to developing a set of policies and procedures that ensure that when a system is taken out of service, it isn't noticed. Being 'invisible' is another key factor in high availability. A primary motivator in any infrastructure management plan is to never be seen unless you have to be.

At one employer, we used a pool of multiple independent application servers to achieve invisibility. Since we had 3-4 machines serving the public at any one time, we could pull one out of service for a hardware or software upgrade, then rotate it back into service when the work was completed, continuing the process for all the systems in the pool. This allowed us to do even large software upgrades with almost no disruption to the end users. That meant better service to the customers, and happier management.
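That rotate-through-the-pool process is essentially what is now called a rolling upgrade. Here is a minimal sketch of the loop, under the assumption that draining a node and upgrading it are single function calls; the node names and the `upgrade_fn` hook are illustrative, not a description of any real tooling.

```python
def rolling_upgrade(pool, upgrade_fn):
    """Upgrade every node in the pool one at a time, so the
    remaining nodes keep serving traffic throughout."""
    in_service = set(pool)
    for node in pool:
        in_service.discard(node)   # drain: stop routing traffic to this node
        assert in_service, "never drain the last node in the pool"
        upgrade_fn(node)           # maintenance window for this one node
        in_service.add(node)       # rotate it back into service
    return sorted(in_service)

upgraded = []
result = rolling_upgrade(["web-1", "web-2", "web-3", "web-4"], upgraded.append)
print(upgraded)   # every node was upgraded, one at a time
print(result)     # the full pool is back in service at the end
```

With 3-4 machines in the pool, at least two or three are always serving the public while one is out for maintenance, which is what makes the upgrade invisible to end users.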

A sister concept to invisibility is the notion of segmentation. One of the reasons we were able to maintain such invisibility was that we could often pull out and replace just small portions of the systems at a time. Modularizing many of our systems allowed for upgrades that were often small and isolated to one single function of the system. This type of segmentation doesn't come cheap, and takes a very strong architectural design to implement, both from an infrastructure perspective and an application development one. However, with good segmentation, most of a system can survive upgrades and maintenance without even noticing things going on in other portions.
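One way to picture segmentation is as a mapping from system functions to independent pools, so that maintenance on one module never touches the others. The module and node names below are hypothetical, purely to illustrate the isolation property.

```python
# Hypothetical mapping of system functions to their own independent pools.
MODULES = {
    "auth":    ["auth-1", "auth-2"],
    "search":  ["search-1", "search-2"],
    "billing": ["billing-1", "billing-2"],
}

def drain_node(modules, module_name, node):
    """Take one node of one module out of service. Every other
    module's pool is returned untouched, so the rest of the
    system never notices the maintenance."""
    remaining = [n for n in modules[module_name] if n != node]
    assert remaining, "a module must keep at least one node in service"
    return {**modules, module_name: remaining}

after = drain_node(MODULES, "search", "search-1")
print(after["search"])   # only the search pool shrank
print(after["auth"])     # auth and billing are unaffected
```

The isolation is the whole point: an upgrade to the search module can proceed without the auth or billing pools ever changing.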

Being highly available, with its goals of redundancy, invisibility, and segmentation, means that concepts such as Continuous Deployment and other Agile development and business methodologies become much, much easier. Many shops talk about wanting to move in these new directions, but often you first need to establish a solid foundation before you can build the mansion. High availability is one pillar in that foundation.
