What PaaS Should Learn from the August 2003 Blackouts

We’re in an interesting transition within the software space. Specifically, we’re all in agreement that SaaS is changing the industry in a very positive way. More important, however, is the recent evolution of platform as a service (PaaS), which is where Apprenda lives. Recently, we’ve seen more and more noise around PaaS (both consumer and enterprise offerings), with players such as Google via AppEngine, Salesforce via its platform, and Amazon via EC2 introducing PaaS flavors at different levels of the PaaS taxonomy. As I’ve mentioned on other occasions, I enjoy looking at the histories of other industries for lessons. One analogous situation that is chock full of them is the business of electricity: its generation and its distribution.

Now that more companies are throwing their hats into the PaaS arena, I’m keeping an eye on what could become a disturbing scenario. Greg Matter from Sun Microsystems wrote a blog post about a year and a half ago in which he stated that “…there will be, more or less, five hyperscale, pan-global broadband computing services giants.” For the most part, this is an acceptable scenario at a specific layer - the delivery platform layer. Disturbingly, what we’re seeing is that some of the major players, like Google and Amazon, are approaching the market in a monolithic-stack fashion. That is, even though their PaaS offerings are in their infancy, they seem adamant about controlling all aspects of the PaaS stack. AppEngine supplies a set of libraries, which sits on a service delivery layer atop Google’s compute resources. Salesforce’s offering supplies a library and language layer, which sits atop a delivery platform, which sits atop compute resources. This is a disturbing trend that, as an industry, we should avoid. Let’s take a page from the history (and future) of the electric industry.

Transmission of electricity and power in the United States has faced two major outages: one in the 1970s in New York City and another in 2003 that affected a majority of the Northeast. In the U.S., electricity generation and distribution are separated from transmission, creating functional decentralization. However, generation and distribution themselves are highly centralized. The problem is that this creates large single points of failure with significant, non-localized downside in the event of malfunction, is conducive to congestion, and produces unmanageable complexity. Some of these problems predominantly affect efficiency (such as excess complexity), while others affect customers (single points of failure) in near-catastrophic ways. Both the blackout of the 1970s and that of 2003 were costly and devastating, bringing parts of the U.S. to a veritable halt.

In 2003, I was working in Manhattan during the blackout and ended up having to walk across the Brooklyn Bridge in what almost seemed to be a pre-apocalyptic scene from a movie. This was a direct result of a poorly designed system that outgrew itself and couldn’t scale well because of the centralized nature of the grid. As a result of these issues, many have suggested introducing decentralization into America’s power systems: individuals such as Al Gore have made it a key point to create a decentralized “electranet,” while action groups such as the Union of Concerned Scientists have pointed to decentralization as necessary and critical to ensure reliable power. We created a system that did not anticipate our massive energy needs and that unnecessarily coupled at strategic layers. Now we have to do significant work to correct those errors and contain a cost of failure that grows superlinearly with our growth in consumption. A majority of the proposed corrections involve breaking apart a monolithic electric grid stack into manageable layers. Notably, this partitioning may even create a scenario that is more free-trade friendly and more efficient: the notion that the monolithic approach somehow created a benefit through tight coupling and specialization is absurd.

PaaS and SaaS (which in the future will most likely all be hosted via one platform or another) create a producer -> distributor -> consumer scenario familiar to those in electric power distribution. From my perspective, the idea that PaaS providers want to establish monolithic stacks is akin to the centralization of power generation and distribution, with all the associated downside and risk. From the top down, most complete PaaS offerings will come with an API/service integration layer, a service delivery platform (SDP) layer, and a compute layer. But are these layers unnecessarily coupled?

Looked at in the strictest sense, the APIs and the underlying platform establish a well-known functional contract between the application and the SDP. If a software company evaluating a PaaS offering agrees to that functional contract and can extract all the functionality it needs from the PaaS, it commits to the PaaS at that level. Once built, applications are deployed, managed, and operated through the PaaS. Service contracts exist between the SDP layer and the compute resources the SDP lives on. This is an ongoing relationship that carries a level of risk not measurable by commitment to the PaaS at the functional contract level. Once up and running, the opaque nature of the compute resources from the application’s perspective creates a scenario where the application and its creator have little control and little recourse.

A PaaS provider may start off giving you exceptional service, but over the course of a few years, performance may degrade drastically. This degradation tends to occur at the compute resource layer or at the linkage between the SDP and the compute resources. You’ve committed to the API and SDP, making it difficult to decouple. Worse yet, on a monolithic PaaS stack you can’t alleviate the degradation in the service contract, since the provider is one and the same in an exclusive relationship: by virtue of agreeing to the functional contract, you’ve agreed to the service contract, both current and future. This is the same ridiculous architecture we’ve inherited in power generation and distribution. If Greg Matter is correct and things shake out to five (more or less) PaaS providers, I sure hope it’s not five monolithic stacks. I’d hate to see the results of the first blackout. My goal is to push toward a much more resilient mechanism for PaaS, one that avoids the pitfalls of the gross centralization present in purely monolithic stacks. Yes, some might say, “But company X would NEVER fail us” - I’m sure people also said, “Our electric grid will never go down,” and, “Enron is too big to go out of business.” History is the world’s teacher, and if we’re wise, we’ll be attentive students.

Do you feel it will be acceptable to function atop a few monolithic stacks, or should the PaaS focus be on decentralized mechanisms? What are your predictions for the future of SaaS and PaaS?
