- Cloud Computing Defined
- NIST Definition of Cloud Computing
- Characteristics of Clouds
- Cloud Service Models
- Cloud Deployment Models
- Conclusion
Characteristics of Clouds
The NIST definition also highlights five essential characteristics of cloud computing:
- Broad network access
- On-demand self-service
- Resource pooling
- Measured service
- Rapid elasticity4
Let’s step through these concepts individually.
First, we cover broad network access. Access to resources in the cloud is available over multiple device types. This not only includes the most common devices (laptops, workstations, and so on) but also mobile phones, thin clients, and the like. Contrast broad network access with access to compute and network resources during the mainframe era. Compute resources 40 years ago were scarce and costly. To conserve those resources, usage was limited based on priority and criticality of workloads. Similarly, network resources were also scarce. IP-based networks were not in prevalent usage four decades ago; consequently, access to ubiquitous high-bandwidth, low-latency networks did not exist. Over time, costs associated with the network (like costs associated with computing and storage) have decreased because of manufacturing scalability, commoditization of associated technologies, and competition in the marketplace. As network bandwidth has increased, network access and scalability have also increased accordingly. Broad network access can and should be seen both as a trait of cloud computing and as an enabler.
On-demand self-service is a key—some say the primary—characteristic of the cloud. Think of IT as a complex supply chain with the application and the end user at the tail end of the chain. In noncloud environments, the ability to self-provision resources fundamentally disrupts most (if not all) of the legacy processes of corporate IT. This includes workflow related to procurement and provisioning of storage, servers, network nodes, software licenses, and so on.
Historically, capacity planning has been performed in “silos” or in isolated organizational structures with little or no communication between decision makers and stakeholders. In noncloud or legacy environments, when the end user can self-provision without interacting with the provider, the downstream result is usually extreme inefficiency and waste.
Self-provisioning in noncloud environments causes legacy processes and functions—such as capacity planning, network management (providing quality of service [QoS]), and security (management of firewalls and access control lists [ACL])—to grind to a halt or even break down completely. The well-documented “bullwhip effect” in supply chain management—when incomplete or inaccurate information results in high variability in production costs—applies not only to manufacturing environments but also to the provisioning of IT resources in noncloud environments.7
Cloud-based architectures, however, are designed and built with self-provisioning in mind. This premise implies the use of fairly sophisticated software frameworks and portals to manage provisioning and back-office functions. Historically, the lack of commercial off-the-shelf (COTS) software purpose-built for cloud automation led many companies to build their own frameworks to support these processes. While many companies do still use homegrown portals, adoption of COTS software packages designed to manage and automate enterprise workloads has increased as major ISVs and startups alike find ways to differentiate their solutions.
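To make the idea concrete, the following is a minimal sketch, in Python, of the kind of request-and-fulfill logic a self-service portal automates. The catalog, offering names, and data structures here are hypothetical, invented only for illustration; a production portal would call real orchestration and back-office APIs at the step marked in the comments.

```python
# Minimal sketch of a self-service provisioning request handler.
# ProvisionRequest and ServiceCatalog are hypothetical names used only
# to illustrate the workflow a cloud portal automates.

from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class ProvisionRequest:
    requester: str
    offering: str              # e.g., "small-linux-vm" (illustrative)
    quantity: int = 1
    submitted: datetime = field(default_factory=datetime.utcnow)


class ServiceCatalog:
    """In-memory stand-in for the portal's catalog and fulfillment logic."""

    OFFERINGS = {"small-linux-vm": {"vcpu": 2, "ram_gb": 4, "disk_gb": 50}}

    def provision(self, request: ProvisionRequest) -> dict:
        spec = self.OFFERINGS[request.offering]
        # A real portal would call orchestration APIs at this point;
        # here we simply return a record of what would be built.
        return {
            "owner": request.requester,
            "resources": [spec] * request.quantity,
            "status": "provisioned",
        }


if __name__ == "__main__":
    catalog = ServiceCatalog()
    print(catalog.provision(ProvisionRequest("jdoe", "small-linux-vm", 2)))
```

The design point is that the request, the approval logic, and the fulfillment all live in software, so no ticket queue or functional hand-off sits between the consumer and the resource.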
Resource pooling is a fundamental premise of scalability in the cloud. Without pooled computing, networks, and storage, a service provider must provision across multiple silos (discrete, independent resources with few or no interconnections). Multitenant environments, where multiple customers share adjacent resources in the cloud, are the basis of public cloud infrastructures. With multitenancy, there is an inherent increase in operational expenditures, which can be mitigated by certain hardware configurations and software solutions, such as application and server profiles.
Imagine a telephone network that is not multitenant. This is extremely difficult to do: It would imply dedicated circuits from end to end, all the way from the provider to each and every consumer. Now imagine the expense: not only the exorbitant capital costs of the dedicated hardware but also the operating expenses associated with maintenance. Simple troubleshooting processes would require an operator to authenticate into multiple thousands of systems just to verify access. If a broader system issue affected more than one network, the mean time to recovery (MTTR) would be significant. Without resource pooling and multitenancy, the economics of cloud computing do not make financial sense.
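A back-of-the-envelope calculation helps show why. Every figure below is an illustrative assumption (tenant counts, unit costs, and the fraction of tenants that peak at the same time), not data from this chapter, but the arithmetic captures why pooled capacity costs so much less than dedicated, siloed capacity.

```python
# Back-of-the-envelope comparison of dedicated versus pooled capacity.
# All figures are illustrative assumptions.

tenants = 1000
peak_units_per_tenant = 10     # capacity each tenant needs at its own peak
coincidence_factor = 0.25      # fraction of tenants peaking simultaneously
cost_per_unit = 500            # assumed annual cost of one capacity unit

# Dedicated: every tenant gets capacity sized for its own peak.
dedicated = tenants * peak_units_per_tenant * cost_per_unit

# Pooled: the provider sizes for the combined, non-coincident peak.
pooled = int(tenants * peak_units_per_tenant * coincidence_factor) * cost_per_unit

print(f"Dedicated (siloed) capacity cost:   ${dedicated:,}")
print(f"Pooled (multitenant) capacity cost: ${pooled:,}")
print(f"Savings from pooling:               ${dedicated - pooled:,}")
```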
Measured service implies that usage of these pooled resources is monitored and reported to the consumer, providing visibility into rates of consumption and associated costs. Accurate measurement of resource consumption, for the purposes of chargeback (or merely for cross-departmental reporting and planning), has long been a wish-list item for IT stakeholders. Building and supporting a system capable of such granular reporting, however, has always been a tall order.
As computing resources moved from the command-and-control world of the mainframe (where measurement and reporting software was built into the system) to the controlled chaos of open systems and client-server platforms (where measurement and reporting were bolted on as an afterthought, if at all), visibility into costs and consumption became increasingly limited. Frequently enough, IT teams have built systems to monitor the usage of one element (the CPU, for example) while using COTS software for another element (perhaps storage).
Tying the two systems together across a large enterprise, however, often becomes a full-time effort. If chargeback is actually implemented, it becomes imperative to drop everything else when the COTS vendor releases a patch or an upgrade; otherwise, access to reporting data is lost. Assuming that usage accounting and reporting are handled adequately, billing then becomes yet another internal IT function requiring management and full-time equivalent (FTE) resources. Measured service, in terms of the cloud, takes the majority of this effort out of the equation, thereby dramatically reducing the associated operational expense.
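As a simple illustration of what measured service automates, the sketch below rolls hypothetical metered usage records up into a per-department chargeback total. The rates, metric names, and records are invented for the example; a provider's billing pipeline performs the same aggregation continuously and at far greater scale.

```python
# Minimal sketch of metered chargeback: aggregate per-department usage
# records into a billing total. Rates and records are hypothetical.

from collections import defaultdict

RATES = {"vcpu_hours": 0.04, "gb_storage_days": 0.002, "gb_egress": 0.09}

usage_records = [
    {"dept": "marketing", "metric": "vcpu_hours", "amount": 1200},
    {"dept": "marketing", "metric": "gb_egress", "amount": 300},
    {"dept": "finance", "metric": "gb_storage_days", "amount": 90000},
]

bills = defaultdict(float)
for record in usage_records:
    bills[record["dept"]] += record["amount"] * RATES[record["metric"]]

for dept, total in sorted(bills.items()):
    print(f"{dept}: ${total:,.2f}")
```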
The final trait highlighted in the NIST definition of cloud computing is rapid elasticity. Elastic resources are critical to reducing costs and decreasing time to market (TTM). Indeed, the notion of elastic computing in the IT supply chain is so desirable that Amazon even named its cloud platform Elastic Compute Cloud (EC2). As I demonstrate in later chapters, the majority of the costs associated with deploying applications stems from provisioning (moves, adds, and changes, or MAC) in the IT supply chain. Therefore, simplifying the provisioning process can generate significant cost reductions and enable faster revenue generation.
Think of the workflow and business processes related to the provisioning of a simple application. Whether the application is for external customers or for internal employees, the provisioning processes are often similar (if not identical). The costs associated with a delayed customer release, however, can be significantly higher than those of a delayed internal rollout. The opportunity costs of a delayed customer-facing application in a highly competitive market can be exorbitant, particularly in terms of customer acquisition and retention. In short, the stakes are much higher with respect to bringing revenue-generating applications to market. We look at different methods of measuring the impact of time to market in Chapter 2, “Metrics That Matter—What You Need to Know.”
For a simple application (either internal or external), the typical workflow will look something like the following. Disk storage requirements are gathered, prompting the storage workflow—logical unit number (LUN) provisioning and masking, file system creation, and so on. A database is created and disks are allocated. Users are created on the server and the associated database, and privileges are assigned based on roles and responsibilities. Server and application access is granted on the network based on ACLs and IP address assignments.
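Expressed as automation, that workflow might look something like the sketch below. The function names are placeholders for the real storage, database, identity, and network tooling an enterprise would call; the point is simply that the steps form an ordered pipeline rather than a chain of disconnected hand-offs.

```python
# Sketch of the provisioning workflow above as an ordered pipeline.
# Each function is a placeholder for real storage, database, identity,
# and network tooling.

def provision_storage(app):
    return f"LUNs provisioned and masked for {app}"

def create_database(app):
    return f"database created and disks allocated for {app}"

def create_users(app):
    return f"server/database users and role-based privileges set for {app}"

def open_network_access(app):
    return f"ACLs and IP assignments applied for {app}"

WORKFLOW = [provision_storage, create_database, create_users, open_network_access]

def provision_application(app_name):
    # In a cloud, these steps run as one orchestrated pipeline rather than
    # as tickets handed between functional silos.
    return [step(app_name) for step in WORKFLOW]

if __name__ == "__main__":
    for line in provision_application("order-portal"):
        print(line)
```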
At each step of this process, functional owners (network, storage, and server administrators) have the opportunity to preprovision resources in advance of upcoming requests. Unfortunately, there is also the opportunity for functional owners to overprovision in order to limit the frequency of requests and to mitigate delays in the supply chain.
Overprovisioning in any one function, however, can also lead to deprivation and delays in the next function, thereby igniting the aforementioned bullwhip effect.8 The costs associated with the bullwhip effect in a typical IT supply chain can be significant. Waste associated with poor resource utilization can easily cost multiple millions of dollars a year in a medium to large enterprise. Delays in deprovisioning unused or unneeded resources add to this waste factor, further depressing utilization rates. Imagine the expense of a hotel with no capability to book rooms: the rooms still cost money to maintain, yet they generate no revenue. That unlikely scenario occurs frequently in IT when projects are cancelled or discontinued. Legacy funding models assume allocated capital expenditures (CAPEX) are constantly in use, always generating a return. The reality is otherwise: Outside the cloud, the capability to quickly decommission and reassign hardware does not exist, so costly resources can remain idle for much of their useful lives.
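A rough, entirely hypothetical calculation shows how quickly idle capacity adds up; every figure below is an assumption chosen only to demonstrate the arithmetic, not data from this chapter.

```python
# Illustrative estimate of the cost of idle capacity. All inputs are
# assumptions for demonstration purposes only.

servers_purchased = 400
annualized_cost_per_server = 6000   # CAPEX amortization plus power, space, support
average_utilization = 0.20          # fraction of capacity doing useful work

annual_spend = servers_purchased * annualized_cost_per_server
wasted_spend = annual_spend * (1 - average_utilization)

print(f"Annual spend on this fleet:            ${annual_spend:,}")
print(f"Spend attributable to idle capacity:   ${wasted_spend:,.0f}")
```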
In a cloud-based architecture, resources can be provisioned so quickly as to appear unlimited to the consumer. If there is one single hallmark trait of the cloud, it is likely this one: the ability to flatten the IT supply chain to provision applications in a matter of minutes instead of days or weeks.
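The toy loop below illustrates the elasticity half of that equation: capacity that follows demand up and down rather than being provisioned once and left idle. The demand curve, thresholds, and per-instance capacity are invented for the example and stand in for whatever autoscaling policy a real platform would apply.

```python
# Toy autoscaling loop illustrating rapid elasticity: capacity tracks
# demand in both directions. All numbers are invented for illustration.

demand = [2, 4, 9, 14, 20, 13, 6, 3]   # concurrent load over successive intervals
capacity = 2                            # instances currently running
PER_INSTANCE = 5                        # load one instance can absorb

for load in demand:
    while capacity * PER_INSTANCE < load:                           # scale out
        capacity += 1
    while capacity > 1 and (capacity - 1) * PER_INSTANCE >= load:   # scale in
        capacity -= 1
    print(f"load={load:>2}  instances={capacity}")
```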
Of these essential characteristics, the fifth—rapid elasticity, or the ability to quickly provision and deprovision—is perhaps the most critical in terms of cost savings relative to legacy architectures.
The NIST definition also includes the notion of service and deployment models. For a more complete picture of what is meant by the term cloud computing, it is necessary to spend a few minutes with these concepts.