The Problem with Today’s Systems
As operational requirements grow without a corresponding increase in IT staffing, organizations must continually find ways to streamline administration through tools and automation. Today’s IT systems are prone to a number of problems from a service management perspective, including the following:
- System unavailability
- Configuration “shift and drift”
- System isolation
- Lack of historical information
- Lack of expertise
- Missing incidents and information
- Lack of process consistency
- Not meeting service level expectations
This list should not be surprising, as these problems manifest themselves in all IT shops with varying degrees of severity. In fact, Forrester Research estimates that 82% of larger shops are pursuing service management, and 67% plan to increase Windows management. The following sections look at these issues in turn.
Why Do Systems Go Down?
Let’s start by examining why systems go down. Figure 1.1 illustrates reasons for system outages, based on the authors’ personal experiences and observations; the following list describes some of these reasons.
FIGURE 1.1 Causes of system outages.
- Software errors: Software is responsible for somewhat less than half the errors. These include software coding errors, software integration errors, data corruption, and so on.
- User errors: End users and operators cause a little less than half the errors. These include incorrectly configuring systems, failing to catch warning messages that turn into errors, accidents such as unplugging the power cord, and so on.
- Miscellaneous errors: This last category is fairly small. Causes of problems include disk crashes, power outages, viruses, natural disasters, and so on.
As Figure 1.1 demonstrates, the vast majority of failures are due to software errors and user errors. It is perhaps surprising that hardware failures account for only a small percentage of problems, which is a tribute to modern technologies such as Redundant Array of Independent Disks (RAID), clustering, and other mechanisms deployed to provide server and application redundancy.
The numbers show that to reduce system downtime, which affects user satisfaction and productivity, you need to attack the software and user error components of the equation. That is where you get the most “bang for the buck.”
Configuration “Shift and Drift”
Even IT organizations with well-defined and documented change management policies can have procedures that fall short of perfection. Unplanned and unwanted changes frequently find their way into production, sometimes as an unintended side effect of an approved, scheduled change.
You may be familiar with an old philosophical saying: If a tree falls in a forest and no one is around to hear it, does it make a sound?
Here’s the change management equivalent: If a change is made on a system and no one is around to notice it, does it make a difference?
The answer is absolutely “yes”; every change to a system can potentially affect its functionality, its security, or its adherence to corporate or regulatory compliance.
As an example, adding a feature to a web application component may overwrite application binaries or settings that were updated by a critical security patch. Or perhaps the engineer implementing the change notices a setting he thinks is misconfigured and decides to just “fix” it while already working on the system. In an e-commerce scenario involving sensitive customer data, this could have devastating consequences; moreover, troubleshooting a change you don’t know was made is like looking for the proverbial needle in a haystack.
At the end of the day, your management platform must incorporate a strong element of baseline configuration monitoring and enforcement to ensure configuration standards are implemented and maintained with the required consistency.
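To make the idea concrete, here is a minimal sketch of the kind of check such a platform automates: hashing monitored files and comparing them against a recorded baseline. The baseline.json file and its contents are hypothetical, and a real product would also cover registry settings, permissions, and remediation, not just files.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical baseline file: {"C:/app/web.config": "<sha256>", ...}
BASELINE = Path("baseline.json")

def sha256(path: Path) -> str:
    """Hash a file's contents so any change, however small, is detectable."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def detect_drift() -> list[str]:
    """Compare monitored files against the recorded baseline."""
    baseline = json.loads(BASELINE.read_text())
    findings = []
    for name, expected in baseline.items():
        path = Path(name)
        if not path.exists():
            findings.append(f"MISSING: {name}")
        elif sha256(path) != expected:
            findings.append(f"CHANGED: {name}")
    return findings

if __name__ == "__main__":
    for finding in detect_drift():
        print(finding)  # in practice, raise an alert or auto-remediate
```

Run on a schedule, even a check this simple answers the question above: the change is identified, and it does make a difference.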
System Isolation
Microsoft Windows Server and the applications running on it expose a wealth of information through event logs, performance counters, and application-specific logs. However, this data is isolated and typically server-centric, making it difficult to determine what and where a problem really is. To get a handle on your systems, you need to take action to prevent the situation shown in Figure 1.2, where you have multiple islands of information.
FIGURE 1.2 Multiple islands of information.
Places where you find isolated information include data and statistics stored in various databases, event logs, and performance counters. In addition, consultants, engineers, and subject matter experts have information locked up in their heads or written down on whiteboards and paper napkins. Other areas include undocumented changes, undocumented service requests, incidents that are similar but not shown as related to each other to help determine the root cause of problems, and decentralized asset information.
Each of these is as much an island of information as the statistics and data stored on any computer.
Although system information is captured in various ways, it is typically lost over time, and the information is not centralized or reviewed regularly. Most application information is also server-centric, stored on and specific to the server where the application resides. There is no built-in, system-wide, cross-system view of critical information.
Incidents, problems, service requests, and change requests are recurring events throughout IT. Some organizations keep this data in separate systems, without a single point of visibility. When data is stored on separate systems, it is isolated, and comprehensive reporting becomes difficult. The same is true for servers, IIS websites, SQL instances, and other objects; data about these systems is not typically stored in a central location that would enable good asset inventory management. As cloud scenarios continue to be adopted, tracking these assets becomes complex if they are not managed centrally. Islands of information, where data is stranded on any given island, make it difficult to get to needed information in a timely or effective manner. Not having that information can make managing user satisfaction a difficult endeavor.
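As a simple illustration of the alternative, consolidation rather than islands, the sketch below merges per-server records into one chronologically ordered, cross-system view. The EventRecord shape, the fetch_server_events collector, and the server names are hypothetical stand-ins, not any System Center API.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EventRecord:
    server: str        # which "island" the record came from
    timestamp: datetime
    source: str        # event log, performance counter, application log, ...
    message: str

def fetch_server_events(server: str) -> list[EventRecord]:
    """Hypothetical per-server collector; in practice an agent or an API/log
    query. Returns canned data here so the sketch runs end to end."""
    return [EventRecord(server, datetime.now(), "eventlog", "disk queue length high")]

def consolidate(servers: list[str]) -> list[EventRecord]:
    """Merge per-server records into one chronologically ordered view, so
    symptoms on one system can be correlated with causes on another."""
    merged: list[EventRecord] = []
    for server in servers:
        merged.extend(fetch_server_events(server))
    return sorted(merged, key=lambda r: r.timestamp)

if __name__ == "__main__":
    for rec in consolidate(["web01", "sql01", "app01"]):  # hypothetical servers
        print(rec.timestamp, rec.server, rec.source, rec.message)
```

The value is in the merge and sort: once records share one timeline, cross-system patterns that were invisible per-island become obvious.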
System Center can help alleviate these islands of information. Operations Manager tracks monitoring information in a single place. You can bring Operations Manager alert data into Service Manager to populate incidents and problems, as well as to serve as a starting point for change requests. Monitored objects from Operations Manager and information from Configuration Manager can also be brought into Service Manager to populate the configuration management database (CMDB). Service Manager’s CMDB and data warehouse can consolidate these islands of information into a single point for reporting.
Lack of Historical Information
Sometimes you may capture information about problems but be unable to look back in time to see whether an issue is isolated or part of a recurring pattern. An incident can be a one-time blip or can indicate an underlying issue; without historical context, it is difficult to understand the significance of any particular incident. Consider the following example:
A company retains a consultant to determine why a database application has performance problems. To prove there is an issue, the in-house IT staff points out that users are complaining about performance even though memory and CPU on the database server are only 50% utilized. By itself, this does not say what the problem might be. It could be that memory and CPU are normally 65% utilized and the real problem is network utilization, which in turn is reducing the load on the other resources. Or the application could simply be poorly written. Historical context could provide useful information.
As an expert, the consultant would develop a hypothesis and test it, which takes time and costs money. Rather than trying to solve a problem, many IT shops just throw more hardware at it, only to find that this does not necessarily improve performance. Historical records could show that system utilization actually dropped at the same time users started complaining, indicating the problem is actually elsewhere.
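Here is a minimal sketch of keeping that historical context, assuming the third-party psutil package and a local CSV file as the history store; a real deployment would centralize this in a monitoring database rather than a per-server file.

```python
import csv
import statistics
import time
from pathlib import Path

import psutil  # third-party package: pip install psutil

HISTORY = Path("utilization_history.csv")  # hypothetical history store

def record_sample() -> None:
    """Append a timestamped CPU/memory reading so a history accumulates."""
    cpu = psutil.cpu_percent(interval=1)
    mem = psutil.virtual_memory().percent
    new_file = not HISTORY.exists()
    with HISTORY.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(["epoch", "cpu_pct", "mem_pct"])
        writer.writerow([int(time.time()), cpu, mem])

def compare_to_history() -> None:
    """Is the latest reading unusually low or high versus the recorded past?"""
    with HISTORY.open() as f:
        rows = list(csv.DictReader(f))
    if len(rows) < 2:
        return  # not enough history for context yet
    baseline = statistics.mean(float(r["cpu_pct"]) for r in rows[:-1])
    current = float(rows[-1]["cpu_pct"])
    print(f"CPU now {current:.0f}%, historical mean {baseline:.0f}%")
    if current < 0.8 * baseline:
        print("Utilization dropped; the bottleneck may be elsewhere (e.g., network).")

if __name__ == "__main__":
    record_sample()
    compare_to_history()
```

In the consulting example above, this is exactly the comparison that reveals 50% utilization as a drop from a 65% norm rather than evidence of a healthy server.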
Lack of Expertise
Do you lack the in-house expertise needed to support users calling the service desk? Is your documentation inadequate and you don’t have the knowledge to keep it current? Do you pay an arm and a leg to have contractors manage user support and expectations?
If the expertise you need is not available for those areas needing attention, you can incur additional costs and even potential downtime. This can translate to loss of user productivity, system outages, and ultimately higher operational costs if emergency measures are required to resolve problems.
Missing Incidents and Information
Sometimes problems are detected only through information about what occurred elsewhere. Information reported to operations and change management systems can bear directly on system availability and user satisfaction; if that information is not available to the service desk, it might as well be an isolated island of information.
One of the primary tasks of the service desk team is incident detection and recording. A complete service management solution must be able to capture information from throughout the data center, generate trouble tickets as appropriate, manage user expectations as necessary, and provide efficient and responsive support for end users. The CMDB must provide the information required for analysts to resolve issues quickly. Without the capability to incorporate information from throughout the IT organization, the service desk is severely handicapped in the quality of support it can provide to its customers.
Reported incidents can also disappear from sight by not being assigned to an owner. A service management solution must be able to track information from the time it enters the system until the problem is resolved and the issue closed.
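The following toy sketch captures that discipline: an incident always carries a status, it cannot be resolved without an owner, and a simple report surfaces anything unowned. The field names and states are illustrative, not any product’s schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum

class Status(Enum):
    NEW = "new"
    ASSIGNED = "assigned"
    RESOLVED = "resolved"
    CLOSED = "closed"

@dataclass
class Incident:
    summary: str
    opened: datetime = field(default_factory=datetime.now)
    owner: str | None = None
    status: Status = Status.NEW

    def assign(self, owner: str) -> None:
        self.owner = owner
        self.status = Status.ASSIGNED

    def resolve(self) -> None:
        if self.owner is None:
            raise ValueError("cannot resolve an unassigned incident")
        self.status = Status.RESOLVED

    def close(self) -> None:
        if self.status is not Status.RESOLVED:
            raise ValueError("incident must be resolved before it is closed")
        self.status = Status.CLOSED

def unowned(incidents: list[Incident]) -> list[Incident]:
    """The report that keeps incidents from disappearing from sight."""
    return [i for i in incidents if i.owner is None]
```

The enforced lifecycle is the point: an incident can only move from entry to closure through an owner, so nothing is resolved or closed anonymously.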
Lack of Process Consistency
Many IT organizations take a disorganized approach to identifying and resolving problems. Using standard procedures and a methodology helps minimize risk and solve issues faster.
A methodology is a framework of processes and procedures used by those who work in a discipline. You can look at a methodology as a structured process that defines the who, what, where, when, and why of your operations, and the procedures to use when defining problems, solutions, and courses of action.
When employing a standard set of processes, it is important to ensure that the adopted framework adheres to accepted industry standards or best practices. A standard set of processes should also take business requirements into account, to ensure continuity between expectations and the services the IT organization delivers. Consistent use of a repeatable and measurable set of practices lets organizations quantify their progress more accurately and adjust processes as necessary to improve future results. The most effective IT organizations build an element of self-examination into their service management strategy to ensure processes can be incrementally improved or modified to meet the changing needs of the business.
With IT’s continually increasing role in running successful business operations, having a structured, standard way to define IT operations aligned to the needs of the business is critical to meeting the expectations of business stakeholders. This alignment results in improved business relationships, where business units engage IT as a partner in developing and delivering innovations that drive business results.
Not Meeting Service Level Expectations
Customer satisfaction is all about perception. Customer satisfaction is not necessarily about the objective quality of service; it is how your customer (the end user and the business) perceives that quality. There will be times when your users see the service as much better than it is, and times when the service is perceived as much worse than it is in reality, usually due to poor communication or to isolated cases with high visibility.
Keeping your end users satisfied is about providing excellent services, but it is also about managing their expectations about what excellent services actually are.
End User Satisfaction = Perception - Expectation
The expectation part of this equation is managed by your service level agreements and how well you meet them. The goal of service level management is ensuring that the agreed level of IT services is provided, and that any future services will be delivered as agreed upon. A service level agreement (SLA) is just a document; service level management, the process that creates that document, helps IT and the business you are supporting to understand each other.
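Measuring the “how well you meet them” half can be as simple as computing SLA attainment over a period. The sketch below assumes hypothetical resolution times in hours against an 8-hour target.

```python
def sla_attainment(resolution_hours: list[float], target_hours: float) -> float:
    """Percentage of incidents resolved within the agreed SLA target."""
    if not resolution_hours:
        return 100.0
    met = sum(1 for h in resolution_hours if h <= target_hours)
    return 100.0 * met / len(resolution_hours)

# Hypothetical month of resolution times against an 8-hour target:
print(f"{sla_attainment([2.5, 7.9, 12.0, 4.0, 9.5], target_hours=8.0):.0f}% within SLA")
```

A number like this, reported regularly against the agreed target, is what turns the SLA from a document into a managed expectation.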
If you have not established expectations, you will not be able to satisfy your end users as to the quality of the service IT is providing, and you will not be perceived as a valuable part of the business.
What It’s All About
It can be intimidating to consider that the problems described to this point can happen even in an ostensibly “managed” environment. However, these examples illustrate that the very processes used for service management must themselves be reviewed periodically and updated to accommodate changes in the tools and technologies employed from the desktop to the data center. By failing to correlate data across systems, stay aware of potential issues, maintain a history of past performance and problems, and so on, IT shops open themselves up to putting out fires and fighting time bombs that could be prevented by a more systematic approach to service management, which is described in the next section.