- 5.1 Framing the Problem
- 5.2 Activity-oriented Teams
- 5.3 Shared Services
- 5.4 Cross-functional Teams
- 5.5 Cross-functionality in Other Domains
- 5.6 Migrating to Cross-functional Teams
- 5.7 Communities of Practice
- 5.8 Maintenance Teams
- 5.9 Outsourcing
- 5.10 The Matrix: Solve It or Dissolve It
- 5.11 Summary of Insights
- 5.12 Summary of Actions
5.3 Shared Services
Shared services are similar to activity-oriented teams except that they are usually shared across unrelated business outcomes. All shared services are activity-oriented teams, but not all activity-oriented teams are shared services. For example, if a product development team is split into a team of developers and a team of testers, each with its own manager, they are activity-oriented teams but not shared services. Typical examples of shared services include the IT helpdesk, software product level-2 support (a single team serving multiple products), internal private cloud teams, and call centers. Although they play a crucial supporting role in realizing business outcomes, they are often treated and managed purely as cost centers. Shared services cannot be avoided entirely, but they shouldn’t be encouraged as a way to do more with less. It is usually counterproductive to run enterprise architecture, UX, software testing, IT operations (e.g., for a SaaS product), or even product marketing and sales as shared services. Ethar Alali has written a great two-part article explaining the drawbacks of shared services and activity-oriented teams with a non-IT example.7
5.3.1 Shared Services Lose Purpose
When several teams of developers share a common team of testers, what is the purpose with which the testers identify? The developer teams each have a product to develop; their purpose is a successful product or at least a successful release. The shared testing team’s purpose often degenerates to that of being an efficient provider of testing services with allegiance to no particular product.
It is important to recognize this aspect of shared services. By definition, shared services are used by teams responsible for different business outcomes. The shared team itself isn’t responsible for any of those outcomes. It is no surprise, then, that we sometimes get the feeling of dealing with mercenaries when interacting with a shared service team. They don’t seem to have skin in the game.
Shared services struggle to find purpose. An organization design that aims for conditions of autonomy, mastery, and purpose should strive to minimize shared services and eliminate them from mission-critical value streams.
5.3.2 Reducing Friction in Shared Service Interfaces
Interteam collaboration typically requires following a communication protocol enforced by a work-tracking tool or a single point of contact. It means meetings between team representatives, with documented minutes. Feedback loops lengthen, reducing our ability to fail fast (Section 2.4.1). Team managers try to showcase their team’s performance with team-level metrics. Incoming work gets queued and prioritized based on centrally conceived criteria. Dependent teams get frustrated with turnaround times and attempt priority escalations.
Here is an example of how a communication protocol designed for cost-efficiency ends up affecting responsiveness. It is typical for IT support to use ticketing systems. A ticketing system helps the IT support manager track the workload. Some employees who are friends with the engineers in IT support tend to request help directly via chat. This is understandably discouraged because it flies under the manager’s radar and does not leave an audit trail. Once a ticket is assigned to an engineer, she is expected to carry out the task and put the ticket in the state of “completed subject to customer confirmation.” Sometimes the ticket lacks information, or the task needs troubleshooting on the requestor’s computer. Depending on the nature of the ticket, she may choose to:
- Reply to the ticket asking for more information and put the ticket in a state of “awaiting customer response” or
- Get in touch with the requestor via phone/chat, obtain the needed information, carry out the task, and close the ticket.
The first option is probably more efficient from an IT support perspective. She doesn’t have to look up the requestor’s phone number, and there is no wasted communication effort if the requestor is not available at that moment. Besides, everything is written down and recorded. The second option is more responsive from the requestor’s perspective. It feels less like dealing with a bureaucracy.
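To make the trade-off concrete, here is a minimal sketch of the two handling paths as ticket state transitions. It is illustrative only: the `Ticket` fields and function names are hypothetical, with the state names taken from the workflow described above.

```python
from dataclasses import dataclass
from enum import Enum


class State(Enum):
    ASSIGNED = "assigned"
    AWAITING_CUSTOMER_RESPONSE = "awaiting customer response"
    COMPLETED_PENDING_CONFIRMATION = "completed subject to customer confirmation"


@dataclass
class Ticket:
    requestor: str
    description: str
    state: State = State.ASSIGNED
    round_trips: int = 0  # each request for information adds a feedback loop


def ask_for_more_information(ticket: Ticket) -> None:
    """Option 1: efficient for IT support, but resolution now waits on an
    asynchronous reply from the requestor."""
    ticket.state = State.AWAITING_CUSTOMER_RESPONSE
    ticket.round_trips += 1


def resolve_interactively(ticket: Ticket) -> None:
    """Option 2: reach the requestor via phone/chat, carry out the task,
    and close the ticket in one sitting. More responsive, harder to audit."""
    ticket.state = State.COMPLETED_PENDING_CONFIRMATION
```

Measured by engineer utilization, option 1 looks better; measured by turnaround time as the requestor experiences it, option 2 usually wins.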
The first option can get worse for the requestor if the ticket is reassigned to a different engineer every time the requestor responds. We experience this first-hand when trying to sort out a nontrivial problem with our bank or telecom provider’s call center. We are expected to explain the whole problem all over again to a new agent. Being able to switch agents freely on a ticket helps maximize agent utilization. Unfortunately, it also maximizes customer frustration.
Designers of the system may argue that the history of the ticket is recorded, so the customer should not have to repeat it. However, the recorded history is seldom self-explanatory. Besides, an agent new to a ticket would much rather hear the problem again first-hand than read through and assimilate the record.
What if the situation is level-3 commercial product support for external customers? Getting in touch with the requestor might be unrealistic, but we could at least have the same person responding until the ticket is resolved. What if, in order to provide 24x7 support, level-3 staff are located in different time zones? Now we can’t help agent switching, can we? Well, at least we can avoid agent switching within a time zone on a given ticket.
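One way to picture that last rule is as a routing function that prefers the agent who already owns the ticket in the current time zone and hands over only at a time zone boundary. Here is a minimal sketch, with hypothetical names, assuming each follow-up arrives during some zone’s shift:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Agent:
    name: str
    timezone: str  # e.g., "IST", "CET", "PST"


def route_followup(owner_by_zone: dict[str, Agent],
                   on_shift: list[Agent], zone: str) -> Agent:
    """Keep the same agent on a ticket within a time zone; switch agents
    only when the follow-up falls in a different zone's shift."""
    previous = owner_by_zone.get(zone)
    if previous is not None and previous in on_shift:
        return previous  # sticky assignment: no switching within the zone
    # Assumes at least one agent is on shift in the requested zone.
    agent = next(a for a in on_shift if a.timezone == zone)
    owner_by_zone[zone] = agent  # remember this zone's owner for the ticket
    return agent
```

The design choice is the same one discussed above: free reassignment maximizes agent utilization, while sticky assignment trades a little utilization for a far less frustrating customer experience.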