- Determining Where to Start
- Defining Services and Interfaces
- Service to Service Communication
- Monolith to Microservices
- Flak.io e-Commerce Sample
- Summary
Defining Services and Interfaces
Before we can get to work building our services, we need to define what those services are and what their interfaces look like. When we begin decomposing the application, one of the first questions that comes to mind is: how big should our services be? Deciding on the number and size of services can be a frustrating task, and there are no hard rules for the optimal size. A combination of signals can help us decide whether a service should be decomposed further. The number of files or lines of code can be one indication that a service should be broken up. Another is that a service has taken on too many responsibilities, making it hard to reason about, test, and maintain. Team organization can be a factor, which could require changes to the service partitioning and the team structure alike. Mixed service types, such as a single service both handling authentication and serving web pages, can be another good reason to decompose further.
An understanding of the business domain will be important here. The more we know about the capabilities and features of the application, the better we can define the services and their interfaces used to compose that application.
In the end, we want a number of small, loosely coupled services, with closely related behavior kept together in the same service. Loose coupling enables us to change one service without needing to change another, and it is very important that we are able to update our services independently. We need to take care when planning integrations between services so that we do not inadvertently introduce coupling. Accidental coupling can be caused by shared code, rigid protocols, shared models, a shared database, the lack of a versioning strategy, or exposing internal implementation details.
Decomposing the Application
When approaching an existing or new application, we will need to determine where the seams are and how to decompose the application into smaller services. This is among the more challenging and important things to get right in a microservices architecture. We want to minimize coupling while keeping closely related things together. As with good component design, we strive for high cohesion and low coupling in our service design.
Coupling is the degree of interdependence between services; low coupling means a service has very few dependencies on other services. This is important for maintaining an independent lifecycle for our services. To keep coupling low, we need to carefully consider how we partition the application, as well as how we integrate the services.
Cohesion is the degree to which things are related; when cohesion is high, they are very closely related and will generally need to change together. An important point with regard to a microservices architecture is that when we discuss high cohesion, we generally mean functional cohesion. If we are making a change to a service, we want to be able to make that change in that one service alone and release it; we should not have to coordinate changes and releases across multiple services. Keeping closely related things together in a service makes this much easier to accomplish, and it also tends to reduce chattiness across the services. A key difference between a microservices architecture and traditional SOA is the importance placed on functional over logical cohesion.
We need to identify the boundaries in the application that we will use to define the individual services and their interfaces. As we mentioned previously, those boundaries should ensure closely related things are grouped together and unrelated things are kept apart. Depending on the size and complexity of the application, this can be a matter of identifying the nouns and verbs used in the application and grouping them.
We can use Domain-Driven Design (DDD) concepts to help us define the boundaries within our application that we will use to break it down into individual services. A useful concept in DDD is the bounded context. A bounded context represents a specific responsibility of the application and can be used to decompose and organize the problem space. It has a well-defined boundary, which becomes our service interface, or API.
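To make the bounded context idea a little more concrete, here is a minimal sketch, in TypeScript, of a hypothetical product catalog context exposed through a small interface. The names (CatalogService, CatalogItemSummary) are purely illustrative and not part of the Flak.io sample; the point is that the interface is the boundary, and everything behind it stays private to the service.

```typescript
// Hypothetical "Catalog" bounded context. The interface below is the only
// surface other services see; internal entities and storage stay hidden.
interface CatalogItemSummary {
  id: string;
  name: string;
  price: number; // display price only; internal cost data is not exposed
}

interface CatalogService {
  // Operations expressed in the language of the catalog context.
  listItems(category: string): Promise<CatalogItemSummary[]>;
  getItem(id: string): Promise<CatalogItemSummary | undefined>;
}
```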
When identifying bounded contexts in our domain, think about the business capabilities and terminology; both will be used to identify and validate the bounded contexts in the domain. A deep dive into Domain-Driven Design is out of the scope of this book, but there are a number of fantastic books on the market that cover it in great depth. I recommend “Domain-Driven Design” by Eric Evans and “Patterns, Principles, and Practices of Domain-Driven Design” by Scott Millett and Nick Tune. Defining these boundaries in a large, complex domain can be challenging, especially if the domain is not well understood.
We can further partition components within a bounded context into their own services and still share a database. Partitioning like this can be done to meet nonfunctional requirements, such as scalability, or because a particular feature needs a different technology. For example, we might have decided our product catalog will be a single service, and then realize that the search functionality has very different resource and scale requirements than the rest of the application. We can decide to partition that feature into its own service for reasons of security, availability, management, or deployment.
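Continuing the hypothetical catalog sketch from earlier, one way to picture this split is to give search its own interface and deployment while it initially reads the same catalog data. The names here are again illustrative, not taken from the sample application.

```typescript
// Search carved out of the catalog context into its own service so it can be
// scaled, secured, and deployed independently. Initially it may read the same
// catalog data store, or it may be backed by a dedicated search index later.
interface CatalogSearchService {
  search(query: string, page: number): Promise<{ ids: string[]; total: number }>;
}
```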
Service Design
When building a service, we want to ensure our service does not become coupled to another team’s service in a way that requires a coordinated release; we want to maintain our independence. We also want to ensure we do not break our consumers when deploying updates, even updates that include breaking changes. To achieve this, we need to carefully design our interfaces, tests, and versioning strategies, and document our services as we go.
When defining the interfaces, we need to ensure we are not exposing unnecessary information in the model or the internals of the service. We cannot make any assumptions about how the returned data is used, and removing a property, or changing the name of an internal property that was inadvertently exposed in the API, can break a consumer. We need to be careful not to expose more than what is needed: it’s easier to add to the model that’s returned than it is to remove or change what is already returned.
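One common way to avoid leaking internals is to map internal entities to a deliberately narrow response model at the service boundary. The sketch below assumes a hypothetical product entity and response type; the names are illustrative.

```typescript
// Internal entity used by the service's data layer (illustrative).
interface ProductEntity {
  id: string;
  name: string;
  price: number;
  costPrice: number;         // internal only: supplier cost never leaves the service
  warehouseLocation: string; // internal detail consumers must not depend on
}

// Deliberately narrow response model: start with the minimum consumers need.
// Adding a field later is easy; removing or renaming one is a breaking change.
interface ProductResponse {
  id: string;
  name: string;
  price: number;
}

function toProductResponse(entity: ProductEntity): ProductResponse {
  // Explicit mapping keeps internal properties from leaking into the API.
  return { id: entity.id, name: entity.name, price: entity.price };
}
```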
Integration tests can be used when releasing an update to identify any potential breaking changes. One of the challenges with testing our services is that we might not be able to test with the actual versions that will be used in production. Consumers of our service are constantly evolving their services, and we can have dependencies on services that have dependencies on others. We can use consumer-driven contracts, mocks, and stubs for testing consumer services and service dependencies. This topic is covered in Chapter 6, “DevOps and Continuous Delivery.”
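As a rough illustration of the consumer-driven style, a consumer can capture its expectations of a provider as a small executable check and run that check against a stub of the dependency. The sketch below uses Node’s built-in assert module; the inventory service and its getStockLevel operation are hypothetical, and in practice a dedicated contract-testing tool such as Pact is often used to share these contracts between teams.

```typescript
import assert from "node:assert";

// A stub standing in for the real inventory service dependency; it returns
// canned data shaped the way the consumer expects the provider to respond.
const inventoryStub = {
  async getStockLevel(productId: string): Promise<{ productId: string; inStock: number }> {
    return { productId, inStock: 5 };
  },
};

// The consumer's expectations, expressed as a small executable contract check.
// The same check can be shared with the provider's team and run in their build.
async function verifyInventoryContract(client: typeof inventoryStub): Promise<void> {
  const stock = await client.getStockLevel("sku-123");
  assert.strictEqual(typeof stock.productId, "string");
  assert.strictEqual(typeof stock.inStock, "number");
}

verifyInventoryContract(inventoryStub).then(() => console.log("contract holds"));
```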
There will come a time when we need to introduce breaking changes for consumers, and when we do, having a versioning strategy in place across our services will be important.
There are a number of different approaches to versioning services. We can put a version in a header or in the query string, or simply run multiple versions of our service in parallel. If we deploy multiple versions in parallel, be aware that this means maintaining multiple branches; a high-priority security update might need to be applied to every supported version of a service. The Microsoft Patterns & Practices team has provided some fantastic guidance and best practices for general API design and versioning here: https://azure.microsoft.com/en-us/documentation/articles/best-practices-api-design/.
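As a minimal illustration of header and query-string versioning, the Express sketch below routes a single resource to one of two handlers based on an api-version header, falling back to a version query parameter. The route, header name, and payloads are assumptions for illustration, not a prescribed scheme.

```typescript
import express, { Request, Response } from "express";

const app = express();

// Two versions of the same resource, kept side by side (illustrative payloads).
const getProductV1 = (_req: Request, res: Response) =>
  res.json({ id: "sku-123", name: "Widget" });
const getProductV2 = (_req: Request, res: Response) =>
  res.json({ id: "sku-123", name: "Widget", price: 9.99 });

app.get("/products/:id", (req, res) => {
  // Prefer an explicit header; fall back to a query-string version; default to v1.
  const version = req.header("api-version") ?? String(req.query.version ?? "1");
  return version === "2" ? getProductV2(req, res) : getProductV1(req, res);
});

app.listen(3000);
```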
It’s also important that the services are documented. Documentation not only helps consumers get started with the API quickly, it can also convey best practices for consuming and working with the API. For example, the API can include batch features that are useful for reducing chattiness across services, but if consumers are not aware of them, those batch features do not help. Swagger (http://swagger.io) is a tool we can use for interactive documentation of our API, as well as client SDK generation and discoverability.
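As a small example of what such documentation might look like, here is a minimal Swagger 2.0 document expressed as a TypeScript object, describing a single, hypothetical catalog endpoint. Serving a document like this at a well-known path lets the Swagger tooling render interactive docs and generate client SDKs; the paths and models shown are illustrative only.

```typescript
// Minimal Swagger 2.0 description of one hypothetical catalog endpoint.
// Served at a well-known path (for example /swagger.json), it can drive
// interactive documentation and client SDK generation.
const swaggerDoc = {
  swagger: "2.0",
  info: { title: "Catalog Service", version: "1.0.0" },
  basePath: "/api",
  paths: {
    "/products/{id}": {
      get: {
        summary: "Get a single catalog item",
        parameters: [{ name: "id", in: "path", required: true, type: "string" }],
        responses: { "200": { description: "The catalog item" } },
      },
    },
  },
};

export default swaggerDoc;
```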