3.2 Communication Middleware
A communication middleware framework provides an environment that enables two applications to set up a conversation and exchange data. Typically, this exchange of data will involve the triggering of one or more transactions along the way. Figure 3-3 shows how this middleware framework acts as an intermediary between the application and the network protocol.
In the very early days of distributed computing, the communication between two distributed programs was implemented directly on top of the raw physical network protocol. Programmers had to deal with the intricate details of the physical network: they had to create network packets, send and receive them, acknowledge transmissions, and handle errors. A lot of effort was therefore spent on these technical issues, and applications were dependent on a specific type of network. Higher-level protocols such as SNA, TCP/IP, and IPX provided APIs that reduced the implementation effort and the technology dependencies, offering abstraction and a more convenient approach to application development. These protocols enabled programmers to think less in terms of frames at OSI layer 2 or packets at layer 3 and more in terms of communication sessions or data streams. Although this significantly simplified the development of distributed applications, it remained a cumbersome and error-prone process: programming at the protocol layer was still too low-level.
Figure 3-3 A communication middleware framework isolates the application developers from the details of the network protocol.
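To illustrate how much remains in the application programmer's hands even at the stream level, here is a minimal Java sketch that talks to a hypothetical line-based echo service (host and port are assumptions): connection handling, message framing, and error handling are all still manual.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

// Stream-level programming: no packets or frames, but the developer still
// handles connections, framing (here: one line per message), and errors.
public class EchoClient {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("localhost", 7);   // hypothetical echo service
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println("hello");          // send one request "frame"
            String reply = in.readLine();  // block until the peer answers
            System.out.println("reply: " + reply);
        }
    }
}
```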
As the next evolutionary step, communication infrastructures encapsulated the technical complexity of such low-level communication mechanisms by insulating the application developer from the details of the technical base of the communication. A communication middleware framework enables you to access a remote application without knowledge of technical details such as the operating system, lower-level information of the network protocol, and the physical network address. A good middleware framework increases the flexibility, interoperability, portability, and maintainability of distributed applications. However, the experience of the past two decades has shown that the developer's awareness of the distribution remains crucial for the efficient implementation of a distributed software architecture. In the remainder of this chapter, we will briefly examine the most important communication middleware frameworks.
3.2.1 RPC
Remote Procedure Calls (RPCs) apply the concept of the local procedure call to distributed applications. A local function or procedure encapsulates a more or less complex piece of code and makes it reusable by enabling application developers to call it from other places in the code. Similarly, as shown in Figure 3-4, a remote procedure can be called like a normal procedure, with the exception that the call is routed through the network to another application, where it is executed, and the result is then returned to the caller. The syntax and semantics of a remote call remain the same whether or not the client and server are located on the same system. Most RPC implementations are based on a synchronous, request-reply protocol, which involves blocking the client until the server replies to a request.
The development of the RPC concept was driven by Sun Microsystems in the mid-1980s and is specified in RFCs 1050, 1057, and 1831. A communication infrastructure with these characteristics is called RPC-style, even if its implementation is not based on these RFCs.
Figure 3-4 RPC stubs and libraries enable location transparency, encapsulate the functional code for the RPC communication infrastructure, and provide a procedure call interface.
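The stub mechanism of Figure 3-4 can be illustrated with a deliberately simplified, hypothetical sketch in Java: the client calls add() as if it were a local procedure, while the stub marshals the arguments, routes the call through the network, and blocks until the reply arrives.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// A deliberately simplified, hypothetical RPC round trip: the client-side
// "stub" makes a remote add(a, b) look like a local call by marshalling the
// arguments, sending them over the network, and blocking until the result
// arrives (the synchronous request-reply pattern described above).
public class ToyRpc {

    // Client-side stub: presents a plain procedure call interface.
    static int add(String host, int port, int a, int b) throws IOException {
        try (Socket s = new Socket(host, port);
             DataOutputStream out = new DataOutputStream(s.getOutputStream());
             DataInputStream in = new DataInputStream(s.getInputStream())) {
            out.writeInt(a);          // marshal the arguments
            out.writeInt(b);
            out.flush();
            return in.readInt();      // block until the server replies
        }
    }

    // Server-side skeleton: unmarshals the request, executes the procedure,
    // and returns the result.
    static void serveOnce(ServerSocket server) throws IOException {
        try (Socket s = server.accept();
             DataInputStream in = new DataInputStream(s.getInputStream());
             DataOutputStream out = new DataOutputStream(s.getOutputStream())) {
            int result = in.readInt() + in.readInt();   // the actual procedure
            out.writeInt(result);
            out.flush();
        }
    }

    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            int port = server.getLocalPort();
            Thread t = new Thread(() -> {
                try { serveOnce(server); } catch (IOException ignored) { }
            });
            t.start();
            System.out.println("3 + 4 = " + add("localhost", port, 3, 4));
            t.join();
        }
    }
}
```

Real RPC frameworks generate such stubs from an interface definition rather than requiring them to be written by hand.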
It is interesting to note that the need to provide platform-independent services was one of the main drivers in the development of RPC-style protocols. In particular, the widely used Sun RPC protocol (RFC 1057) and its language bindings were developed to enable transparent access to remote file systems. NFS (RFC 1094) was implemented on top of Sun RPC and is one of the most popular ways to enable networked file system access in Unix-like environments.
At the end of the 1980s, DCE (Distributed Computing Environment) emerged as an initiative to standardize the various competing remote procedure call technologies. DCE also added some higher-level services, such as security and naming services. However, for reasons that were mainly political, DCE failed to win widespread industry support.
3.2.2 Distributed Objects
In the early 1990s, object-oriented programming emerged as a replacement for the traditional modular programming styles based on procedure or function calls. Consequently, the concept of Distributed Objects was invented to make this new programming paradigm available to developers of distributed applications.
Typically, Distributed Objects are supported by an Object Request Broker (ORB), which manages the communication and data exchange with (potentially) remote objects. ORBs are based on the concept of Interoperable Object References, which facilitate the remote creation, location, invocation, and deletion of objects (see Figure 3-5), often involving object factories and other helper objects. In doing so, ORB technology provides an object-oriented distribution platform that promotes object communication across machine, software, and vendor boundaries. ORBs provide location transparency and enable objects to hide their implementation details from clients.
The most common ORB implementations are CORBA, COM/DCOM, and RMI. While RMI is limited to Java and COM/DCOM is restricted to Microsoft platforms, CORBA spans multiple platforms and programming languages.
Figure 3-5 ORBs enable client applications to remotely create, locate, and delete server objects (e.g., through factory objects) and communicate with them through remote method invocations.
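As a concrete example of the Distributed Objects model, the following minimal Java RMI sketch combines server and client in one program: the server exports an object and registers it with a naming service, and the client locates it by name and invokes a method on it as if it were local (the service name and port are illustrative).

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// The remote interface is all the client ever sees; the RMI runtime supplies
// the stub that routes each method call to the server object.
interface Greeter extends Remote {
    String greet(String name) throws RemoteException;
}

public class RmiExample implements Greeter {
    public String greet(String name) { return "Hello, " + name; }

    public static void main(String[] args) throws Exception {
        // Server side: export the object and register it under a name.
        RmiExample impl = new RmiExample();
        Greeter stub = (Greeter) UnicastRemoteObject.exportObject(impl, 0);
        Registry registry = LocateRegistry.createRegistry(1099);
        registry.rebind("greeter", stub);

        // Client side: locate the object by name and invoke it remotely.
        Greeter remote = (Greeter) LocateRegistry.getRegistry("localhost", 1099)
                                                 .lookup("greeter");
        System.out.println(remote.greet("world"));

        // Unexport the objects so that the JVM can terminate.
        UnicastRemoteObject.unexportObject(impl, true);
        UnicastRemoteObject.unexportObject(registry, true);
    }
}
```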
Today, most enterprise architectures embody object-oriented components (such as for the implementation of graphical user interfaces), but there is rarely an enterprise architecture that is built purely on object technology. In most cases, legacy applications based on programming languages such as COBOL or C are critical parts of an enterprise application landscape (some people prefer the term software assets over legacy software). It is therefore vital that a component intended for reuse at the enterprise level provides an interface that is suitable for both object-oriented and traditional clients. This is particularly true for the services of an SOA.
In this context, it is important to understand that object-oriented applications typically come with a fine-grained interaction pattern. Consequently, applying the object-oriented approach to building distributed systems results in many remote calls with little payload and often very complex interaction patterns. As we will see later, service-oriented systems are more data-centric: they produce fewer remote calls with a heavier payload and simpler interaction patterns.
Nevertheless, it is entirely possible to use ORB technology to implement a data-oriented, coarse-grained SOA interface. This leads to a very restricted application of the ORB technology and typically to an RPC-style usage of the ORB (see Figure 3-6).
Figure 3-6 An ORB can be used as a communication infrastructure for the implementation of an SOA. In this case, the advanced capabilities of the ORB to cope with multiple instances of remote objects are not used.
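The difference between the two interaction styles can be sketched with a pair of hypothetical Java interfaces. The fine-grained variant costs one remote round trip per attribute; the coarse-grained, document-oriented variant transfers the complete order in a single call.

```java
import java.math.BigDecimal;
import java.util.List;

public class InteractionStyles {

    // Fine-grained, object-oriented style: assembling a single order takes
    // several remote calls, each carrying a tiny payload.
    interface RemoteOrder {
        void setCustomerId(String customerId);
        void addItem(String productId, int quantity);
        void setShippingAddress(String street, String city);
        BigDecimal getTotal();
    }

    // Coarse-grained, service-oriented style: one remote call carries the
    // complete order document and returns a complete confirmation.
    static class OrderItem {
        String productId;
        int quantity;
    }

    static class OrderDocument {
        String customerId;
        String street;
        String city;
        List<OrderItem> items;
    }

    static class OrderConfirmation {
        String orderId;
        BigDecimal total;
    }

    interface OrderService {
        OrderConfirmation placeOrder(OrderDocument order);
    }
}
```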
3.2.3 MOM
With the advent of IBM's MQSeries (now IBM WebSphere MQ) and Tibco Software's Rendezvous in the middle of the 1990s, Message-Oriented Middleware (MOM) technology became popular, and it has since become an integral part of the communication infrastructure landscape of large enterprises.
Although there are alternative implementation approaches (e.g., UDP multicast-based systems), the most common MOM implementations are based on the concept of message queuing. The two key components of a message queuing system are the message and the queue.
Typically, a message consists of a header and a payload. The structure of the header field is usually predefined by the system and contains network routing information. The payload is application-specific and contains business data, possibly in XML format. Messages typically relate to a specific transaction that should be executed upon receiving the message. Depending on the queuing system, the name of the transaction is either part of the header or the application payload.
The queue is a container that can hold and distribute messages. Messages are kept by the queue until one or more recipients have collected them. The queue acts as a physical intermediary, which effectively decouples the message senders and receivers. Message queues help to ensure that messages are not lost, even if the receivers are momentarily unavailable (e.g., due to a network disconnection). Email is a good example of the application of messaging concepts. The email server decouples sender and receiver, creating durable storage of email messages until the receiver is able to collect them. Email messages contain a header with information that enables the email to be routed from the sender's email server to the receiver's email server. In addition, email messages can be sent from a single sender to a single receiver (or to multiple recipients, through mailing lists for example), and one can receive email from multiple senders.
Message queuing systems provide similar concepts of connecting senders and receivers in different ways (one-to-one, one-to-many, many-to-many, etc.) (see Figure 3-7). The underlying technical concepts are typically referred to as point-to-point and publish-subscribe models. Point-to-point represents the most basic messaging model: One sender is connected to one receiver through a single queue. The publish-subscribe model offers more complex interactions, such as one-to-many or many-to-many. Publish-subscribe introduces the concept of topics as an abstraction, to enable these different types of interactions. Similar to point-to-point, a sender can publish messages with a topic without knowing anything about who is on the receiving side. Contrary to point-to-point communications, in the publish-subscribe model, the message is distributed not to a single receiver, but to all receivers who have previously indicated an interest in the topic by registering as subscribers.
Figure 3-7 MOM decouples the creators and consumers of messages, providing concepts such as point-to-point messaging and publish-subscribe.
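In Java environments, these messaging concepts are exposed through the JMS API. The following sketch shows a point-to-point exchange over a queue; the JNDI names are hypothetical and depend on how a concrete message broker is configured.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

// A JMS point-to-point exchange. The ConnectionFactory and queue are looked
// up from JNDI; the names used here are hypothetical.
public class QueueExample {
    public static void main(String[] args) throws Exception {
        InitialContext jndi = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) jndi.lookup("jms/ConnectionFactory");
        Queue queue = (Queue) jndi.lookup("jms/OrderQueue");

        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Sender side: the header is filled in by the provider; the payload
        // is application-specific (here, a snippet of XML).
        MessageProducer producer = session.createProducer(queue);
        TextMessage message = session.createTextMessage("<order id='42'/>");
        producer.send(message);

        // Receiver side: the queue holds the message until it is collected.
        MessageConsumer consumer = session.createConsumer(queue);
        TextMessage received = (TextMessage) consumer.receive(1000);
        System.out.println(received.getText());

        connection.close();
        // For publish-subscribe, a Topic is looked up instead of a Queue, and
        // every registered subscriber receives its own copy of each message.
    }
}
```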
Although the basic concepts of message queuing systems (message, queue, and topic) are relatively simple, a great deal of complexity lies in the many different ways that such a system can be configured. For example, most message queuing systems enable the interconnection of multiple physical queues into one logical queue, with one queue manager on the sender side and another on the receiver side, providing better decoupling between sender and receiver. This is similar to email, where senders transmit email to their own mail server, which in turn routes the mail to the receiver's mail server, a process that is transparent to the users of email. Furthermore, queues can be interconnected to form networks of queues, sometimes with intelligent routing engines sitting between the different queues, creating event-driven applications that employ logic similar to that of Petri nets [Rei1992].
Message queuing systems typically also provide a number of different service levels (quality of service, QoS), either associated with specific messages or specific queues. These service levels determine, for example, the transactional capability of the message, the send/receive acknowledge modes, the number of allowable recipients, the length of time a message is valid, the time at which the message was sent/received, the number of redelivery attempts, and the priority of the message relative to other messages.
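Several of these service levels surface directly in the JMS API. In the following fragment (a sketch; the session and producer are assumed to come from a configured provider, as in the previous example), the producer requests persistent delivery, a high priority, and a one-minute expiry:

```java
import javax.jms.DeliveryMode;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.TextMessage;

public class QosExample {
    static void sendWithQos(Session session, MessageProducer producer) throws Exception {
        producer.setDeliveryMode(DeliveryMode.PERSISTENT); // survive broker restarts
        producer.setPriority(9);                           // 0 = lowest, 9 = highest
        producer.setTimeToLive(60_000);                    // expire after one minute

        TextMessage message = session.createTextMessage("<order id='42'/>");
        producer.send(message);

        // The provider stamps the send time into the JMSTimestamp header field.
        System.out.println("sent at " + message.getJMSTimestamp());
    }
}
```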
Generally, MOM encourages loose coupling between message consumers and message producers, enabling dynamic, reliable, flexible, high-performance systems to be built. However, one should not underestimate the underlying complexity of ensuring that MOM-based systems work efficiently, complexity that is often not visible at the outset.
3.2.4 Transaction Monitors
With the rising demand for user-friendly online applications in the 1980s, transaction monitors became popular. They provide facilities to run applications that service thousands of users. It is the responsibility of a transaction monitor to efficiently multiplex the computing resource requirements of many concurrent clients onto resource pools. Most importantly, transaction monitors manage CPU bandwidth, database transactions, sessions, files, and storage. Today, they also provide the capability to run distributed applications efficiently and reliably. Clients are typically bound, serviced, and released using stateless servers that minimize overhead by employing a non-conversational communication model. Furthermore, up-to-date transaction monitors include services for data management, network access, authorization, and security.
Popular examples of transaction monitors are CICS (Customer Information Control System), IMS (Information Management System), Encina, and Tuxedo, all of which provide facilities for remote access. The distribution concepts they offer vary, reflecting the particular strengths of each transaction monitor.
Although it is not the intention of this book to discuss current technology and products in detail, a short glance at IBM's CICS and IMS can provide useful insights. CICS is a time-sharing system. Although more than 20 years old, it is still a key element of IBM's enterprise product strategy. It is probably today's most important runtime environment for mission-critical enterprise applications, and new applications are still being developed for CICS. Native protocols such as SNA or TCP/IP and various communication infrastructures such as object brokers and messaging middleware can be used to integrate CICS applications with non-CICS applications [Bras2002]. One of the most common ways to connect to a CICS application is CICS's External Call Interface (ECI), which essentially provides an RPC-like library that enables remote applications to invoke CICS transaction programs. Based on the ECI, the CICS Transaction Gateway (CTG) provides an object-oriented interface for Java.

Contrary to the CICS time-sharing concept, its predecessor IMS was based on processing queues. Although IMS appears archaic, it is still important in practice due to its base of installed transaction programs. There are also many different ways to remotely invoke an IMS transaction program [Bras2002]. The most popular are IMS Connect and the MQSeries OTMA-Bridge [Lon1999]. While IMS Connect provides RPC-like access to IMS transaction programs, the MQSeries OTMA-Bridge is based on the MOM concept.
3.2.5 Application Servers
With the booming demand for Web applications in the dot-com era of the late 1990s, the application server became extremely popular. An application server mediates between a Web server and backend systems, such as databases or existing applications. Requests from a client's Web browser are passed from the Web server to the application server. The application server executes code that gathers the necessary information from other systems and composes an HTML reply, which is returned to the client's browser.
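In the Java world, this request-handling flow is typically implemented with servlets hosted by the application server. The following minimal sketch (the backend call is stubbed out) composes an HTML reply for the browser; a real implementation would query a database or a legacy application instead.

```java
import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// The Web server passes the request to the application server, which runs
// this code, gathers data from a backend system, and composes the HTML reply.
public class AccountServlet extends HttpServlet {
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        String customerId = request.getParameter("customer");
        String balance = lookupBalance(customerId);   // backend call, stubbed out

        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><body>");
        out.println("<p>Balance for " + customerId + ": " + balance + "</p>");
        out.println("</body></html>");
    }

    // In a real application, this would query a database or a legacy system.
    private String lookupBalance(String customerId) {
        return "100.00";
    }
}
```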
An application server can be simple, such as Microsoft ASP (Active Server Pages), which comes with IIS (Internet Information Server), or it can be a complex and expensive system that implements load balancing, data management, caching, transaction management, and security.
The basic functions of an application server can be described as hosting components, managing connectivity to data sources, and supporting different types of user interfaces, such as thin Web interfaces or fat client applications. Taking a closer look at the basic mechanisms of an application server, one can sometimes get the impression that not much has changed since the days of IBM CICS and 3270 terminals, except that these days, the user interfaces are more colorful.
When looking at the high end of application servers, notably Microsoft .NET Server, BEA WebLogic, and IBM WebSphere, it can sometimes be difficult to determine exactly what is still part of the core application server functionality because these companies have started to use their respective brands to describe a very broad array of products. For example, J2EE, the application server framework from the non-Microsoft camp, started with core application server functionality, including JSP (Java Server Pages) for the dynamic generation of HTML and EJB (Enterprise Java Beans) for managing and hosting more complex software components. Within a couple of years, J2EE became a complex and sophisticated framework (see Figure 3-8).
Figure 3-8 J2EE comprises a variety of standards.
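To give a flavor of the EJB component model mentioned above, here is a sketch of a stateless session bean in the EJB 2.x style (all names hypothetical). The client programs against the remote and home interfaces, while the container hosts the bean class; a deployment descriptor and a J2EE container are required to actually run it.

```java
import java.rmi.RemoteException;
import javax.ejb.CreateException;
import javax.ejb.EJBHome;
import javax.ejb.EJBObject;
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;

// Remote interface: what the client sees and invokes.
interface QuoteService extends EJBObject {
    String getQuote(String symbol) throws RemoteException;
}

// Home interface: used by the client to obtain a reference to the bean.
interface QuoteServiceHome extends EJBHome {
    QuoteService create() throws RemoteException, CreateException;
}

// Bean class: hosted and managed by the EJB container.
public class QuoteServiceBean implements SessionBean {
    public String getQuote(String symbol) {
        return symbol + ": 42.00";   // a real bean would consult a backend system
    }

    // Lifecycle callbacks required by the SessionBean contract.
    public void ejbCreate() {}
    public void ejbActivate() {}
    public void ejbPassivate() {}
    public void ejbRemove() {}
    public void setSessionContext(SessionContext context) {}
}
```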