16.3 Stream Basics
In this section we introduce the terminology and the basic concepts required to work with streams, and provide an overview of the Stream API.
An example of an aggregate operation is filtering the elements in the stream according to some criteria, where all the elements of the stream must be examined in order to determine the result. We saw a simple example of filtering a list in the previous section. Figure 16.1a shows an example of a stream-based solution for filtering a list of CDs to find all pop music CDs. A stream of CDs is created at (1). The stream is filtered at (2) to find pop music CDs. Each pop music CD that is found is accumulated into a result list at (3). Note how the method calls are chained, which is typical of processing elements in a stream. We will fill in the details as we use Figure 16.1 to introduce the basics of data processing using streams.
Figure 16.1 Data Processing with Streams
A stream must be built from a data source before operations can be performed on its elements. Streams come in two flavors: those that process object references, called object streams; and those that process numeric values, called numeric streams. In Figure 16.1a, the call to the stream() method on the list cdList creates an object stream of CD references with cdList as the data source. Building streams from data sources is explored in §16.4, p. 890.
Stream operations are characterized either as intermediate operations (§16.5, p. 905) or terminal operations (§16.7, p. 946). In Figure 16.1a, there are two stream operations. The methods filter() and collect() implement an intermediate operation and a terminal operation, respectively.
An intermediate operation always returns a new stream—that is, it transforms its input stream to an output stream. In Figure 16.1a, the filter() method is called at (2) on its input stream, which is the initial stream of CDs returned by the stream() method at (1). The method reference CD::isPop passed as an argument to the filter() method implements the Predicate<CD> functional interface to filter the CDs in the stream. The filter() method returns an output stream, which is a stream of pop music CDs.
A terminal operation either causes a side effect or computes a result. The method collect() at (3) implements a terminal operation that computes a result (§16.7, p. 964). This method is called on the output stream of the filter() operation, which is now the input stream of the terminal operation. Each CD that is determined to be a pop music CD by the filter() operation at (2) is accumulated into a result list by the toList() method at (3). The toList() method (p. 972) creates an empty list and accumulates elements from the input stream—in this case, CDs with pop music.
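The chained pipeline of Figure 16.1a can be sketched as a runnable example. Since the book's CD class is not shown here, a minimal stand-in record with hypothetical fields is used:

```java
import java.util.List;

// Minimal stand-in for the book's CD class (the fields are hypothetical).
record CD(String title, boolean pop) {
    boolean isPop() { return pop; }
}

public class PopFilter {
    // Mirrors the chained pipeline of Figure 16.1a.
    static List<CD> popCDs(List<CD> cdList) {
        return cdList.stream()          // (1) Stream creation.
                     .filter(CD::isPop) // (2) Intermediate operation.
                     .toList();         // (3) Terminal operation.
    }

    public static void main(String[] args) {
        List<CD> cdList = List.of(
            new CD("Java Jive", true),
            new CD("Blue Notes", false),
            new CD("Lambda Dancing", true));
        System.out.println(popCDs(cdList).size()); // 2 pop CDs selected
    }
}
```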
The code below splits the chain of method calls in Figure 16.1a, but produces the same result. The code explicitly shows the streams that are created and how an operation is invoked on the stream returned by the preceding operation. However, this code is a lot more verbose and not as easy to read as the code in Figure 16.1a—so much so that it is frowned upon by stream aficionados.
Stream<CD> stream1 = cdList.stream();           // (1a) Stream creation.
Stream<CD> stream2 = stream1.filter(CD::isPop); // (2a) Intermediate operation.
List<CD> popCDs2 = stream2.toList();            // (3a) Terminal operation.
Composing Stream Pipelines
Stream operations can be chained together to compose a stream pipeline in which stream components are specified in the following order:
An operation on a data source for building the initial stream
Zero or more intermediate operations to transform one stream into another
A single mandatory terminal operation in order to execute the pipeline and produce some result or side effect
Composition into a pipeline is possible because stream creation and intermediate operations return a stream, allowing method calls to be chained as we have seen in Figure 16.1a.
The chain of method calls in Figure 16.1a forms the stream pipeline in Figure 16.1b, showing the components of the pipeline. A stream pipeline is analogous to an assembly line, where each operation depends on the result of the previous operation as parts are assembled. In a pipeline, an intermediate operation consumes elements made available by its input stream to produce elements that form its output stream. The terminal operation produces the final result from its input stream. Creating a stream pipeline can be regarded as a fusion of stream operations, where only a single pass of the elements is necessary to process the stream.
Stream operations are typically customized by behavior parameterization that is specified by functional interfaces and implemented by lambda expressions. That is why understanding built-in functional interfaces and writing method references (or their equivalent lambda expressions) is essential. In Figure 16.1a, the Predicate argument of the filter() operation implements the behavior of the filter() operation.
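Behavior parameterization means the same filter() operation can be customized with different behavior, written either as a lambda expression or as a method reference. A small sketch, using a list of genre strings for illustration:

```java
import java.util.List;
import java.util.function.Predicate;

public class BehaviorParam {
    public static void main(String[] args) {
        List<String> genres = List.of("Pop", "Jazz", "Pop", "Folk");

        // Behavior passed as a lambda expression implementing Predicate<String>.
        Predicate<String> isPop = genre -> genre.equals("Pop");
        long viaLambda = genres.stream().filter(isPop).count();

        // The same behavior as a method reference: "Pop"::equals tests each
        // element for equality with the string "Pop".
        long viaMethodRef = genres.stream().filter("Pop"::equals).count();

        System.out.println(viaLambda + " " + viaMethodRef); // 2 2
    }
}
```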
A stream pipeline formulates a query on the elements of a stream created from a data source. It expresses what should be done to the stream elements, and not how it should be done—analogous to a database query. One important advantage of composing stream pipelines is that the stream implementation can freely optimize the operations—for example, for parallel execution—as long as the same result is guaranteed.
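Because a pipeline only states what to compute, the same query can be executed sequentially or in parallel with the same result. A minimal sketch:

```java
import java.util.List;

public class ParallelQuery {
    public static void main(String[] args) {
        List<Integer> nums = List.of(1, 2, 3, 4, 5, 6);

        // Sequential pipeline.
        int seqSum = nums.stream().mapToInt(Integer::intValue).sum();

        // The same query executed in parallel; the library is free to split
        // the work across threads as long as the result is the same.
        int parSum = nums.parallelStream().mapToInt(Integer::intValue).sum();

        System.out.println(seqSum + " " + parSum); // 21 21
    }
}
```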
Executing Stream Pipelines
Apart from the fact that an intermediate operation always returns a new stream and a terminal operation never does, another crucial difference between these two kinds of operations is the way in which they are executed. Intermediate operations use lazy execution; that is, they are executed on demand. Terminal operations use eager execution; that is, they are executed immediately when the terminal operation is invoked on the stream. This means that an intermediate operation is never executed unless a terminal operation is invoked on its output stream, whereupon the intermediate operations start to execute and pull elements from the stream created on the data source.
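Lazy execution can be observed directly by counting how many times a filter predicate runs. In this sketch, a counter shows that the intermediate operation does nothing until the terminal operation is invoked:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.stream.Stream;

public class LazyDemo {
    public static void main(String[] args) {
        AtomicInteger calls = new AtomicInteger();

        // Build a pipeline with an intermediate operation, but no terminal
        // operation yet.
        Stream<String> pipeline = List.of("pop", "jazz", "pop").stream()
            .filter(s -> { calls.incrementAndGet(); return s.equals("pop"); });

        System.out.println(calls.get()); // 0: filter() has not executed (lazy)

        pipeline.toList();               // Terminal operation triggers execution.
        System.out.println(calls.get()); // 3: the predicate ran once per element
    }
}
```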
The execution of a stream pipeline is illustrated in Figure 16.1c. The stream() method just returns the initial stream whose data source is cdList. The CD objects in cdList are processed in the stream pipeline in the same order as in the list, designated as cd0, cd1, cd2, cd3, and cd4. The way in which elements are successively processed by the operations in the pipeline is shown horizontally for each element. The execution of the pipeline only starts when the terminal operation toList() is invoked, and proceeds as follows:
cd0 is selected by the filter() operation as it is a pop music CD, and the collect() operation places it in the list created to accumulate the results.
cd1 is discarded by the filter() operation as it is not a pop music CD.
cd2 is selected by the filter() operation as it is a pop music CD, and the collect() operation places it in the list created to accumulate the results.
cd3 is discarded by the filter() operation as it is not a pop music CD.
cd4 is discarded by the filter() operation as it is not a pop music CD.
In Figure 16.1c, when the stream is exhausted, execution of the collect() terminal operation completes and execution of the pipeline stops. Note that there was only one pass over the elements in the stream. From Figure 16.1c, we see that the resulting list contains only cd0 and cd2, which is the result of the query. Printing the resulting popCDs list produces the following output:
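The element-by-element processing order can be made visible by logging from inside the operations. This sketch (using plain strings to stand in for the CDs) shows that each element flows through the whole pipeline before the next element starts, confirming the single pass:

```java
import java.util.ArrayList;
import java.util.List;

public class SinglePass {
    // Records the order in which each element meets each operation.
    static List<String> trace() {
        List<String> log = new ArrayList<>();
        List.of("cd0", "cd1", "cd2").stream()
            .filter(cd -> { log.add("filter " + cd); return !cd.equals("cd1"); })
            .forEach(cd -> log.add("accept " + cd));
        return log;
    }

    public static void main(String[] args) {
        // Prints: filter cd0, accept cd0, filter cd1, filter cd2, accept cd2
        trace().forEach(System.out::println);
    }
}
```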
[<Jaav, "Java Jive", 8, 2017, POP>, <Funkies, "Lambda Dancing", 10, 2018, POP>]
A stream is considered consumed once a terminal operation has completed execution. A stream that has been consumed cannot be reused, and any attempt to use it will result in a nasty java.lang.IllegalStateException.
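Attempting to reuse a consumed stream can be demonstrated in a few lines:

```java
import java.util.List;
import java.util.stream.Stream;

public class ConsumedStream {
    public static void main(String[] args) {
        Stream<String> stream = List.of("a", "b").stream();
        stream.toList();          // Terminal operation: the stream is now consumed.
        try {
            stream.toList();      // Second use of the same stream...
        } catch (IllegalStateException e) {
            System.out.println("Stream cannot be reused."); // ...fails at runtime.
        }
    }
}
```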
The code presented in this subsection is shown in Example 16.2.
Example 16.2 Data Processing Using Streams
import java.util.List;
import java.util.stream.Stream;

public class StreamPipeLine {
  public static void main(String[] args) {
    List<CD> cdList = List.of(CD.cd0, CD.cd1, CD.cd2, CD.cd3, CD.cd4);

    // (A) Query to create a list of all CDs with pop music.
    List<CD> popCDs = cdList.stream()               // (1) Stream creation.
                            .filter(CD::isPop)      // (2) Intermediate operation.
                            .toList();              // (3) Terminal operation.
    System.out.println(popCDs);

    // (B) Equivalent to (A).
    Stream<CD> stream1 = cdList.stream();           // (1a) Stream creation.
    Stream<CD> stream2 = stream1.filter(CD::isPop); // (2a) Intermediate operation.
    List<CD> popCDs2 = stream2.toList();            // (3a) Terminal operation.
    System.out.println(popCDs2);
  }
}
Output from the program:
[<Jaav, "Java Jive", 8, 2017, POP>, <Funkies, "Lambda Dancing", 10, 2018, POP>]
[<Jaav, "Java Jive", 8, 2017, POP>, <Funkies, "Lambda Dancing", 10, 2018, POP>]
Comparing Collections and Streams
It is important to understand the distinction between collections and streams. Streams are not collections, and vice versa, but a stream can be created with a collection as the data source (p. 897).
Collections are data structures that can be used to store and retrieve elements. Streams, in contrast, are not data structures and do not store their elements; they process elements by expressing computations on them through operations like filter() and collect().
Typically, operations are provided to add or remove elements from a collection. However, no elements can be added to or removed from a stream—that is, streams are immutable. Because of their functional nature, a stream operation that removes or discards elements returns a new stream with the remaining elements. A stream operation does not mutate its data source.
A collection can be used in the program as long as there is a reference to it. However, a stream cannot be reused once it is consumed. It must be re-created on the data source in order to be reused.
Operations on a collection are executed immediately, whereas streams can define intermediate operations that are executed on demand—that is, by lazy execution.
Collections are iterable, but streams are not iterable. Streams do not implement the Iterable<T> interface, and therefore, a for(:) loop cannot be used to iterate over a stream.
Mechanisms for iteration over a collection are based on an iterator defined by the Collection interface, but must be explicitly used in the program to iterate over the elements; this is called external iteration. On the other hand, iteration over stream elements is implicitly handled by the API; this is called internal iteration and it occurs when the stream operations are executed.
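The contrast between external and internal iteration can be sketched side by side:

```java
import java.util.Iterator;
import java.util.List;

public class IterationStyles {
    public static void main(String[] args) {
        List<String> genres = List.of("Pop", "Jazz", "Folk");

        // External iteration: the program drives the iterator explicitly.
        StringBuilder external = new StringBuilder();
        Iterator<String> it = genres.iterator();
        while (it.hasNext()) {
            external.append(it.next());
        }

        // Internal iteration: the Stream API drives the iteration for us.
        StringBuilder internal = new StringBuilder();
        genres.stream().forEach(internal::append);

        System.out.println(external + " " + internal); // same traversal, two styles
    }
}
```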
Collections have a finite size, but streams can be unbounded; these are called infinite streams. Special stream operations, such as limit(), exist to compute with infinite streams.
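An infinite stream can be created with Stream.iterate(), and limit() truncates it so that the terminal operation can complete:

```java
import java.util.List;
import java.util.stream.Stream;

public class InfiniteStreams {
    public static void main(String[] args) {
        // Stream.iterate() creates an unbounded stream: 1, 2, 4, 8, ...
        // limit() truncates it to a finite number of elements.
        List<Integer> powers = Stream.iterate(1, n -> n * 2)
                                     .limit(5)
                                     .toList();
        System.out.println(powers); // [1, 2, 4, 8, 16]
    }
}
```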
Some collections, such as lists, allow positional access of their elements with an index. However, this is not possible with streams, as only aggregate operations are permissible.
Note also that streams supported by the Stream API are not the same as those supported by the File I/O APIs (§20.1, p. 1233).
Overview of API for Data Processing Using Streams
In this subsection we present a brief overview of new interfaces and classes that are introduced in this chapter. We focus mainly on the Stream API in the java.util.stream package, but we also discuss utility classes from the java.util package.
The Stream Interfaces
Figure 16.2 shows the inheritance hierarchy of the core stream interfaces that are an important part of the Stream API defined in the java.util.stream package. The generic interface Stream<T> represents a stream of object references—that is, object streams. The interfaces IntStream, LongStream, and DoubleStream are specializations to numeric streams of type int, long, and double, respectively. These interfaces provide the static factory methods for creating streams from various sources (p. 890), and define the intermediate operations (p. 905) and the terminal operations (p. 946) on streams.
The interface BaseStream defines the basic functionality offered by all streams. It is recursively parameterized with a stream element type T and a subtype S of the BaseStream interface. For example, the Stream<T> interface is a subtype of the parameterized BaseStream<T, Stream<T>> interface, and the IntStream interface is a subtype of the parameterized BaseStream<Integer, IntStream> interface.
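The practical difference between the object stream and numeric stream interfaces is that a numeric stream works on unboxed primitive values and offers numeric operations like sum() directly. A small sketch:

```java
import java.util.stream.IntStream;
import java.util.stream.Stream;

public class NumericStreams {
    public static void main(String[] args) {
        // An object stream of boxed Integers, converted to an IntStream...
        int boxedSum = Stream.of(1, 2, 3)
                             .mapToInt(Integer::intValue)
                             .sum();

        // ...versus a numeric stream of primitive ints, created directly.
        int primitiveSum = IntStream.rangeClosed(1, 3).sum();

        System.out.println(boxedSum + " " + primitiveSum); // 6 6
    }
}
```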
All streams implement the AutoCloseable interface, meaning they should be closed after use in order to facilitate resource management during execution. However, this is not necessary for the majority of streams. Only resource-backed streams need to be closed—for example, a stream whose data source is a file. Such resources are best managed automatically with the try-with-resources statement (§7.7, p. 407).
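A sketch of a resource-backed stream, using a temporary file created just for the example; the try-with-resources statement guarantees the underlying file handle is closed when the pipeline completes:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Stream;

public class ResourceStream {
    public static void main(String[] args) throws IOException {
        Path file = Files.createTempFile("cds", ".txt");
        Files.write(file, List.of("Java Jive", "Lambda Dancing"));

        // Files.lines() returns a stream backed by the open file, so it
        // should be closed; try-with-resources does this automatically.
        try (Stream<String> lines = Files.lines(file)) {
            System.out.println(lines.count()); // 2
        }
        Files.delete(file);
    }
}
```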
Figure 16.2 The Core Stream Interfaces
The Collectors Class
A collector encapsulates the machinery required to perform a reduction operation (p. 978). The java.util.stream.Collector interface defines the functionality that a collector must implement. The java.util.stream.Collectors class provides a rich set of predefined collectors for various kinds of reductions.
The Optional Classes
Instances of the java.util.Optional<T> class are containers that may or may not contain an object of type T (p. 940). An Optional<T> instance can be used to represent the absence of a value of type T more meaningfully than the null value. The numeric analogues are OptionalInt, OptionalLong, and OptionalDouble that can encapsulate an int, a long, or a double value, respectively.
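For example, the findFirst() terminal operation returns an Optional, because the stream may contain no matching element at all:

```java
import java.util.List;
import java.util.Optional;

public class OptionalDemo {
    public static void main(String[] args) {
        List<String> genres = List.of("Jazz", "Pop", "Folk");

        // findFirst() may find no match, so it returns an Optional<String>.
        Optional<String> hit  = genres.stream().filter("Pop"::equals).findFirst();
        Optional<String> miss = genres.stream().filter("Rock"::equals).findFirst();

        System.out.println(hit.orElse("none"));  // Pop
        System.out.println(miss.orElse("none")); // none
    }
}
```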
The Numeric Summary Statistics Classes
Instances of the IntSummaryStatistics, LongSummaryStatistics, and DoubleSummaryStatistics classes in the java.util package are used by a group of reduction operations to collect summarizing statistics like the count, sum, average, min, and max of the values in a numeric stream of type int, long, and double, respectively (p. 974, p. 1001).
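A single pass over a numeric stream can collect all the statistics at once:

```java
import java.util.IntSummaryStatistics;
import java.util.stream.IntStream;

public class StatsDemo {
    public static void main(String[] args) {
        // One pass over the numeric stream collects all the statistics.
        IntSummaryStatistics stats = IntStream.of(8, 10, 6).summaryStatistics();

        System.out.println(stats.getCount()); // 3
        System.out.println(stats.getSum());   // 24
        System.out.println(stats.getMin());   // 6
        System.out.println(stats.getMax());   // 10
    }
}
```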