The hands-on guide to high-performance coding and algorithm optimization.
This hands-on guide to software optimization introduces state-of-the-art solutions for every key aspect of software performance, both code-based and algorithm-based.
Two leading HP software performance experts offer comparative optimization strategies for RISC and for the new Explicitly Parallel Instruction Computing (EPIC) design used in Intel IA-64 processors, presenting specific techniques through many practical examples.
Whether you're a developer, ISV, or technical researcher, if you need to optimize high-performance software on today's leading processors, one book delivers the advanced techniques and code examples you need: Software Optimization for High Performance Computing.
Software Optimization for High Performance Computing: Creating Faster Applications
(NOTE: Each chapter begins with an Introduction and concludes with a Summary.)
1. Introduction.
Hardware Overview-Your Work Area. Software Techniques-The Tools. Applications-Using the Tools.
I. HARDWARE OVERVIEW-YOUR WORK AREA.
2. Processors: The Core of High Performance Computing. Types. Pipelining. Instruction Length. Registers. Functional Units. CISC and RISC Processors. Vector Processors. VLIW.
3. Data Storage. Caches. Virtual Memory Issues. Memory. Input/Output Devices. I/O Performance Tips for Application Writers.
4. An Overview of Parallel Processing. Parallel Models. Hardware Infrastructures for Parallelism. Control of Your Own Locality.
II. SOFTWARE TECHNIQUES-THE TOOLS.
5. How the Compiler Can Help and Hinder Performance. Compiler Terminology. Compiler Options. Compiler Directives and Pragmas. Metrics. Compiler Optimizations. Interprocedural Optimization. Change of Algorithm.
6. Predicting and Measuring Performance. Timers. Profilers. Predicting Performance.
7. Is High Performance Computing Language Dependent? Pointers and Aliasing. Complex Numbers. Subroutine or Function Call Overhead. Standard Library Routines. Odds and Ends.
8. Parallel Processing-An Algorithmic Approach. Process Parallelism. Thread Parallelism. Parallelism and I/O. Memory Allocation, ccNUMA, and Performance. Compiler Directives. The Message Passing Interface (MPI).
III. APPLICATIONS-USING THE TOOLS.
9. High Performance Libraries. Linear Algebra Libraries and APIs. Signal Processing Libraries and APIs. Self-Tuning Libraries. Commercial Libraries.
10. Mathematical Kernels: The Building Blocks of High Performance. Building Blocks. BLAS. Scalar Optimization. Vector Operations. Matrix Copy and Transpose. BLAS and Performance. Winograd's Matrix-Matrix Multiplication. Complex Matrix-Matrix Multiplication with Three Real Multiplications. Strassen's Matrix-Matrix Multiplication.
11. Faster Solutions for Systems of Equations. A Simple Example. LU Factorization. Cholesky Factorization. Factorization and Parallelization. Forward-Backward Substitution (FBS). Sparse Direct Systems of Equations. Iterative Techniques.
12. High Performance Algorithms and Approaches for Signal Processing. Convolutions and Correlations. DFTs/FFTs. The Relationship Between Convolutions and FFTs.
Index.
Once you start asking questions, innocence is gone.
- Mary Astor
The purpose of this book is to document many of the techniques used by people who implement applications on modern computers and want their programs to execute as quickly as possible.
There are four major components that determine the speed of an application: the architecture, the compiler, the source code, and the algorithm. You usually don't have control over the architecture you use, but you need to understand it so you'll know what it is capable of achieving. You do have control over your source code and how compilers are used on it. This book discusses how to perform source code modifications and use the compiler to generate better-performing applications. The final, and arguably most important, component is the algorithm. By replacing the algorithms you have, or were given, with better-performing ones, or even by tweaking the existing ones, you can reap huge performance gains and solve problems that were previously unachievable.
There are many reasons to want applications to execute quickly. Sometimes it is the only way to make sure that a program finishes execution in a reasonable amount of time. For example, the decision to bid or no-bid an oil lease is often determined by whether a seismic image can be completed before the bid deadline. A new automotive body design may or may not appear in next year's model depending on whether the structural and aerodynamic analysis can be completed in time. Since developers of applications would like an advantage over their competitors, speed can sometimes be the differentiator between two similar products. Thus, writing programs to run quickly can be a good investment.
P.1 A Tool Box
We like to think of this book as a tool box. The individual tools are the various optimization techniques discussed. As expected, some tools are more useful than others. Reducing the memory requirements of an application is a general tool that frequently results in better single processor performance. Other tools, such as the techniques used to optimize a code for parallel execution, have a more limited scope.
These tools are designed to help applications perform well on computer system components. You can apply them to existing code to improve performance or use them to design efficient code from scratch. As you become proficient with the tools, some general trends become apparent. All applications have a theoretical performance limit on any computer. The first attempts at optimization may involve choosing between basic compiler options. This doesn't take much time and can help performance considerably. The next steps may involve more complicated compiler options, modifying a few lines of source code, or reformulating an algorithm. The theoretical peak performance is like the speed of light. As more and more energy, or time, is expended, the theoretical peak is approached, but never quite achieved. Before optimizing applications, it is prudent to consider how much time you can, or should, commit to optimization.
In the past, one of the problems with tuning code was that, even with a large investment of time, the optimizations quickly became outdated. For example, many applications that had been optimized for vector computers subsequently had to be completely reoptimized for massively parallel computers. This sometimes took many person-years of effort. Since massively parallel computers never became plentiful, much of this effort had very short-term benefit.
In the 1990s, many computer companies either went bankrupt or were purchased by other companies as the cost of designing and manufacturing computers skyrocketed. As a result, there are very few computer vendors left today and most of today's processors have similar characteristics. For example, they nearly all have high-speed caches. Thus, making sure that code is structured to run well on cache-based systems ensures that the code runs well across almost all modern platforms.
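As a small illustration (ours, not drawn from the chapters that follow), consider summing the elements of a two-dimensional array in C. Because C stores arrays in row-major order, making rows the inner loop walks memory sequentially and uses each cache line fully, while making columns the inner loop strides across memory and wastes most of each line:

#include <stdio.h>

#define N 1024

static double a[N][N];

/* Row-order traversal: the inner loop touches consecutive memory
   locations, so each cache line fetched is used completely. */
double sum_row_order(void)
{
    double s = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

/* Column-order traversal: the inner loop strides by N doubles and
   touches a new cache line on nearly every access for large N. */
double sum_column_order(void)
{
    double s = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i][j];
    return s;
}

int main(void)
{
    printf("%f %f\n", sum_row_order(), sum_column_order());
    return 0;
}

Both functions perform the same arithmetic, but on a typical cache-based processor the row-order version can run several times faster.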
The examples in this book are biased in favor of the UNIX operating system and RISC processors. This is because they are most characteristic of modern high performance computing. The recent EPIC (IA-64) processors have cache structures identical to those of RISC processors, so the examples also apply to them.
P.2 Language Issues
This book uses lots of examples, written in Fortran, C, or a language-independent pseudocode. Fortran examples use uppercase letters, while the others use lowercase. For example,
DO I = 1,N
   Y(I) = Y(I) + A * X(I)
ENDDO
takes a scalar A, multiplies it by a vector X of length N, and adds the result to a vector Y of length N. Languages such as Fortran 90/95 and C++ are very powerful and allow vector or matrix notation. For example, if X and Y are two-dimensional arrays and A is a scalar, writing
Y = Y + A * X
means to multiply the array X by A and add the result to the array Y. This notation has been avoided since it can obscure the analysis performed. It may also make it more difficult for compilers to optimize the source code.
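For comparison, the same operation as the Fortran loop above can be written in C, using lowercase per the convention just described. The function name daxpy is borrowed from the BLAS routine discussed in Chapter 10; the signature itself is our sketch, not a library interface:

/* y = y + a*x for vectors of length n, written in C. */
void daxpy(int n, double a, const double *x, double *y)
{
    for (int i = 0; i < n; i++)
        y[i] = y[i] + a * x[i];
}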
There is an entire chapter devoted to language specifics, but pseudocode and Fortran examples assume that multidimensional arrays such as Y(200,100) have their data stored in memory in column-major order. Thus the elements of Y(200,100) are stored as
Y(1,1), Y(2,1), Y(3,1), ..., Y(200,1), Y(1,2), Y(2,2), Y(3,2), ...
This is the opposite of C, where data is stored in row-major order.
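To make the distinction concrete, the small C sketch below (ours, using zero-based indices) computes the memory offset of element (i, j) under each storage convention:

#include <stddef.h>

/* Column-major (Fortran-style) offset of element (i,j) in an array
   with nrows rows: consecutive elements of a column are adjacent. */
size_t column_major_offset(size_t i, size_t j, size_t nrows)
{
    return i + j * nrows;
}

/* Row-major (C-style) offset of element (i,j) in an array with
   ncols columns: consecutive elements of a row are adjacent. */
size_t row_major_offset(size_t i, size_t j, size_t ncols)
{
    return i * ncols + j;
}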
P.3 Notation
When terms are defined, we'll use italics to set the term apart from other text. Courier font will be used for all examples. Mathematical terms and equations use italic font. We'll use lots of prefixes for the magnitude of measurements, so the standard ones are defined in the following table.
| Prefix | Factor (power of 10) | Factor (power of 2) |
|--------|----------------------|---------------------|
| tera   | 10^12                | 2^40                |
| giga   | 10^9                 | 2^30                |
| mega   | 10^6                 | 2^20                |
| kilo   | 10^3                 | 2^10                |
| milli  | 10^-3                |                     |
| micro  | 10^-6                |                     |
| nano   | 10^-9                |                     |
Note that some prefixes are defined using both a power of 10 and a power of two, and the exact arithmetic values differ: observe that 10^6 = 1,000,000 while 2^20 = 1,048,576. This can be confusing, but when quantifying memory, cache, or data in general, associate the prefixes with powers of two. Otherwise, use the more common powers of 10.
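As a quick check (our illustration), the following C program prints both interpretations of mega and the difference between them:

#include <stdio.h>

int main(void)
{
    long decimal_mega = 1000000L;   /* 10^6, the common meaning */
    long binary_mega  = 1L << 20;   /* 2^20, used for memory sizes */
    printf("10^6 = %ld\n", decimal_mega);
    printf("2^20 = %ld\n", binary_mega);
    printf("difference = %ld\n", binary_mega - decimal_mega);
    return 0;
}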
Finally, optimizing applications should be fun. It's really a contest between you and the computer. Computers sometimes give up performance grudgingly, so understand what the computer is realistically capable of and see that you get it. Enjoy the challenge!