The CUDA Handbook: A Comprehensive Guide to GPU Programming

Description

  • Copyright 2013
  • Dimensions: 7-3/8" x 9-1/8"
  • Pages: 528
  • Edition: 1st
  • Book
  • ISBN-10: 0-321-80946-7
  • ISBN-13: 978-0-321-80946-9

The CUDA Handbook begins where CUDA by Example (Addison-Wesley, 2011) leaves off, discussing CUDA hardware and software in greater detail and covering both CUDA 5.0 and Kepler. Every CUDA developer, from the casual to the most sophisticated, will find something here of interest and immediate usefulness. Newer CUDA developers will see how the hardware processes commands and how the driver checks progress; more experienced CUDA developers will appreciate the expert coverage of topics such as the driver API and context migration, as well as the guidance on how best to structure CPU/GPU data interchange and synchronization.
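To give a flavor of the CPU/GPU data interchange and synchronization patterns discussed, here is a minimal sketch (written for this description, not drawn from the book's accompanying source code) that uses pinned host memory, an asynchronous copy on a CUDA stream, and an event so the CPU synchronizes only when it needs the result:

    // Minimal sketch of CPU/GPU data interchange and synchronization.
    // Illustrative only; not taken from the book's accompanying source code.
    #include <cuda_runtime.h>
    #include <stdio.h>

    __global__ void Scale(float *data, float k, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= k;
    }

    int main()
    {
        const int N = 1 << 20;
        float *hostBuf, *devBuf;
        cudaStream_t stream;
        cudaEvent_t done;

        cudaHostAlloc((void **)&hostBuf, N * sizeof(float), cudaHostAllocDefault);  // pinned host memory
        cudaMalloc((void **)&devBuf, N * sizeof(float));
        cudaStreamCreate(&stream);
        cudaEventCreate(&done);

        for (int i = 0; i < N; i++) hostBuf[i] = 1.0f;

        // Copy, kernel launch, and copy back are all queued asynchronously on the stream.
        cudaMemcpyAsync(devBuf, hostBuf, N * sizeof(float), cudaMemcpyHostToDevice, stream);
        Scale<<<(N + 255) / 256, 256, 0, stream>>>(devBuf, 2.0f, N);
        cudaMemcpyAsync(hostBuf, devBuf, N * sizeof(float), cudaMemcpyDeviceToHost, stream);
        cudaEventRecord(done, stream);

        // The CPU is free to do other work here, then waits on the event only when it needs the data.
        cudaEventSynchronize(done);
        printf("hostBuf[0] = %f\n", hostBuf[0]);

        cudaEventDestroy(done);
        cudaStreamDestroy(stream);
        cudaFree(devBuf);
        cudaFreeHost(hostBuf);
        return 0;
    }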

The accompanying open source code (more than 25,000 lines of it, freely available at www.cudahandbook.com) is specifically intended to be reused and repurposed by developers.

Designed to be both a comprehensive reference and a practical cookbook, the text is divided into the following three parts:

Part I, Overview, gives high-level descriptions of the hardware and software that make CUDA possible.


Part II, Details, provides thorough descriptions of every aspect of CUDA, including

  • Memory
  • Streams and events
  • Models of execution, including the dynamic parallelism feature, new with CUDA 5.0 and SM 3.5
  • The streaming multiprocessors, including descriptions of all features through SM 3.5
  • Programming multiple GPUs
  • Texturing

The source code accompanying Part II is presented as reusable microbenchmarks and microdemos, designed to expose specific hardware characteristics or highlight specific use cases.
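In that spirit, the following minimal sketch (again written for this description rather than taken from the accompanying code) times a host-to-device copy with CUDA events and reports the effective bandwidth, the kind of single-purpose measurement a microbenchmark is built around:

    // Minimal microbenchmark-style sketch: time a host-to-device copy with CUDA events.
    // Illustrative only; not one of the book's microbenchmarks.
    #include <cuda_runtime.h>
    #include <stdio.h>

    int main()
    {
        const size_t bytes = 64 * 1024 * 1024;
        float *host, *dev;
        cudaEvent_t start, stop;
        float ms = 0.0f;

        cudaHostAlloc((void **)&host, bytes, cudaHostAllocDefault);  // pinned memory for peak copy bandwidth
        cudaMalloc((void **)&dev, bytes);
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        cudaMemcpyAsync(dev, host, bytes, cudaMemcpyHostToDevice);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        cudaEventElapsedTime(&ms, start, stop);

        printf("Host-to-device: %.2f GB/s\n", (bytes / 1e9) / (ms / 1e3));

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(dev);
        cudaFreeHost(host);
        return 0;
    }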


Part III, Select Applications, details specific families of CUDA applications and key parallel algorithms, including

  • Streaming workloads
  • Reduction
  • Parallel prefix sum (Scan)
  • N-body
  • Image processing

These algorithms cover the full range of potential CUDA applications.
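As an illustration of the reduction family, here is a minimal sketch of a shared-memory block reduction, with a final pass on the CPU to combine the per-block partial sums; it shows the basic pattern only, not the book's optimized implementations:

    // Minimal sketch of a parallel sum reduction in shared memory.
    // Illustrative only; the book develops progressively faster variants.
    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    __global__ void ReduceSum(const float *in, float *out, int n)
    {
        extern __shared__ float sdata[];
        int tid = threadIdx.x;
        int i = blockIdx.x * blockDim.x + threadIdx.x;

        sdata[tid] = (i < n) ? in[i] : 0.0f;
        __syncthreads();

        // Tree-style reduction within the block (blockDim.x must be a power of two).
        for (int s = blockDim.x / 2; s > 0; s >>= 1) {
            if (tid < s)
                sdata[tid] += sdata[tid + s];
            __syncthreads();
        }
        if (tid == 0)
            out[blockIdx.x] = sdata[0];  // one partial sum per block
    }

    int main()
    {
        const int N = 1 << 20, threads = 256, blocks = (N + threads - 1) / threads;
        float *hostIn = (float *)malloc(N * sizeof(float));
        float *hostPartial = (float *)malloc(blocks * sizeof(float));
        float *devIn, *devPartial;
        for (int i = 0; i < N; i++) hostIn[i] = 1.0f;

        cudaMalloc((void **)&devIn, N * sizeof(float));
        cudaMalloc((void **)&devPartial, blocks * sizeof(float));
        cudaMemcpy(devIn, hostIn, N * sizeof(float), cudaMemcpyHostToDevice);

        ReduceSum<<<blocks, threads, threads * sizeof(float)>>>(devIn, devPartial, N);
        cudaMemcpy(hostPartial, devPartial, blocks * sizeof(float), cudaMemcpyDeviceToHost);

        float sum = 0.0f;
        for (int i = 0; i < blocks; i++) sum += hostPartial[i];  // combine partial sums on the CPU
        printf("sum = %f (expected %d)\n", sum, N);

        cudaFree(devIn);
        cudaFree(devPartial);
        free(hostIn);
        free(hostPartial);
        return 0;
    }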
