8.4. Conditional Code
The hardware implements “condition code” or CC registers that contain the usual 4-bit state vector (sign, carry, zero, overflow) used for integer comparison. These CC registers can be set using comparison instructions such as ISET, and they can direct the flow of execution via predication or divergence. Predication allows (or suppresses) the execution of instructions on a per-thread basis within a warp, while divergence is the conditional execution of longer instruction sequences. Because the processors within an SM execute instructions in SIMD fashion at warp granularity (32 threads at a time), divergence can result in fewer instructions executed, provided all threads within a warp take the same code path.
8.4.1. Predication
Due to the additional overhead of managing divergence and convergence, the compiler uses predication for short instruction sequences. The effect of most instructions can be predicated on a condition; if the condition is not TRUE, the instruction is suppressed. This suppression occurs early enough that predicated execution of instructions such as load/store and TEX inhibits the memory traffic that the instruction would otherwise generate. Note that predication has no effect on the eligibility of memory traffic for global load/store coalescing. The addresses specified to all load/store instructions in a warp must reference consecutive memory locations, even if they are predicated.
Predication is used when the number of instructions that vary depending on a condition is small; the compiler uses heuristics that favor predication up to about 7 instructions. Besides avoiding the overhead of managing the branch synchronization stack described below, predication also gives the compiler more optimization opportunities (such as instruction scheduling) when emitting microcode. The ternary operator in C (? :) is considered a compiler hint to favor predication.
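For example, a clamp written with the ternary operator typically compiles to a few predicated or select instructions rather than a branch. The following kernel is a hypothetical illustration, not one of the book's listings:

// Hypothetical kernel: the ?: operator hints that the compiler should
// predicate the conditional rather than emit a divergent branch.
__global__ void
clampToZero( float *out, const float *in, size_t N )
{
    size_t i = blockIdx.x*blockDim.x + threadIdx.x;
    if ( i < N ) {
        float x = in[i];
        out[i] = (x < 0.0f) ? 0.0f : x; // short conditional: favors predication
    }
}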
Listing 8.2 gives an excellent example of predication, as expressed in microcode. When performing an atomic operation on a shared memory location, the compiler emits code that loops over the shared memory location until the atomic operation has been performed successfully. The LDSLK (load shared and lock) instruction returns a condition code that tells whether the lock was acquired, and the instructions that perform the operation are predicated on that condition code.
/*0058*/ LDSLK P0, R2, [R3];
/*0060*/ @P0 IADD R2, R2, R0;
/*0068*/ @P0 STSUL [R3], R2;
/*0070*/ @!P0 BRA 0x58;
This code fragment also highlights how predication and branching sometimes work together: the final instruction, a conditional branch that attempts to reacquire the lock if necessary, is itself predicated.
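For reference, a shared-memory atomic add along the following lines generates this style of lock/update/unlock loop on Fermi-class hardware. This is only a minimal sketch of what the source for Listing 8.2 might look like; the actual listing may differ:

// Minimal sketch (hypothetical): each thread atomically adds its input
// element into a shared memory accumulator. The compiler implements the
// shared-memory atomicAdd() as the LDSLK/IADD/STSUL loop shown above.
__global__ void
sumBlock( int *out, const int *in )
{
    __shared__ int sum;
    if ( threadIdx.x == 0 )
        sum = 0;
    __syncthreads();
    atomicAdd( &sum, in[blockIdx.x*blockDim.x + threadIdx.x] );
    __syncthreads();
    if ( threadIdx.x == 0 )
        out[blockIdx.x] = sum;
}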
8.4.2. Divergence and Convergence
Predication works well for small fragments of conditional code, such as if statements with no corresponding else clause. For larger amounts of conditional code, predication becomes inefficient because every instruction is executed, whether or not it will affect the computation. When the larger number of instructions causes the costs of predication to exceed the benefits, the compiler uses conditional branches instead. When the flow of execution within a warp takes different paths depending on a condition, the code is said to be divergent.
NVIDIA is close-mouthed about the details of how its hardware supports divergent code paths, and it reserves the right to change the implementation between hardware generations. The hardware maintains a bit vector of active threads within each warp; for threads marked inactive, execution is suppressed in a way similar to predication. Before taking a branch, the compiler executes a special instruction to push this active-thread bit vector onto a stack. The code is then executed twice: once for threads for which the condition was TRUE, then for threads for which it was FALSE. This two-phased execution is managed with a branch synchronization stack, as described by Lindholm et al.:
- If threads of a warp diverge via a data-dependent conditional branch, the warp serially executes each branch path taken, disabling threads that are not on that path, and when all paths complete, the threads reconverge to the original execution path. The SM uses a branch synchronization stack to manage independent threads that diverge and converge. Branch divergence only occurs within a warp; different warps execute independently regardless of whether they are executing common or disjoint code paths.
The PTX specification makes no mention of a branch synchronization stack, so the only publicly available evidence of its existence is in the disassembly output of cuobjdump. The SSY instruction pushes state, such as the program counter and the active-thread mask, onto the stack; the .S instruction prefix pops this state and, if any active threads did not take the branch, causes those threads to execute the code path whose state was snapshotted by SSY.
SSY/.S is only necessary when threads of execution may diverge, so if the compiler can guarantee that threads will stay uniform in a code path, you may see branches that are not bracketed by SSY/.S. The important thing to realize about branching in CUDA is that in all cases, it is most efficient for all threads within a warp to follow the same execution path.
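As an illustration, the inner branch in the hypothetical kernel below depends on the thread index, so every warp contains threads that take each path. If the clauses are too long to predicate profitably, the compiler emits conditional branches bracketed by SSY/.S, and the hardware executes the two clauses serially:

// Hypothetical kernel: odd and even lanes of each warp take different
// paths, so the branch diverges and both clauses execute serially,
// with inactive threads suppressed during each pass.
__global__ void
divergentBranch( float *out, const float *in, size_t N )
{
    size_t i = blockIdx.x*blockDim.x + threadIdx.x;
    if ( i < N ) {
        if ( threadIdx.x & 1 )
            out[i] = in[i]*2.0f + 1.0f; // odd lanes
        else
            out[i] = in[i]*0.5f - 1.0f; // even lanes
    }
}

By contrast, a condition that is uniform across each warp, such as one computed from blockIdx.x, incurs no divergence penalty: every warp takes a single path through the branch.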
The loop in Listing 8.2 also includes a good self-contained example of divergence and convergence. The SSY instruction (offset 0x40) and NOP.S instruction (offset 0x78) bracket the points of divergence and convergence, respectively. The code loops over the LDSLK and subsequent predicated instructions, retiring active threads until the compiler knows that all threads will have converged and the branch synchronization stack can be popped with the NOP.S instruction.
/*0040*/ SSY 0x80;
/*0048*/ BAR.RED.POPC RZ, RZ;
/*0050*/ LD R0, [R0];
/*0058*/ LDSLK P0, R2, [R3];
/*0060*/ @P0 IADD R2, R2, R0;
/*0068*/ @P0 STSUL [R3], R2;
/*0070*/ @!P0 BRA 0x58;
/*0078*/ NOP.S CC.T;
8.4.3. Special Cases: Min, Max, and Absolute Value
Some conditional operations are so common that they are supported natively by the hardware. Minimum and maximum operations are supported for both integer and floating-point operands and are translated to a single instruction. Additionally, floating-point instructions include modifiers that can negate or take the absolute value of a source operand.
The compiler does a good job of detecting when min/max operations are being expressed, but if you want to take no chances, call the min()/max() intrinsics for integers, or fminf()/fmaxf() (single precision) and fmin()/fmax() (double precision) for floating-point values.
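For instance, each conditional operation in the following hypothetical kernel typically compiles to a single instruction on Fermi-class hardware (IMNMX for integer min/max, FMNMX for floating-point min/max), with the negation and absolute value folded into operand modifiers:

// Illustrative kernel (hypothetical): integer and floating-point min/max,
// with negation and absolute value applied to the floating-point operands.
__global__ void
minMaxAbs( int *iout, const int *ia, const int *ib,
           float *fout, const float *fa, const float *fb, size_t N )
{
    size_t i = blockIdx.x*blockDim.x + threadIdx.x;
    if ( i < N ) {
        iout[i] = max( min( ia[i], ib[i] ), 0 );   // integer min/max
        fout[i] = fmaxf( fabsf( fa[i] ), -fb[i] ); // |fa[i]| and -fb[i] map to operand modifiers
    }
}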