6.2 Vertex Shader
The vertex shader embodies the operations that occur on each vertex that is provided to OpenGL. To define our vertex shader, we need to answer three questions.
- What data must be passed to the vertex shader for every vertex (i.e., generic attribute variables or in variables)?
- What global state is required by the vertex shader (i.e., uniform variables)?
- What values are computed by the vertex shader (i.e., out variables)?
Let's look at these questions one at a time.
We can't draw any geometry at all without specifying a value for each vertex position. Furthermore, we can't do any lighting unless we have a surface normal for each location for which we want to apply a lighting computation. So at the very least, we need a vertex position and a normal for every incoming vertex. We'll define two global in-qualified variables: MCvertex, which holds the vertex position in modeling coordinates, and MCnormal, which holds the surface normal in modeling coordinates:
in vec4 MCvertex;
in vec3 MCnormal;
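The application supplies these values through generic vertex attributes, associating each attribute with a location either by binding it before the program is linked or by querying it afterward. As an aside, a shader written for GLSL 3.30 or later (or one using the ARB_explicit_attrib_location extension) could instead fix the locations in the shader itself; here is a minimal sketch, in which the location numbers 0 and 1 are assumptions chosen for illustration:

// Sketch for GLSL 3.30+; the specific locations are assumptions.
layout(location = 0) in vec4 MCvertex;
layout(location = 1) in vec3 MCnormal;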
We need access to several matrices for our brick algorithm. We need to access the current modelview-projection matrix (MVPMatrix) in order to transform our vertex position into the clipping coordinate system. We need to access the current modelview matrix (MVMatrix) in order to transform the vertex position into eye coordinates for use in the lighting computation. And we also need to transform our incoming normals into eye coordinates by using OpenGL's normal transformation matrix (NormalMatrix, which is just the inverse transpose of the upper-left 3 × 3 subset of MVMatrix).
uniform mat4 MVMatrix;
uniform mat4 MVPMatrix;
uniform mat3 NormalMatrix;
In addition, we need the position of a single light source. We define the light source position as a uniform variable like this:
uniform vec3 LightPosition;
We also need values for the lighting calculation to represent the contribution from specular reflection and the contribution from diffuse reflection. We could define these as uniform variables so that they could be changed dynamically by the application, but to illustrate some additional features of the language, we define them as constants like this:
const float SpecularContribution = 0.3;
const float DiffuseContribution = 1.0 - SpecularContribution;
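Had we chosen the uniform route instead, a minimal sketch would look like the following. (Uniform initializers have been allowed since GLSL 1.20; note that a uniform's initializer must be a constant expression, so the second value is written out as a literal rather than derived from the first.)

// Sketch: the same factors as uniforms, adjustable via glUniform1f.
// Initializers supply the defaults; 0.7 is written out because a
// uniform initializer cannot reference another uniform.
uniform float SpecularContribution = 0.3;
uniform float DiffuseContribution = 0.7;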
Finally, we need to define the values that are passed on to the fragment shader. Every vertex shader must compute the homogeneous vertex position and store its value in the standard variable gl_Position, so we know that our brick vertex shader must do likewise. The fragment shader computes the brick pattern on the fly as a function of the incoming geometry's x and y values in modeling coordinates, so we define an out variable called MCposition for this purpose. To apply a lighting effect on top of the brick pattern, we do most of the lighting computation in the vertex shader and simply pass the computed light intensity to the fragment shader in an out variable called LightIntensity; the fragment shader applies this intensity after it has computed the brick/mortar color. These two out variables are defined like this:
out float LightIntensity;
out vec2 MCposition;
We're now ready to get to the meat of our brick vertex shader. We begin by declaring a main function for our vertex shader and computing the vertex position in eye coordinates:
void main()
{
    vec3 ecPosition = vec3(MVMatrix * MCvertex);
In this first line of code, our vertex shader defines a variable called ecPosition to hold the eye coordinate position of the incoming vertex. We compute the eye coordinate position by transforming the vertex position (MCvertex) by the current modelview matrix (MVMatrix). Because one of the operands is a matrix and the other is a vector, the * operator performs a matrix multiplication operation rather than a component-wise multiplication.
The result of the matrix multiplication is a vec4, but ecPosition is defined as a vec3. There is no automatic conversion between variables of different types in the OpenGL Shading Language, so we convert the result to a vec3 by using a constructor. This causes the fourth component of the result to be dropped so that the result is compatible with the vec3 variable it initializes. (Constructors provide an operation that is similar to type casting, but it is much more flexible, as discussed in Section 3.3.) As we'll see, the eye coordinate position is used a couple of times in our lighting calculation.
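The following fragment, which is illustrative rather than part of the brick shader, shows both behaviors at work: the linear-algebraic * for a matrix and a vector, and a constructor that drops a component. (If a component-wise matrix product is ever needed, GLSL provides the built-in function matrixCompMult for that purpose.)

// Illustrative only; not part of the brick shader.
vec4 ec4 = MVMatrix * MCvertex;  // linear-algebraic multiply: a vec4
vec3 a   = vec3(ec4);            // constructor drops the w component
vec3 b   = ec4.xyz;              // equivalent selection with a swizzle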
The lighting computation that we perform is a simple one. Some light from the light source is reflected in a diffuse fashion (i.e., in all directions). Where the viewing direction is very nearly the same as the reflection direction from the light source, we see a specular reflection. To compute the diffuse reflection, we need to compute the angle between the incoming light and the surface normal. To compute the specular reflection, we need to compute the angle between the reflection direction and the viewing direction. First, we transform the incoming normal:
vec3 tnorm = normalize(NormalMatrix * MCnormal);
This line defines a new variable called tnorm for storing the transformed normal (remember, in the OpenGL Shading Language, variables can be declared when needed). The incoming surface normal (MCnormal, a user-defined in variable for accessing the normal value through a generic vertex attribute) is transformed by the current normal transformation matrix (NormalMatrix). The resulting vector is normalized (converted to a vector of unit length) by the built-in function normalize, and the result is stored in tnorm.
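Incidentally, if the application could not conveniently supply NormalMatrix, a shader compiled with #version 140 or later could derive it from MVMatrix, since the built-in functions inverse and transpose are both available from GLSL 1.40 on. This is only a sketch of the idea, not the approach used here; recomputing the inverse for every vertex is wasteful compared with having the application compute it once:

// Sketch (GLSL 1.40+): derive the normal matrix in the shader.
mat3 nMatrix = transpose(inverse(mat3(MVMatrix)));
vec3 tnorm   = normalize(nMatrix * MCnormal);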
Next, we need to compute a vector from the current point on the surface of the three-dimensional object we're rendering to the light source position. Both of these should be in eye coordinates (which means that the value for our uniform variable LightPosition must be provided by the application in eye coordinates). The light direction vector is computed as follows:
vec3 lightVec = normalize(LightPosition - ecPosition);
The object position in eye coordinates was previously computed and stored in ecPosition. To compute the light direction vector, we subtract the object position from the light position. The resulting light direction vector is also normalized and stored in the newly defined local variable lightVec.
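If the application instead stored the light position in world coordinates, it would have to transform that position into eye coordinates before loading the uniform, or the shader would need a world-to-eye matrix of its own. A hypothetical sketch of the in-shader variant, in which the names ViewMatrix and WCLightPosition are assumptions invented for illustration:

// Hypothetical: transform a world-space light into eye coordinates.
uniform mat4 ViewMatrix;       // assumed world-to-eye transform
uniform vec3 WCLightPosition;  // assumed light position, world space

// inside main():
vec3 ecLight  = vec3(ViewMatrix * vec4(WCLightPosition, 1.0));
vec3 lightVec = normalize(ecLight - ecPosition);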
The calculations we've done so far have set things up almost perfectly to call the built-in function reflect. Using our transformed surface normal and the computed incident light vector, we can now compute a reflection vector at the surface of the object; however, reflect requires the incident vector (the direction from the light to the surface), and we've computed the direction to the light source. Negating lightVec gives us the proper vector:
vec3 reflectVec = reflect(-lightVec, tnorm);
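The GLSL specification defines reflect(I, N) as I - 2 * dot(N, I) * N, so the call above is equivalent to writing the expression out by hand:

// Illustrative expansion of the built-in; not needed in practice.
vec3 reflectVec = -lightVec - 2.0 * dot(tnorm, -lightVec) * tnorm;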
Because both vectors used in this computation were unit vectors, the resulting vector is a unit vector as well. To complete our lighting calculation, we need one more vector—a unit vector in the direction of the viewing position. Because, by definition, the viewing position is at the origin (i.e., (0,0,0)) in the eye coordinate system, we can simply negate and normalize the computed eye coordinate position, ecPosition:
vec3 viewVec = normalize(-ecPosition);
With these four vectors, we can perform a per-vertex lighting computation. The relationship of these vectors is shown in Figure 6.2.
Figure 6.2 Vectors involved in the lighting computation for the brick vertex shader
The modeling of diffuse reflection is based on the assumption that the incident light is scattered in all directions according to a cosine distribution function. The reflection of light is strongest when the light direction vector and the surface normal are coincident. As the angle between the two vectors increases to 90°, the diffuse reflection drops off to zero. Because both vectors have been normalized to produce unit vectors, we can determine the cosine of the angle between lightVec and tnorm by performing a dot product operation between those vectors. We want the diffuse contribution to be 0 if the angle between the light and the surface normal is greater than 90° (there should be no diffuse contribution if the light is behind the object), and the max function accomplishes this:
float diffuse = max(dot(lightVec, tnorm), 0.0);
The specular component of the light intensity for this vertex is computed by
float spec = 0.0;

if (diffuse > 0.0)
{
    spec = max(dot(reflectVec, viewVec), 0.0);
    spec = pow(spec, 16.0);
}
The variable for the specular reflection value is defined and initialized to 0. We compute a specular value other than 0 only if the angle between the light direction vector and the surface normal is less than 90° (i.e., the diffuse value is greater than 0) because we don't want any specular highlights if the light source is behind the object. Because both reflectVec and viewVec are normalized, computing the dot product of these two vectors gives us the cosine of the angle between them. If the angle is near zero (i.e., the reflection vector and the viewing vector are almost the same), the resulting value is near 1.0. By raising the result to the 16th power in the subsequent line of code, we effectively "sharpen" the highlight, ensuring that we have a specular highlight only in the region where the reflection vector and the view vector are almost the same. The choice of 16 for the exponent value is arbitrary. Higher values produce more concentrated specular highlights, and lower values produce less concentrated highlights. This value could also be passed in as a uniform variable so that it can be easily modified by the end user.
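A minimal sketch of that suggestion, in which the uniform name Shininess is an assumption chosen for illustration:

// Sketch: expose the specular exponent to the application.
uniform float Shininess;  // hypothetical uniform; e.g., set to 16.0

// inside main(), replacing the literal exponent:
spec = pow(spec, Shininess);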
All that remains is to multiply the computed diffuse and specular reflection values by the DiffuseContribution and SpecularContribution constants and sum the two values:
LightIntensity = DiffuseContribution * diffuse + SpecularContribution * spec;
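In equation form, with L, N, R, and V standing for lightVec, tnorm, reflectVec, and viewVec, and with the specular term present only when the diffuse term is positive, this computes

LightIntensity = DiffuseContribution · max(L · N, 0) + SpecularContribution · max(R · V, 0)^16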
This value is assigned to the out variable LightIntensity and will be interpolated between vertices. We also have one other out variable to compute, and we can do that quite easily:
MCposition = MCvertex.xy;
When the brick pattern is applied to a geometric object, we want the pattern to remain constant with respect to the surface of the object, no matter how the object is moved and no matter what the viewing position. To generate the brick pattern algorithmically in the fragment shader, we need to provide a value at each fragment that represents a location on the surface. For this example, we provide the modeling coordinate at each vertex by setting our out variable MCposition to the same value as our incoming vertex position (which is, by definition, in modeling coordinates).
We don't need the z or w coordinate in the fragment shader, so we need a way to select just the x and y components of MCvertex. We could have used a constructor here (e.g., vec2(MCvertex)), but to show off another language feature, we use the component selector .xy to select the first two components of MCvertex and store them in our out variable MCposition.
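Swizzles can select, reorder, and replicate components. A few illustrative examples, none of which are part of the brick shader:

// Illustrative swizzles; not part of the brick shader.
vec2 p = MCvertex.xy;     // first two components
vec2 q = vec2(MCvertex);  // same result via a constructor
vec3 r = MCvertex.zyx;    // three components, selected in reverse
vec4 s = MCvertex.xxyy;   // components may be repeated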
All that remains to be done is what all vertex shaders must do: compute the homogeneous vertex position. We do this by transforming the incoming vertex value by the current modelview-projection matrix:
    gl_Position = MVPMatrix * MCvertex;
}
For clarity, the code for our vertex shader is provided in its entirety in Listing 6.1.
Listing 6.1. Source code for brick vertex shader
#version 140

in vec4 MCvertex;
in vec3 MCnormal;

uniform mat4 MVMatrix;
uniform mat4 MVPMatrix;
uniform mat3 NormalMatrix;
uniform vec3 LightPosition;

const float SpecularContribution = 0.3;
const float DiffuseContribution = 1.0 - SpecularContribution;

out float LightIntensity;
out vec2 MCposition;

void main()
{
    vec3 ecPosition = vec3(MVMatrix * MCvertex);
    vec3 tnorm      = normalize(NormalMatrix * MCnormal);
    vec3 lightVec   = normalize(LightPosition - ecPosition);
    vec3 reflectVec = reflect(-lightVec, tnorm);
    vec3 viewVec    = normalize(-ecPosition);

    float diffuse = max(dot(lightVec, tnorm), 0.0);
    float spec    = 0.0;

    if (diffuse > 0.0)
    {
        spec = max(dot(reflectVec, viewVec), 0.0);
        spec = pow(spec, 16.0);
    }

    LightIntensity = DiffuseContribution * diffuse +
                     SpecularContribution * spec;

    MCposition  = MCvertex.xy;
    gl_Position = MVPMatrix * MCvertex;
}