6.2 Vertex Shader
The vertex shader embodies the operations that will occur on each vertex that is provided to OpenGL. To define our vertex shader, we need to answer three questions.
- What data must be passed to the vertex shader for every vertex (i.e., attribute variables)?
- What global state is required by the vertex shader (i.e., uniform variables)?
- What values are computed by the vertex shader (i.e., varying variables)?
Let's look at these questions one at a time.
We can't draw any geometry at all without specifying a value for each vertex position. Furthermore, we can't do any lighting unless we have a surface normal for each location for which we want to apply a lighting computation. So at the very least, we'll need a vertex position and a normal for every incoming vertex. These attributes are already defined as part of OpenGL, and the OpenGL Shading Language provides built-in variables to refer to them (gl_Vertex and gl_Normal). If we use the standard OpenGL entry points for passing vertex positions and normals, we don't need any user-defined attribute variables in our vertex shader. We can access the current values for vertex position and normal simply by referring to gl_Vertex and gl_Normal.
We need access to several pieces of OpenGL state for our brick algorithm. These are available to our shader as built-in uniform variables. We'll need to access the current modelview-projection matrix (gl_ModelViewProjectionMatrix) in order to transform our vertex position into the clipping coordinate system. We'll need to access the current modelview matrix (gl_ModelViewMatrix) in order to transform the vertex position into eye coordinates for use in the lighting computation. And we'll also need to transform our incoming normals into eye coordinates using OpenGL's normal transformation matrix (gl_NormalMatrix, which is just the inverse transpose of the upper-left 3 × 3 subset of gl_ModelViewMatrix).
In addition, we'll need the position of a single light source. We could use the OpenGL lighting state and reference that state within our vertex shader, but in order to illustrate the use of uniform variables, we'll define the light source position as a uniform variable like this:
uniform vec3 LightPosition;
We also need values for the lighting calculation to represent the contribution due to specular reflection and the contribution due to diffuse reflection. We could define these as uniform variables so that they could be changed dynamically by the application, but in order to illustrate some additional features of the language, we'll define them as constants like this:
const float SpecularContribution = 0.3;
const float DiffuseContribution = 1.0 - SpecularContribution;
Finally, we need to define the values that will be passed on to the fragment shader. Every vertex shader must compute the homogeneous vertex position and store its value in the standard variable gl_Position, so we know that our brick vertex shader will need to do likewise. We're going to compute the brick pattern on-the-fly in the fragment shader as a function of the incoming geometry's x and y values in modeling coordinates, so we'll define a varying variable called MCposition for this purpose. In order to apply the lighting effect on top of our brick, we'll need to do part of the lighting computation in the fragment shader and apply the final lighting effect after the brick/mortar color has been computed in the fragment shader. We'll do most of the lighting computation in the vertex shader and simply pass the computed light intensity to the fragment shader in a varying variable called LightIntensity. These two varying variables are defined like this:
varying float LightIntensity;
varying vec2  MCposition;
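Varying variables form the interface between the vertex shader and the fragment shader, so the fragment shader we develop next must declare the same variables on its side of the interface:

    // Matching declarations on the fragment shader side
    varying float LightIntensity;
    varying vec2  MCposition;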
We're now ready to get to the meat of our brick vertex shader. We begin by declaring a main function for our vertex shader and computing the vertex position in eye coordinates:
void main(void)
{
    vec3 ecPosition = vec3(gl_ModelViewMatrix * gl_Vertex);
In the first line of the body of main, our vertex shader defines a variable called ecPosition to hold the eye coordinate position of the incoming vertex. The eye coordinate position is computed by transforming the vertex position (gl_Vertex) by the current modelview matrix (gl_ModelViewMatrix). Because one of the operands is a matrix and the other is a vector, the * operator performs a matrix multiplication operation rather than a component-wise multiplication.
The result of the matrix multiplication is a vec4, but ecPosition is defined as a vec3. There is no automatic conversion between variables of different types in the OpenGL Shading Language, so we convert the result to a vec3 with a constructor. This drops the fourth component of the result so that the two sides of the assignment have compatible types. (Constructors provide an operation that is similar to type casting, but one that is much more flexible, as discussed in Section 3.3.) As we'll see, the eye coordinate position is used a couple of times in our lighting calculation.
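To make the conversion explicit, here is a sketch of what the constructor form is doing (the intermediate name ec4 is illustrative only):

    vec4 ec4        = gl_ModelViewMatrix * gl_Vertex;  // the full vec4 result
    vec3 ecPosition = vec3(ec4);                       // the constructor drops ec4.w
    // equivalent to: vec3 ecPosition = vec3(ec4.x, ec4.y, ec4.z);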
The lighting computation that we'll perform is a very simple one. Some light from the light source will be reflected in a diffuse fashion (i.e., in all directions). Where the viewing direction is very nearly the same as the reflection direction from the light source, we'll see a specular reflection. To compute the diffuse reflection, we'll need to compute the angle between the incoming light and the surface normal. To compute the specular reflection, we'll need to compute the angle between the reflection direction and the viewing direction. First, we'll transform the incoming normal:
vec3 tnorm = normalize(gl_NormalMatrix * gl_Normal);
This line defines a new variable called tnorm for storing the transformed normal (remember, in the OpenGL Shading Language, variables can be declared when needed). The incoming surface normal (gl_Normal, a built-in variable for accessing the normal value supplied through the standard OpenGL entry points) is transformed by the current OpenGL normal transformation matrix (gl_NormalMatrix). The resulting vector is normalized (converted to a vector of unit length) by calling the built-in function normalize, and the result is stored in tnorm.
Next, we need to compute a vector from the current point on the surface of the three-dimensional object we're rendering to the light source position. Both of these should be in eye coordinates (which means that the value for our uniform variable LightPosition must be provided by the application in eye coordinates). The light direction vector is computed as follows:
vec3 lightVec = normalize(LightPosition - ecPosition);
The object position in eye coordinates was previously computed and stored in ecPosition. To compute the light direction vector, we need to subtract the object position from the light position. The resulting light direction vector is also normalized and stored in the newly defined local variable lightVec.
The calculations we've done so far have set things up almost perfectly to call the built-in function reflect. Using our transformed surface normal and the computed incident light vector, we can now compute a reflection vector at the surface of the object; however, reflect requires the incident vector (the direction from the light to the surface), and we've computed the direction to the light source. Negating lightVec gives us the proper vector:
vec3 reflectVec = reflect(-lightVec, tnorm);
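The built-in function reflect(I, N) is defined as I - 2.0 * dot(N, I) * N for a unit-length N, so the line above could also be written out by hand (the name incident is illustrative only):

    vec3 incident   = -lightVec;   // direction from the light toward the surface
    vec3 reflectVec = incident - 2.0 * dot(tnorm, incident) * tnorm;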
Because both vectors used in this computation were unit vectors, the resulting vector is a unit vector as well. To complete our lighting calculation, one more vector is needed: a unit vector in the direction of the viewing position. Because, by definition, the viewing position is at the origin (i.e., (0,0,0)) in the eye coordinate system, we simply need to negate and normalize the computed eye coordinate position, ecPosition:
vec3 viewVec = normalize(-ecPosition);
With these four vectors, we can perform a per-vertex lighting computation. The relationship of these vectors is shown in Figure 6.2.
Figure 6.2. Vectors involved in the lighting computation for the brick vertex shader
Diffuse reflection is modeled by assuming that the incident light is scattered in all directions according to a cosine distribution function. The reflection of light is strongest when the light direction vector and the surface normal are coincident, and as the angle between the two vectors increases toward 90°, the diffuse reflection drops off to zero. Because both vectors have been normalized to produce unit vectors, the cosine of the angle between lightVec and tnorm can be determined by performing a dot product operation between them. We want the diffuse contribution to be 0 if the angle between the light and the surface normal is greater than 90° (there should be no diffuse contribution if the light is behind the object), and the max function accomplishes this:
float diffuse = max(dot(lightVec, tnorm), 0.0);
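Because the dot product of two unit vectors never exceeds 1.0, the built-in clamp function would produce an identical result here; a minimal alternative sketch:

    float diffuse = clamp(dot(lightVec, tnorm), 0.0, 1.0);  // same result for unit vectors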
The specular component of the light intensity for this vertex is computed by
float spec = 0.0;

if (diffuse > 0.0)
{
    spec = max(dot(reflectVec, viewVec), 0.0);
    spec = pow(spec, 16.0);
}
The variable for the specular reflection value is defined and initialized to 0. We compute a nonzero specular value only if the angle between the light direction vector and the surface normal is less than 90° (i.e., the diffuse value is greater than 0), because we don't want any specular highlight if the light source is behind the object. Because both reflectVec and viewVec are normalized, the dot product of these two vectors gives the cosine of the angle between them. If the angle is near zero (i.e., the reflection vector and the viewing vector are almost the same), the resulting value is near 1.0. Raising the result to the 16th power in the subsequent line of code effectively "sharpens" the highlight, ensuring that there is a specular highlight only in the region where the reflection vector and the view vector are almost the same. The choice of 16 for the exponent value is arbitrary: higher values produce more concentrated specular highlights, and lower values produce broader ones. This value could also be passed in as a uniform variable so that it can be easily modified by the end user.
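For instance, a sketch of that uniform-based variation might look like the following (the name SpecularExponent is hypothetical, not part of the original shader):

    uniform float SpecularExponent;  // hypothetical; the application might set it to 16.0

    float specularTerm(vec3 reflectVec, vec3 viewVec)
    {
        float s = max(dot(reflectVec, viewVec), 0.0);
        return pow(s, SpecularExponent);  // exponent is now under application control
    }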
All that remains is to multiply the computed diffuse and specular reflection values by the DiffuseContribution and SpecularContribution constants and add the two values together:
LightIntensity = DiffuseContribution * diffuse + SpecularContribution * spec;
This is the value that will be assigned to the varying variable LightIntensity and interpolated between vertices. We also have one other varying variable to compute, and it is done quite easily:
MCposition = gl_Vertex.xy;
When the brick pattern is applied to a geometric object, we want it to remain fixed to the surface of the object, no matter how the object is moved or how it is viewed. In order to generate the brick pattern algorithmically in the fragment shader, we need to provide a value at each fragment that represents a location on the surface. For this example, we provide the modeling coordinate at each vertex by setting our varying variable MCposition to the same value as our incoming vertex position (which is, by definition, in modeling coordinates).
We're not going to need the z or w coordinate in the fragment shader, so we need a way to select the x and y components of gl_Vertex. We could have used a constructor here (e.g., vec2 (gl_Vertex)), but to show off another language feature, we'll use the component selector .xy to select the first two components of gl_Vertex and store them in our varying variable MCposition.
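Component selection is quite flexible; a few illustrative examples with arbitrary values:

    vec4 v = vec4(1.0, 2.0, 3.0, 4.0);
    vec2 a = v.xy;       // component selection: (1.0, 2.0)
    vec2 b = vec2(v);    // constructor form: same result
    vec2 c = v.yx;       // swizzling can also reorder: (2.0, 1.0)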
The only thing that remains to be done is the thing that must be done by all vertex shaders: computing the homogeneous vertex position. We do this by transforming the incoming vertex value by the current modelview-projection matrix using the built-in function ftransform:
    gl_Position = ftransform();
}
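The advantage of ftransform is that its result is guaranteed to match the position that the fixed-function pipeline would compute for the same vertex. If that invariance is not needed, the transformation can be written out directly; a minimal sketch:

    void main(void)
    {
        // Same transform, but without ftransform's guarantee of exactly
        // matching the position computed by the fixed-function pipeline:
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }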
For clarity, the code for our vertex shader is provided in its entirety in Listing 6.1.
Listing 6.1. Source code for brick vertex shader
uniform vec3 LightPosition;

const float SpecularContribution = 0.3;
const float DiffuseContribution = 1.0 - SpecularContribution;

varying float LightIntensity;
varying vec2  MCposition;

void main(void)
{
    vec3 ecPosition = vec3(gl_ModelViewMatrix * gl_Vertex);
    vec3 tnorm      = normalize(gl_NormalMatrix * gl_Normal);
    vec3 lightVec   = normalize(LightPosition - ecPosition);
    vec3 reflectVec = reflect(-lightVec, tnorm);
    vec3 viewVec    = normalize(-ecPosition);

    float diffuse = max(dot(lightVec, tnorm), 0.0);
    float spec    = 0.0;

    if (diffuse > 0.0)
    {
        spec = max(dot(reflectVec, viewVec), 0.0);
        spec = pow(spec, 16.0);
    }

    LightIntensity = DiffuseContribution * diffuse +
                     SpecularContribution * spec;

    MCposition  = gl_Vertex.xy;
    gl_Position = ftransform();
}