Passing Data to the GPU
The GPU expects two things: programs and data. We've already seen how to load the two programs that the GPU needs, but it still needs some data to render. That data comes in several forms. The two large datasets that you'll need to pass to the GPU are textures and models. Textures contain raster images that are wrapped around models, which are collections of vertex data used to define shapes in 3D space.
Textures tend to be quite large. Fortunately, JavaScript already has some support for dealing with this kind of data. Images have been part of the Web since Mosaic, and JavaScript has provided an Image object to represent them for almost as long. Creating a new Image object and assigning a URL to its src attribute causes the browser to load the image data asynchronously. You just need to copy it across to the GPU. This happens in much the same way as with OpenGL: You create a texture object, bind it, and copy the image data into it with the context object's texImage2D() method. This method is quite similar to the glTexImage2D() function in the OpenGL ES API, although the final argument is a WebGLArray object containing the texel values, rather than a pointer.
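As a rough sketch (gl is assumed to be a WebGL context obtained from a canvas, and 'brick.png' is a stand-in URL; the exact texImage2D() argument list has varied between specification drafts, and the form shown here takes the Image object directly):

var texture = gl.createTexture();
var image = new Image();
image.onload = function () {
    // The image has arrived; copy its pixels across to the GPU.
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
};
image.src = 'brick.png';   // triggers the asynchronous load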
WebGL also provides more convenient versions of this function. You don't need to specify the size of the image if you're providing an Image object rather than an array of texel values, because the Image object already contains this information. More interestingly, you're not restricted to providing static data: You can also provide an HTML5 <canvas> or <video> element.
The first of these options has some quite interesting applications. For example, you could write a 3D compositing windowing system in WebGL. Each component of your application would render to a 2D HTML canvas, you'd create a texture from each of these, and then you'd use WebGL to composite them into the browser window.
Note that in both of these cases you need to update the texture manually when the source changes. This is easy with the <video> tag: Just register an event listener for the timeupdate event. For the <canvas> tag, pushing the changes yourself is generally better, because you know when you've finished a drawing; you don't want to copy the entire canvas across to the GPU just because you've drawn a single line. Both approaches are sketched below.
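A minimal sketch of both approaches, assuming the texture from earlier and existing video and canvas elements (uploadTexture() and finishDrawing() are hypothetical helpers, not part of the API):

function uploadTexture(source) {
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, source);
}

// Video: let the browser tell you when playback has advanced.
video.addEventListener('timeupdate', function () {
    uploadTexture(video);
}, false);

// Canvas: push the update yourself, once a complete drawing is finished.
function finishDrawing() {
    // ... 2D drawing via canvas.getContext('2d') happens before this point ...
    uploadTexture(canvas);
}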
Unfortunately, there is no similarly convenient mechanism for loading vertex data. It would be nice if WebGL incorporated some well-defined binary vertex-data format that could be generated by 3D modeling programs and loaded directly by the browser, without going via JavaScript, but unfortunately it doesn't work that way (yet).
When you load vertex data to the GPU, you need to use the WebGLArray object hierarchy. WebGLArray defines an abstract interface for collections of primitive data. The subtype you'll use most often is WebGLFloatArray, which defines an array of floating-point values. These typed arrays are essentially range-checked wrappers around a blob of memory.
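For example, you can build one directly from an ordinary JavaScript array. The triangle data here is purely illustrative, and note that later revisions of the specification renamed these types (WebGLFloatArray became Float32Array):

// Three vertices of a triangle, packed as x, y, z floats.
var vertices = new WebGLFloatArray([
     0.0,  1.0,  0.0,
    -1.0, -1.0,  0.0,
     1.0, -1.0,  0.0
]);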
These types let you do the sort of unsafe pointer cast that JavaScript is otherwise designed to prevent. Each array object has an associated array buffer object, which is the JavaScript equivalent of a C void* plus a size. Effectively, you can think of these two expressions as equivalent:
new WebGLArrayBuffer(n)    // JavaScript
calloc(1, n)               // C
If you associate the same WebGLArrayBuffer object with two typed array objects (for example, a WebGLFloatArray and a WebGLByteArray), you can access the data in both formats. This is the equivalent of performing a cast between a char* and a float* in C.
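A minimal sketch of this aliasing, again using the draft type names (the buffer-taking constructors are assumptions based on the draft specification):

var buffer = new WebGLArrayBuffer(16);      // 16 raw bytes, zero-filled
var floats = new WebGLFloatArray(buffer);   // view the bytes as 4 floats
var bytes  = new WebGLByteArray(buffer);    // view the same bytes as 16 bytes

floats[0] = 1.0;
// bytes[0] through bytes[3] now contain the IEEE 754 encoding of 1.0,
// just as if you had cast a float* to a char* in C and read through it.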
Once you've created data in this form, from any source, you can create a vertex buffer object using the createBuffer() method on the context, bind it with bindBuffer(), and load the data into it with bufferData(). All of these calls are directly equivalent to their OpenGL ES counterparts.
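Putting the pieces together, a sketch under the same assumptions (gl is the WebGL context and vertices is the WebGLFloatArray built earlier):

var vertexBuffer = gl.createBuffer();                      // cf. glGenBuffers()
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);              // cf. glBindBuffer()
gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);  // cf. glBufferData()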