A Complete, Practical Guide for VRML 2.0 World Builders

Version 2.0 of the Virtual Reality Modeling Language allows world designers to create interactive, animated 3D virtual worlds. The VRML 2.0 Handbook guides readers through the development of such a world, using a VRML reconstruction of the Aztec city Tenochtitlan. This guide offers practical, platform-independent tips and examples from the experts at Silicon Graphics, Inc., leaders in formulating and developing VRML. Detailed examples and diagrams provide a solid foundation in VRML 2.0 for a wide range of content creators, from artists and designers with little programming background to seasoned computer experts with modest graphics skills.
With VRML 2.0, you can create robots and people that walk and run, dogs that bark, and gurgling streams. You can design objects that react to user actions, such as doors that open when clicked. You can include sensors that respond when the user approaches a certain area, triggering an alarm, for instance, or starting an animation. This handbook explains how to use all of VRML 2.0's features.
Congratulations! You've built and painted an Aztec temple with your bare hands! But you're not done building your world yet. As any theater lighting designer can tell you, the most beautiful set in the world isn't much good if the audience can't see it; you need to light your temple. Furthermore, without sound it's very hard to create a convincing virtual world. And finally, as you create more advanced worlds you'll want to know how to build certain kinds of complex geometry without having to specify an indexed face set vertex by vertex.
There are three kinds of light nodes in VRML: DirectionalLight, PointLight, and SpotLight.
Lights in VRML are not like lights in the real world. Real lights are physical objects that emit light; you can see the object that emits the light as well as the light it emits, and that light reflects off of various other objects to allow you to see them. In VRML, a lighting node describes how a part of the world should be lit, but does not automatically create any geometry to represent the light source. If you want a light source in your scene to be a visible object, you need to create a piece of geometry (in the shape of a lightbulb, for instance, or of the sun) and place it in the appropriate place in the scene, which usually means at the same location as the light node. You may also want to assign an emissiveColor to the geometry (as part of the associated Appearance node) in order to make it look like the object is glowing; otherwise there's no indication that the object is associated with the light source.
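For example, you might pair a PointLight with a small glowing sphere at the same location (a minimal sketch; the position, colors, and sizes here are arbitrary choices for illustration):

Transform {
  translation 0 5 0
  children [
    PointLight {
      color 1 1 0.9
      radius 20
    }
    Shape {                       # visible "bulb" at the light's location
      appearance Appearance {
        material Material {
          emissiveColor 1 1 0.8   # makes the bulb appear to glow
        }
      }
      geometry Sphere { radius 0.2 }
    }
  ]
}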
You can turn this lack of geometry to your advantage. You may sometimes want to place a light close to an object without a lighting fixture getting in the way of users looking at the object. This sort of setup is ideal for situations in which making the world look nice is more important than a strict adherence to real-world physics. Theater lighting designers would be overjoyed to be able to light a set without the audience seeing the light fixtures.
Another difference between VRML lights, as currently implemented, and lights in the real world is that objects in VRML don't cast shadows. This fact is due to the way current VRML browsers (and the renderers they're based on) handle lighting; they don't attempt to simulate photons rushing around and bouncing off of things, but instead apply a lighting equation to each piece of geometry drawn, in order to shade surfaces realistically. The lighting equation combines the colors of the object in question (as indicated in the shape's Appearance node) with the colors of light available (as indicated by light nodes). This computation ignores the effects of opaque objects between the light and the geometry being lit.
It's possible to simulate shadows under some circumstances. You have to figure out what shape each shadow would be, and place flat, dark, semitransparent polygons where the shadow should fall. This approach makes it difficult to simulate moving shadows; it's probably best not to bother creating explicit shadows, except possibly for stationary objects lit by stationary lights. You can also create shading effects using per-vertex coloring (see Chapter 8).
A VRML lighting node lights only certain objects. Each PointLight and SpotLight node has a radius field that indicates how far the node's light can spread; any object outside that radius is not lit by the node, no matter how bright the light may be.
DirectionalLight nodes have a different kind of scope. A directional light affects only sibling objects, that is, objects that are children of the light's parent grouping node. It doesn't affect anything outside of the parent grouping node. You can use this fact to ensure the right scope for a directional light: to keep ceiling lights that are inside a room, for instance, from lighting up anything outside the room. However, the results of this scoping are not always intuitive; a directional light won't affect objects that aren't under the same grouping node as the light, even if those objects are right next to other objects that are lit. If your scene doesn't seem to be lit properly (objects seem to be lit that shouldn't be, or seem not to be lit when they should be), check that your directional lights are in the right grouping nodes.
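Here's a minimal sketch of this scoping: the Box, as a sibling of the light, is lit by it, while the Sphere outside the Group is not. (The direction field, described below, gives the direction of the light rays.)

Group {
  children [
    DirectionalLight {
      direction 0 -1 0   # rays point straight down
    }
    Shape {              # sibling of the light: lit by it
      appearance Appearance { material Material { } }
      geometry Box { }
    }
  ]
}
Shape {                  # outside the Group: not lit by this light
  appearance Appearance { material Material { } }
  geometry Sphere { }
}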
There are four fields that all three light nodes contain: on, color, intensity, and ambientIntensity.
The on field indicates whether the light is currently turned on. When a light is off, it doesn't contribute to the light in the scene at all. At first glance, you might wonder why you would want a light in your world if it's turned off; the answer is that you can change the value of this field (and thereby turn a light on or off) by sending an event with the appropriate value (TRUE or FALSE) to the light node. For information on how to send events, see Chapter 6.
If you use an object with an emissiveColor to represent the light source in your scene, be sure to modify the emissiveColor (by sending it an event) every time you turn the light node on or off. Otherwise, the light bulb geometry continues to glow even after the light source has been turned off.
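For example, a TouchSensor can route a boolean event into a light's on field. In this sketch (the node names are hypothetical), the lamp is lit only while the user holds the mouse button down over the switch geometry; a true on/off toggle, or keeping an emissiveColor in sync as just described, requires a Script node, covered in later chapters.

Group {
  children [
    DEF SWITCH TouchSensor { }
    Shape {    # clicking this box operates the sensor
      appearance Appearance { material Material { } }
      geometry Box { size 0.2 0.2 0.1 }
    }
  ]
}
DEF LAMP PointLight {
  on FALSE
  location 0 3 0
}
ROUTE SWITCH.isActive TO LAMP.set_on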
The color field indicates the color of the light. The light's color interacts with the various colors specified in an object's Material node to determine the color(s) of the object's surface.
intensity is a floating point value indicating how bright the light is, from 0 (emits no light at all) to 1 (maximum brightness).
The ambientIntensity field affects the ambient (indirect) lighting for lit objects. Its use is somewhat complex; at first you may wish to leave it at its default value of 0. Ambient light in VRML simulates light that doesn't go directly from a light source to an object, including light that's been scattered or reflected before it reaches an object. As such, ambient light does originally come from the light sources in the scene; although it may appear to be sourceless, it's not really.
If a light is on, its contribution to the scene's overall ambient lighting is computed (for each of red, green, and blue values) by multiplying the light's intensity by its ambientIntensity, and multiplying the result by the light's value for that color component. For instance, this light node:
PointLight {
  on TRUE
  intensity .75        # three-quarters maximum brightness
  ambientIntensity .5
  color 1 0 0          # red
}
contributes .75 × .5 × 1 = .375 to the red portion of the ambient lighting for the scene. All of the ambient light values for all of the lights in the scene are added up and applied to objects within the lights' scopes. Note that this means that changing a light's intensity also changes its contribution to the scene's ambient lighting.
Note that some browsers (and their underlying rendering systems) may consider ambient lighting to be an attribute of the scene rather than of individual lights; such browsers are likely to set up ambient lighting when the scene is loaded and not subsequently change it. Thus, if you change your ambient lighting after the scene is loaded (by routing events to the lights), you risk some users not being able to see the changes.
In the real world, light attenuates: it gets fainter (loses intensity) the farther you are from the light source. Specifically, the intensity of light at a given point is proportional to the inverse of the square of the distance from the light source. VRML can simulate attenuation for lights that have a location (the PointLight and SpotLight nodes), using the attenuation field. That field specifies three coefficients to be used by the browser in calculating an attenuation factor; the browser multiplies the attenuation factor by the light's intensity value to determine intensity at a given distance. The default is no attenuation. Not all browsers can handle full lighting attenuation; if you use attenuation in your scene, some users may not experience it.
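As a sketch (assuming, per the VRML specification, that the three values are the constant, linear, and quadratic coefficients of the falloff), setting only the quadratic term gives an inverse-square falloff like a real light's:

PointLight {
  location 0 2 0
  radius 15
  attenuation 0 0 1   # intensity falls off with the square of the distance
}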
Since directional lights don't have a specific location, it's impossible to calculate the distance from the light source, so you can't use attenuation for such lights.
A directional light is a light considered to be far away, or "at infinity," and that therefore illuminates a scene with parallel rays, all from one direction. In a DirectionalLight node, in addition to the standard on, intensity, ambientIntensity, and color values, you specify the direction the light travels (as a vector from the origin, parallel to the light rays) in the node's direction field.
Directional lights don't take as much processing power as other kinds of lights. You can use them to get reasonably good general lighting for a scene with very little impact on performance.
A point light is located at a specific point in space and illuminates in all directions, like a light bulb. Besides the four common lighting fields, a PointLight node contains fields to specify the light's location, its radius of effect (beyond which nothing is lit by it), and its attenuation.
Point lights are reasonably fast, but some systems can't handle more than a couple of them in a world at once. Every time you add a light to your world, try navigating through the world to make sure performance hasn't dropped too much.
A spotlight is located at a specific point in space and illuminates in a cone pointed in a specified direction. The intensity of the illumination drops off exponentially toward the edges of the cone; the rate of drop-off and the angle of the cone are controlled by the dropOffRate and cutOffAngle fields.
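A spotlight that lights a stage from above might look something like this (a minimal sketch; the values are arbitrary):

SpotLight {
  location 0 10 0
  direction 0 -1 0    # aim the cone straight down
  cutOffAngle 0.5     # half-angle of the cone, in radians
  intensity 1
  radius 30
}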
Spotlights are slow and take a lot of processing power. It's probably best to use them as little as you can. Be sure to test for performance every time you add one to your world.
In addition to placing lights, you can place sounds in a scene: anything from audio loops of ambient background noise (such as crickets or rain), to sound effects (bumping into things, explosions), to music, to speech. There are two sound-related nodes: the Sound node, which specifies the spatial parameters of a sound, such as its location and how far away it can be heard, and the AudioClip node, which provides information about a specific sound file to be played. (The MovieTexture node, described in Chapter 8, can also be used as a sound source.) An AudioClip node may occur only in the source field of a Sound node.
A Sound node is located at the specific place in the scene indicated by its location field (in local coordinates, of course). At and near that location, the sound can be heard at maximum volume, that is, at its recorded volume scaled by the value of the intensity field. The intensity field's value can range from 0 to 1, with 1 indicating the full volume of the sound as given in the sound file, and 0 indicating total silence. An intensity greater than 1 amplifies the sound but is likely to distort it; if you need a sound to be louder than the original recording, re-record it at a higher volume rather than increasing intensity beyond 1.
The region in which the sound can be heard is defined by two ellipsoids, each of which has location as a focus. (In most cases, one of these ellipsoids is completely contained within the other.) The inner ellipsoid defines the region in which the sound is played at the given intensity value; moving around within that region produces no change in the sound's volume. In the region between the inner and outer ellipsoids, the sound's volume fades as a function of distance from the inner ellipsoid's surface, into near-inaudibility just inside the edge of the outer ellipsoid. A user outside the outer ellipsoid can't hear the sound at all.
Each ellipsoid is defined by the distances from the location focus to the ends of the ellipsoid along its major axis. The major axis is parallel to the vector given in the direction field, as shown in Figure 5-1 (unavailable).
The major axis of the inner ellipsoid is defined by the minFront and minBack fields: minFront indicates the distance from location of the forward end of the ellipsoid (the end toward the direction that the direction vector points), and minBack indicates the distance of the other end (in the direction opposite that of the direction vector). Similarly, the major axis of the outer ellipsoid is defined by maxFront and maxBack.
If maxFront is less than minFront or maxBack is less than minBack, the sound can be heard at the given intensity out to the edge of the inner ellipsoid in the appropriate direction, and can't be heard at all beyond that edge.
If you want an omnidirectional sound (where the audible region is a sphere instead of an ellipsoid), just set minFront equal to minBack and maxFront equal to maxBack. In that case, direction is ignored. Omnidirectional sounds are likely to result in improved sound performance, at the cost of a measure of realism.
Note that these ellipsoids do not provide a precisely accurate physical model of how sound propagates in the real world, any more than the VRML lighting model accurately simulates real-world lighting. However, both the sound model and the lighting model are accurate enough to be useful, and simple enough to be computed quickly for use in a real-time environment. You can create realistic sound attenuation by setting maxFront equal to ten times minFront and maxBack equal to ten times minBack; if the outer ellipsoid is more than ten times as long as the inner one, attenuation is slower than in the real world, while if it's less than ten times as long, attenuation is faster.
If your sound ellipsoids are long and thin, the inner and outer ellipsoid surfaces can be very close to each other in places. Users traveling between the ellipsoids anywhere other than the front or back of the ellipsoids may experience abrupt increases or decreases in volume, because the full attenuation (from zero to full volume) occurs over a brief distance. If you want to avoid this phenomenon, use ellipsoids that are closer to being spherical.
You can include multiple sounds in your scene, but be aware that some browsers may be able to play only a limited number of sounds simultaneously. If there are more sounds to be played than can be played at once, browsers use a priority scheme to determine which ones to use. The prioritization method is somewhat complex, but the most important part is that sounds with a high priority field value are more likely to be played, given limited resources, than low-priority sounds. The priority value defaults to 0; if you have a sound that you want to guarantee is played even if there are other sounds playing, you can set its priority higher. Leave priority at 0 for continuous background sounds, such as crickets; set it to 1 for nonlooping event sounds, such as a doorbell. There may be occasions when you need to give a sound a fractional priority, but in most cases 0 or 1 should suffice.
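For example, a doorbell sound might be marked high-priority so that looping background sounds don't crowd it out (a sketch; the file name and location are hypothetical):

Sound {
  source AudioClip { url "doorbell.wav" }
  priority 1     # event sound: claim a channel even when many sounds play
  location 0 1 0
}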
Humans can usually tell what direction a sound is coming from; a sound coming from the left side, for instance, sounds louder in the left ear than in the right. By default, sounds in VRML are spatially localized, so that they sound like they're coming from a particular direction. A monaural sound with its spatialize field left at the default value of TRUE is added into the stereo mix based on the angle from the user's location and orientation to the sound's location. To produce ambient sounds (sounds that don't seem to come from any particular direction), set spatialize to FALSE.
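For instance, a background cricket loop might be both omnidirectional and non-spatialized (a sketch; the file name and distances are hypothetical):

Sound {
  source AudioClip {
    url "crickets.wav"
    loop TRUE
    startTime 1
  }
  spatialize FALSE   # no apparent direction
  minFront 50        # equal front and back values: spherical regions
  minBack 50
  maxFront 200
  maxBack 200
}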
The Sound node provides a location in your world and information on where a sound can be heard; you can think of it as a speaker attached to a stereo system. The AudioClip node, in that analogy, is a tape deck or a CD changer: it specifies which sound is to be played and how to play it.
The usual specification method is to use the url field to give a URL from which to read the sound to be played. You can list multiple URLs in that field, as with all fields that contain URLs; the list should be in order of preference. You can thus provide an audio file in several different formats; each browser then downloads and plays the highest-preference file in a format it understands. Besides the URL of the sound file, you can provide a string in the description field: a text description of the sound that can be displayed in addition to, or instead of, playing the sound itself.
You can provide sound files in almost any format, but browsers are required only to support WAVE files in uncompressed PCM format; if you want all users to be able to hear your sound files, make sure that at least one of the URLs you list is for a WAVE file. Most browsers support MIDI type 1 format as well.
Along with the sound, you can specify information about when the sound should start playing, its duration, and whether it loops. If you set the value of loop to TRUE (the default is FALSE), the sound is repeated indefinitely. startTime contains a time in SFTime field format (see the field type reference for details) indicating when the sound should start; you should almost always set this field interactively, by routing events to it, rather than giving it a value in the file. (For information on changing a field's value by sending it an event, see Chapter 6.) Similarly, you can set stopTime to the current time, again by routing events to it, to stop a sound.
The final field you can set in an AudioClip node is pitch, which allows you to control the pitch, or frequency, of a sound. A pitch of 1.0, the default, means that the sound should be played at its recorded pitch; 2.0 means all frequencies in the sound file are doubled, which corresponds to playing the sound twice as fast and an octave higher.
AudioClip nodes generate outgoing events called duration_changed and isActive, to let other interested nodes know the total duration (before any changes of pitch) of the current sound, and whether the sound is playing at the current moment, respectively. For instance, if the sound has finished playing (and loop is FALSE), or hasn't started yet, or has been stopped by setting stopTime, then the isActive outgoing event is set to FALSE.
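For example, you could route isActive into a light so that an object glows only while its sound is playing (a minimal sketch; the names, file, and positions are hypothetical):

DEF DRUM_GLOW PointLight {
  on FALSE
  location 0 2 0
}
Sound {
  source DEF DRUM_CLIP AudioClip {
    url "drums.wav"
    loop FALSE
    startTime 1
  }
  location 0 2 0
}
ROUTE DRUM_CLIP.isActive TO DRUM_GLOW.set_on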
Example 5.1 illustrates a Sound node that might be used in the Aztec city to play the sound of the ceremonial drumming going on in the Great Temple. Note that for demonstration purposes, the startTime and stopTime fields are set so that the drums start playing as soon as you load this scene; normally, you would leave those fields at their default values and set them interactively using events.
#VRML V2.0 utf8
Sound {
  source AudioClip {
    description "temple drums"
    url "drums.wav"
    loop TRUE
    startTime 1
    stopTime 0
  }
  minFront 10
  maxFront 100
  minBack 0.4
  maxBack 4
}
In this example, the drums can be heard faintly up to a hundred meters away from the front but can't be heard from behind unless the user is fairly close. These specifications simulate the fact that the drummers are inside a mostly enclosed room with an open front; if the drummers were out in the open, it would make more sense to set minBack equal to minFront and maxBack equal to maxFront to make the drums equally audible in all directions.
If a sound is completely inside a rectangular walled-off space, you can ensure its inaudibility outside that space by using a ProximitySensor to activate the sound. See Chapter 6 for details of ProximitySensor use.
Two geometry nodes are more complex than the ones you've seen so far and require more explanation. The ElevationGrid node makes it easy to model terrain in a compact form. The Extrusion node allows you to create compact representations of various complex shapes such as extrusions and surfaces of revolution.
If you want to represent terrain features, from mountains to tiny irregularities in the ground surface, the ElevationGrid node is your best choice. This node provides a compact way to represent ground that varies in height over an area.
The node specifies a rectangular grid and the height of the ground at each intersection in that grid. The xDimension and zDimension fields specify the number of grid points in the x and z directions, respectively, defining a grid of zDimension rows by xDimension columns of points in the xz plane.
NOTE: Many people are used to modeling terrain in the xy plane, with height values in z. In VRML, however, the xz plane is considered to be horizontal, and vertical distances are measured along the y axis. The horizontal grid of the ElevationGrid node therefore lies in the xz plane, with height values in y, so that you won't have to rotate the terrain to make it horizontal. If you're used to grids with rows that parallel the x axis and columns that parallel the y axis, be careful to remember that the columns of an elevation grid in VRML are parallel to the z axis.
Figure 5-2 (unavailable) shows a diagram of a sample ElevationGrid node; of course when a real elevation grid is displayed by a browser, the grid lines and numbers aren't shown.
In this figure, there are six rows in the z direction (numbered 0 through 5), so zDimension is 6. Similarly, xDimension is 9, because there are nine columns (numbered 0 through 8) in the x direction.
The height field is a list of height values, one for each vertex in order. The vertices of row 0 are listed first, followed by the vertices in row 1, then row 2, and so on up through the last row. For the elevation grid shown in Figure 5-2, for instance, the heights at grid points in the first row are given by the first nine values in the height field; the next nine values give heights in the second row (row 1), and so on.
The xSpacing and zSpacing fields allow you to scale the entire grid to whatever size you want in each horizontal direction. The xSpacing value gives the distance in the x direction between adjacent columns, and the zSpacing value gives the distance in the z direction between adjacent rows.
If the diagram were translated into an ElevationGrid node, the node would look something like this:
ElevationGrid {
  xDimension 9
  zDimension 6
  xSpacing 2.1
  zSpacing 2
  height [
    0, 0,   0.2,  0,    0,    0,    0,    0,   0,  # row 0
    0, 0.8, 0.4,  0.2, -0.2,  0.2,  0.4,  0.2, 0,  # row 1
    0, 1,   0.6,  0.4,  0.2,  0.4,  0.2, -0.2, 0,  # row 2
    0, 0.8, 0,    0.4, -0.2,  0.2, -0.4,  0.1, 0,  # row 3
    0, 0.2, -0.4, -0.2, 0,    0.4,  0.2,  0.4, 0,  # row 4
    0, 0,   0,    0,    0,    0,    0,    0,   0   # row 5
  ]
}
The fields uniquely determine a set of vertices to use for the terrain. It's then the browser's job to create the terrain surface to display by interpolating surfaces between the given vertices. Since the quadrilaterals of the terrain surface are unlikely to be planar, each one is broken up by the browser into a pair of triangles. Note that different browsers may perform this triangulation differently, resulting in slightly different terrain displays from browser to browser.
There's one further type of geometry node besides those discussed so far: the Extrusion node.
An Extrusion node is something like a more general version of a cylinder. It consists of a 2D polygon, defined in the crossSection field, which sweeps out a path through space (as indicated by the spine field and modified by the other fields) to define a surface in three dimensions.
The crossSection and spine paths are both piecewise linear; that is, they're composed of straight line segments. You specify each of them as a series of vertices to be connected in order. To produce a cylinder or other shape with a curved cross-section using an Extrusion node, you have to specify many points spaced close together to approximate a curve.
An extruded star (Figure 5-3 unavailable) illustrates a simple example of an Extrusion node: a 2D path, defined by the crossSection field, extruded through space along a short linear spine. A spine path may, of course, consist of more than one linear segment.
Here's the VRML file that describes the object:
#VRML V2.0 utf8
Shape {
  appearance Appearance {
    material Material { }
  }
  geometry Extrusion {
    crossSection [
      1 0, .67 -.27, .71 -.71, .27 -.67,
      0 -1, -.27 -.67, -.71 -.71, -.67 -.27,
      -1 0, -.67 .27, -.71 .71, -.27 .67,
      0 1, .27 .67, .71 .71, .67 .27, 1 0
    ]
    spine [ 0 0 0, 0 0 -6 ]
    beginCap FALSE
    endCap FALSE
    solid FALSE
  }
}
To form the extruded surface, the browser places a copy of the cross-section at each spine point, orients and scales each copy as directed by the node's other fields, and then connects corresponding vertices of consecutive cross-sections to form the faces of the surface.
Besides the extruded surface, an Extrusion can have a cap at either end. If beginCap is TRUE, a cap is placed across the end of the Extrusion corresponding to the first vertex in spine; if endCap is TRUE, a cap is placed across the other end. The caps are generated by filling in the shape formed by crossSection; if crossSection isn't a closed path (that is, if the first and last points listed aren't the same), the cap is generated as if the first point in crossSection were added to the end (that is, it connects the final point to the initial point).
You can use Extrusion nodes to create many different sorts of shapes. A nonclosed crossSection with caps, for instance, could describe a cylinder sliced lengthwise, like a Quonset hut. A spine that approximates a helix could provide a basis for a 3D DNA model. And by coiling the spine and varying the scale factor, you can produce a snake like the snake statue in the Aztec temple, as shown in Figure 5-7 (unavailable).
The head of the snake is an indexed face set; the body is an extrusion; and the tail is another extrusion using a differently shaped cross-section.
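Here's a minimal sketch of the varying-scale technique mentioned above: a square cross-section extruded up a straight spine, shrinking at each spine point to form a tapered spike (the scale field supplies one 2D scale factor per spine point):

Shape {
  appearance Appearance { material Material { } }
  geometry Extrusion {
    crossSection [ 1 1, 1 -1, -1 -1, -1 1, 1 1 ]    # closed square
    spine [ 0 0 0, 0 1 0, 0 2 0, 0 3 0 ]
    scale [ 1 1, 0.75 0.75, 0.5 0.5, 0.25 0.25 ]    # taper toward the tip
  }
}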
Now you understand the basics of modeling a static scene. VRML allows you to go one step further and bring your scene to life, using the animation techniques discussed in the next two chapters. If you want to construct a simple static scene and publish it, just to try out all you've learned so far, you can skip ahead at this point to Chapter 9 to learn how to publish a VRML file on the World Wide Web. But static scenes usually aren't very interesting; to get the full potential out of your scenes, be sure to come back to read Chapters 6 and 7 to learn about animation and scripting, and Chapter 8 to learn about advanced use of colors and textures.
Figures.
Examples.
Foreword.
Acknowledgments.
About This Book.
What This Book Contains.
How to Use This Book.
Conventions Used in This Book.
Related Reading.
About the Aztec Site.
Credits.
The Changing VRML World.
1. Introduction.
3D Models versus 2D Images.
Cutting-Edge Technology.
A Brief Look at the Development of VRML.
What’s New in VRML 2.0?
Enhanced Static Worlds.
Interaction.
Animation and Behavior Scripting.
Prototyping.
VRML File Information.
Locating Documents on the Web.
Browser and Server.
Viewing VRML Scenes.
Types of Internet Access.
Gateway Service.
Dial-Up Direct Connection.
Dedicated Direct Connection.
Finding an Internet Service Provider (ISP).
VRML Browsers.
Creating VRML Scenes.
Publishing Your Work.
Exploring Aztec City.
Do-It-Yourself Tour.
Guided Tour.
On Your Way.
The Eagle Lands.
Temple of Quetzalcoatl.
At the Base of the Temple.
Texture Mapping to Add Details.
View from the Top.
Reusing Objects.
Exploring the Shrines.
Traveling through Time.
Building a World.
Creating Objects.
Using External Files: Inline Nodes.
Using Multiple Instances of an Object.
Linking to Other Objects.
Combining Objects into Worlds.
Looking at the Scene.
Interacting with the Scene.
Starting from Scratch.
Develop a Story Board.
Build Objects.
Add Animation and Scripts.
Refine and Test.
Moving On.
Starting Your Temple.
Transformations.
Translation and the Standard Unit of Distance.
Rotation.
Scaling.
Combining Transformations.
Order of Transformations.
Geometry.
Simple Geometry Nodes.
Irregular Geometry.
Text (Flat).
Appearances.
Appearance Nodes.
Materials.
Textures.
Prototypes.
Fields Versus Events.
EXTERNPROTO.
Lights.
Scope of Lights.
Common Attributes of Lights.
Attenuation.
DirectionalLight Nodes.
PointLight Nodes.
SpotLight Nodes.
Sound.
AudioClip.
Complex Shapes.
Terrain Modeling with the ElevationGrid Node.
Extrusions.
What’s Next?
Events and Routes Revisited.
The Animation Event Path.
Triggers and Targets.
Timers.
Engines.
Animation Hints.
Script Node Syntax.
How Scripts Handle Events.
Special Functions.
Field Types in JavaScript.
Scripting and Animation.
Locate-Highlighting: A Glowing Skull.
Switching among Choices: The Eagle Has Landed.
Other Fittings.
Logic.
Computed Animation.
Advanced Scripting.
The Browser Script Interface (Browser API).
Scene Hierarchy Manipulation.
Binding the Browser to a Node.
Network Access.
Multiuser Worlds.
Colors.
Specifying Colors Per Face.
Specifying Colors Per Vertex.
Lines and Points.
Normals.
Using Default Normals.
Specifying Normals Per Face.
Specifying Normals Per Vertex.
Advanced Textures.
What Is a Texture Map?
Movie Textures.
Components of a Texture.
Combining Textures, Colors, and Materials.
Specifying Texture Coordinates.
Transforming a Texture.
Repeating or Clamping a Texture.
How to Specify a Pixel Texture.
Backgrounds with Textures.
Creating the Panorama Scene.
Adding Ground and Sky Colors.
Combining a Panorama with Ground and Sky Color.
Setting Up a Server.
Security Issues.
Configuring a Server to Recognize VRML Files.
Your URL.
Organizing and Publishing Your Files.
Use Relative Addresses.
Use MIME Type Extensions.
Verify Remote URLs.
Add Information Nodes.
Compress the Files.
Announce Your Work on the Web.
Using the Common Gateway Interface (CGI).
HTML Form.
Script.
Putting Form and Script Files on the Server.
Reducing File Size.
Use Instancing.
Use Prototypes.
Use the Text Node.
Use Space-Efficient Geometry Nodes.
Use Automatic Normals.
Eliminate White Space.
Round Floating Point Numbers.
Compress Files.
Increasing Rendering Speed.
Simplify the Scene.
Divide and Conquer.
Let the Browser Do Its Job.
Turn Off Collision Detection and Use Collision Proxies.
Use Scripts Efficiently.
Suggested Structure of a VRML File.
Rules for Names.
Anchor.
Appearance.
AudioClip.
Background.
Billboard.
Box.
Collision.
Color.
ColorInterpolator.
Cone.
Coordinate.
CoordinateInterpolator.
Cylinder.
CylinderSensor.
DirectionalLight.
ElevationGrid.
Extrusion.
Fog.
FontStyle.
Group.
ImageTexture.
IndexedFaceSet.
IndexedLineSet.
Inline.
LOD.
Material.
MovieTexture.
NavigationInfo.
Normal.
NormalInterpolator.
OrientationInterpolator.
PixelTexture.
PlaneSensor.
PointLight.
PointSet.
PositionInterpolator.
ProximitySensor.
ScalarInterpolator.
Script.
Shape.
Sound.
Sphere.
SphereSensor.
SpotLight.
Switch.
Text.
TextureCoordinate.
TextureTransform.
TimeSensor.
TouchSensor.
Transform.
Viewpoint.
VisibilitySensor.
WorldInfo.
SFBool.
SFColor and MFColor.
SFFloat and MFFloat.
SFImage.
SFInt32 and MFInt32.
SFNode and MFNode.
SFRotation and MFRotation.
SFString and MFString.
SFTime and MFTime.
SFVec2f and MFVec2f.
SFVec3f and MFVec3f.
Java Notes.
Examples.
Locate-Highlighting.
Integer Interpolator.
State Retention.
Viewpoint Binding.
The Virtual Reality Modeling Language (VRML) allows you to describe 3D objects and combine them into scenes and worlds. You can use VRML to create interactive simulations that incorporate animation, motion physics, and real-time, multi-user participation. The virtual landscapes you create can be distributed using the World Wide Web, displayed on another user's computer screen, and explored interactively by remote users. The VRML standard is defined by an advisory committee, the VRML Architecture Group (VAG), which continues to expand the language.
The uses of VRML are as varied as the 3D objects in our world today. Consider these possibilities:
Applications for VRML range from the serious (medical imaging, molecular modeling, engineering and design, architecture), to the more entertaining (games, advertising of all varieties, virtual theme parks), to the mundane realities of everyday life (selecting and placing furniture in the living room, planning a weekend hike at a county park, repairing a carburetor).
VRML is not a programming language like C or Java, nor is it a "markup language" like HTML. It's a modeling language, which means you use it to describe 3D scenes. It's more complex than HTML, but less complex (except for the scripting capability described in Chapter 7) than a programming language.
3D Models versus 2D Images

VRML provides a highly efficient format for describing simple and complex 3D objects and worlds. It needs to be efficient, since VRML files are sent over slow telephone lines as well as faster ISDN and leased lines, and since the computers used to view the files range from low-end PCs to top-of-the-line supercomputers.
The power of VRML becomes apparent if you compare viewing a 2D image to exploring a VRML world. Suppose, for example, that you have six images of a certain area in San Francisco and a VRML file containing data describing the same general area. The images are flat rectangles showing a particular view of the city. All you can do with them is look at them. Each pixel in each image has a fixed, unchanging value.
With a VRML file, however, you can view the scene from an infinite number of viewpoints. The browser (the software that displays a VRML file) has navigation tools that allow you to travel through the scene, taking as many different paths as you desire, repeating your journey, or exploring new territory according to your whim. The VRML world can also contain animated images, sounds, and movies to further enrich the experience.
Sometimes, too, a 2D image just doesn't convey the same amount of information as a 3D model. For example, consider the diagram shown in the right portion of Plate 21, which illustrates how to assemble a desk.
The left portion of Plate 21 shows a 3D presentation of the same object. The added depth dimension makes it much easier to relate the illustration to the real-world desk pieces lying in the carton. What happens when it's time to put the desk together? With the animation features provided by VRML 2.0, you could create an application that would allow the user to click a part shown on the screen, then watch it snap together with the adjoining pieces. If the user didn't understand what was happening, he or she could click again to separate the pieces, then repeat the process until it made sense. To see how the pieces fit together at the back, the user could turn the part around and view it from the desired angle.
Cutting-Edge Technology

Whether the final goal is educational, commercial, or technical, most compelling VRML worlds have certain characteristics in common:
The user enters this 3D world on the computer screen and explores it as he or she would explore part of the real world. Each person can chart a different course through this world.
The local browser allows the user to explore the VRML world in any way he or she decides. The computer doesn't provide a fixed set of choices or prescribe which path to follow, although the VRML author can suggest recommendations. The possibilities are unlimited.
Objects in the world can respond to one another and to external events caused by the user. The user can "reach in" to the scene and change elements in it.
VRML is a powerful tool, but like all power tools, it must be used carefully and effectively. If you've already started to explore different VRML sites on the Web, you've probably been impressed with the beauty and creativity of the best sites, and disappointed in the content and painful slowness of others. Because VRML technology is relatively new, designers and programmers are just learning how to work with it. Authoring tools are still under development, so VRML authors have to rely on doing some things "by hand" until a well developed set of tools exists for all platforms. Chapter 10, "Improving Performance," addresses the issues of performance and efficiency, which are key concerns for effective use of this evolving language and technology.
A Brief Look at the Development of VRML

A major goal of 3D computer graphics has long been to create a realistic-looking world on a computer screen. As long ago as 1965, Ivan Sutherland suggested that the ultimate computer display would "make the picture...look real [and] sound real and the objects act real."