The name ActiveVRML refers to both a language and the ActiveX control that embodies that language. As the name implies, this technology was initially an outgrowth of the Virtual Reality Modeling Language (VRML), a language for describing 3D scenes and objects. VRML gained instant acceptance within the Internet community when it was initially released in 1995, but the limitations of the specification were immediately obvious. Virtual worlds contained only static, nonmoving objects and provided no consistent way to handle user interactions. A surprising number of companies began to offer VRML "browsers," and each product had a different approach to the problems of interactivity and dynamic behaviors. There was clear consensus within the VRML community that a new standard was required to carry this promising technology through its next evolutionary step.
Microsoft offered ActiveVRML as a successor to the VRML 1.0 specification and so joined the handful of organizations that had developed proposals for VRML 2.0. Although a competing specification was eventually chosen by the standards body, ActiveVRML remains a viable technology because of the unique strengths of the language and the versatility of Microsoft's implementation.
Rather than positioning the ActiveVRML control as a virtual-reality browser, Microsoft now sees it as an engine for active media of all types. By providing a rich, interactive language that can control images, sound, video, and 3D objects, ActiveVRML (AVRML) stands on its own as a premier Web-enhancement tool.
The language of ActiveVRML was first introduced to the public in December 1995 as part of Microsoft's initial offering of Web software. At a time when the future of VRML was being hotly debated, Microsoft clearly saw this specification as a contender to be the basis of VRML 2.0. Initial comments within the VRML community were extremely positive, and very few technical criticisms were ever leveled at the proposal.
Respect for the superiority of the language stems from both its completeness and the elegance with which it solves extremely complex behavioral problems. This well-thought-out solution did not develop overnight by any means; rather, it is the product of many years of continuing research.
An important point to stress is that ActiveVRML is not a general-purpose language, such as C++ or Java. It is intended to address only one particular set of problems, but it does that very well. Because of this focus, AVRML is remarkably simple to learn and use compared to a language like Java.
True appreciation of this language, however, comes only from seeing it demonstrated live. Images, video, sound, text, and 3D shapes can all be blended and mixed in astonishing combinations. Start with a cube, and let it spin as it bounces around the screen. Then attach a sound that will spatially follow the cube, using stereo speakers to accurately trace its position. Give the cube some texture, but then go ahead and make one side a video screen for streaming MPEG data. Add 3D details to the cube to make it look like a TV set; then place it in a virtual living room. When the user clicks the nearby virtual remote, happily change video "channels." Then let a beer float in from the kitchen along a smooth spline trajectory, decelerating to stop in your virtual hand. There is no end to what could be imagined and created with these sorts of capabilities, and this is exactly what the language promises and what the ActiveVRML control delivers.
The current ActiveX Control implementation of ActiveVRML, released by Microsoft, is still the only AVRML product available, although the language is open for third-party exploitation. This control makes use of several other cutting-edge technologies such as Direct3D and URL Moniker support.
The last published version of ActiveVRML was still an alpha release. This version runs only with the alpha version of Internet Explorer 3.0, which was released in the spring of 1996. This ActiveVRML release was also not designed to work with the final release version of DirectX II, which incorporates the Direct3D rendering engine. Instead, use the prerelease version of Direct3D that is packaged with the ActiveVRML control.
Although the ActiveVRML control is still in Alpha release, it demonstrates much of the power of the language specification. But what really makes it stand out is the speed and responsiveness with which it displays animations of both text and images. Compared to the jerky, halting quality of existing Web-based presentation tools, such as Macromedia's Shockwave or Microsoft's PowerPoint viewer, the AVRML control is truly astounding in its performance capability.
Beyond its usefulness as a Web-enhancement tool, ActiveVRML represents the crowning achievement of Microsoft's component object model (COM) interface. The underlying advantage of AVRML is the language itself, but it is built on the solid foundations of several key technologies, all brought together with COM. Most notable are DirectDraw and Direct3D, which should serve as clear examples of just how efficient this interface can be for performance-demanding applications. Communications are likewise built on new COM objects that handle asynchronous downloading of data.
The huge initial response to VRML has waned somewhat in the last year, largely due to the public's realization of the inherent limitations in this specification. People soon tire of wandering around virtual worlds where nothing moves, nothing reacts, everything is quiet, and there are no other people to interact with. Some of the proprietary solutions to these issues have been quite remarkable, but it remains to be seen whether VRML 2.0 will recapture the momentum initiated by its predecessor.
Ironically, the main limitation of VRML 1.0 will also be its enduring strength. As an accepted standard for describing static 3D objects, the specification forms a common point of interaction between all new modeling and animation software tools. The lack of behavioral programming is no obstacle when you are just seeking a platform-independent way to pass model data back and forth. The complex and time-consuming task of supporting VRML 2.0 is simply not required for this purpose.
Philosophically speaking, there is a great divide between the ActiveVRML language and the VRML 2.0 specification called "Moving Worlds," which grew from a joint proposal by Sun, Netscape, and others. One of the chief distinctions is VRML 2.0's forced reliance on external scripts written in the Java language. This separation of model definitions from behavioral definitions adds tremendous complexity to the overall system, and consequently it is much more difficult for developers to implement applications based on this standard. Compared to the explosion of VRML browsers seen before and after the release of the 1.0 spec, only a few companies have announced products that support VRML 2.0.
ActiveVRML's complete integration of media definitions (including 3D objects) and behaviors is the cornerstone of its elegance. This direct approach has the added benefit of simplifying the task of writing authoring tools. Microsoft has done its part to provide a control for displaying any given AVRML file, but it will fall to third-party software vendors to create great tools for producing these files in the first place. Translating a user's vision into actual lines of code can be very difficult with a general-purpose language like Java, but it is an extremely natural process with AVRML.
It's just this sort of strength that will allow ActiveVRML to persist, though it was seemingly defeated by VRML 2.0. Few people will ever take up the challenge of learning Java just to make a cube spin around in space. Average people will be able to create active, interactive, 3D worlds only when the tools and the process are intuitive enough. Those tools can be more readily developed with AVRML.
Beyond the issues of dynamic 3D worlds, ActiveVRML possesses other key capabilities that put it in a class of its own. The seamless way it works with all different types of media, not just 3D objects, is remarkable. Also notable is the time-based nature of all behaviors, which assures an author that viewers will see activity in a similar way regardless of the processing power found in the viewer's computer. These distinctions from VRML 2.0 serve to show the holes still left in that specification.
Installing and using ActiveX controls is supposed to be the simplest task a user could ever face. In most cases, a user might never be aware that a program had been downloaded and was running within a Web page. Unfortunately, the ActiveVRML control falls a little short of that goal. Most users of Internet Explorer 3.0 will have to pay attention to a few important issues before they can begin animating their sites.
Unlike most other ActiveX controls, ActiveVRML is not designed to be installed automatically using the Component Download API. A more elaborate installation is required because of the product's reliance on DirectX. At some future point, DirectX will probably ship as part of the Windows operating system, allowing controls like AVRML to be more easily downloaded. But in the meantime, the installation program must be downloaded (or loaded from the CD) and manually run by the user.
Another thing to be aware of is that the current Alpha release of ActiveVRML does not install the control's OCX file in the \System\Occache directory, where controls normally reside; instead, it places the required files into a separate folder within the Internet Explorer directory. Little quirks like this are understandable when you realize this program was among the very first ActiveX controls ever released, long before the current specifications had been finalized. Certainly, the next version of AVRML will take full advantage of the finalized ActiveX standards.
The 3D rendering capabilities of ActiveVRML are built on top of the new Direct3D engine that is a part of DirectX II and DirectX 3. DirectX is a set of technologies designed to maximize the performance of games and other real-time applications, under both Windows 95 and Windows NT 4.0. The DirectX API gives programmers powerful access directly to the video and sound cards, and it takes full advantage of any special hardware capabilities found on those devices. In the case of Direct3D, that means computer systems with advanced 3D accelerated graphics cards will see a dramatic boost in speed and/or image quality. This new breed of graphics card is becoming more and more common these days, but for those computers with ordinary graphics, Direct3D still turns out a top-notch performance.
The current (alpha) release of the ActiveVRML control is not compatible with the final version of DirectX II or with DirectX 3. The AVRML setup program will automatically install a compatible beta version of DirectX II, but any newer versions must first be removed.
Like other ActiveX controls, ActiveVRML can be embedded into an HTML document by using the OBJECT tag. The control has its very own globally unique identifier (GUID), which tells your browser precisely which control to load. Here is a sample use of the OBJECT tag that might be used to invoke the testcode.avr script:
<OBJECT CLASSID="clsid:{389C2960-3640-11CF-9294-00AA00B8A733}"
    ID="AVView" WIDTH=256 HEIGHT=256>
<PARAM NAME="DataPath" VALUE="testcode.avr">
<PARAM NAME="Expression" VALUE="model">
<PARAM NAME="Border" VALUE=FALSE>
</OBJECT>
If you have ever incorporated ActiveX controls into HTML before, two things will be noticeable immediately about this sample. The first is the outdated syntax of the CLASSID parameter, which wraps the GUID in braces:
CLASSID="clsid:{389C2960-3640-11CF-9294-00AA00B8A733}"
More recent code would use this syntax:
CLASSID="clsid:389C2960-3640-11CF-9294-00AA00B8A733"
The exact syntax of the OBJECT tag has been something of a moving target as the development of Internet Explorer 3.0 has edged toward completion. The specification for this tag is not under the control of Microsoft; it is the responsibility of the World Wide Web Consortium (W3C), which acts as an impartial standards organization for many Web-based technologies (see http://www.w3.org).
This is not a big deal, but you should keep it in mind if you are writing HTML for both ActiveVRML and other ActiveX controls. Also, be aware that when the next release of ActiveVRML comes out, your HTML source code will have to be revised to use the newer syntax.
The second thing to notice is that there is no CODEBASE parameter. Normally, this information would tell the browser where to find a copy of the control for downloading, if it was not already installed in the system. Because this control does not yet support the Component Download model for automatic installation, using CODEBASE would be meaningless. The next release of AVRML will conform fully to the new specifications.
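For comparison, a control that does support Component Download declares its installation source with the CODEBASE parameter. The following fragment shows the general shape of such a declaration; the URL here is purely hypothetical:

<OBJECT CLASSID="clsid:389C2960-3640-11CF-9294-00AA00B8A733"
    CODEBASE="http://www.example.com/controls/avview.ocx"
    WIDTH=256 HEIGHT=256>
</OBJECT>

With a declaration like this, a browser that finds the control missing from the local system can fetch and install it automatically, which is exactly the convenience the current AVRML release lacks.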
The WIDTH and HEIGHT parameters are used just as with any control to specify the size of the screen area used. With ActiveVRML, however, there is an additional consideration when planning for the size of your active display. DirectDraw and Direct3D each require significant portions of video memory to store special-purpose buffers. The larger the desired display, the more video memory is required. Most graphics cards come with 2M of memory these days; many carry 4M, and a very few have 8M. But a tremendous number of cards are still in use that have only 1M of video memory, typically on older 486 systems. Web-site builders have to be aware of who their target audience is and what the minimum hardware requirements are. In general, a 256×256 display will work for most of the PCs on the Internet today. Predicting exact memory usage is difficult because of the wide variety of features and behaviors found on the cards available these days; it further depends on the design of the video driver, the user's current display resolution, and the color depth being used.
In addition to the standard parameters supported by the OBJECT tag itself, ActiveVRML accepts several other custom values passed through the PARAM tag. The names of these parameters match the exposed COM interfaces supported by AVRML, and they are used for setting up the initial state of the control. The foremost of these is DataPath, which specifies the URL of the ActiveVRML script that needs to be loaded and executed. Note that this script is not loaded by the browser but by the control itself using URL Moniker objects.
The Expression parameter refers to an expression found within the specified ActiveVRML file (AVR file). If this expression has no match in the target script, an error is generated and nothing is displayed. For files that contain multiple expressions, this AVRML interface can be controlled dynamically to change which expression is currently evaluated.
The meaning of the Border parameter is pretty self-explanatory. You can enable a thin border to frame the AVRML control, or you can leave it off. It's absolutely up to you.
Even though the ActiveVRML language has many advantages over Java, BASIC, and even C++ for the purposes of describing active media content, there are times when these general-purpose languages can come in handy. Tasks like database integration, coordinating multiuser environments, or communicating with other controls all require functionality that is not found in AVRML. However, the flexibility of AVRML's COM interface provides easy integration with any language that fits Microsoft's scripting object model. Currently, that means Visual Basic Script and JavaScript. Through such an intermediary script, AVRML can also interact with other ActiveX controls and even the browser itself.
Script interactions with the ActiveVRML control flow in two directions: into the control, through both exposed properties and externally fired events, and out of the control, through internally generated events. Custom-defined events provide an extremely flexible way to tie together an external script and an AVR file.
The ActiveVRML control has properties and methods that expose a fixed set of functionality. The following list names those that can be manipulated by external scripts or other COM-aware programs, along with the data type each accepts.
The DataPath, Expression, and Border properties correspond to the PARAM tags used to initialize ActiveVRML. It is possible to change these values dynamically, but doing so requires freezing the control first. When the Frozen property is TRUE, evaluation of the current expression is halted. After the changes to DataPath and/or Expression have been made, the control can be unfrozen again.
Here is a sample of HTML that uses Visual Basic Script to choose between two different AVR files for viewing:
<HTML>
<HEAD>
<TITLE>ActiveVRML Test Page</TITLE>
</HEAD>
<BODY>
<OBJECT ID="AVRCtrl"
    CLASSID="clsid:{389C2960-3640-11CF-9294-00AA00B8A733}"
    WIDTH=256 HEIGHT=256>
<PARAM NAME="DataPath" VALUE="File1.avr">
<PARAM NAME="Expression" VALUE="model">
<PARAM NAME="Border" VALUE=TRUE>
</OBJECT>
<BR>
<INPUT NAME="File1" TYPE=Button VALUE="View File #1">
<INPUT NAME="File2" TYPE=Button VALUE="View File #2">
<SCRIPT LANGUAGE="VBScript"><!--
sub File1_onClick
    AVRCtrl.Frozen = TRUE
    AVRCtrl.DataPath = "File1.avr"
    AVRCtrl.Frozen = FALSE
end sub
sub File2_onClick
    AVRCtrl.Frozen = TRUE
    AVRCtrl.DataPath = "File2.avr"
    AVRCtrl.Frozen = FALSE
end sub
--></SCRIPT>
</BODY>
</HTML>
If you prefer to use JavaScript, the SCRIPT tag would only have to be slightly modified, and the rest of the HTML would be unchanged. The JavaScript version would look something like this:
<SCRIPT LANGUAGE="JavaScript"><!--
function File1_onClick () {
    AVRCtrl.Frozen = true;
    AVRCtrl.DataPath = "File1.avr";
    AVRCtrl.Frozen = false;
}
function File2_onClick () {
    AVRCtrl.Frozen = true;
    AVRCtrl.DataPath = "File2.avr";
    AVRCtrl.Frozen = false;
}
--></SCRIPT>
This sample made several assumptions for simplicity. Because the script does not specify a new Expression, the one specified within the OBJECT tag is still in force. Both File1.avr and File2.avr must then implement that expression within the AVR code. You will learn a lot more about how to do that a little later on. Also, by not specifying a full URL to the AVR files, the programmer is assuming that they reside in the same directory as the HTML document itself.
The method that has the most interesting possibilities by far is FireImportedEvent, which allows a script to trigger a custom event within the running AVR code. Each event to be used will have its own unique identifier, defined in both the external script and within the AVR file. Optional data may be passed along with an event and can be either a string or a double floating-point value. Here is an example of how to fire a custom event from Visual Basic Script:
<SCRIPT LANGUAGE="VBScript"><!--
sub File1_onClick
    AVRCtrl.FireImportedEvent 1, "Event Number One"
end sub
--></SCRIPT>
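As with the earlier Frozen example, the same call can be written in JavaScript instead; the equivalent handler would look something like this:

<SCRIPT LANGUAGE="JavaScript"><!--
function File1_onClick () {
    AVRCtrl.FireImportedEvent (1, "Event Number One");
}
--></SCRIPT>

Either way, the event identifier (1 in this sample) and the optional data string must match what the running AVR code expects, or the event will simply be ignored.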
Sending commands to ActiveVRML is important, but equally important is the ability to receive information back. This functionality is implemented through AVRML's ActiveVRMLEvent interface, which is triggered internally and meant to be handled by an external script. As with external triggers, data may be passed along, this time flowing from AVRML rather than into it. Visual Basic Script would typically handle this event with the following syntax:
<SCRIPT FOR="AVRCtrl" EVENT="ActiveVRMLEvent(EventID, Param)"
    LANGUAGE="VBScript"><!--
Select Case EventID
    Case 1
        Text1.Text = Param
    Case 2
        Text2.Text = Param
End Select
--></SCRIPT>
The Param value can be a string, a double-precision float, or nothing at all. EventID is an identifying integer.
The JavaScript version of the same code is:
<SCRIPT FOR="AVRCtrl" EVENT="ActiveVRMLEvent(EventID, Param)"
    LANGUAGE="JavaScript"><!--
switch (EventID) {
    case 1:
        Text1.Text = Param;
        break;
    case 2:
        Text2.Text = Param;
        break;
}
--></SCRIPT>
So far, this chapter has looked at how to use ActiveVRML without talking much about why you're using it. Creative Web designers already have lots of experience using the traditional media types found on the Internet, and AVRML builds on that foundation of existing content with the added dimension of dynamic activity.
The ActiveVRML language relies heavily on the capability of importing content in a wide variety of data formats. As shown in Table 17.1, images, sounds, and 3D objects are brought into the AVRML scene, where you can manipulate them at will.
Images and video are internally rendered onto DirectDraw surfaces, which can then be used as textures for 3D objects. Surfaces can also be manipulated and combined with other surfaces to create stunning special effects. Within an AVR script, this is how to import an image:
myimage = import ("picture.jpg");
The ActiveVRML function import accepts the URL of a media resource and returns a value used to define myimage. This value has a type that depends on the media format being imported. For instance, imported GIF files produce the type image, and WAV files generate the type sound. The concept of types is very important to AVRML and is explored in the next sections of this chapter. The resulting definition, myimage, can now be used in any expression that accepts type image.
You can do some powerful things with image types, such as cropping, setting opacity, and tiling the image over an area. But the most compelling functionality is the ability to apply 2D transformations such as scaling, rotating, and translating. Individually composited images can also be grouped into a montage type. A montage maintains a depth value for each image in the collection and can combine the various layers to form a final rendering value (of type image, of course). With just these capabilities and nothing else, ActiveVRML would be a really useful tool for laying out Web pages. And you haven't even gotten near the good parts yet, so hold on.
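As a sketch of how these image operations combine, consider the following fragment. The translate, transformImage, and over operations appear later in this chapter's listings; the rotate and scale function names are assumptions, modeled on the same pattern as translate, so check the AVRML language reference for the exact signatures:

// Import an image, then build transformed variants of it
myimage = import ("picture.jpg");
spun = transformImage (rotate (0.5), myimage);     // rotate (assumed function)
shrunk = transformImage (scale (0.5, 0.5), spun);  // scale down (assumed function)
composite = shrunk over myimage;                   // layer the result over the original

Because each step simply defines a new value of type image, transformations can be chained and composed indefinitely without ever modifying the original imported picture.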
Once you've played with the spatially oriented sounds in ActiveVRML, boring 2D sound will never again be enough. By combining a sound type with a geometry type, which carries with it three-dimensional positioning information, a sound is reproduced in 3D space. The effect requires stereo speakers, of course. Consider the example of a fighter plane model crossing the screen from left to right with jet engines roaring. The sound will track the rendered image as it moves, and then attenuate as the virtual plane moves into the distance. Even the Doppler effect from swiftly moving sound sources is approximated. The truly wonderful aspect of this technology is that it is all done for you; ActiveVRML takes care of all the hard work after you've made a few simple definitions.
You can manipulate sounds in other ways as well. The looping and mixing functions of ActiveVRML make it easy to produce the complex sound behaviors needed for immersive simulations. Continuous ambient tracks are seamlessly mixed with event-driven sounds, and the Web designer does not have to mess with the details. One of the most extraordinary features of sound types is their capability of dynamically adjusting playback rates. This is a great way to transform a deep voice into a chipmunk, or the reverse.
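A hypothetical sketch of these sound capabilities might read as follows. The import function and its behavior are described in this chapter, but the loop, rate, and mix function names here are illustrative assumptions; consult the AVRML language reference for the actual functions:

// Import two sound resources
ambient = import ("wind.wav");
zap = import ("laser.wav");
background = loop (ambient);            // repeat the ambient track continuously (assumed)
deepzap = rate (0.5, zap);              // halve the playback rate for a deeper pitch (assumed)
soundtrack = mix (background, deepzap); // combine both into one output (assumed)

The point of the sketch is the style: each operation takes sound values and yields a new sound value, so complex audio behaviors compose just like images and geometries do.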
Even though ActiveVRML was once positioned as the successor to VRML, AVRML's support for 3D primitives is limited to importing existing VRML models (*.wrl files). Imported files produce values of type geometry and are defined as in this sample:
(mymodel, minExtent, maxExtent) = import ("testmodel.wrl");
Two additional definitions, minExtent and maxExtent, are of type point3 and contain information about the bounding box of the imported geometry. It must be noted that mymodel now refers to the entire VRML file, regardless of how many individual objects are contained within that file. This is a serious drawback if you want to manipulate specific objects, but you can overcome it with a little bit of good planning when you design the VRML content.
VRML worlds are formed from a collection of components, called nodes, which are arranged into a hierarchical structure known as a scene-graph. Some nodes in the scene-graph refer to geometrical primitives, and others describe textures, lights, cameras, and transformations. Each node possesses properties that have been passed down from upper layers of the hierarchy, and in turn, it applies those properties to the nodes below it. A given texture node, for example, may be applied to several cubes, whereas some nearby spheres follow a different texture node. A complex world may contain hundreds or thousands of nodes, each of which represents some aspect of the virtual scene. Unfortunately, ActiveVRML treats all the nodes as a single unit and cannot control the individual pieces.
Not only does this prevent you from treating nodes as individual entities, but if your AVRML script calls for a texture to be applied to an imported model, it will be applied to all geometries found in that file's scene graph. You may have wanted to wallpaper the virtual living room, but you end up papering the TV and sofa too.
Although ActiveVRML accurately renders imported VRML 1.0 worlds, all of the original properties of the VRML geometries are superseded by any properties applied by the ActiveVRML script. AVRML textures take precedence over VRML textures, for instance.
The way to handle all this is pretty simple, as long as you don't have to work with a lot of pre-existing scenes. Even then, your modeling tools should be able to break up a complex scene into individual components. The idea is to keep the VRML files very simple, one geometry per file, and then import all the required 3D objects as true individual entities. It does not matter whether textures and colors are specified on the VRML or the AVRML side, but a consistent approach will keep designs much simpler. In general, it is probably best not to use any VRML functionality that can be duplicated with AVRML.
Once you have imported a model, you can aggregate it with other 3D objects to form groups that can be collectively manipulated. These other objects could be imported shapes but could also be lights or sounds.
A quick word about aggregation: many basic types, such as geometry, have the ability to be joined with objects of similar type. The way that the resulting aggregate is formed depends on the types involved. Combined geometry objects are the geometric union of all enclosed shapes and lights, whereas combined sound objects mix their individual tracks to form a single output. Values of type image are combined by joining one image over the other. The ability to create complex compositions from simple components is one of the strengths of the ActiveVRML language, as you will see in the next section.
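Following the one-geometry-per-file strategy, a scene might be assembled along these lines. The import syntax matches the sample shown earlier; the union function name is an assumption standing in for the geometry aggregation operation just described:

// Import each shape from its own simple VRML file
(tv, tvMin, tvMax) = import ("tvset.wrl");
(sofa, sofaMin, sofaMax) = import ("sofa.wrl");

// Aggregate the two shapes into a single geometry value (function name assumed)
livingroom = union (tv, sofa);

Because livingroom is itself a value of type geometry, it can in turn be transformed, lit, or aggregated with still more objects, which is exactly the compositional style the next section demonstrates.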
By now, you are probably eager to see ActiveVRML at work and to try out some of your own AVRML scripting. The following sample file will walk you through the complete steps required and will highlight some of the other key elements of the language.
Here is the enclosing HTML framework (see Listing 17.1) that will invoke this sample. It is kept as simple as possible, to allow focusing on the actual AVRML script.
Listing 17.1. Saucer1.html.
<HTML>
<HEAD><TITLE>Saucer1 Test Page</TITLE></HEAD>
<BODY BGCOLOR=WHITE><CENTER>
<OBJECT ID="AVRCtrl"
    CLASSID="clsid:{389C2960-3640-11CF-9294-00AA00B8A733}"
    WIDTH=300 HEIGHT=150>
<PARAM NAME="DataPath" VALUE="saucer1.avr">
<PARAM NAME="Expression" VALUE="model">
<PARAM NAME="Border" VALUE=TRUE>
</OBJECT>
</CENTER></BODY>
</HTML>
As you can see, it really doesn't take much code to implement the AVRML control, unless you choose to interface with HTML in a more dynamic way, which you will do later on.
This first script is designed to simulate something most of us have encountered at some time or another: low-level flying saucers. Our virtual UFO will skim along the mountain tops, emitting eerily authentic sound effects. When the script is run, the results are as shown in Figure 17.1.
Figure 17.1. The results of Saucer1.avr.
Make sure that this script (see Listing 17.2) is located in the same directory as the enclosing HTML file. If it is not, the URL specified in the HTML file must be changed to reflect the actual location of the AVR file.
Listing 17.2. Saucer1.avr.
 1: // ActiveVRML 1.0 ASCII
 2: // Saucer1 Sample Script
 3:
 4: // The first step is to import the raw media for our sample
 5: ship = first (import ("shipred.gif"));
 6: mountain = first (import ("mountain.gif"));
 7: clouds = first (import ("clouds.gif"));
 8:
 9: // Define a 2D transformation to describe the saucer's position
10: movement = translate (0.005, 0.007);
11:
12: // Define a new saucer object that has movement applied to it
13: saucer = transformImage (movement, ship);
14:
15: // Define the expression used for display output
16: model = mountain over saucer over clouds;
The line numbers in this script are for reference purposes only, and should not be typed in any actual script.
This initial script demonstrates some of the essential elements of an ActiveVRML program. Line 1 is a required comment that identifies this file as containing an AVRML 1.0 program, formatted with plain ASCII. This is the only kind of file that the control will accept.
Lines 5, 6, and 7 define three objects formed from imported graphics files. These objects are implicitly declared as being of type image. The import function actually returns three values, but the first function filters out all values but the image value. Later on, this example will show how to use the additional values, which specify the image's size and resolution.
In Line 10, another data type is defined, called a transform2. This transformation type is applied to two-dimensional points, vectors, and images to specify how they are positioned in the plane of the screen. The declared object, movement, is assumed to be a transform2 because that is the return type of the translate function when called with two number parameters. Later uses of this definition must be within a context that also implies it is of type transform2, or type mismatch errors will occur.
The previously defined transformation is now applied to the desired image using the transformImage function, as seen in Line 13. Actually, it is more correct to say that an entirely new object is declared that has all the same properties as the source image but has been modified by the transform2 object. In this case, the values used to set up this transformation are a couple of hard-coded numbers that place the saucer above and to the right of the screen's center.
Line 16, the final line of the script, provides an expression for the definition model. This definition is the same one specified in the HTML file, within the Expression parameter of the OBJECT tag. The value of this definition is evaluated to produce the current display. There could be several such definitions that provide top-level directives, but only one is active at any given time. In the case of this sample, the expression is evaluated to be an image composed of three successive layers built using the image type's over function.
Use of the over function introduces the concept of composition, one of the primary features of the ActiveVRML language. Composition enables complex objects and behaviors to be assembled from simpler definitions.
At this point, you are ready to run the control to see the script in action, which you can do by opening the Saucer1.html file. Actually, the term action is used loosely, because the script doesn't really do anything active at this point. You'll fix that next.
If you introduced any mistakes into the script as you prepared it for the ActiveVRML control, by now you have seen the error-handling capabilities the control possesses. A dialog is displayed to indicate both the row and column numbers where the error occurred. If your script contains multiple errors, however, only the first one is actually reported.
So far, this is easy, but that's because nothing is happening yet. Listing 17.3 shows how a few extensions to the previous script can add both rudimentary motion and some dynamic appearance changes. The flying saucer will now travel from the left side of the screen across to the right, while alternately flashing a red and blue light. For the sake of simplicity in this sample, nothing special happens when the saucer goes past the right side of the screen (and off into the void, maybe?). Notice the cool effect of having the saucer glide behind the mountain peaks, thanks to the use of layering in the design.
Listing 17.3. Saucer2.avr.
1: // ActiveVRML 1.0 ASCII
2: // Saucer2 Sample Script
3:
4: // The first step is to import the raw media for our sample
5: shipred = first (import ("shipred.gif"));
6: shipblue = first (import ("shipblue.gif"));
7: mountain = first (import ("mountain.gif"));
8: clouds = first (import ("clouds.gif"));
9:
10: // The saucer will alternate between two different images
11: saucer = shipred until predicate (time > 1) =>
12:     (shipblue until predicate (time > 1) => saucer);
13:
14: // Define a 2D transformation to describe the saucer's position
15: movement = translate (0.005 * time - 0.05, 0.007);
16:
17: // Define a new saucer object that has movement applied to it
18: activesaucer = transformImage (movement, saucer);
19:
20: // Define the expression used for display output
21: model = mountain over activesaucer over clouds;
As you can see in Lines 5 and 6, there are now two versions of the saucer image defined. One has a red light on top and the other has blue. The definition that spans Lines 11 and 12 sets up a simple behavior that causes the displayed saucer to alternate between the two images once per second.
Having a periodic behavior implies that time can be examined in ActiveVRML. In fact, time is one of the most important concepts in AVRML because all values are time-varying. Not just numbers, but images, geometries, and other types can all be considered forms of behaviors because they have the capacity to change with time.
Temporal manipulations rely on the implicit time property associated with every behavior. The value of time is of type number, and it represents the behavior's local elapsed time in seconds, starting at 0. This is where it starts to get tricky, so read carefully. A behavior's time is initialized whenever that behavior's definition changes. In the example, saucer is a behavior with a value that alternates between shipred and shipblue, resetting time after each switch.
Always keep in mind that any change to an object's behavior will result in its time restarting at zero.
In this sample, you want the behavior of saucer to change once each second. ActiveVRML provides a way to signal a behavior by triggering an event that is specified in the behavior's definition. Events are a very flexible way to tie together all sorts of reactive behaviors, and several functions are available for manipulating this type. In this instance, the predicate function is used to trigger an event when the specified condition is TRUE, which is when the value of saucer's time is greater than one. This event satisfies the until condition, and a new saucer is created with the definition on the right side of the =>. Here is another example for clarity:
texture = bumpy until (some event) => smooth;
The event does not have to be declared in the same definition as the reacting behavior. Events can have their own defined names, like other data types. These two lines are equivalent to the preceding snippet:
myevent = (some event); texture = bumpy until myevent => smooth;
Now you can see just what is happening in Lines 11 and 12. When the script first begins to run, saucer is initially defined as shipred, and time begins counting from zero. After one second has elapsed, predicate generates an event that changes the definition of saucer to an almost identical definition based on shipblue. The new definition also specifies a reaction to a time-driven event, and after another second elapses, saucer is redefined yet again, this time as itself. What this really means is that you start all over again with the original saucer definition (at time zero, of course). This ability to have self-referential definitions is essential for building recursive functions, as you'll see later on.
The transformation that is defined in Line 15 is also based on time, but in a much more direct way. The value of movement is time-varying, beginning at zero and ever-increasing. Nothing in this script will ever change that behavior, so the value of time is never reset. It is important to note that movement's time is not the same as saucer's time; every object in ActiveVRML maintains its own relative measurement of time, starting at the moment it was instantiated.
The rest of the script is functionally the same as that in the first version. The value of the output definition, model, is still evaluated as the composition of three separate images regardless of whether the images are static or changing. To stress an important point once again, in ActiveVRML all values are potentially time-varying.
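To make the point concrete, here is a small hypothetical fragment (not part of the saucer scripts) in which an ordinary-looking arithmetic definition is itself a behavior:

wobble = 0.01 * sin (time);
drift = translate (wobble, 0);

The value of wobble oscillates continuously, so any transformation or image built from it inherits that motion automatically.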
With some of the basic concepts now covered, you can finish off the script with a few dramatic flourishes. When the flying saucer drifts off the right side of the background image, it will now reappear on the left side. There is also a sinusoidal vertical component to the saucer's wandering flight path. Finally, sound effects are added to top off the realism of this simulation.
Listing 17.4. Saucer3.avr.
1: // ActiveVRML 1.0 ASCII
2: // Saucer3 Sample Script
3:
4: // The first step is to import the raw media for our sample
5: shipred = first (import ("shipred.gif"));
6: shipblue = first (import ("shipblue.gif"));
7: mountain, extents = import ("mountain.gif");
8: clouds = first (import ("clouds.gif"));
9: shipnoise = loop (first (import ("ship.wav")));
10:
11: // The saucer will alternate between two different images
12: saucer = shipred until predicate (time > 1) =>
13:     (shipblue until predicate (time > 1) => saucer);
14:
15: // Identify the components of the image boundary
16: maxx = xComponent (extents);
17: maxy = yComponent (extents);
18:
19: // Make speed a function of the image size
20: velocityx = maxx / 5;
21: velocityy = velocityx;
22:
23: // Modify the x and y positions
24: xpos = (velocityx * time - maxx) until edgeevent => xpos;
25: ypos = sin (time) * velocityy + (maxy / 4);
26:
27: // Define an event to handle moving off the right side of the image
28: edgeevent = predicate (xpos > maxx);
29:
30: // Define a 2D transformation to describe the saucer's position
31: movement = translate (xpos, ypos);
32:
33: // Define a new saucer object that has movement applied to it
34: activesaucer = transformImage (movement, saucer);
35:
36: // Add some effects to our sound
37: soundgain = ypos / maxy + 0.2;
38: soundeffect = gain (soundgain, rate (2, shipnoise));
39:
40: // Define the expression used for display output
41: model = mountain over activesaucer over clouds, soundeffect;
The first thing that's noticeably different is the way the mountain image is being imported (Line 7). Instead of accepting only the image type, you also acquire the image's size information. By default, an untransformed image is centered at coordinates (0, 0), so the position of any corner is sufficient to compute the overall boundary dimensions. This data is returned in a vector2 type, which can be formed from any two numeric values. This information is used later to determine when the saucer is moving off the edge of the background picture.
In Line 9, a sound type is imported and defined so that playback loops continuously. The import function for sound types actually returns values for both the left and right channels, as well as the duration of the longest channel. If the imported sound is monophonic, the same value is given to both channels. This script does not need to process individual sound channels, so the first function is used again to take just the first return value.
In the next several lines, the saucer's behavior is constructed from a few simple components. The definitions in Lines 16 and 17 extract the individual x and y boundary values (as type number) from the original vector2. Then some velocity values are defined that depend on the size of the background image. The horizontal and vertical velocities are maintained separately, with values chosen to produce a 10-second horizontal traversal of the screen.
Line 24 is the actual definition for the horizontal position of the saucer. When time is zero, this value places the saucer at the left side of the screen. After the ship passes over the right side, edgeevent is triggered to change the behavior of xpos. The new definition is recursive, which in this case simply serves to reset time to zero.
Vertical position follows a sine wave, as calculated in Line 25. ActiveVRML has a full range of mathematical functions, such as sin, which you can use to construct very complex behaviors.
Next, edgeevent is defined, which is used to possibly modify the value of the horizontal position (see Line 24). This event is defined as a simple predicate condition that is triggered when the saucer's position falls across the right boundary.
The horizontal and vertical positions are used in Line 31 to declare a transform2 object. This is similar to the previous sample, but in this case both parameters to the translate function are time-varying behaviors. This transform2 is applied to the flashing saucer image, resulting in an image appropriately named activesaucer (in Line 34).
You could have stopped right there and had a pretty good simulation, but it turns out that the audio sample doesn't actually sound much like an alien spaceship. No problem! In Lines 37 and 38, you not only speed up the playback rate (by a factor of 2), but you also tie the playback volume to the saucer's altitude on screen.
The last definition shows how a sound channel is simply added as another parameter to the output expression. If there were two sound channels, they would both be added as parameters.
Now that you've seen some brief examples of how ActiveVRML is structured, the rest of the chapter can go into some detail about the specifics of the language.
With some idea now of how ActiveVRML scripts are actually written and implemented, learning some of the more specific details will be easy. This section will focus on the elements of the language and the different ways those elements can be assembled.
This section is not intended as an exhaustive reference, but as a synopsis of the general functionality found in the ActiveVRML language. The final release of the ActiveVRML control will include a specification containing complete technical details.
An ActiveVRML program is simply a collection of declarations that define objects of various types. These named objects have values that vary with time and in reaction to event-driven interactions with other objects. Declarations are composed of identifiers and expressions that are evaluated with strict attention to object typing. A single declaration must be present that has also been named in the container HTML file; it is the one evaluated to produce any actual output.
There are several fundamental types used in ActiveVRML, and all identifiers and expressions must correctly evaluate to one of these types when the script is compiled. In most cases, the actual type does not have to be specified in the code because AVRML can infer it from the context in which the identifier or expression is used.
Here is how typing is applied to a simple declaration:
bird = import ("bird.gif");
The identifier bird is of type image. The import function, when applied to an image file, returns values of type image * vector2 * number. You could have extracted the additional returned information like this:
(bird, birdextent, birdresolution) = import ("bird.gif");
It will be assumed that birdextent is of type vector2, and that birdresolution is of type number, when these identifiers are used elsewhere in the script. Incorrect usage always generates a type mismatch error from the AVRML control.
The type associated with a function definition will indicate its return value as well as the input parameters. Take a look at this example:
double (x) = x * 2;
result = double (20);
The function double is of type number -> number. This notation indicates that the function accepts a number as a parameter and yields an object of type number.
If the preceding function was a little more general, it could demonstrate the polymorphic properties of types:
product (x, y) = x * y;
The function type is now a * b -> c, where a, b, and c indicate generic type identifiers, so the parameters and return value can be of any types that the * operator supports together, for example:
result = product (20, 2);
scaledvector = product (2, vector2Xy (20, 20));
The first case demonstrates a usage of number * number -> number, and the second case shows an alternate usage: number * vector2 -> vector2.
You can directly specify an identifier's type when it is declared, by following the name with : type as in this definition:
product (x : number, y : number) = x * y;
Sometimes the ActiveVRML control is not able to compile your script when it encounters type ambiguities that it cannot resolve. This is not necessarily an error in your code, but it will force you to explicitly type the offending definition. In general though, AVRML is very good at determining types just from context alone. Whether or not your objects are explicitly typed, as a designer you must always be aware of object types so you can use them correctly throughout the script.
The image type is of primary importance to ActiveVRML scripts and is the ultimate type to which the output expression must evaluate. An image does not represent a static picture but is instead a behavior that can vary over time, for instance:
plainimage = import ("parrot.gif");
opaqueimage = opacity (1 / time, plainimage);
The import function, which you've seen many times now, is a constructor function of the image type. opacity is another image function, of type number * image -> image, which is used to construct a new object that has a specified opacity value between 0.0 and 1.0 (0.0 is completely transparent). In this case, the resulting image will begin to fade away immediately, as its opacity drops towards zero with the passing of local time.
There are two other ways to create image objects: from 3D shapes and from text. A geometry object is used in conjunction with a camera object to produce an image using the function renderedGeometry (type geometry * camera -> image). See the next section for more details about this process. An image can also be produced using the renderedText function (text -> image * point2), which returns both an image and the point2 coordinate of the upper right corner of the resulting text.
Combining images, as with a foreground and background, is extremely simple with the over function (type image * image -> image), which combines any two source images to produce a new third image object. All images are centered at (0, 0) by default, so overlapped images should first be translated into position. If you had two images, each 100*100 pixels in size, and you wanted to place them on the screen side by side, you could use the following code:
parrot = import ("parrot.gif");
newparrot = transformImage (translate (-50, 0), parrot);
raven = import ("raven.jpg");
newraven = transformImage (translate (50, 0), raven);
birds = newparrot over newraven;
The function transformImage is of type transform2 * image -> image, and so it requires a transform2 object. In this case, translate was used to produce an appropriate object, but many other transformation functions are available and are discussed in a following section. If the translation was changed so that the images overlapped slightly, newraven would be obscured by newparrot. On the other hand, if newparrot had been defined with some level of opacity, then the images could actually be blended.
The remaining two image functions are crop and tile, each of type point2 * point2 * image -> image. In both functions, the two point2 parameters define a rectangular patch of the source image by specifying the area's minimum and maximum extents. In the case of crop, the supplied image is cropped to the indicated size. The tile function also crops the original image to the indicated size, but then it also replicates the image across the entire new surface, which has infinite extent.
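As an illustrative sketch (the file name and region size here are hypothetical), both functions take the same parameters but produce very different results:

parrot = first (import ("parrot.gif"));
patch = crop (point2Xy (-50, -50), point2Xy (50, 50), parrot);
wallpaper = tile (point2Xy (-50, -50), point2Xy (50, 50), parrot);

Here patch is just the central 100*100 region of the source image, whereas wallpaper repeats that same region endlessly in every direction.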
The geometry type is what puts the "VRML" in ActiveVRML. Geometries are behaviors, like image types, in the sense that their values are time-varying and reactive. The 3D nature of geometry objects allows them to be manipulated by a rich set of transformations, but it does not specify how the results are to be viewed on a 2D display screen. That is controlled by the image function renderedGeometry (see the Image Type section).
The current version of AVRML has only one way to create visible geometry objects, which is to import the data from VRML 1.0 files. Although the internal representation of this type is composed of vertex and triangle definitions, there is no way to manipulate these individual data points; they can only be modified as a whole. Here is how data can be loaded and displayed:
parrot3d = import ("parrot.wrl");
parrotimage = renderedGeometry (parrot3d, defaultCamera);
ActiveVRML uses a Cartesian right-handed coordinate system to describe the three-dimensional virtual environment. This means that the positive X axis points to the right of the screen, the positive Y axis points up, and the Z axis grows out of the screen towards the viewer. When you are looking along an object's axis of rotation in its positive direction, the object appears to turn clockwise when positive rotation is applied.
Imported geometries are combined into larger aggregates using the union function, of type geometry * geometry -> geometry. The resulting definition can be used to collectively manage a group of objects, each of which is still individually defined.
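A hypothetical sketch of aggregation (the file names are illustrative):

body = import ("body.wrl");
wing = import ("wing.wrl");
aircraft = union (body, wing);
aircraftimage = renderedGeometry (aircraft, defaultCamera);

A transformation applied to aircraft now moves the body and wing together, while each part remains individually defined.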
The most commonly used function of a geometry type is transformGeometry (type transform3 * geometry -> geometry), which applies a given transform3 object to a specified geometry. This function works in a very similar way to the image function transformImage, but here the functionality is extended into a three-dimensional environment.
Also similar to the image type is the geometry function opacity3, which specifies the degree of transparency applied to a visible geometry. It is of type number * geometry -> geometry.
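The two functions combine naturally. In this sketch, the rotate constructor is assumed to build a transform3 from an axis vector and an angle, and yVector3 is assumed to be the 3D analog of yVector2:

spinning = transformGeometry (rotate (yVector3, time), parrot3d);
ghostly = opacity3 (0.5, spinning);
ghostimage = renderedGeometry (ghostly, defaultCamera);

The result is a half-transparent model turning about the vertical axis at one radian per second.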
The material appearance of a rendered geometry object is determined by the combination of light and the shading properties of the object. These properties are controlled with five related functions:
One of the most powerful features of ActiveVRML's 3D rendering is the ability to use texture maps on geometry objects. Texture maps are built with image objects and are applied to a given shape using texture coordinates specified in the original VRML file. AVRML does not calculate any default texture coordinates if none are available, so the texture is simply not drawn. Texture mapping is applied using the texture function, of type image * geometry -> geometry. Here is an example usage:
feathers = import ("feathers.gif");
parrot3d = import ("parrot.wrl");
featheredparrot = texture (feathers, parrot3d);
parrotimage = renderedGeometry (featheredparrot, defaultCamera);
Because image objects are essentially behaviors, a texture is also a behavior, and it is capable of changing over time. This sample shows how a model's texture could be morphed from one image to another:
greenskin = import ("gskin.jpg");
brownskin = import ("bskin.jpg");
lizard3d = import ("lizard.wrl");
skin = opacity (1 / time, greenskin) over brownskin;
lizard = texture (skin, lizard3d);
lizardimage = renderedGeometry (lizard, defaultCamera);
There are two special kinds of geometries that are not visibly rendered: lights and spatially oriented sounds. Because these objects are of type geometry, they have the capability to be manipulated with full three-dimensional flexibility.
Lights come in four varieties, each with its own constructor function:
The ActiveVRML language specification states that lights only illuminate the other geometries that they are aggregated with. This feature is not supported by many rendering engines, including Direct3D. That means that an instantiated light will influence all the objects in the current scene, whether or not they have been joined with the union function.
Light geometries can have customized color and attenuation characteristics. The function lightColor (type color * geometry -> geometry) is used to change the color of a single light, or all the lights in a defined aggregate, from the default color of white. The way that light drops off over distance (if at all) is controlled with the function lightAttenuation, of type number * number * number * geometry -> geometry. The three number parameters, c, l, and q, are the coefficients of the equation 1 / (c + l*d + q*d^2), which yields the intensity with which an object at distance d is illuminated. The default values are (1, 0, 0), which results in no attenuation at all.
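Here is a hypothetical sketch (pointLight is assumed to be one of the light constructor functions):

lamp = lightAttenuation (1, 0.5, 0, lightColor (red, pointLight));
litscene = union (lamp, parrot3d);

With coefficients (1, 0.5, 0), an object two units from the light is illuminated at 1 / (1 + 0.5 * 2) = 0.5 of full intensity.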
Spatially oriented sound is a great feature that is implemented with a special geometry type created with the soundSource function (type sound -> geometry). The location of the sound can then be directly modified by transformations, or it can be aggregated with some other geometry object.
Image objects are not the only types that you can manipulate with two-dimensional transformations. Points and vectors can also be defined, each of which has two components for describing X and Y coordinates. Even though these types have no visible appearance, they are useful tools for building complex behaviors.
The most common way to construct a 2D point is with the function point2Xy (type number * number -> point2), or a default point at the origin can be created with origin2 (type point2). Points may be added and subtracted, the separation between two points can be retrieved with distance (type point2 * point2 -> number), and a point can be transformed with the transformPoint2 function (type transform2 * point2 -> point2). The components of a point are extracted with xComponent (type point2 -> number) and yComponent (type point2 -> number).
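A brief sketch of these point operations (the values are arbitrary):

corner = point2Xy (3, 4);
span = distance (origin2, corner);
moved = transformPoint2 (translate (1, 1), corner);
height = yComponent (moved);

span evaluates to 5 (a 3-4-5 triangle), and height to 5 as well, because moved is the point (4, 5).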
Vectors also have an X and Y component, and they are constructed with the function vector2Xy (type number * number -> vector2). Many behaviors simply need vectors along the X or Y axis, so two special constructors are provided for this purpose, xVector2 and yVector2, each of type vector2. Vector functionality includes length (type vector2 -> number), normal (type vector2 -> vector2), and dot (vector2 * vector2 -> number), as well as the ability to add, subtract, and scale vector values. Transformations are applied with transformVector2, of type transform2 * vector2 -> vector2.
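A companion sketch for vectors:

v = vector2Xy (3, 4);
vlen = length (v);
unit = normal (v);
along = dot (v, xVector2);

vlen is 5, unit is the unit-length vector (0.6, 0.8), and along is 3, the projection of v onto the X axis.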
These 2D types have the common ability to be subjected to planar transformations, using functions that require the transform2 type object. Common transformations include translation, rotation, and shearing, or any combination of these used together. For example, to scale an image using the values of a supplied vector, and then rotate it 90 degrees clockwise, this code would suffice:
scaledbird = transformImage (scale (inputvector), parrotimage);
rotatedbird = transformImage (rotate (-1.570796), scaledbird);
The transformations could be combined into one declaration using the o function, which composes multiple transformations. The order in which the transformations are applied is important. This fragment has identical functionality to the previous one:
transformedbird = transformImage (rotate (-1.570796) o scale (inputvector), parrotimage);
There are 3D points, vectors, and transformations that correspond very closely with their 2D counterparts, both in name and in usage. For instance, the vector constructor vector3Xyz (type number * number * number -> vector3) simply has an additional parameter to describe the Z direction. Like 2D points and vectors, these are not visible but are used instead to manipulate geometry objects. The transform3 type is likewise very similar to a transform2 type, but extends translations, rotations, and scaling into 3 dimensions. The comprehensive sample given later in this chapter will focus heavily on these 3D types, so they will not be discussed any further here.
The camera type is used in conjunction with the renderedGeometry function to render a geometry object onto a 2D surface. This type of object is subject to three-dimensional transformations that determine the characteristics of the 3D-to-2D conversion.
You can think of the camera object as a projection plane through which an observer views a scene from a specific projection point. In a default camera, created with the defaultCamera function (type camera), the viewer's location is at (0, 0, 1) and the viewing orientation is along the -Z axis through the projection plane at z=0.
Scaling transformations applied to the X and Y axis of the projection plane can squeeze or lengthen a resulting image, and scaling the distance between the projection point and the plane will give control of zoom. Transformations are applied with the transformCamera function, of type transform3 * camera -> camera.
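As a sketch (it assumes that scale accepts three numbers to build a transform3, by analogy with its 2D form):

widecam = transformCamera (scale (2, 1, 1), defaultCamera);
wideimage = renderedGeometry (parrot3d, widecam);

Scaling the projection plane along only one axis stretches or squeezes the rendered image in that direction.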
Both geometry and text types can have color properties, as defined by objects of the color type. You can create colors in standard varieties with constructor functions like red (type color) and magenta (type color). Customized colors can be made with colorRgb and colorHsl, both of which are type number * number * number -> color. The parameters refer to red-green-blue or hue-saturation-lightness, respectively. Individual components of an existing color can be extracted with functions like greenComponent (type color -> number) or saturationComponent (type color -> number).
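For example:

gold = colorRgb (1, 0.85, 0);
howgreen = greenComponent (gold);
howvivid = saturationComponent (gold);

howgreen evaluates to 0.85; the conversion between the RGB and HSL views of the same color is handled automatically.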
In ActiveVRML, you can associate sound objects with geometry to simulate spatial location, or you may simply define them in terms of the imported data. Importation and manipulation of sound objects has already been discussed in previous sections, but the function renderedSound (type geometry * microphone -> sound) must still be described.
The function renderedSound works in conjunction with the geometry function soundSource to define both the sound-producing and the sound-listening locations. The function result is a sound object that accurately represents this relationship.
The microphone object is created with the defaultMicrophone function (type microphone) at the origin (0, 0, 0), and can be transformed in three-dimensional space like a geometry object using transformMicrophone (type transform3 * microphone -> microphone).
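Putting the sound pieces together (a sketch; it assumes translate accepts three numbers to build a transform3, and the file name is hypothetical):

engine = loop (first (import ("engine.wav")));
enginegeo = soundSource (engine);
passingby = transformGeometry (translate (time - 5, 0, 0), enginegeo);
heard = renderedSound (passingby, defaultMicrophone);

The sound source drifts along the X axis past the stationary microphone, and heard reflects the changing spatial relationship.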
One of the more powerful features that makes ActiveVRML useful for designing Web sites is the ability to draw and manipulate text using a concise but effective set of functionality. You can construct string objects from a supplied string or from a list of characters. They are then used to create text objects that control the actual rendering to an image, based on color and font-family information.
This example demonstrates the majority of the text handling functions:
counttext = numberToString (count, 0);
redtext = textColor (red, simpleText ("Count: ") & counttext);
sanstext = textFamily (sansSerifProportional, redtext);
textimage, textextent = renderedText (bold (sanstext));
The function numberToString (type number * number -> string) converts the supplied number into a string, using the precision argument to control how many digits appear after the decimal. The text object is created by the simpleText function (type string -> text) and has color applied with textColor (type color * text -> text). Although a specific font cannot yet be chosen in ActiveVRML, you can control the basic appearance by choosing one of three distinct font families (serifProportional, sansSerifProportional, or monospaced) as an argument for the textFamily function (type fontFamily * text -> text).
There is no sense of "program flow" when looking at an ActiveVRML script, as there is in most other languages. Instead, each object carries with it an individual sense of time and behavior. Although the value of an object may change over time, or in response to a predefined event, the object's definition never changes during its lifespan. Once all the relationships between defined objects have been set up, everything is let loose to run on its own. An ActiveVRML programmer never needs to know details like how often an expression is re-evaluated, or which objects need updating at some point in time; those complexities are left to the AVRML engine.
Objects relate to each other in several flexible ways within the ActiveVRML language. Each object has an identifier and a value, which are declared once in the script. The value is an expression that can refer to both other objects and also the object being defined, recursively.
Functions can also be declared in ActiveVRML to define more complex relationships between objects than simple expressions could handle. They can have parameters for accepting values, and they can return one or more values as results. Here is one example:
halfdistance (point1, point2) = distance (point1, point2) / 2;
This example shows how multiple values may be returned:
vectorlen (vec) = length (vec), length (vec) / 2;
(fulllength, halflength) = vectorlen (vector2Xy (20, 30));
Expressions of the form if-then-else are allowed, which evaluate to one of two branches depending on the Boolean value of the if condition. Here is an example:
getbirdimage (val) = if val = 1 then parrotimage else ravenimage;
Depending on the value of the supplied parameter, the preceding snippet of code returns one of two image values. Both branches must always be present when using this expression form, so that the expression yields a value regardless of the outcome of the Boolean test.
The identifier name associated with an object has global scope by default, meaning that any expression within the script may refer to the object if it has been defined. To reuse a name, a mechanism must be introduced to provide local scoping. In ActiveVRML, that mechanism is an expression form using a let-in syntax.
This functionality allows local declarations to be made within the body of the let expression. These declarations are available to the expression found after the in keyword, but nowhere else in the script. This is an example of how the expression is used:
getposition (startx, starty) =
    let
        xpos = startx + time * 3;
        ypos = starty + time / 2;
    in
        point2Xy (xpos, ypos);
currentlocation = getposition (15, 20);
The two declarations that we wanted to make local are xpos and ypos. The point2 object named currentlocation is defined in terms of the function getposition, but the actual methods used by getposition are better off hidden from the rest of the script. Some alternate version of this function can now be declared that also uses xpos or ypos, and there would be no conflict.
In this case, some parameters are supplied to initialize the function's behavior. At the moment that currentlocation was first evaluated, the object's value would be (15, 20), the same as the supplied values. But after 2 seconds, the value of currentlocation would be (21, 21). Notice that the named parameters startx and starty are available to the internal declarations, and they would also be available to the in expression.
The concept of time has been covered in a variety of samples already, but there are some additional points that need further explanations. Consider this code fragment:
getimage =
    let
        current = if time > 5 then parrotimage else ravenimage;
    in
        current until predicate (time > 10) => getimage;
The initial returned value of getimage will be the value of the expression current, which will evaluate to ravenimage at first, then after 5 seconds, it will become parrotimage. But after 10 seconds, the predicate event will cause a new expression to be evaluated. In this case the new expression is recursive, and by evaluating it, a new getimage object will be created with a local time of zero. The end result of this declaration is a behavior whose value toggles between two values, keeping each for 5 seconds.
An interesting function called timeTransform can modify the way local time is calculated. It is of type a * number -> a, meaning that it can accept any type as a parameter, together with a number that must itself be time-varying. A copy of the named behavior is returned, with an internal clock tied to the supplied number. For instance, consider what happens when the getimage declaration from the preceding sample is used like this:
quickimages = timeTransform (getimage, time * 4);
Now each resulting image value would only be displayed for 1.25 seconds at a time, because time itself would pass at a quadrupled rate. There are two rules for how time can be manipulated with this function: the resulting passage of time must always be positive, and it must always be increasing.
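A simplified Python model shows the effect, treating the rate as a constant multiplier on the clock (the real timeTransform accepts any positive, increasing time-varying number; the names here are hypothetical):

```python
def time_transform(behavior, rate):
    # return a copy of the behavior whose internal clock runs at
    # `rate` times the speed of the caller's clock
    return lambda t: behavior(rate * t)

def getimage(t):
    # the toggling image behavior from the preceding sample
    return "parrotimage" if (t % 10) > 5 else "ravenimage"

quickimages = time_transform(getimage, 4)
print(quickimages(1))    # local time 4  -> ravenimage
print(quickimages(1.5))  # local time 6  -> parrotimage
```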
A key ingredient of ActiveVRML's flexibility is the ability to have behaviors that react to external events. The examples so far have demonstrated the until function together with a predicate-generated event, as in:
bird = parrot until predicate (time > 5) => raven
The identifier bird is defined by the expression parrot until the predicate event is triggered, then it is defined by the expression on the right side of =>, which is raven.
Events can also be generated by user input from the keyboard or the mouse. For example, the system event leftButtonPress is triggered by pressing the left mouse button. Here is how it could be used:
bird = parrot until leftButtonPress => raven | predicate (time > 5) => bird;
This sample also shows how to specify an alternate event, using the | operator. Whichever event occurs first is the one whose handler expression is evaluated. The value of bird will be parrot until the user presses the left mouse button, at which point it becomes raven. If 5 seconds pass without a button press, the predicate event fires first and bird is recursively redefined as parrot, with its local clock reset to zero.
It is often desirable to know what the value of a behavior was at the time an event is triggered. This is accomplished through use of the snapshot function, which will sample the value of an expression when the associated event fires. For example, to define a value that you only want updated every 10 seconds, you would write the following:
newsize = time * 1.1234;
size (oldsize) = oldsize until
    snapshot (newsize, predicate (time > 10)) => size;
Although newsize is increasing continuously, the value of the size function only reflects the changes every time the predicate event is triggered. The updated value is used as an argument to the recursive redefinition of size, while time is reset to zero.
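The sample-and-hold effect can be sketched in Python, with the predicate event firing every 10 seconds of global time (a rough model of the snapshot semantics; the names are hypothetical):

```python
import math

def newsize(t):
    # the continuously increasing behavior
    return t * 1.1234

def size(t, period=10):
    # hold the value newsize had when the most recent event fired
    last_event = period * math.floor(t / period)
    return newsize(last_event)

print(size(5))   # 0.0 -- no event has fired yet
print(size(25))  # the value newsize had at t = 20
```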
You can define events that are triggered by an external program, through the COM interface. Typically, this is an event that is tied to a script running in the parent HTML document. This kind of event can be declared with the importUnitEvent function, which requires a unique numeric identifier, for instance:
scriptevent = importUnitEvent (100);
bird = parrot until scriptevent => raven;
All external events are presented to ActiveVRML through a single interface, so the only way to identify individual events is with their associated numeric identifying code. For proper event handling to occur, these numbers must match in both the AVRML script and in the external script.
External events may also carry numeric or string data; such events are declared with importNumericEvent or importStringEvent. Here is an example of how such data is acquired:
birdsize = importNumericEvent (100);
birdweight (size) = size * 10;
newweight (oldsize) = birdweight (oldsize) until
    birdsize => birdweight;
This may seem a bit confusing at first, because it is not clear how the imported data is being applied. The key point is that the data is implicitly used as an argument to birdweight after the event has been triggered. Type checking is performed here, as everywhere else, to make sure that the handler function does indeed accept an argument of the correct type to match the event.
A more compact way to represent the previous behavior is to use a special form of anonymous function declaration. This method provides a way of defining a function within the body of the event handler, without having to declare it separately. Here is a rewrite of the last sample:
birdsize = importNumericEvent (100);
newweight (oldsize) = oldsize * 10 until
    birdsize => function .x * 10;
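In Python terms, the anonymous form corresponds to replacing a named handler with a lambda (a loose analogy; the event payload and names are hypothetical):

```python
def birdweight(size):
    # the named handler from the longer form of the sample
    return size * 10

# inline equivalent of AVRML's anonymous function
anonymous = lambda x: x * 10

payload = 7  # hypothetical numeric data carried by the birdsize event
print(birdweight(payload), anonymous(payload))  # 70 70
```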
ActiveVRML has a rich set of system events that are triggered by user input through the mouse and keyboard. Several system objects are also available that continuously track the state of keys and buttons.
Mouse button events are: leftButtonDown, rightButtonDown, leftButtonUp, and rightButtonUp. The Boolean state of these buttons can also be tracked with leftButtonState and rightButtonState, which evaluate to TRUE if the button is currently being pressed. The position of the mouse is likewise tracked with mousePosition, which has a type of point2.
Similar events are tied to keypresses, and they react when specific keys change state. They are: keyDown, keyUp, and keyState. An argument is supplied to each of these events when it is declared, indicating the Virtual Key Code of the key to be watched. The CyberBall sample later in this chapter, for example, watches the arrow keys through the codes vkLeft, vkRight, vkUp, and vkDown.
An additional event, charEvent, does not take a Virtual Key Code as an argument; instead, it reacts to a supplied ASCII character.
Now it's time to present a more complete sample that will really show off the power of the language. This script will demonstrate the implementation of a simple, yet engaging, game called CyberBall.
Microsoft includes a few eye-catching sample scripts with the ActiveVRML control, and they are well worth looking at to understand the true variety of designs that are possible. However, these scripts focus on manipulating two-dimensional images and sound, and they don't offer an in-depth look at 3D virtual environments. Another topic given little attention is AVRML's ability to interface with external scripts and HTML. CyberBall is written to emphasize these powerful capabilities and to show developers where some pitfalls still remain.
The concept behind CyberBall is hardly new. A player controls a "CyberBall" object as it slides around a rectangular arena. The aim is to use the CyberBall to knock a puck into a goal at one end of the arena, while avoiding the goal at the opposite end. The Arena, the CyberBall, and the puck are all imported VRML objects. An HTML interface provides game controls, score display, and control of dynamic camera positioning. Figure 17.2 shows how it all looks on screen.
Figure 17.2. CyberBall in action.
A look at this source code will reveal many techniques that should be familiar to you by now, but after the listing (see Listing 17.5) some of the more interesting parts will be examined in greater detail.
Listing 17.5. Cyberball.avr.
1: // ActiveVRML 1.0 ASCII
2: // CyberBall Sample Script
3:
4: // First we create the arena, starting with the four corners
5: corner = first (import ("cyl.wrl"));
6: cornercolor = colorRgb (0.5, 0.5, 0.5);
7: post = diffuseColor (cornercolor,
8:     transformGeometry (scale (0.5, 0.5, 0.5), corner));
9: corners = transformGeometry (translate (-15, 0.20, -20), post) union
10:     transformGeometry (translate (-15, 0.20, 0), post) union
11:     transformGeometry (translate (-15, 0.20, 20), post) union
12:     transformGeometry (translate (15, 0.20, -20), post) union
13:     transformGeometry (translate (15, 0.20, 0), post) union
14:     transformGeometry (translate (15, 0.20, 20), post);
15:
16: // Now the goals
17: goal1 = diffuseColor (colorRgb (0.5, 0.5, 1.0),
18:     transformGeometry (scale (0.3, 0.5, 0.3), corner));
19: goal2 = diffuseColor (colorRgb (1.0, 0.5, 0.5),
20:     transformGeometry (scale (0.3, 0.5, 0.3), corner));
21: goals = transformGeometry (translate (-3, 0.20, -20), goal1) union
22:     transformGeometry (translate (3.0, 0.20, -20.0), goal1) union
23:     transformGeometry (translate (-3.0, 0.20, 20.0), goal2) union
24:     transformGeometry (translate (3.0, 0.20, 20.0), goal2);
25:
26: // Set up some side rails and end rails
27: rail = first (import ("stick.wrl"));
28: railcolor = colorRgb (0.5, 3.0, 0.5);
29: siderail = diffuseColor (railcolor, transformGeometry (
30:     rotate (yVector3, 1.57079) o scale (1.0, 0.3, 0.2), rail));
31: sides = transformGeometry (translate (-15, 0, -10), siderail) union
32:     transformGeometry (translate (15, 0, -10), siderail) union
33:     transformGeometry (translate (-15, 0, 10), siderail) union
34:     transformGeometry (translate (15, 0, 10), siderail);
35: endrail = diffuseColor (railcolor,
36:     transformGeometry (scale (0.6, 0.3, 0.2), rail));
37: ends = transformGeometry (translate (-9.0, 0, -20), endrail) union
38:     transformGeometry (translate (9.0, 0, -20.0), endrail) union
39:     transformGeometry (translate (-9.0, 0, 20.0), endrail) union
40:     transformGeometry (translate (9.0, 0, 20.0), endrail);
41: rails = sides union ends;
42:
43: // Create some lights
44: lights = lightColor (colorRgb (0.5, 0.5, 0.5),
45:     transformGeometry (rotate (xVector3, -0.1), directionalLight)
46:     union
47:     transformGeometry (rotate (xVector3, 3.24), directionalLight));
48:
49: // Put it all together and define a background
50: arena = corners union goals union rails union lights;
51: backgnd = first (import ("backgnd.gif"));
52:
53: // Build a puck
54: puck, pmin, pmax = import ("puck.wrl");
55: puckradius = xComponent (pmax) * 0.05;
56: correctedpuck = transformGeometry (scale (0.05, 0.05, 0.05), puck);
57: coloredpuck = diffuseColor (red, correctedpuck);
58:
59: // Build a player
60: socket, smin, smax = import ("socket.wrl");
61: playerradius = xComponent (smax) * 0.1;
62: correctedsocket = transformGeometry (rotate (xVector3, 1.57079) o
63:     scale (0.1, 0.1, 0.1), socket);
64: coloredsocket = diffuseColor (blue, correctedsocket);
65: ball = first (import ("ball.wrl"));
66: correctedball = transformGeometry (translate (0, 0.6, 0) o
67:     scale (1.3, 1.3, 1.3), ball);
68: spinningball = transformGeometry (rotate (yVector3, time * 3),
69:     correctedball);
70: coloredball = diffuseColor (red, spinningball);
71: player = coloredsocket union coloredball;
72:
73: // Load some sounds
74: winner = first (import ("winner.wav"));
75: loser = first (import ("loser.wav"));
76: wallhit = first (import ("wallhit.wav"));
77: puckhit = first (import ("puckhit.wav"));
78:
79: // Helper function to restrict value to a specified range
80: clamp (val, min, max) = if (val < min) then min
81:     else if (val > max) then max
82:     else val;
83:
84: // User input events
85: keyleft = keyState (vkLeft);
86: keyright = keyState (vkRight);
87: keyforward = keyState (vkUp);
88: keyback = keyState (vkDown);
89:
90: // Handle player motion
91: playermover (pos0 : vector3, vel0 : vector3) =
92:   let
93:     rebound (pos : vector3, vel : vector3, snd) =
94:       let
95:         // Dampen function slows player down over time
96:         dampen (oldvel : vector3) =
97:             vector3Xyz (xComponent (oldvel) * 0.5 / time,
98:                 0, zComponent (oldvel) * 0.5 / time);
99:
100:        // Calculate force applied by user control
101:        accelstrength = 5;
102:        forwardaccel =
103:            if keyforward then
104:                vector3Xyz (0, 0, -accelstrength)
105:            else if keyback
106:                then vector3Xyz (0, 0, accelstrength)
107:            else zeroVector3;
108:        sideaccel =
109:            if keyleft
110:                then vector3Xyz (-accelstrength, 0, 0)
111:            else if keyright
112:                then vector3Xyz (accelstrength, 0, 0)
113:            else zeroVector3;
114:        pushvel = integral (forwardaccel + sideaccel);
115:
116:        // Calculate new unclamped velocity
117:        newvel = dampen (vel) + pushvel;
118:        magnitude = length (newvel);
119:
120:        // Clamp velocity
121:        newmagnitude = clamp (magnitude, -4, 4);
122:        velocity = normal (newvel) * newmagnitude;
123:
124:        // Calculate new position
125:        newpos = pos + integral (velocity);
126:        xpos = (xComponent (newpos));
127:        zpos = (zComponent (newpos));
128:
129:        // Handle side wall collisions
130:        sideextent = 15 - playerradius;
131:        sidecollide = predicate ((xpos < -sideextent) or
132:            (xpos > sideextent));
133:        sidenewvel = vector3Xyz (xComponent (velocity) * -1,
134:            yComponent (velocity), zComponent (velocity));
135:        sidenewpos = vector3Xyz (clamp (xComponent (newpos),
136:            -sideextent, sideextent), 0, zComponent (newpos));
137:
138:        // Handle end wall collisions
139:        endextent = 20 - playerradius;
140:        endcollide = predicate ((zpos < -endextent) or
141:            (zpos > endextent));
142:        endnewvel = vector3Xyz (xComponent (velocity),
143:            yComponent (velocity), zComponent (velocity) * -1);
144:        endnewpos = vector3Xyz (xComponent (newpos), 0,
145:            clamp (zComponent (newpos), -endextent, endextent));
146:
147:      in
148:        (newpos, velocity, snd) until
149:            snapshot ((sidenewpos, sidenewvel, wallhit),
150:                sidecollide) => rebound |
151:            snapshot ((endnewpos, endnewvel, wallhit),
152:                endcollide) => rebound;
153:  in
154:    rebound (pos0, vel0, silence);
155:
156:
157: // Handle puck motion
158: puckmover (pos0 : vector3, vel0, ppos0, pvel0, score0) =
159:   let
160:     rebound (pos : vector3, vel, ppos1, pvel1, snd, oldscore) =
161:       let
162:         // Dampen function slows puck down over time
163:         dampen (oldvel : vector3) =
164:             vector3Xyz (xComponent (oldvel) * 0.5 / time,
165:                 0, zComponent (oldvel) * 0.5 / time);
166:         velocity = dampen (vel);
167:
168:         // Calculate new position
169:         newpos = pos + integral (velocity);
170:         xpos = (xComponent (newpos));
171:         zpos = (zComponent (newpos));
172:
173:         // Handle side wall collisions
174:         sideextent = 15 - puckradius;
175:         sidecollide = predicate ((xpos < -sideextent) or
176:             (xpos > sideextent));
177:         sidenewvel = vector3Xyz (xComponent (velocity) * -1,
178:             0, zComponent (velocity));
179:         sidenewpos = vector3Xyz (clamp (xComponent (newpos),
180:             -sideextent, sideextent), 0, zComponent (newpos));
181:
182:         // Handle end wall collisions
183:         endextent = 20 - puckradius;
184:         endcollide = predicate (((zpos < -endextent) or
185:             (zpos > endextent)) and ((xpos < -3) or (xpos > 3)));
186:         endnewvel = vector3Xyz (xComponent (velocity),
187:             0, zComponent (velocity) * -1);
188:         endnewpos = vector3Xyz (xComponent (newpos), 0,
189:             clamp (zComponent (newpos), -endextent, endextent));
190:
191:         // Goal-keeping functions
192:         EVENTSCORE = 200;
193:         goalnewvel = zeroVector3;
194:         goalnewpos = zeroVector3;
195:         goodscore = oldscore + 1;
196:         badscore = oldscore - 1;
197:
198:         // Handle good goal collisions
199:         goal1event = predicate ((zpos < -endextent) and
200:             (xpos > -3) and (xpos < 3));
201:         collide1data = snapshot (goodscore, goal1event);
202:         goal1collide = exportEvent (collide1data, EVENTSCORE);
203:
204:         // Handle bad goal collisions
205:         goal2event = predicate ((zpos > endextent) and
206:             (xpos > -3) and (xpos < 3));
207:         collide2data = snapshot (badscore, goal2event);
208:         goal2collide = exportEvent (collide2data, EVENTSCORE);
209:
210:         // Determine player's position, velocity, and sound
211:         (ppos, pvel, psound) = playermover (ppos1, pvel1);
212:         mixsnd = snd mix psound;
213:
214:         // Handle player collisions
215:         relativevector = newpos - ppos;
216:         playercollide = predicate (length (relativevector) <
217:             (playerradius + puckradius));
218:         oldenergy = length (velocity);
219:
220:         // Clamp velocity
221:         newvel = relativevector * oldenergy + velocity + pvel;
222:         magnitude = length (newvel);
223:         newmagnitude = clamp (magnitude, -4, 4);
224:         phitnewvel = normal (newvel) * newmagnitude;
225:         phitnewpos = newpos + normal (phitnewvel) *
226:             (playerradius + puckradius);
227:
228:       in
229:         (newpos, velocity, ppos, pvel, mixsnd, oldscore) until
230:             snapshot ((sidenewpos, sidenewvel, ppos, pvel, wallhit,
231:                 oldscore), sidecollide) => rebound |
232:             snapshot ((endnewpos, endnewvel, ppos, pvel, wallhit,
233:                 oldscore), endcollide) => rebound |
234:             snapshot ((goalnewpos, goalnewvel, ppos, pvel, winner,
235:                 goodscore), goal1collide) => rebound |
236:             snapshot ((goalnewpos, goalnewvel, ppos, pvel, loser,
237:                 badscore), goal2collide) => rebound |
238:             snapshot ((phitnewpos, phitnewvel, ppos, pvel, puckhit,
239:                 oldscore), playercollide) => rebound;
240:   in
241:     rebound (pos0, vel0, ppos0, pvel0, silence, score0);
242:
243: (puckpos, puckvelocity, playerpos, playervelocity, sounds, score) =
244:     puckmover (vector3Xyz (0, 0, -5), vector3Xyz (-2, 0, 0),
245:         vector3Xyz (-10, 0, -6), vector3Xyz (-2, 0, 0), 0);
246:
247: // Apply all positional changes to the player and the puck
248: puckmotion = translate (puckpos) o rotate (yVector3, time * 2.9);
249: activepuck = transformGeometry (puckmotion, coloredpuck);
250: playermotion = translate (playerpos) o rotate (yVector3, time);
251: activeplayer = transformGeometry (playermotion, player);
252:
253: // HTML User Input Events
254: EVENTSTART = 100;
255: EVENTSTATIC = 101;
256: EVENTORBITAL = 102;
257: EVENTTRACKING = 103;
258: extstartevent = importUnitEvent (EVENTSTART);
259: extstaticevent = importUnitEvent (EVENTSTATIC);
260: extorbitalevent = importUnitEvent (EVENTORBITAL);
261: exttrackingevent = importUnitEvent (EVENTTRACKING);
262:
263: // Define the linear tracking and the orbital cameras
264: staticxform = rotate (xVector3, -0.25) o scale (1, 1, 0.3) o
265:     translate (0, 0.5, 200 + zComponent (playerpos));
266: orbitxform = rotate (yVector3, time / 4) o
267:     rotate (xVector3, -0.25) o scale (1, 1, 0.3) o
268:     translate (0, 0.5, 200);
269:
270: // Determine current camera transformation
271: cameraxform (xform) = xform until
272:     extstaticevent => cameraxform (staticxform) |
273:     extorbitalevent => cameraxform (orbitxform);
274: currentxform = cameraxform (staticxform);
275: camera = transformCamera (currentxform, defaultCamera);
276:
277: // Use camera transformation to position headlight
278: headlight = transformGeometry (currentxform,
279:     lightColor (colorRgb (0.5, 0.5, 0.5), directionalLight));
280:
281: // Compute cropped area
282: topright = point2Xy (xComponent (viewerUpperRight),
283:     yComponent (viewerUpperRight));
284: bottomleft = point2Xy (-xComponent (viewerUpperRight),
285:     -yComponent (viewerUpperRight));
286:
287: // Output everything
288: totalsounds = sounds;
289: totalobjects = arena union activepuck union activeplayer;
290: totalimage = crop (bottomleft, topright,
291:     renderedImage (totalobjects union headlight, camera));
292: model = (totalimage over backgnd, totalsounds, totalsounds)
293:     until extstartevent => model;
The first thing done in the script is to create the various elements of the arena, the cyberball, and the puck. All of these pieces use imported VRML objects to form their basic geometries. Then the behaviors of moving objects are defined, together with the sounds they produce. The final required parts define the camera behaviors, light placement, and user input.
Lines 1-50 focus on building the arena. The required components are: six cylindrical corner posts, four cylindrical goal posts, and eight rectangular rails (four side rails and four end rails). The basic process is to import the raw geometry data, apply a material with diffuseColor, use transformGeometry to spatially locate individual elements, and join it all together with the union function. Stationary lights are also created and placed. The final result, in Line 50, is a single object named arena that is the composition of all the previous diverse geometries.
The next step is to build the puck and a cyberball player. When the geometries for these objects are imported, you will notice (in Lines 54 and 60) that extents are also acquired, which define the enclosing volumes of each shape. Later on, collisions will be detected and this information will be necessary to determine when shape boundaries intersect. Transformations are applied to the puck and player to modify their sizes and orientations, but not yet to indicate location. The puck will end up looking like a red aspirin, and the cyberball itself is composed of a large continuously spinning sphere enclosed by a blue stationary torus.
Four keyboard state objects are defined in Lines 84-88. These objects are tied to the arrow keys and are used to control the cyberball's movement behavior.
The behavior of the player is handled by the function playermover, which accepts an initial position and velocity and returns the current position, velocity, and sound. This function is somewhat long (Lines 91-154) and complicated, so it will be described in detail.
At first it may seem confusing to see the multiple use of the let operator, which is used to limit the scope of names. The idea is that there is another, embedded, function called rebound that actually does the work. What does this buy you? When the player collides with an arena wall, several things happen: a sound is played, the velocity vector is substantially changed, and the position is constrained to keep the player within the "physical" bounds of the arena. These actions are all similar in the sense that they are one-shot behaviors; at all other times, the player's behavior is strictly time-varying. After such a collision, the one-shot behaviors are evaluated, and the results are used to rebuild the entire rebound function with a recursive redefinition, resetting time along the way.
It is necessary to reset time, because the player's position is calculated as the integral of the current velocity, using the integral function (Line 125). The discontinuous change in velocity after a collision would result in an invalid position, so calculations are started fresh each time.
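A crude Euler sum in Python illustrates the point: the integral accumulates from zero on the local clock, so each restart begins from the position snapshotted at the collision (a sketch; the step size and names are assumptions):

```python
def integral(velocity, t_local, dt=0.001):
    # Euler approximation of AVRML's integral, running on local time
    # that restarts at zero after each collision
    steps = round(t_local / dt)
    return sum(velocity(i * dt) for i in range(steps)) * dt

# position relative to the position snapshotted at the collision
pos0 = 5.0
displacement = integral(lambda t: 2.0, 1.0)  # constant 2 units/s for 1 s
print(pos0 + displacement)  # approximately 7.0
```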
The keys being watched for user input are used to apply acceleration forces to the player's cyberball. This acceleration is constant and results in velocity changes determined by another use of the integral function (Lines 101-114). These forces are easy to implement because they are only applied in fixed directions along the X and Z axes. Velocity changes are also imposed by a damping function that simulates friction, but this is slightly more complicated to program (see Lines 96-98) because the negative acceleration is always directed along the cyberball's current velocity vector. The final factor in computing the current velocity is to limit the player's speed with the clamp function, which is declared on Line 80 and used on Line 121.
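A one-dimensional Python sketch of the velocity pipeline (damping, control push, speed clamp) may make the data flow clearer; the constants mirror the listing, but the per-call structure is an assumption:

```python
def clamp(val, lo, hi):
    # same role as the clamp helper declared on Line 80
    return lo if val < lo else hi if val > hi else val

def new_velocity(old_vel, accel, t, max_speed=4.0):
    # damping scales the old velocity down as local time grows
    # (compare dampen() on Lines 96-98), while the control force
    # integrates into a push velocity of accel * t
    dampened = old_vel * 0.5 / t
    return clamp(dampened + accel * t, -max_speed, max_speed)

print(new_velocity(2.0, 5.0, 1.0))  # 4.0 -- the speed limit kicks in
print(new_velocity(2.0, 0.0, 2.0))  # 0.5 -- coasting, friction only
```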
The velocity is used to determine where the next position will be, assuming that nothing gets in the way (like a wall). Two events are defined that will trigger when the player crosses the side or end boundaries. New position and velocity objects are defined that assume a new collision has just taken place, but it is important to realize that these definitions are only referred to when the actual event occurs.
These collision events, sidecollide and endcollide, are watched as part of the behavior described in Lines 148-152. When, for instance, sidecollide is triggered, a snapshot is taken of the behaviors sidenewpos and sidenewvel. These values are then used, together with the sound wallhit, as arguments for the new rebound function.
The behavior of the puck is declared in puckmover (Lines 158-241). This behavior uses the same general approach as playermover, but it has to react to several additional events. Not only must the puck rebound from wall collisions, but it must also detect and respond to collisions with the cyberball, and watch for goals to be scored. What it does not have to do is respond to user input, so the only way to get it moving is to bump it.
When a goal is scored, the puck is placed back at the center of the arena. Depending on which goal the puck was pushed through, the score is incremented or decremented and used as data for triggering an external event using the exportEvent function (Lines 202 and 208). This data becomes associated with a specific event by use of the snapshot function, and it is used by an external script to keep track of the current score.
In Line 211, playermover is actually invoked, and the player's state is determined. Any sounds that the player may be generating are now mixed with the puck's own sounds at Line 212, using the mix function. The location returned by playermover is used to check for any possible collision between the cyberball and the puck.
If a player collision is detected, the velocity of the puck will be modified using the value of the relative vector between the two objects as well as the velocity vector of the cyberball. This produces a more or less realistic rebounding effect.
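The response on Lines 215-226 of the listing can be sketched in the arena's X-Z plane with Python tuples (a hedged 2D model; the function names are hypothetical):

```python
import math

def rebound_velocity(puck_pos, player_pos, puck_vel, player_vel,
                     max_speed=4.0):
    # push the puck along the line between the two centers, scaled by
    # its current speed, then add both existing velocities
    rel = (puck_pos[0] - player_pos[0], puck_pos[1] - player_pos[1])
    energy = math.hypot(puck_vel[0], puck_vel[1])
    new = (rel[0] * energy + puck_vel[0] + player_vel[0],
           rel[1] * energy + puck_vel[1] + player_vel[1])
    # clamp the magnitude, as the script does with clamp and normal
    mag = math.hypot(new[0], new[1])
    if mag <= max_speed:
        return new
    return (new[0] * max_speed / mag, new[1] * max_speed / mag)

# a stationary puck struck by a player moving along +X
print(rebound_velocity((1.0, 0.0), (0.0, 0.0), (0.0, 0.0), (2.0, 0.0)))
# -> (2.0, 0.0)
```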
The positions, velocities, and sounds of both the cyberball and the puck are brought together in Lines 243-245, with the call to puckmover. These values are now applied to the actual geometry objects to position them within the 3D environment.
Two other objects must also be positioned: the camera and a headlight. The camera has two separate behaviors, under the control of an external event (Lines 264-275). By default, it is positioned at one end of the arena and only moves in a linear fashion, back and forth along the Z axis, to maintain a fixed distance from the cyberball. But the camera can also be placed into an orbital mode, where it continuously circles the arena's perimeter while staying focused at the center. A headlight is created (Lines 278-279), which shares the same transformation as the camera, so that there is sufficient light no matter which direction is being observed.
In Lines 282-285, the two-dimensional extents of the display screen are calculated, which will be used to crop the final image. This cropping is not strictly necessary, but the result of rendering geometry objects is an image of infinite extent. By restricting the image to a given rectangular area, some performance gains may be realized.
The final twist is the model declaration, which provides the expression used to output the entire scene. By reacting to the external event extstartevent, the entire scene can be restarted from the beginning with a simple recursive definition.
The CyberBall sample relies on bidirectional events to manage some areas of its functionality. Those events are specified in the parent HTML file, within a block of Visual Basic Script that is tied to some visible controls, as seen in Listing 17.6.
There are three named buttons that generate external events for the ActiveVRML script, and a text display whose output value is controlled by an internally generated event that carries associated data. The rest of the document can be recognized as standard HTML.
Listing 17.6. CYBERBALL.HTML.
<HTML>
<HEAD><TITLE>CyberBall Sample Page</TITLE></HEAD>
<BODY BGCOLOR=WHITE><CENTER>
<FONT SIZE=4><B>CyberBall ActiveVRML Sample Page</B></FONT>
<BR><BR>
<OBJECT ID="AVRCtrl"
    CLASSID="clsid:{389C2960-3640-11CF-9294-00AA00B8A733}"
    WIDTH=512 HEIGHT=256>
<PARAM NAME="DataPath" VALUE="cyberball.avr">
<PARAM NAME="Expression" VALUE="model">
<PARAM NAME="Border" VALUE=TRUE>
</OBJECT><BR><BR>
<TABLE BORDER=1 WIDTH=400 CELLPADDING=10>
<TR><TH>Game Controls</TH><TH>Camera Controls</TH></TR>
<TR><TD VALIGN=CENTER ALIGN=CENTER>
<FONT FACE="ARIAL,HELVETICA" SIZE=3><B>Score</B></FONT>
<INPUT NAME=Score VALUE="0" SIZE=4,1><BR>
<INPUT NAME=Start TYPE=BUTTON VALUE="Start New Game"></TD>
<TD VALIGN=CENTER ALIGN=CENTER>
<INPUT NAME=Static TYPE=BUTTON VALUE="Static"><BR>
<INPUT NAME=Orbital TYPE=BUTTON VALUE="Orbital">
</TD></TR></TABLE><BR>
<TABLE WIDTH=500>
<TR><TH VALIGN=TOP>1:</TH><TD VALIGN=TOP>
<FONT SIZE=3>
Use the arrow keys to control your CyberBall. If you don't get a
response, try clicking first in the display window.
</FONT><BR>
</TD></TR>
<TR><TH VALIGN=TOP>2:</TH><TD VALIGN=TOP>
<FONT SIZE=3>
Knock the puck into the Blue goal to score a point, avoid the Red
goal or lose a point.
</FONT><BR>
</TD></TR>
</TABLE>
<SCRIPT LANGUAGE="VBScript"><!--
sub Start_onClick
    EVENTSTART = 100
    AVRCtrl.FireImportedEvent (EVENTSTART)
    Score.value = "0"
End sub
sub Static_onClick
    EVENTSTATIC = 101
    AVRCtrl.FireImportedEvent (EVENTSTATIC)
End sub
sub Orbital_onClick
    EVENTORBITAL = 102
    AVRCtrl.FireImportedEvent (EVENTORBITAL)
End sub
sub AVRCtrl_ActiveVRMLEvent (EventID, Param)
    EVENTSCORE = 200
    If EventID = EVENTSCORE Then Score.value = Param
End sub
--></SCRIPT>
</CENTER></BODY>
</HTML>
The descriptions and samples that have been presented in this chapter only scratch the surface of the creative possibilities opened up by the ActiveVRML language. The current implementation of Microsoft's AVRML control is amazingly well done for an initial test release, but it will continue to mature along with the language itself. Even in its early form, this control ranks among the most innovative and flexible technologies available for animating a Web site.