Architecture and Tools
This section first discusses the MPEG-4 end-to-end architecture and then delves deeper into the anatomy of an MPEG-4 browser.
End-to-End Architecture
The MPEG-4 standard is designed to be used in many environments and terminals. Still, the way MPEG-4 is produced, delivered, and consumed always follows the same walk-through, as depicted in Figure 15. This architecture highlights the main tools of the MPEG-4 standard as well as their position in the end-to-end design.
FIGURE 15 This end-to-end architecture highlights the main tools of the MPEG-4 standard as well as their position in the end-to-end design.
At the beginning of the walk-through are the content authors. They produce audiovisual content with the tools they have available. Part of the content creation process may be live or automated. The content creation process can be separated into two steps: authoring and publishing. Authoring is related to the production of the audiovisual data, including the scene description and interaction. Publishing is related to the adaptation of the content to the constraints imposed by, for example, the networks on which it will be carried or the terminals on which it will be consumed.
The content is delivered to MPEG-4 servers in XMT format, as described in Chapter 11, "Extensible MPEG-4 Textual Format (XMT)," or in MP4 file format, described in More MPEG-4 Jump-Start. The choice between these two formats depends on how much freedom the authors want to leave at the next stage of the delivery chain. XMT provides a lot of flexibility in adapting the content to further constraints. In addition, XMT may contain additional information that makes it an appropriate format for exchange between content authors. MP4 is more rigid in that sense but also more deterministic with regard to what the users will see. This may be what authors prefer when they want to ensure that their content is not further manipulated.
MPEG-4 servers use MP4 files to serve the content on various networks. Although MP4 files are the natural interoperability points between the content authors and the MPEG-4 servers, this does not mean that the content will be stored as MP4 files at the server side. Indeed, this is left to the server implementation, which may represent the content in other ways, better optimized for its own software and hardware.
What goes out of the server are streams of data containing MPEG-4 content, called elementary streams. The content of these elementary streams is discussed later in this chapter. What is important at this stage is that MPEG-4 audiovisual scenes can be split into several elementary streams, that these streams can be carried on possibly different networks, and that end terminals receiving these streams from different networks are able to reconstruct the transmitted data in a synchronized manner.
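To make the reassembly idea concrete, here is a minimal Python sketch (not part of the standard) of a terminal replaying access units from two elementary streams in decoding-time order, regardless of which network delivered them; the AccessUnit fields and the stream names are purely illustrative.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class AccessUnit:
    # Illustrative fields only; real MPEG-4 access units are framed
    # by the Sync Layer configuration, not by this structure.
    decoding_time: float
    stream_id: int = field(compare=False)
    payload: bytes = field(compare=False, default=b"")

def merge_streams(streams):
    """Replay access units from several elementary streams in
    decoding-time order, whatever network each stream arrived on."""
    heap = []
    for units in streams.values():
        heap.extend(units)
    heapq.heapify(heap)
    while heap:
        yield heapq.heappop(heap)

# Example: a scene split into a scene description stream and a video stream.
streams = {
    "scene": [AccessUnit(0.0, 1, b"BIFS scene"), AccessUnit(2.0, 1, b"BIFS update")],
    "video": [AccessUnit(0.0, 2, b"I-frame"), AccessUnit(1.0, 2, b"P-frame")],
}
for au in merge_streams(streams):
    print(au.decoding_time, au.stream_id)
```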
DMIF and Carriage of MPEG-4 Content
One of the design goals of MPEG-4 is to cover a wide range of access conditions so that content can be created once and played on any network. This goal is achieved by abstracting the content delivery layer with an interface named DAI (DMIF Application Interface), as defined in [9].
At the MPEG-4 level, the interoperability points are therefore the format of the elementary streams and compliance with the walk-through defined by the DAI. What happens below the DAI is, in principle, outside the scope of the MPEG-4 standard. Still, in some cases, because MPEG-4 needed specific tools for its transport, such as an efficient, low-complexity multiplexing tool (FlexMux) and a dedicated file format (MP4), these specifications have been developed within the MPEG-4 standard.
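As a rough illustration of what such an abstraction buys, the following Python sketch models a DAI-like interface with two interchangeable delivery back ends. The method names open_service and request_channel are invented for the example and are not the normative DAI primitives.

```python
from abc import ABC, abstractmethod

class DeliveryInterface(ABC):
    """Loose model of the role of the DAI: the terminal opens a service
    and requests channels through a uniform interface, whatever the
    underlying network actually is."""

    @abstractmethod
    def open_service(self, url: str) -> None: ...

    @abstractmethod
    def request_channel(self, es_id: int) -> str: ...

class Mpeg2TsDelivery(DeliveryInterface):
    def open_service(self, url): print(f"tuning MPEG-2 TS service {url}")
    def request_channel(self, es_id): return f"PID carrying ES {es_id}"

class RtpDelivery(DeliveryInterface):
    def open_service(self, url): print(f"RTSP setup for {url}")
    def request_channel(self, es_id): return f"RTP session for ES {es_id}"

# The browser code above the interface is identical for both transports.
for delivery in (Mpeg2TsDelivery(), RtpDelivery()):
    delivery.open_service("mpeg4://example/news")
    print(delivery.request_channel(3))
```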
So that MPEG-4 content can actually be transported in existing environments, network-specific transport mechanisms have been defined. Currently, the following transport mechanisms are available:
Carriage of MPEG-4 content on the Internet [21].
Carriage of MPEG-4 content in MPEG-2 transport streams [11].
Storage of MPEG-4 content in MP4 files [4].
The spectrum of MPEG-4 end devices ranges from standard computers to mobile devices, passing through interactive television sets. The latter device is a good example for illustrating the various ways MPEG-4 content can be consumed.
The interactive TV set first receives, through the satellite connection, an MPEG-2 transport stream containing the main MPEG-2 digital TV program. Because MPEG-4 can be carried on MPEG-2 transport streams, MPEG-4 content related to this TV program also arrives at the terminal and provides the user with an enhanced interactive experience. This experience is based on broadcast content, such as local interaction with a 3D model of a car in an advertisement or navigation through a multimedia electronic program guide.
Let's assume the TV set is also connected to the Internet with an ADSL link. The broadcast experience of the user is now augmented with client-server functionality as well as with richer media mixed with the TV program. One can imagine a range of services, from program enhancements with video clips streamed on demand from the network up to multiuser games related to the TV programs, with votes, 3D chats, and interaction with the scenario of the broadcast content.
MPEG-4 Browser Architecture and Tools
It's now time to dig deeper into the anatomy of an MPEG-4 browser. The architecture of the browser is fully specified in [4], and most of the high-level description below is taken from [22], where it is further documented.
The overall architecture of an MPEG-4 terminal is depicted in Figure 16. Starting at the bottom of the figure, we first encounter the particular storage or transmission medium. This refers to the lower layers of the delivery infrastructure (network layer and below, as well as storage). The transport of the MPEG-4 data can occur on a variety of delivery systems, as we have already seen. This includes MPEG-2 transport streams, RTP/UDP over IP, AAL2 on ATM, an MPEG-4 (MP4) file, or a DAB multiplexer.
FIGURE 16 MPEG-4 browser architecture.
Most of the currently available transport layer systems provide a native means for multiplexing information. There are, however, a few instances where this is not the case (e.g., GSM data channels). In addition, the existing multiplexing mechanisms may not fit MPEG-4 needs in terms of low delay, or they may incur substantial overhead in handling the expected large number of streams associated with an MPEG-4 session. As a result, MPEG-4 has defined a multiplexing tool, FlexMux, that can optionally be used on top of the existing transport delivery layer.
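The following Python sketch conveys the flavor of such a lightweight multiplexer: packets from several channels are interleaved into one byte stream, each preceded by a one-byte channel index and a one-byte length, loosely modeled on FlexMux's simple mode. The exact header layout here is illustrative, not the normative syntax.

```python
def flexmux_pack(packets):
    """Interleave (channel, payload) pairs into one byte stream.
    Each payload is preceded by a 1-byte channel index and a 1-byte
    length (illustrative layout only)."""
    out = bytearray()
    for channel, payload in packets:
        assert len(payload) < 256, "toy packer: payloads limited to 255 bytes"
        out += bytes([channel, len(payload)]) + payload
    return bytes(out)

def flexmux_unpack(data):
    """Recover the (channel, payload) pairs on the receiving side."""
    i, packets = 0, []
    while i < len(data):
        channel, length = data[i], data[i + 1]
        packets.append((channel, data[i + 2:i + 2 + length]))
        i += 2 + length
    return packets

muxed = flexmux_pack([(0, b"scene"), (1, b"audio"), (0, b"update")])
assert flexmux_unpack(muxed) == [(0, b"scene"), (1, b"audio"), (0, b"update")]
```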
The delivery layer provides the MPEG-4 terminal with a number of elementary streams. These streams can contain a variety of information: audiovisual object data, scene description information, control information in the form of object descriptors, as well as meta-information that describes the content or associates intellectual property rights with it. Note that not all of the streams have to be downstream (server to client); in other words, it is possible to define elementary streams, called upstream channels, for the purpose of conveying data back from the terminal to the transmitter or server. MPEG-4 standardizes both the mechanisms by which the transmission of such data is triggered at the terminal and the format of the data as it is transmitted back to the sender.
Regardless of the type of data conveyed in each elementary stream, it is important that all streams provide a common mechanism for conveying timing and framing information. The Sync Layer (SL) is defined for this purpose. It is a flexible and configurable packetization facility that allows the inclusion of timing, fragmentation, and continuity information on associated data packets. Such information is attached to data units that comprise complete presentation units, for example, an entire video frame or an audio frame. These data units are called access units.
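A toy Python model of this packetization idea is given below; the header fields mirror the kind of information an SL header can carry (a sequence number, framing flags, a decoding timestamp), but the real SL header is bit-packed and configured per stream, so the structure shown is only illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SLPacket:
    # Illustrative header; the real SL header is configurable per stream
    # and bit-packed, not a Python object.
    sequence_number: int
    access_unit_start: bool      # framing: first fragment of an access unit
    access_unit_end: bool        # framing: last fragment of an access unit
    decoding_timestamp: Optional[float]
    payload: bytes

def packetize_access_unit(au: bytes, dts: float, seq: int, max_size: int = 4):
    """Split one access unit (e.g., a coded video frame) into SL packets,
    attaching timing only to the first fragment."""
    fragments = [au[i:i + max_size] for i in range(0, len(au), max_size)]
    return [
        SLPacket(seq + k, k == 0, k == len(fragments) - 1,
                 dts if k == 0 else None, frag)
        for k, frag in enumerate(fragments)
    ]

for p in packetize_access_unit(b"VIDEOFRAME", dts=0.04, seq=7):
    print(p.sequence_number, p.access_unit_start, p.access_unit_end, p.payload)
```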
Elementary streams are sent to their respective decoders, which process the data and produce composition units (e.g., a decoded video frame). Control information in the form of object descriptors is used to let the receiver know what type of information is contained in each stream. These descriptors associate sets of elementary streams with one audio or visual object, define a scene description stream, or even point to an object descriptor stream. These descriptors, in other words, are the way in which a terminal can identify the content being delivered to it. Unless a stream is described in at least one object descriptor, it is impossible for the terminal to make use of it.
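The following Python sketch models this role of object descriptors as a simple lookup from elementary stream to stream type; the field names and stream types are illustrative, not the normative object descriptor syntax.

```python
# Toy model of the role object descriptors play: they are the only way
# the terminal learns what each elementary stream carries.
object_descriptors = {
    1: {"object": "scene",       "es": [{"es_id": 10, "stream_type": "SceneDescription"}]},
    2: {"object": "news anchor", "es": [{"es_id": 20, "stream_type": "Visual"},
                                        {"es_id": 21, "stream_type": "Visual (enhancement)"}]},
    3: {"object": "voice",       "es": [{"es_id": 30, "stream_type": "Audio"}]},
}

def stream_type(es_id):
    """A stream not listed in any object descriptor cannot be used."""
    for od in object_descriptors.values():
        for es in od["es"]:
            if es["es_id"] == es_id:
                return es["stream_type"]
    return None   # undescribed stream: the terminal ignores it

print(stream_type(21))   # Visual (enhancement)
print(stream_type(99))   # None
```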
Detailed descriptions on how synchronization is handled in MPEG-4 as well as the detailed mechanisms of the object descriptor framework are further described in More MPEG-4 Jump-Start.
Advanced synchronization mechanisms described in More MPEG-4 Jump-Start augment this timing model to permit synchronization of multiple streams and objects that may originate from multiple sources. FlexTime allows the definition of simple temporal relationships among MPEG-4 objects, such as "CoStart," "CoEnd," and "Meet," as well as the specification of constraints on the timing relationships between MPEG-4 objects, as if the objects were on stretchable springs.
At least one of the streams must be the scene description information associated with the content. The scene description defines the spatial and temporal position of the various objects, their dynamic behavior, and any interactivity features made available to the user. As mentioned above, the audiovisual object data is actually carried in its own elementary streams. The scene description contains pointers to object descriptors when it refers to a particular audiovisual object. We should stress that it is possible that an object (in particular, a synthetic object) may be fully described by the scene description. As a result, it may not be possible to uniquely associate an audiovisual object with just one syntactic component of MPEG-4 Systems. As detailed in Chapter 3, "2D/3D Scene Composition," the scene description is tree structured and is heavily based on the VRML structure. MPEG-4 provides a binary representation for the scene description. The compression efficiency of this binary representation depends heavily on the quality of the quantization. Chapter 5, "Quantization in BIFS-Updates," is dedicated to this issue.
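To illustrate the relationship between the scene tree and the object descriptors described above, here is a small Python sketch of a scene in which media nodes point to object descriptors by ID while a purely synthetic rectangle is fully described in the scene itself; the node names loosely follow BIFS/VRML, but the representation is not the binary BIFS format.

```python
# Minimal sketch of a tree-structured scene (BIFS is a binary encoding of
# such a tree). Media nodes reference object descriptors; synthetic nodes
# carry all their data in the scene itself.
scene = {
    "node": "OrderedGroup",
    "children": [
        {"node": "Transform2D", "translation": (120, 80), "children": [
            {"node": "MovieTexture", "object_descriptor_id": 2},   # the video object
        ]},
        {"node": "Rectangle", "size": (640, 48)},                  # fully described here
        {"node": "AudioSource", "object_descriptor_id": 3},
    ],
}

def referenced_ods(node):
    """Collect the object descriptors a scene refers to."""
    ids = [node["object_descriptor_id"]] if "object_descriptor_id" in node else []
    for child in node.get("children", []):
        ids += referenced_ods(child)
    return ids

print(referenced_ods(scene))   # [2, 3]
```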
Major extensions of the VRML scene description have been developed in MPEG-4 for audio composition (see More MPEG-4 Jump-Start).
A key feature of the scene description is that since it is carried in its own elementary stream(s), it can contain full timing information. This implies that the scene can be dynamically updated over time, a feature that provides considerable power for content creators. In fact, the scene description tools provided by MPEG-4 also provide a special lightweight mechanism to modify parts of the scene description in order to effect animation (BIFS-Anim). Animation is accomplished by coding, in a separate stream, only the parameters that need to be updated. This mechanism is fully described in Chapter 4, "BIFS-Updates," and Chapter 6, "Animating Scenes in MPEG-4."
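A minimal sketch of the update idea follows, assuming an invented replace_field command rather than the normative BIFS-Command syntax: timestamped commands in the scene stream modify the tree instead of resending it.

```python
# Sketch of the idea behind scene updates: the scene stream carries
# timestamped commands that modify the tree, rather than resending it.
scene_state = {"logo": {"visible": False}, "score": {"text": "0 - 0"}}

update_stream = [
    (5.0,  ("replace_field", "logo",  "visible", True)),
    (12.0, ("replace_field", "score", "text", "1 - 0")),
]

def apply_updates(state, updates, now):
    """Apply every command whose timestamp has been reached."""
    for when, (cmd, node, field, value) in updates:
        if cmd == "replace_field" and when <= now:
            state[node][field] = value
    return state

print(apply_updates(scene_state, update_stream, now=6.0))
# {'logo': {'visible': True}, 'score': {'text': '0 - 0'}}
```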
The system's compositor uses the scene description information to aggregate the various natural and synthetic audiovisual object data and to render the final scene for presentation to the user. Synthetic visual objects can be as diverse as 2D meshes (see Chapter 7, "2D Mesh Animation"), 3D meshes (see Chapter 10, "3D Mesh Coding"), facial animation (see Chapter 8, "MPEG-4 Face and Body Animation Tools and Applications"), and body animation (see Chapter 9, "MPEG-4 Human Virtual Body Animation").
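The composition step can be pictured as the toy loop below, which pairs the latest composition unit of each object with the placement the scene description assigns to it; the buffers, placements, and identifiers are all invented for the example.

```python
# Toy composition loop: for each media object in the scene, take the most
# recent composition unit produced by its decoder and render it at the
# position given by the scene description.
composition_buffers = {2: "decoded video frame @ t=0.04", 3: "decoded audio frame @ t=0.04"}
placements = {2: (120, 80)}   # e.g., from a Transform2D node in the scene

def compose(scene_object_ids):
    for od_id in scene_object_ids:
        unit = composition_buffers.get(od_id)
        where = placements.get(od_id, (0, 0))
        print(f"render object {od_id} at {where}: {unit}")

compose([2, 3])
```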
In More MPEG-4 Jump-Start, the palette of available MPEG-4 audio tools is described. It includes, in particular, speech coding from 2 to 4 kbit/s, scalable generic audio coding from 4 to 64 kbit/s, text-to-speech synthesis, and synthetic music coding. An overview of the MPEG-4 video tools presents the MPEG-4 coding algorithms for shaped, scalable, and error-resilient video and provides comparison data with other coding schemes. Still-picture coding, which is of particular importance for the representation of textures in computer graphics artwork, is also covered.
Profiles and Levels
Most applications need only a part of the MPEG-4 tool set. MPEG-4 profiles define subsets of the tool set that are useful for large classes of applications and services. MPEG-4 defines several types of profiles that can be combined to specify a complete audiovisual terminal.
The MPEG-4 types of profiles are as follows:
Scene description: These profiles define the features of the scene description and behavior that are supported.
Object descriptor: These profiles define the constraints on the timing model as well as on the IPMP tools.
Audio (natural and synthetic): These profiles define the types of audio objects that are supported.
Visual: These profiles define the types of visual objects that are supported.
Graphics: These profiles define the types of visual synthetic objects that are supported.
Levels limit the number of objects and complexity for a given profile. Compliance with the MPEG-4 standard is claimed for a profile at a certain level.
Although many profiles and levels are already standardized to cover most of the applications, new ones can be easily added when the need arises.
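The following Python sketch illustrates how a terminal's claimed profiles and levels might be checked against the requirements of a piece of content; the profile names and numeric limits are made up for the example and are not the normative level constraints of any profile.

```python
# Illustrative conformance check only; the limits below are invented.
terminal_capability = {
    ("Scene Graph", "Simple 2D"): {"max_objects": 16},
    ("Visual", "Simple"):         {"max_objects": 4},
}

content_requirements = [
    {"profile": ("Visual", "Simple"), "objects": 3},
    {"profile": ("Scene Graph", "Simple 2D"), "objects": 10},
]

def terminal_can_play(capability, requirements):
    """A terminal claims compliance per profile at a given level; content is
    playable only if every required profile is supported within its limits."""
    for req in requirements:
        limits = capability.get(req["profile"])
        if limits is None or req["objects"] > limits["max_objects"]:
            return False
    return True

print(terminal_can_play(terminal_capability, content_requirements))  # True
```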
The scene description tools provide mechanisms to capture user or system events. In particular, the tools allow the association of events with user operations on desired objects that can, in turn, modify the behavior of the stream. Event processing is the core mechanism with which application functionality and differentiation can be provided. To provide flexibility in this respect, MPEG-4 allows the use of ECMAScript (also known as JavaScript) within the scene description. Use of scripting tools is essential in order to access state information and implement sophisticated interactive applications.
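As a rough model of this event mechanism, the Python sketch below routes a sensor-like event to a handler that plays the role a Script node (written in ECMAScript in a real scene) would play; the node and field names are invented for the example.

```python
# Rough model of event processing in the scene: a sensor produces an event,
# a route forwards it, and a script computes a new field value.
scene = {"button": {"touched": False}, "video": {"playing": False}}

def on_button_touch(value, state):
    # What a Script node would do in ECMAScript: toggle playback.
    state["video"]["playing"] = not state["video"]["playing"]

routes = {("button", "touched"): on_button_touch}

def send_event(node, field, value, state):
    """Deliver a user event and trigger any route attached to it."""
    state[node][field] = value
    handler = routes.get((node, field))
    if handler:
        handler(value, state)

send_event("button", "touched", True, scene)
print(scene["video"]["playing"])   # True
```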
MPEG-4 also defines a set of Java programming language APIs (MPEG-J) through which access to an underlying MPEG-4 engine can be provided to Java applets (called MPEG-lets). This complement of tools can form the basis for very sophisticated applications, opening up completely new ways for audiovisual content creators to augment the use of their content. A complete description of the MPEG-4 application execution engine is provided in More MPEG-4 Jump-Start.
It is important to point out that, in addition to the new functionalities that MPEG-4 makes available to content consumers, it provides tremendous advantages to content creators as well. The use of an object-based structure, where composition is performed at the receiver, considerably simplifies the content creation process. Starting from a set of coded audiovisual objects, it is very easy to define a scene description that combines these objects in a meaningful presentation. A similar approach is used in HTML and Web browsers, thus allowing even inexpert users to easily create their own content. The fact that the content's structure survives the process of coding and distribution also allows for its reuse. For example, content filtering or searching applications can be easily implemented by use of ancillary information carried in object descriptors. Also, users themselves can easily extract individual objects, assuming that the intellectual property information allows them to do so.
Applications
This section illustrates the use of MPEG-4 in some application scenarios. The snapshots were produced by real applications or prototypes. These examples are merely intended to illustrate possible uses of MPEG-4 Systems technologies in the multimedia industries.