MPEG-4 in a Nutshell
This section provides a high-level overview of the MPEG-4 standard. Here, we discuss the design goals and principles behind MPEG-4, navigation in MPEG-4, and MPEG-4 as it relates to other multimedia standards.
MPEG-4 Design Goals and Principles
The MPEG-4 project began in 1993 with the initial goal of "very low bit rate audiovisual coding." The initial participants were consumer electronics companies, the computer industry, and telecom operators, as well as academia. Early in the effort it became clear that this goal would only be reached if major changes in the MPEG paradigm were adopted, since no major breakthrough was expected in the compression area.
This change of paradigm was also supported by a major evolution in the way audiovisual content would be produced, delivered, and consumed in the coming years, as summarized in Table 1.1. In 1994, the goal of the MPEG-4 standard was therefore changed to "coding of audiovisual objects" to more accurately reflect the new focus of the work. As of 2001, the MPEG-4 standard can be considered finalized, even though some additional tools are still under development for advanced functionality. These ongoing extensions are described in "More MPEG-4 Jump-Start."
The main design goals of MPEG-4 can be summarized as follows:
To provide a corpus of technology to be used by various types of multimedia services and networks, including interactive, broadcast, and conversational models. A key requirement was that audiovisual content should flow seamlessly among these different types of services.
To improve the user experience and provide audiovisual content with the same kind of interactivity that can be found on the World Wide Web. This implies client-side as well as client-server interactivity.
To integrate rich media content in a single framework so that it can be seamlessly manipulated by content authors as well as by end users. Such rich media include both natural and synthetic content.
IPMP: MPEG-4 Intellectual Property Management and Protection
How does MPEG-4 tackle the problem of pirated audiovisual content?
An important design goal of the MPEG-4 standard is to allow consumption of the audiovisual content while respecting the usage rights that are attached to it.
MPEG-4 has so far standardized a framework, called the "MPEG-4 hooks," for protecting audiovisual content. The hooks allow identification of the system used to protect the content, the so-called IPMP system. The IPMP system itself is not specified by MPEG.
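To make the hooks concrete, the following minimal Python sketch shows how a terminal might dispatch protected content to whatever IPMP system the content identifies. The class names, the registry, and the trivial XOR "cipher" are illustrative assumptions, not part of the standard, which deliberately leaves the IPMP system unspecified.

```python
# Illustrative sketch of the "hooks" idea: the bitstream carries only an
# identifier of the protection system; the terminal looks up a matching
# IPMP implementation, which MPEG-4 itself does not specify.
# All names here are hypothetical.

class IPMPSystem:
    """A proprietary protection system, opaque to MPEG-4 itself."""
    def __init__(self, system_id, decrypt):
        self.system_id = system_id
        self.decrypt = decrypt

registry = {}  # terminal-side table of installed IPMP systems

def register(system):
    registry[system.system_id] = system

def open_protected_stream(ipmp_descriptor, payload):
    """The descriptor travels with the content and names the IPMP system."""
    system = registry.get(ipmp_descriptor["systemID"])
    if system is None:
        raise RuntimeError("no matching IPMP system installed")
    return system.decrypt(payload)

# A trivial (insecure) XOR "cipher" standing in for a real IPMP system.
register(IPMPSystem(0x42, lambda data: bytes(b ^ 0x5A for b in data)))
clear = open_protected_stream({"systemID": 0x42}, bytes([0x12, 0x33]))
print(clear)  # b'Hi'
```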
MPEG-4 is now standardizing an extension of these MPEG-4 hooks, which will provide more interoperability for protected content as well as more flexibility for the protecting system. This extension will take into account the requirements of end users, who do not want to need a multitude of devices to consume their content. It will also take into account the requirement of renewability, allowing IPMP systems to be upgraded so that they remain robust against attacks from pirates.
Finally, it is interesting to note that existing content can be given new life by the MPEG-4 rich media framework, in the same way that existing movies are enhanced with interactive features and games in the DVD industry.
Following these design goals, the MPEG-4 standard has been developed around one simple principle: audiovisual scenes are made of audiovisual objects composed together according to a scene description. This concept of the audiovisual scene allows
Interaction with elements within the audiovisual content, called audiovisual objects.
Adaptation of the coding scheme on a per-audiovisual object basis.
Easy reuse and customization of audiovisual content.
Audiovisual objects can be of very different natures. They can be purely audio objects, such as single- or multichannel audio content, or purely visual, such as a traditional rectangular movie or a more exotic, arbitrarily shaped video object. Objects can be natural, like audiovisual data captured from a microphone or a camera, or synthetic, like text and graphics overlays, animated faces, or synthetic music. They can be 2D, like a Web page, or 3D, like a spatialized sound or a 3D virtual world.
The scene description provides the spatial and temporal relationships between the audiovisual objects. These relationships can be purely 2D or 3D, but can also mix 2D and 3D scene description. The behavior and interactivity of the audiovisual objects and scenes are also specified in the scene description. In addition, MPEG-4 defines specific protocols to modify and animate the scene description over time, thereby providing incremental build-up, modification, and animation of the audiovisual content.
Also, it is important to note that all this information is provided as compressed binary streams that can be synchronized. A typical audiovisual scene is shown in Figure 1.2.
FIGURE 1.2 Various video streams and audio signals are composed on top of a fixed background still picture according to a scene description.
An additional concept is that of the object descriptor. These tiny structures provide the links between the scene description and the streams of the audiovisual objects.
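The following toy Python model may help fix these ideas. The node kinds, identifiers, and URLs are illustrative inventions, not the normative BIFS node set: a scene tree composes audiovisual objects, and object descriptors link stream-backed nodes to the elementary streams that carry their coded data.

```python
# Toy model of the MPEG-4 scene concept: a scene tree composes objects,
# and object descriptors link nodes to elementary streams.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ObjectDescriptor:
    od_id: int        # identifier referenced from the scene description
    stream_url: str   # locates the elementary stream carrying the object

@dataclass
class Node:
    kind: str                     # e.g. "Transform2D", "VideoObject"
    od_id: Optional[int] = None   # set only for stream-backed nodes
    children: list = field(default_factory=list)

scene = Node("Transform2D", children=[
    Node("Background", od_id=1),  # still picture at the back
    Node("VideoObject", od_id=2), # natural video stream
    Node("Text"),                 # synthetic object, no stream attached
])

descriptors = {
    1: ObjectDescriptor(1, "mp4://background.jpg"),
    2: ObjectDescriptor(2, "mp4://movie.cmp"),
}

def resolve(node, depth=0):
    """Walk the scene tree and show which stream feeds each node."""
    stream = descriptors[node.od_id].stream_url if node.od_id else "(none)"
    print("  " * depth + f"{node.kind} -> {stream}")
    for child in node.children:
        resolve(child, depth + 1)

resolve(scene)

# Scene-update commands arrive as a timed stream; an "insert" command,
# for instance, adds a new object to the running scene:
descriptors[3] = ObjectDescriptor(3, "mp4://music.aac")
scene.children.append(Node("AudioSource", od_id=3))
```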
Navigation in MPEG-4
The MPEG-4 standard specification is a fairly complex set of documents. This book provides content authors with most of the information they need to develop MPEG-4 content, as far as the scene description is concerned. It also provides the architectural elements needed to understand how media streams and other concepts like IPMP fit in the big picture. Still, readers of this book may need occasional direct access to the MPEG-4 specification. This section gives some quick navigation advice with this in mind.
MPEG-4 is a standard in several parts, grouped under the specification numbered 14496 by ISO. The main parts are illustrated in Figure 1.3 and described below.
FIGURE 1.3 The main parts of the MPEG-4 standard structure.
14496-1 Systems [4] and 14496-6 DMIF [9]: These parts of the standard encompass the interactive scene description as well as the tools needed for synchronizing the audiovisual content and carrying it over various networks.
14496-2 Visual [5]: This part of the standard contains all the representations of visual objects, whether natural or synthetic.
14496-3 Audio [6]: This part of the standard contains all the representations of audio objects, whether natural or synthetic.
In addition, MPEG-4 contains
14496-4 Conformance [7]: This specification describes how compliance with the various parts of the standard can be tested. It contains, in particular, audio, visual, and systems test streams.
14496-5 Reference Software [8]: This document contains a complete software implementation of the MPEG-4 specification, which may be used in any commercial application compliant with the standard.
Versions, Amendments, Corrigenda, and Extensions
The technologies considered for standardization in MPEG-4 were not all at the same level of maturity. Therefore, the development of the standard was organized in several phases, called versions. New versions extend the standardized toolbox with new tools and new functionality; they do not replace the tools of previous versions. There are currently five versions of MPEG-4 Systems. In ISO language, versions are called amendments.
Sometimes, errors are found in the specification. These errors are gathered in documents named corrigenda, which are published as needed. A corrigendum has been finalized for MPEG-4 Systems, and a new one is under development.
Periodically, ISO publishes a new edition of the standard that gathers all amendments and corrigenda done since the last edition. The current edition of MPEG-4 Systems is 14496-1:2001. It contains MPEG-4 Systems Version 1, Version 2, and Corrigendum 1.
Finally, because the numbering of amendments restarts each time a new edition is published, the link between version numbering and amendment numbering was difficult to maintain. MPEG has therefore defined a new term, extensions, for the successive additions made to the standard. Extension numbering never restarts and is therefore easier to follow.
The focus of this book is the scene description and the representation of synthetic audio and visual objects, as seen from an authoring perspective. This information is spread across the first three parts of the standard and can therefore be difficult to locate. Generally, all the information related to the structure of the scene description and its animation, as well as to graphical objects that do not have streams attached to them, can be found in the Systems part of the standard. Specific synthetic audio and visual objects that do have a streamed representation are covered in the Audio and Visual parts of the standard.
MPEG-4 and Other Multimedia Standards
Prior to the development of the MPEG-4 standard, other multimedia standards and solutions were already in place. MPEG-4 has extensively used and referred to these predecessor technologies. The main ones are
MPEG-2 [11] and H.323 [14]: MPEG-4 used the media representations developed by these standards to construct the MPEG-4 data formats. At the systems level, MPEG-4 is backward compatible with MPEG-1 and MPEG-2 audio and visual streams. MPEG-4 Systems uses MPEG-2 transport mechanisms to carry MPEG-4 data. A simple version of MPEG-4 Video is backward compatible with H.263.
VRML97 [10]: MPEG-4 based its scene description on VRML97 and provided additional functionality: the integration of streams, 2D capabilities, integration of 2D and 3D, advanced audio features, a timing model, update and animation protocols to modify the scene over time, and compression efficiency with BIFS (Binary Format for Scenes). Background on this key specification is provided in Chapter 2, "Virtual Reality Modeling Language (VRML) Overview."
QuickTime [20]: Several file formats for storing, streaming, and authoring multimedia content were available. Among those most used at present are Microsoft ASF, Apple QuickTime, and the RealNetworks file format (RNFF). The ASF and QuickTime formats were proposed to MPEG-4 in response to a call for proposals on file format technology. QuickTime was selected as the basis for the MPEG-4 file format (referred to as "MP4"); the box structure MP4 inherits from QuickTime is sketched after this list.
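As a concrete illustration of what MP4 inherits from QuickTime, the sketch below walks the file's box (or "atom") structure: each box starts with a 32-bit big-endian size and a 4-byte type code, with a 64-bit size following when the 32-bit field is 1. This is a simplified sketch, not a reference implementation.

```python
# Minimal sketch of the MP4/QuickTime box structure: list the type and
# size of each top-level box in a file.
import struct

def list_boxes(path):
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break                      # end of file
            size, box_type = struct.unpack(">I4s", header)
            header_len = 8
            if size == 1:                  # 64-bit "largesize" follows
                size = struct.unpack(">Q", f.read(8))[0]
                header_len = 16
            print(box_type.decode("ascii", "replace"), size)
            if size == 0:                  # box extends to end of file
                break
            f.seek(size - header_len, 1)   # skip over the box payload

# Example: list_boxes("movie.mp4") might print ftyp, moov, mdat, ...
```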
During the development of the MPEG-4 standard, other communities developed tools that MPEG-4 has adopted. Among these are
Java [16, 17] technology: MPEG-4 offers a programmatic environment, MPEG-J, that seeks to extend content creators' ability to incorporate complex controls and data-processing mechanisms along with the BIFS scene representations and elementary media data. The MPEG-J environment is intended to enhance the end user's ability to interact with the content.
XML [12]: MPEG-4 offers an XML-based representation of the scene description, called XMT (eXtensible MPEG-4 Textual format). XMT comes in two flavors: a low-level representation that exactly mirrors the BIFS representation, and a high-level representation that is closer to the author's intent and that can be mapped onto the low-level format; a simplified illustration of the two levels follows this list. The XMT format is fully compatible with X3D [18], the format currently under development by the Web3D Consortium.
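The fragment below illustrates the intent of the two XMT levels. The element names are simplified stand-ins rather than the normative XMT schema, and the mapping function is purely hypothetical; the point is only that a high-level, SMIL-like description of the author's intent can be compiled into a low-level form that mirrors the BIFS scene tree.

```python
# The element names below are simplified stand-ins, NOT the normative
# XMT schema. The high-level form captures the author's intent with
# SMIL-like timing; the low-level form mirrors the BIFS scene tree.

high_level = """
<par>                                    <!-- play children in parallel -->
  <video src="movie.cmp" begin="0s"/>
  <text begin="2s">Hello</text>
</par>
"""

low_level = """
<Transform2D>
  <MovieTexture url="movie.cmp"/>
  <Text string="Hello"/>  <!-- made visible by a BIFS update at t = 2s -->
</Transform2D>
"""

def compile_to_low_level(high_level_doc: str) -> str:
    """Hypothetical mapping step: a real authoring tool would parse the
    high-level document and emit the equivalent low-level form."""
    ...
```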
In parallel with the development of the MPEG-4 specifications, other standardization bodies and industry consortia have developed tools and applications that address some of the MPEG-4 objectives. Concerning proprietary formats, technical issues aside, the mere fact of being closed is a significant disadvantage in the content industry when open alternatives exist. With the separation of the content production, delivery, and consumption stages in the multimedia pipeline, the MPEG-4 standard will enable different companies to separately develop authoring tools, servers, or players, thus opening up the market to independent product offerings. This competition will probably allow rapid proliferation of content and tools that interoperate.
Several technologies can be seen as competitors of MPEG-4. The most relevant ones are
SMIL [13]: The Synchronized Multimedia Integration Language is an XML-compliant specification for 2D multimedia scene descriptions, developed by the W3C SMIL working group.
SVG [19]: The Scalable Vector Graphics format is an XML-compliant specification, also developed by the W3C.
As depicted in Figure 1.4, the XMT format facilitates interoperability with the X3D, SMIL, and SVG specifications. One of the design goals of XMT has been to maximize the overlap with SMIL, SVG, and X3D. Therefore, content authors can now compile into MPEG-4 the content they have already produced in these formats, given explicit authoring constraints.
FIGURE 1.4 One of the design goals of XMT has been to maximize the overlap with SMIL, SVG, and X3D.
Not all SMIL and SVG content is supported by XMT, since some of their functionality was already defined in MPEG-4; replicating those tools would have added extra complexity to the MPEG-4 standard. In addition, MPEG-4 supports features that these formats do not. For example, MPEG-4 is built on a true 2D and 3D scene description, including the event model, as extended from VRML. None of the currently available MPEG-4 competitors reaches the sophistication of MPEG-4 in terms of composition capabilities and interactivity features. Furthermore, incorporating the temporal component in streamed scene descriptions is not a trivial matter. MPEG-4 has successfully addressed this issue, as well as the overall timing and synchronization issues, whereas alternative approaches are lacking in this respect.
Finally, MPEG-4 is the fruit of a multiyear collaborative effort on an international scale aimed at the definition of an integrated framework. The fragmented nature of the development of competing specifications by different bodies and industries certainly hinders integrated solutions. This may also cause a distorted vision of the integrated, targeted system as well as duplication of functionality. There is no evidence that real integration can be achieved by any alternative frameworks.