Ajax and the Dojo Toolkit
The Dojo Toolkit is a JavaScript-based collection of libraries (hence the name toolkit) that enables you to build Ajax capabilities into your web pages. The Dojo Toolkit is more than just Ajax, but Ajax is our primary focus in this book. In reality, Dojo is a DHTML toolkit. Because Dojo is open source, it is free to use in your projects, and many large organizations contribute to Dojo itself.
JavaScript, DHTML, and Ajax are complicated. Dojo encapsulates complex JavaScript capability in predefined components and APIs. Of course, Dojo has a learning curve; however, the gains obtained by working at a higher level of abstraction can be enormous. Be aware, though, that although some optimization is possible, using Dojo implies heavier pages: the JavaScript libraries must be transmitted to the user's browser to enable all the functionality that you build. The tradeoff is worth it, but I don't want you to think that you get something for free.
Several Ajax libraries are available on the market today. Many of them are open source. IBM® has opted to support Dojo by contributing to the community and building Dojo into WebSphere® Portal 6.1.
JSON
JSON is designed as a lightweight data-interchange format. It is similar to XML but has the advantage of being easier for machines to parse and generate, and in many cases it can be much faster to process than XML. JSON is language independent; however, it looks familiar. Like pseudocode, JSON is built on the idea of name/value pairs, and values can also be presented as ordered lists that are easily recognized as arrays or lists of data.
The main idea is that these name/value pairs and ordered lists of data are universal to all programming languages and are easily translated into the common data structures. The following sample shows how a list of customers might be represented in JSON:
{
  "customers": {
    "@uri": "http://myhost/resources/customers",
    "customer": [
      {
        "@uri": "http://localhost/resources/customers/103",
        "name": "Atelier graphique",
        "city": "Nantes",
        "state": "",
        "zip": "44000"
      },
      {
        "@uri": "http://localhost/resources/customers/124",
        "name": "Mini Gifts Distributors Ltd.",
        "city": "San Rafael",
        "state": "CA",
        "zip": "97562"
      },
      {
        "@uri": "http://myhost/resources/customers/495",
        "name": "Diecast Collectables",
        "city": "Boston",
        "state": "MA",
        "zip": "51003"
      }
    ]
  }
}
Note that an object, which is an unordered set of name/value pairs, is represented by the left and right braces {} within the syntax. An array is an ordered set of values that are represented by brackets []. With that in mind, you can see that the preceding sample provides an array of customers. Similarly, the following example is the same data set represented with XML:
<?xml version="1.0" encoding="UTF-8" ?>
<customers>
  <uri>http://myhost/resources/customers</uri>
  <customer>
    <uri>http://localhost/resources/customers/103</uri>
    <name>Atelier graphique</name>
    <city>Nantes</city>
    <state />
    <zip>44000</zip>
  </customer>
  <customer>
    <uri>http://localhost/resources/customers/124</uri>
    <name>Mini Gifts Distributors Ltd.</name>
    <city>San Rafael</city>
    <state>CA</state>
    <zip>97562</zip>
  </customer>
  <customer>
    <uri>http://myhost/resources/customers/495</uri>
    <name>Diecast Collectables</name>
    <city>Boston</city>
    <state>MA</state>
    <zip>51003</zip>
  </customer>
</customers>
JSON works particularly well with JavaScript because it is actually a subset of the object literal notation of the JavaScript language. Object literal means that objects are created by literally listing a set of name/value pairs within the document. The debate continues to rage as to whether JSON is really the right approach as compared to XML. For work within the browser, it has some definite advantages, such as quicker parsing and being somewhat more focused on transferring data without worrying about much of the overhead of XML and the structure that it can often involve. The idea behind JSON is to strip down the data to a basic form for transmission to and within the browser.
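The subset relationship can be made concrete with the standard JSON.parse function, available natively in modern browsers. The sketch below uses a trimmed-down version of the earlier customer sample:

```javascript
// A trimmed-down version of the customer list from the earlier sample.
var text = '{"customers": {"customer": [' +
           '{"name": "Atelier graphique", "city": "Nantes"},' +
           '{"name": "Diecast Collectables", "city": "Boston"}]}}';

// JSON.parse turns the string into ordinary JavaScript objects and arrays.
var data = JSON.parse(text);

// The braces became an object; the brackets became an array.
console.log(data.customers.customer.length);   // 2
console.log(data.customers.customer[0].city);  // "Nantes"
```

Once parsed, the data is navigated with plain property access and array indexing; no extra parsing layer is required.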
There is no reason not to use XML, and in many cases you might define XML as the standard in your organization (I'm big on standards) and let JSON be used in the exceptional cases where the application team can explain why JSON is a better fit. Another disadvantage of JSON is that it is not as readable as XML. Readability is partly a matter of preference, but the brackets and braces can be confusing, especially if the document is more compressed than the example I have laid out here. One big advantage of JSON is that it works well with the Dojo Toolkit that we use throughout Chapter 3 and in other parts of the book.
REST
The term REST was first introduced by Roy Fielding in Chapter 5 of his doctoral dissertation, Architectural Styles and the Design of Network-Based Software Architectures. REST focuses on the simplicity and efficiency of the Web as it exists today and leverages that simplicity to represent the state of data and resources within the Web. Like the Web itself, REST is stateless and builds on the common HTTP verbs GET, POST, PUT, and DELETE. This means that, like many other technical aspects of the Web 2.0 approach, no additional standards are required to provide REST services.
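The conventional mapping between those four verbs and operations on a resource is worth spelling out. The descriptions below are a summary sketch, not a formal standard:

```javascript
// Conventional mapping of HTTP verbs to resource operations.
// The URI identifies WHAT you act on; the verb says HOW.
var restOperations = {
  GET:    "read a resource or a collection of resources",
  POST:   "create a new resource within a collection",
  PUT:    "update or replace an existing resource",
  DELETE: "remove a resource"
};

console.log(restOperations.GET);
// "read a resource or a collection of resources"
```

Because the verb carries the operation, a RESTful interface needs no custom method names; the same URI can serve all four operations.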
REST-provided services, often called RESTful services, are not based on any new standards; they are more of an architectural style that leverages currently available standards to deliver information and data about existing resources. Uniform Resource Identifiers (URIs) determine which resource you are actually operating on. (Remember, REST is resource oriented.) Reformatting data into the form of resources does take some initial effort, and as with any design, you want to get the format right as early as possible. For our purposes, we look at some simple examples. Consider the following URI:
http://myhost.com/resources/customers
In this case, you can expect a list of customers to be returned as an XML stream from the server. Embedded within that data will probably be additional entity URIs so that you can retrieve information about a specific customer. Many of the URIs that would be used within RESTful services contain a combination of the location of the resource and the actual resource that we need. For example, the following URI should return a data stream with information about a specific customer:
http://myhost.com/resources/customers/103
The resulting data stream often takes the form of XML, which is defined to provide an understandable data set; however, other representations, such as the JSON format discussed earlier, can be used.
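To tie the two URI examples together, here is a small illustrative helper; the function name and base URI are assumptions for the example, not part of any standard:

```javascript
// Hypothetical helper that builds resource URIs in the style shown above.
function resourceUri(base, collection, id) {
  var uri = base + "/" + collection;
  // With no id, the URI names the whole collection;
  // with an id, it names one specific resource.
  return id === undefined ? uri : uri + "/" + id;
}

console.log(resourceUri("http://myhost.com/resources", "customers"));
// "http://myhost.com/resources/customers"      -- the list of customers
console.log(resourceUri("http://myhost.com/resources", "customers", 103));
// "http://myhost.com/resources/customers/103"  -- one specific customer
```

The point of the sketch is the pattern itself: collection URIs and entity URIs differ only by the trailing identifier, which is why a list response can simply embed the entity URIs of its members.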
So, why not use traditional web service technology such as Simple Object Access Protocol (SOAP) and Web Services Description Language (WSDL) to retrieve information? The debates continue about this question, but remember that REST is an architectural style designed to use technologies that are currently available. It can also be useful in simplifying much of the complexity that web services bring to the table. Figure 1.4 illustrates the difference between REST services and traditional web services.
Figure 1.4 REST vs. traditional web services
Traditional web services, by contrast, are complex because they focus on solving complex enterprise problems, which require robust standards to ensure security, performance, and especially interoperability across the Web. For that reason, the standards around traditional web services are stricter and more elaborate because they have to be. The caution here is not to think that the two approaches are easily exchanged or that one can completely replace the other.
Atom and RSS
Atom and Really Simple Syndication (RSS), which was known as Rich Site Summary before being renamed for version 2.0, might not seem like exciting new technologies. Anyone who has read a blog or news feed is already familiar with the RSS format. Yet these are core technologies in the new Web: the ability to share news and information in a standardized format makes for easy integration in the Web 2.0 world.
Consider the simple case of a set of information around a particular topic. It could be a specific technology, an industry, or perhaps political or scientific information. In the Web 1.0 world, readers would have to go to or log in to each website and see whether any information had been updated. As you can imagine, that approach doesn't scale well as users try to add new information sources to their resource pool. Imagine, in my case, where I am lucky to update my blog once a month during busy periods; readers would quickly lose interest in checking for new updates. Data feeds in a standardized format can be accessed and aggregated into a single tool or web page where users can see right away whether new information is available. An update from a site that publishes less often can receive the same attention as updates from the more prolific data sources. RSS is one such format for delivering content. Online news, blogs, products, and catalogs can be syndicated and delivered in a standardized way that enables readers to stay informed of existing and new information without the effort of constantly looking for new data.
Atom is another set of standards for providing the same capability. The Atom Syndication Format is an XML language that is used to provide web feeds similar to RSS. Atom provides the Atom Publishing Protocol (APP) as a way to update or publish to web resources.
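As an illustration (all names, URLs, and dates below are invented for the example), a minimal Atom feed carries a feed-level identifier, title, and timestamp, plus one or more entries with the same three elements:

```xml
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Blog</title>
  <id>http://example.org/feed</id>
  <updated>2008-06-01T12:00:00Z</updated>
  <entry>
    <title>A sample post</title>
    <id>http://example.org/posts/1</id>
    <updated>2008-06-01T12:00:00Z</updated>
    <link href="http://example.org/posts/1"/>
    <summary>Entry content goes here.</summary>
  </entry>
</feed>
```

The structure is deliberately simple, which is exactly what makes feeds easy to generate on the server and aggregate on the client.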
Both of these services are readily available on many if not most websites. Usually they are offered together to enable consumers to choose the format that best fits their needs. You have probably seen the icons shown in Figure 1.5 embedded in many of the websites that you visit.
Figure 1.5 RSS and Atom icons
When you click on one of these icons, you will probably see an XML-based representation of the content that is available on the site. Using a feed reader, you can import the URL of either feed to incorporate that data feed into your syndication tool.
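A feed reader's core job is straightforward: fetch the feed's XML and pull out the entries. The following sketch illustrates the idea by extracting item titles from an RSS fragment with a regular expression; the feed content is invented for the example, and a real reader would use a proper XML parser rather than pattern matching:

```javascript
// An invented RSS fragment, as a feed reader might receive it.
var feed =
  "<rss version=\"2.0\"><channel>" +
  "<item><title>First post</title></item>" +
  "<item><title>Second post</title></item>" +
  "</channel></rss>";

// Collect the text inside each <title> element.
var titles = [];
var pattern = /<title>([^<]*)<\/title>/g;
var match;
while ((match = pattern.exec(feed)) !== null) {
  titles.push(match[1]);
}

console.log(titles); // [ 'First post', 'Second post' ]
```

Everything else a reader does (polling, deduplicating, displaying) builds on this basic extract-and-aggregate step.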
Deciding upon a syndication strategy can be important to enabling the sharing of libraries and new content repositories in your organization. This strategy can help you understand how you want to expose information to other systems and end users. Let's face it, no organization has a single source of knowledge. Knowledge management experts have worked for years on collecting, organizing, and sharing information and collective knowledge within the enterprise, with limited success. The reality is that it is hard to keep up with everything going on in large organizations, and new knowledge sources seem to spring from the ground. Beyond document repositories, which might be tightly controlled, information can come from widely varied sources: blogs, industry news, and research. Adopting a syndication strategy can help you avoid trying to control the knowledge and simply acknowledge that you can benefit from these new sources.