Web Services Applications Embrace Heterogeneity

Introduction

The new breed of applications removes human-oriented inefficiencies from communications among computers.

by Enrique Castro-Leon, Intel Corporation

Web services enable computers to interact autonomously through the Web, without the need for a human to interact with a browser.

This capacity brings the advantages of spontaneous interactions and interoperability to machine-to-machine interactions. The transition from human-driven to machine-driven interactions provides insight into the role of the four basic entities associated with Web services: XML, SOAP, UDDI, and WSDL.


Fundamental Web Services Protocols

Web services represent an evolutionary development of the original, browser-based Web developed at CERN in the early 1990s.

The original Web assumes the existence of a human being operating a browser and a Web server that responds to browser requests over a network. Web services add a number of standards and protocols that allow computers in a network to interact autonomously, driven by programs, in lieu of browser/server interactions.

The loosely coupled, architecture-independent characteristics of the original Web are preserved in Web services. The essential standards that comprise Web services include the following:

  • A universal data format: Extensible Markup Language (XML)
  • A protocol to convey XML: the Simple Object Access Protocol (SOAP)
  • A method for different Web services to find each other: the Universal Discovery, Description, and Integration registry (UDDI)
  • An XML-based language to describe what a specific Web service offers: Web Services Description Language (WSDL)

Within the traditional Web, there are at least two major roadblocks to machine-to-machine interactions.

First, the primary document format, Hypertext Markup Language (HTML), mixes formatting instructions with data. If we replace the browser with a listening program, this program will have trouble gleaning the data (the useful information) from the display information (great for humans, useless to machines).

Second, the traditional Web has no standard formats for application data exchange beyond HTML. An everyday example is a user trying to download monthly transaction data from a bank directly into a consumer-finance application.

While many banks support data download in various formats, these formats are not necessarily suitable for many consumer applications, and users may or may not be able to make use of them without extra work, if at all.

XML provides a universal data format that can be used by all Web services-enabled applications.
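To make this concrete, here is a minimal sketch of how a consumer-finance application might consume bank data once it arrives as XML rather than display-oriented HTML. The element names (`statement`, `transaction`, `amount`) are hypothetical, not any bank's actual schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML statement a bank might return to a Web services client.
statement = """
<statement account="12345">
  <transaction date="2004-03-01">
    <description>Grocery store</description>
    <amount>-42.50</amount>
  </transaction>
  <transaction date="2004-03-02">
    <description>Salary</description>
    <amount>1500.00</amount>
  </transaction>
</statement>
"""

root = ET.fromstring(statement)
# Because the data carries no formatting markup, extracting it is trivial:
amounts = [float(t.findtext("amount")) for t in root.findall("transaction")]
balance_change = sum(amounts)
print(balance_change)  # 1457.5
```

Contrast this with scraping the same figures out of an HTML page, where the program would have to fight through tables, fonts, and layout markup meant for human eyes.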


Making the Web Safe for Machines

Let us go through the mental exercise of enabling two programs to communicate using Web protocols (this exercise is essentially the process that led to Web services).

Unlike a browser, which is at heart a display engine, there are virtually no restrictions on how a client program can be designed under Web services. The client program can be an application that users are already familiar with, such as a spreadsheet program, augmented to support Web services protocols.

In effect, this is simply an augmented version of the traditional client/server modality, except that the connection between the client and the server can be Internet-mediated.

These programs will combine the richness of client programs with the interoperability of the Web.


Replacing the Browser

Since we are replacing the human-driven browser with a computer program, the goal here is to "mimic" the process of browsing, but under programmatic control. The canonical (or minimal) browsing process has four elements:

  • A commonly agreed-upon data format.
  • A transport mechanism to move data between computers.
  • A search mechanism to find Web resources.
  • A means to find out what is in a Web site before actually looking at it.

Let us examine each one of the four elements in turn.

The industry standard format for Web services is XML, whereas the commonly agreed-upon data format for the traditional Web is HTML. HTML mixes formatting commands with data, because it was architected as a display language. XML is more suitable for machine-to-machine interactions.

The transport mechanism to move data between computers under Web services is SOAP. For simplicity, Web services re-use as much of the existing Web infrastructure as possible, including Web servers. This goal is attained by embedding XML data inside HTTP messages, so Web servers keep functioning as they always have. SOAP allows XML data to ride piggyback on HTTP messages.
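The piggyback arrangement can be sketched as follows: the application payload is wrapped in a SOAP envelope and sent as the body of an ordinary HTTP POST. The endpoint, host name, and `GetBalance` operation below are hypothetical, chosen only for illustration.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
ET.register_namespace("soap", SOAP_NS)

# Build a SOAP envelope containing a hypothetical request payload.
envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
request = ET.SubElement(body, "GetBalance")   # hypothetical operation
ET.SubElement(request, "account").text = "12345"

payload = ET.tostring(envelope, encoding="unicode")

# An ordinary Web server sees nothing unusual -- just a POST carrying an
# XML body and a SOAPAction header:
http_request = (
    "POST /service HTTP/1.1\r\n"
    "Host: bank.example.com\r\n"
    "Content-Type: text/xml; charset=utf-8\r\n"
    'SOAPAction: "GetBalance"\r\n'
    f"Content-Length: {len(payload.encode('utf-8'))}\r\n"
    "\r\n"
) + payload
print(http_request)
```

Because the request is plain HTTP, it passes through the same servers, proxies, and firewalls that already carry browser traffic, which is precisely why SOAP re-uses HTTP rather than inventing a new transport.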

The search mechanism to find Web resources in the Web services context is UDDI. Web sites need to be located, whether they are accessed by humans browsing or by computers running Web services-enabled client programs. Machines use UDDI similarly to the way humans use search engines.

The means to find out what is in a Web site before actually looking at it is WSDL. Humans running a browser read entries from a search-engine return page and click on interesting links in the description. Programming a machine to perform a similar process is a complex task.

WSDL defines the binding between two machines using remote procedure calls to communicate with Web services. Other communication methods such as publish/subscribe and message passing are also possible.
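A sketch of what a client gains from WSDL: before calling a service, a program can read the service's description and discover which operations it offers, much as a human skims a search-engine summary before clicking. The pared-down WSDL fragment and operation names below are hypothetical.

```python
import xml.etree.ElementTree as ET

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"

# A drastically simplified, hypothetical WSDL document.
wsdl = """
<definitions xmlns="http://schemas.xmlsoap.org/wsdl/" name="BankService">
  <portType name="BankPort">
    <operation name="GetBalance"/>
    <operation name="TransferFunds"/>
  </portType>
</definitions>
"""

root = ET.fromstring(wsdl)
# Walk the document and list the operations the service advertises:
operations = [op.get("name")
              for op in root.iter(f"{{{WSDL_NS}}}operation")]
print(operations)  # ['GetBalance', 'TransferFunds']
```

A real WSDL document also describes message types and bindings, but even this skeleton shows the idea: the service is self-describing, so a program can decide what to call without human inspection.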

These four elements are sufficient to allow developers to put together a functioning Web service. Further simplifications are possible in real-life implementations. For instance, if the client and the server already know how to find each other, then there is no need to use UDDI or WSDL.


Web Services Four-Layer Architecture

The figure below summarizes the Web services four-layer architecture and its relationship to the traditional Web.


Figure 1. Web services basic protocols vs. the traditional Web.

There are richer implementations that go beyond the canonical four-layer architecture. For instance, the basic protocols do not encompass security, a problem that limited the initial adoption of the technology.

Two protocols have been approved recently: the Security Assertion Markup Language (SAML) provides a means for authentication, and WS-Security ensures secure XML transfers.

Additional protocols are also in the works to support transactional exchanges: a money transfer from one account to another requires multiple messages, and these messages need to be linked together. Functions that are not standardized require a custom implementation, which limits interoperability to those peers sharing the same implementation.
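The linking problem can be sketched in a few lines. Since the basic protocols say nothing about relating messages, one common custom approach is to tag every message in an exchange with a shared correlation ID. This sketch is purely illustrative, not a standard protocol; the message fields are invented for the example.

```python
import uuid

def new_transfer(from_acct, to_acct, amount):
    """Build the series of messages making up one logical money transfer."""
    txn_id = str(uuid.uuid4())   # a shared ID links the related messages
    return [
        {"txn": txn_id, "op": "debit",  "account": from_acct, "amount": amount},
        {"txn": txn_id, "op": "credit", "account": to_acct,   "amount": amount},
        {"txn": txn_id, "op": "commit"},
    ]

messages = new_transfer("12345", "67890", 100.00)
# Every message in the exchange carries the same transaction ID, so the
# receiving service can tie the debit, credit, and commit together:
assert len({m["txn"] for m in messages}) == 1
```

Because each party must agree on the meaning of fields like `txn`, such ad hoc schemes interoperate only among peers sharing the same implementation, which is exactly the limitation standardized transaction protocols aim to remove.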


Conclusion

The primary reason behind the industry support of Web services (and why they work at all) is the use of industry-standard protocols. These protocols may not be more efficient or elegant than some non-standard ones, but they have the trump-card advantage of being universally accepted.

The alphabet soup that is rampant among the emerging Web services protocols becomes more coherent once one recognizes the parallels (at least in the basic protocols) between the traditional Web and Web services.



About the Author

Enrique Castro-Leon's areas of expertise are technology strategy, business strategy, and distributed systems architecture. In addition to contributing to the foundation of Intel® Developer Services, he has written and edited a series of articles on Web services over the past year and has been instrumental in starting the Neighborhood Learning Center, an Oregon non-profit company for computer learning. Enrique holds Ph.D. and M.S. degrees in Electrical Engineering and Computer Science from Purdue University. He can be reached at enrique.g.castro-leon@intel.com.



