Why Optimization Matters

by John Sharp
Content Master Ltd


Abstract

Optimization is the act of designing and developing systems to take maximum advantage of the resources available. For example, applications can be optimized to benefit from the large quantity of memory available on a particular computer, the speed of the specific I/O hardware available, or particular features of the processor being used. Even in development environments such as the Java* language with Java 2 Enterprise Edition* and the Microsoft .NET* Framework, where much of the low-level detail and hardware-specific code has been abstracted away to a runtime engine, there is much that a designer and developer should do to ensure that applications execute in an optimal manner. An application that does not perform well will not be a success, so the benefits of optimization are as much commercial as they are a simple convenience for end users.


Scope

The purpose of this paper is to describe the benefits of designing optimized systems based on Intel® hardware, and to summarize some of the risks of not doing so.


Why Optimize?

As has already been mentioned, the reasons for optimizing an application are essentially commercial; very few consumers will purchase a system that has a reputation for poor performance, especially if a more viable alternative is available for a similar price. In the world of bespoke development, performance may also be a contractual or even a legal requirement (through Service Level Agreements). For example, the systems used by many clearing banks must be capable of performing large numbers of transactions in a short period of time, often dictated by regulatory authorities such as the Federal Reserve in the United States. Failure to complete processing within a given timeframe can result in a bank incurring interest charges on the funds not transferred (in many cases, the transactions can total well over a billion dollars, so even one night's interest payments would be considerable), a fine being imposed, or even the loss of a trading license if performance is consistently poor.


What is Optimization?

The definition of optimization according to Dictionary.com* is "The procedure or procedures used to make a system or design as effective or functional as possible." Optimization is thus subjective, and how well a system is optimized depends upon the expectations and point of view of the user.

An apocryphal story tells of a software developer who deliberately put delay loops and other time-wasting pieces of code in the programs he developed. On delivery, the customer would test the system and agree that the functionality was fine but the performance was poor. The developer would agree to spend some time attempting to speed up the system, but it would take a couple of weeks and cost additional money. Having gained the consent of the customer, the developer would then remove some of the loops from the code and deliver the improved version to the customer. Invariably the customer would agree that performance had improved, but could the developer just squeeze a bit more out of the system? The developer would then go away and remove the remaining code delays and deliver the final version of the system to the customer, who felt happy that the system was now fine. The developer had no qualms about using this strategy as he argued that if he had delivered the original system without any delays in the first place, the customer would still have asked for it to be sped up!

It is important to quantify performance so that systems can be optimized within realistic expectations. It is also important to state these performance measures up front, possibly as part of the customer acceptance criteria, so that the system can be designed and implemented accordingly.


Quantifying Optimization

How well a system is optimized is often measured by the quality of service it delivers. The quality of service is more objective than simple user expectations, and it mandates performance measures in terms of response times, throughput, and so on. The quality of service will also govern aspects such as the reliability and availability of a system, and these features can also have a bearing on the optimizations applied. Therefore, an important question to ask when designing a system is "what are the parameters that define acceptable performance?"
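
As a deliberately simplified illustration of turning a quality-of-service statement into something testable, the following Java sketch checks a set of measured response times against a hypothetical target of 200 milliseconds at the 95th percentile. The threshold, class name, and sample figures are illustrative assumptions, not values taken from any particular system.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    // Minimal sketch: checking measured response times against an agreed
    // quality-of-service target. The 200 ms threshold and the sample values
    // are hypothetical, chosen purely for demonstration.
    public class QosCheck {
        static final long MAX_RESPONSE_MILLIS = 200;   // hypothetical SLA target

        // Returns true if the 95th-percentile response time meets the target.
        static boolean meetsTarget(List<Long> responseTimesMillis) {
            List<Long> sorted = new ArrayList<>(responseTimesMillis);
            Collections.sort(sorted);
            int index = (int) Math.ceil(0.95 * sorted.size()) - 1;
            return sorted.get(index) <= MAX_RESPONSE_MILLIS;
        }

        public static void main(String[] args) {
            List<Long> samples = List.of(120L, 90L, 150L, 300L, 110L, 95L, 130L);
            System.out.println("Meets target: " + meetsTarget(samples));
        }
    }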

Responsiveness

To end users sitting in front of a desktop computer, performance is usually equated with response time. An application that fetches 1000 rows of data from a database before displaying the first row is likely to be perceived by the user as slower than an application that fetches the first 10 rows and displays them while retrieving the remainder either as a lower-priority background task, or on demand as and when the user "pages" to that data. In situations such as this, the rich client is king. The desktop computer will require software capable of displaying data while performing processing and issuing requests to retrieve more data at the same time. A desktop system running the Intel® Pentium® III or Pentium 4 processor, or a workstation running the Intel Xeon® processor that can perform multithreading at the hardware level, is well suited to this type of work.
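
The Java sketch below illustrates the "show the first rows now, fetch the rest in the background" pattern just described. DataSource, fetchRows(), and display() are hypothetical stand-ins for the data-access and user-interface code a real client application would use.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;

    // Minimal sketch of incremental data retrieval: show the first page
    // immediately and fetch the remainder as a background task.
    public class ResponsiveClient {
        interface DataSource {
            List<String> fetchRows(int offset, int count);   // hypothetical query API
        }

        static void display(List<String> rows) {
            rows.forEach(System.out::println);               // stand-in for UI rendering
        }

        static void loadData(DataSource source) {
            // Fetch and show the first 10 rows immediately so the user sees results.
            display(source.fetchRows(0, 10));

            // Retrieve the remaining rows as a lower-priority background task.
            CompletableFuture.runAsync(() -> display(source.fetchRows(10, 990)));
        }

        public static void main(String[] args) throws InterruptedException {
            // Stub data source that fabricates row labels for demonstration purposes.
            DataSource stub = (offset, count) -> {
                List<String> rows = new ArrayList<>();
                for (int i = offset; i < offset + count; i++) rows.add("row " + i);
                return rows;
            };
            loadData(stub);
            Thread.sleep(500);   // let the background fetch complete before the demo exits
        }
    }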

Throughput

To data-processing departments, performance means throughput, often measured in terms of the number of transactions processed per second. In this scenario raw processing power coupled with devices that can provide fast data storage is essential. Requests received from potentially thousands of clients must be processed and responded to quickly. The Intel Itanium® processor and the Intel Xeon processor are designed for just such an environment, especially when used in conjunction with an integrated Ethernet* controller supplying fast access to the network, and RAID* controllers for managing disk access.
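
As a rough illustration of measuring throughput, the Java sketch below pushes a batch of simulated transactions through a thread pool and reports the rate in transactions per second. The processTransaction() method is a hypothetical placeholder for real work such as a database update or message dispatch.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    // Minimal sketch of measuring throughput as transactions per second.
    public class ThroughputTest {
        static void processTransaction(int id) {
            // Simulated unit of work; a real system would hit a database or queue here.
            Math.sqrt(id);
        }

        public static void main(String[] args) throws InterruptedException {
            int transactions = 100_000;
            ExecutorService pool = Executors.newFixedThreadPool(
                    Runtime.getRuntime().availableProcessors());

            long start = System.nanoTime();
            for (int i = 0; i < transactions; i++) {
                final int id = i;
                pool.submit(() -> processTransaction(id));
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            double seconds = (System.nanoTime() - start) / 1_000_000_000.0;

            System.out.printf("Throughput: %.0f transactions/second%n", transactions / seconds);
        }
    }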

Reliability and Availability

These are frequently overlooked aspects of high-performance systems, although as requirements they are often critical. (A clearing bank, for example, cannot afford for its systems to go down, even for a few minutes, without expecting to incur some financial penalty.) A system that is not functioning has zero performance. An optimized system must therefore also be extremely reliable. This means redundancy in one form or another. Multiple processors, clusters, RAID, and the ability to hot-swap devices, power supplies, and disks are essential. A hardware architecture such as the boxed Intel Server Platform is a must to support an "always on" system.
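
A small worked example shows why redundancy pays off: if independent components are each available a fraction a of the time, n redundant copies are available 1 - (1 - a)^n of the time. The Java sketch below applies this standard formula with an illustrative 99 percent figure; the numbers are assumptions for demonstration, not measurements.

    // Minimal sketch: combined availability of n independent redundant components.
    public class Availability {
        static double redundantAvailability(double singleAvailability, int copies) {
            return 1.0 - Math.pow(1.0 - singleAvailability, copies);
        }

        public static void main(String[] args) {
            System.out.println(redundantAvailability(0.99, 1));  // 0.99   (~3.7 days of downtime per year)
            System.out.println(redundantAvailability(0.99, 2));  // 0.9999 (~53 minutes of downtime per year)
        }
    }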


When Should Optimization Occur?

It should be clear that performance and optimization, like any form of quality, are not something that can be "bolted on" to a system as an extra once it has been created. Optimization is an ongoing, non-functional requirement that affects all stages in the development of a system, from analysis and design through development and implementation. The analysis phase will identify the areas that need to be addressed by optimization, the design phase will indicate how major optimizations should be achieved, and the development phase will apply these optimizations. During the implementation phase, when the system is deployed on the appropriate hardware resources, further areas of optimization will be identified through performance monitoring during application testing.

Once a system is up and running, optimization does not finish. The system should be monitored for any signs of instability or bottlenecks, and corrective action taken before a potential failure occurs. As the number of client computers accessing a server scales up, optimization may involve adding further hardware (memory, processors, or even entire servers in a clustering environment), and balancing data access across additional disk controllers.


The Risks of Poor Optimization

Optimization can be a double-edged sword. For example, while an application designed to exploit memory may perform well in its intended environment, it may operate in a less than optimal manner when deployed on hardware that does not match the expected memory requirements. Hardware-specific optimization in program code may also affect the portability of a system. At the programming level, optimization has a reputation for producing complex software based on tortuous logic, making upgrades and future maintenance more difficult. This is not an inevitable outcome, and in many cases it is due to the developer's poor practice and documentation. The message is clear: as with any piece of software, accurate documentation is critical if the system is to be maintained effectively.


How Should a System be Optimized?

In the "good old days" developers expended a lot of effort formulating the most efficient mechanisms to perform specific tasks on expensive hardware. The software engineering departments of many universities invariably had teams of researchers performing mathematical proofs on the latest algorithms, identifying the speed with which they would operate. (These were the days of Tony Hoare and Donald Knuth, famed for their works on the ways of searching and sorting data.) This was, and still is, valuable work, applied by many pieces of commercial software. However, the performance of modern hardware far outweighs that which was available in even the recent past (Moore's Law still applies), and the costs also continue to drop.

These days, optimization is much more likely to involve identifying the appropriate hardware to use as a platform, and then tuning the system for that hardware. It is far more cost-effective to add another 512 MB of memory to a computer running an application server than to pay for a consultant or developer to rewrite part of the system. With the commoditization of software (how many sites write their own database server software these days rather than using a commercial package?), hardware selection is critical since the customer often does not have access to the underlying source code of the system.

That said, fast hardware is not an excuse for poor design and coding practice; if an application halts while waiting for user input, or while data is locked in a database, it does not matter how fast the processor is. Software tools such as the Intel VTune™ Performance Analyzer should be used to identify, isolate, and rectify bottlenecks in in-house code, and software vendors often supply their own tools for monitoring and maintaining the performance of their systems.
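
A profiler such as the VTune analyzer gives far more detail, but a first pass at locating a bottleneck in in-house code can be as simple as timing suspect sections, as in the Java sketch below. The timed() helper and the section labels are hypothetical illustrations, not part of any tool's API.

    // Minimal sketch: hand-rolled timing of suspect code sections as a crude
    // first step before reaching for a full profiler.
    public class SimpleTimer {
        static void timed(String label, Runnable section) {
            long start = System.nanoTime();
            section.run();
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
            System.out.println(label + " took " + elapsedMillis + " ms");
        }

        public static void main(String[] args) {
            timed("load data", () -> { /* data-access code under suspicion */ });
            timed("render report", () -> { /* rendering code under suspicion */ });
        }
    }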


Conclusion

Optimization is a key requirement of all commercial systems. Failure to optimize can at best mean user frustration, and at worst the loss of an entire business.

Vendors such as Intel offer powerful hardware eminently suitable for hosting the range of commercial services now available, and scalable enough to cope with the demands of the future. A distributed architecture based on reliable networking technology capable of supporting the required bandwidth, multithreading processors, I/O devices capable of storing and retrieving data quickly, and fast memory is the ideal platform for today's systems. Data can be stored and secured easily on powerful server computers and made accessible to multiple client machines. Client computers can provide the horsepower needed to process and display data in a user-friendly manner, and if designed carefully, the distributed nature of such a system can reduce the chances of a failure at a single point bringing the entire system down. Further reliability can be provided through redundancy, and processor architectures such as those provided by Intel are well suited to clustering.


About the Author

The author of this white paper works for Content Master Ltd., a technical authoring company in the United Kingdom specializing in the production of training and educational materials. For more information on Content Master, please visit its Web site at www.contentmaster.com*.

John Sharp is a Principal Technologist at Content Master Ltd. There he researches and develops technical content for technical training courses, seminars, and white papers. Throughout his development career, John has been active in training, developing, and delivering courses. He has conducted courses on subjects ranging from UNIX Systems Programming, to SQL Server Administration, to Enterprise Java Development. He has used his experience to create a broad range of training materials covering many subjects. John is deeply involved with .NET development, writing courses, building tutorials, and delivering conference presentations covering Visual C# .NET development and ASP.NET. John has also authored books on C# and J#. He lives in Tetbury, Gloucestershire in the United Kingdom.

