Performance is a serious issue. A website must be able to handle the demands of users today, as well as adapt to increasing demands in the future. A website's responsiveness to customers has a direct effect on the success of an organization, yet many developers struggle with the complex issues of website performance. This book is a timely guide for enterprise website developers and QA teams. The authors draw on their experience analyzing and improving hundreds of websites to show how to conduct an effective performance analysis. They demonstrate how to solve common yet difficult problems, how to monitor tests, and how to analyze the collected data. Exploring common website types such as brokerage, e-commerce, and B2B, they discuss each type's distinct performance needs and how to design a performance test for it. They use IBM WebSphere as the example Java application server, yet the content transfers easily to others, such as BEA's popular WebLogic.
Java Test Environment Construction and Tuning
Foreword.
Introduction.
Acknowledgments.
1. Basic Performance Lingo.
Measurement Terminology.
Load: Customers Using Your Web Site.
Throughput: Customers Served over Time.
Response Time: Time to Serve the Customer.
Optimization Terminology.
Path Length: The Steps to Service a Request.
Bottleneck: Resource Contention under Load.
Scaling: Adding Resources to Improve Performance.
Summary.
Web Content Types.
Web Application Basics.
The Model-View-Controller Design Pattern.
Servlets.
JavaServer Pages (JSPs).
Assorted Application Server Tuning Tips.
Beyond the Basics.
HTTP Sessions.
Enterprise JavaBeans (EJBs).
Database Connection Pool Management.
Web Services.
Other Features.
Built-in HTTP Servers.
General Object Pool Management.
Multiple Instances: Clones.
Summary.
Network Components.
Routers.
Firewalls.
Proxy Servers.
Network Interface Cards (NICs).
Load Balancers.
Affinity Routing.
HTTP Servers.
Threads or Processes (Listeners).
Timeouts.
Logging.
Keep-Alive.
Operating System Settings.
SSL/HTTPS.
Plug-Ins.
Application Servers.
Security.
Databases and Other Back-End Resources.
Caching.
Web Site Topologies.
Vertical Scaling.
Horizontal Scaling.
Choosing between a Few Big Machines or Many Smaller Machines.
Best Practices.
Summary.
The Java Virtual Machine.
Heap Management.
Garbage Collection.
Java Coding Techniques.
Minimizing Object Creation.
Multi-Threading Issues.
Summary.
Financial Sites.
Caching Potential.
Special Considerations.
Performance Testing Considerations.
B2B (Business-to-Business) Sites.
Caching Potential.
Special Considerations.
Performance Testing a B2B Site.
e-Commerce Sites.
Caching Potential.
Special Considerations.
Performance Testing an e-Commerce Site.
Portal Sites.
Caching Potential.
Special Considerations: Traffic Patterns.
Performance Testing a Portal Site.
Information Sites.
Caching Potential.
Special Considerations: Traffic Patterns.
Performance Testing an Information Site.
Pervasive Client Device Support.
Caching Potential.
Special Considerations.
Performance Testing Sites That Support Pervasive Devices.
Web Services.
Summary.
Test Goals.
Peak Load.
Throughput Estimates.
Response Time Measurements.
Defining the Test Scope.
Building the Test.
Scalability Testing.
Building the Performance Team.
Summary.
Getting Started.
Pet Store Overview.
Determining User Behavior.
A Typical Test Script.
Test Scripts Basics.
Model the Real Users.
Develop Multiple, Short Scripts.
Write Atomic Scripts.
Develop Primitive Scripts.
Making Test Scripts Dynamic.
Support Dynamic Decisions.
Dynamically Created Web Pages.
Dynamic Data Entry.
Provide Sufficient Data.
Building Test Scenarios.
Putting Scripts Together.
Use Weighted Testing.
Exercise the Whole Web Site.
Common Pitfalls.
Inaccuracies.
Hard-Coded Cookies.
Unsuitable Think Times.
No Parameterization.
Idealized Users.
Oversimplified Scripts.
Myopic Scripts.
Summary.
Production Simulation Requirements.
Users.
Scripts.
Automation and Centralized Control.
Pricing and Licensing.
Tool Requirements for Reproducible Results.
Reporting.
Verification of Results.
Real-Time Server Machine Test Monitoring.
Buy versus Build.
Summary.
The Network.
Network Isolation.
Network Capacity.
e-Commerce Network Capacity Planning Example.
Network Components.
Network Protocol Analyzers and Network Monitoring.
The Servers.
Application Server Machines.
Database Servers.
Legacy Servers.
The Load Generators.
Master/Slave Configurations.
After the Performance Test.
Hardware and Test Planning.
Summary.
Case Study Assumptions.
Fictional Customer: TriMont Mountain Outfitters.
An Introduction to the TriMont Web Site.
Site Requirements.
Initial Assessment.
Next Steps.
Detailed TriMont Web Site Planning Estimates.
Calculating Throughput (Page Rate and Request Rate).
Network Analysis.
HTTP Session Pressure.
Test Scenarios.
Moving Ahead.
Summary.
Testing Overview.
Test Analysis and Tuning Process.
Test and Measure.
Validate.
Analyze.
Tune.
Test Phases.
Phase 1: Simple, Single-User Paths.
Phase 2: User Ramp-Up.
Test Environment Configurations.
Start Simple.
Add Complexity.
Summary.
CPU Utilization.
Monitoring CPU on UNIX Systems.
Monitoring CPU on Windows Systems.
Monitoring CPU with a Test Tool.
Java Monitoring.
Verbose Garbage Collection.
Thread Trace.
Other Performance Monitors.
Network Monitoring.
Software Logs.
Java Application Server Monitors.
Summary.
Underutilization.
Insufficient Network Capacity.
Application Serialization.
Insufficient Resources.
Insufficient Test Client Resources.
Scalability Problem.
Bursty Utilization.
Application Synchronization.
Client Synchronization.
Back-End Systems.
Garbage Collection.
Timeout Issues.
Network Issues.
High CPU Utilization.
High User CPU.
High System CPU.
High Wait CPU.
Uneven Cluster Loading.
Network Issues.
Routing Issues.
Summary.
Update.
Test Environment.
Hardware Configuration.
Input Data.
Calculating Hardware Requirement Estimate (Pre-Test).
HTTP Session Pressure.
Testing Underway.
Burstiness.
Underutilization.
Next Steps.
Summary.
Review Plan Requirements.
Review Load, Throughput, and Response Time Objectives.
Incorporate Headroom.
Review Performance Test Results.
Single-Server User Ramp-Up.
Scalability Data.
Processor Utilization.
Projecting Performance.
Projecting Application Server Requirements.
Projecting Hardware Capacity.
Scaling Assumptions.
Case Study: Capacity Planning.
Review Plan Requirements.
Review Performance Test Results.
Project Capacity.
Ongoing Capacity Planning.
Collecting Production Data.
Analyzing Production Data.
Summary.
Capacity Sizing Worksheet.
Input Data.
Calculating Peak Load (Concurrent Users).
Calculating Throughput (Page Rate and Request Rate).
Network Capacity Sizing Worksheet.
Input Data.
Calculating Network Requirements.
Network Sizing.
JVM Memory HTTP Session Sizing Worksheet.
Input Data.
Calculating HTTP Session Memory Requirement.
Hardware Sizing Worksheet.
Input Data.
Calculating Hardware Requirement Estimate (Pre-Test).
Capacity Planning Worksheet.
Part 1: Requirements Summary.
Part 2: Performance Results Summary.
Part 3: Capacity Planning Estimates.
Web Application Checklist.
Servlets.
JavaServer Pages.
JavaBeans.
XML/XSL.
Static Content.
Logging.
HTTP Session.
Enterprise JavaBeans.
Web Services.
Database Connection.
Object Pools.
Garbage Collection.
Component Checklist.
Routers.
Firewalls.
Proxy Servers.
Network Interface Cards.
Operating System.
HTTP Servers.
Web Container.
Thread Pools.
Enterprise JavaBean Container.
JVM Heap.
Application Server Clones.
Database Server.
Legacy Systems.
Test Team Checklist.
Test Team.
Support Team.
Web Application Developers.
Leadership and Management Team.
Test Environment Checklist.
Controlled Environment.
Network.
Hardware.
Prerequisite Software.
Application Code.
Back-End.
Test Simulation and Tooling Checklist.
Performance Test Tool Resources.
Test Scripts and Scenarios.
Tools.
Performance Analysis and Test Tool Sources.
Java Profilers.
Performance Test Tools.
Java Application Performance Monitoring.
Database Performance Analysis.
Network Protocol Analyzers.
Product Capabilities.
Production Monitoring Solutions.
Load Driver Checklist.
Sample LoadRunner Script.
LoadRunner Initialization Section.
LoadRunner Action1 Section.
LoadRunner End Section.
Sample SilkPerformer Script.
Sign-in, Browse, and Purchase Script.
Search Script.
New Account Script.
Performance Test Results Worksheet.
Results Verification Checklist.
Tuning Settings Worksheet.
Hardware.
Operating System.
HTTP Server.
Application Server.
JVM.
Application Parameters.
Database Server.
Bottleneck Removal Checklist.
Underutilization.
Bursty Utilization.
High CPU Utilization.
Uneven Cluster Loading.
Summary Test Results Graph.
Does your website have enough capacity to handle its busiest days? Will you lose potential customers because your web application is too slow? Are you concerned about your e-business becoming the next cautionary tale highlighted on the evening news?
The authors of this book combine their experiences with hundreds of public and private websites worldwide to help you conduct an effective performance analysis of your website. Learn from the experts how to design performance tests tailored to your website's content and customer usage patterns.
In addition to designing better tests, the book provides helpful advice for monitoring tests and analyzing the data collected. Are you adding load, but not seeing increased throughput? Do some machines in your environment work much harder than the others? Use the common symptom reference to isolate bottlenecks and improve performance.
Since many sites use a Java application server to power their web applications, the authors discuss the special considerations (garbage collection, threading, and heap management, to name a few) unique to the Java environment. The book also covers the special performance requirements of sites supporting handheld devices, as well as sites using Enterprise JavaBeans (EJBs).
Designed to benefit those with a little or a lot of performance testing background, this book helps you get the most from your performance analysis investment. Learn how to determine the best your site will do under the worst of conditions.
About a year ago I was sent out to a large Fortune 500 WebSphere customer to solve a critical "WebSphere performance" problem. The customer was close to putting a WebSphere application into production, and with less than a week to go they believed they had discovered that WebSphere "did not perform well."
No one seemed to have many details about the problem, but we were assured by the highest levels of management at both the customer's company and IBM that this was indeed a critical situation. So I dropped everything and headed out the next morning on a 6:30am flight. At the company I met with the customer representative, who showed me an impressive graph (the output of a popular load-testing tool) that demonstrated that their application reached a performance plateau at five simultaneous users, and that response times increased dramatically as more load was placed on the system.
I asked if they could run the test while I watched so that I could see the numbers myself. I was told no: the hardware they were using for performance testing was also being used for user-acceptance testing, and would not be available until after 4pm that day. So I asked if I could see the test scripts themselves, to see how they were testing the application. Again the answer was no. The fellow who wrote the scripts wouldn't return until 5pm, and no one else knew where he kept them.
Not wanting to seem like I was wasting time, I next asked for the source code for the application. They were able to provide it, and I spent the next eight hours reading through it and making notes about possible bottlenecks. When the script author returned at 5pm, we reconfigured the test machine and ran the script. Sure enough, the performance curve looked like what the test had caught the previous night. I asked him to walk me through the code of the test script. He showed me what each test did, and how the results were captured. I then asked him about one particular line of code in the middle of the script: "So, here you seem to be hard-coding a particular user ID and password into the test. You never vary it, regardless of the number of simultaneous users the load-testing tool simulates?"
He said that this was true and asked if that could be a problem. I explained to him that their test setup used a third-party security library, and that one of the "features" of this library was that it prevented users with the same user ID and password from logging in twice. In fact, it "held" requests for a second login until the first user with that login had logged out. I had picked up on this fact by reading the code that morning. I then asked if he could rewrite the script to use more than one login ID. In fact, if they wanted to test up to a hundred simultaneous logins, could he rewrite the script so that it used a hundred different login IDs? He ended up doing just that, and the next night, after a few more such adventures, we reran the modified test.
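The fix described here, giving every simulated user its own login, is the standard cure for this kind of artificial serialization. As a rough illustration only (the class and the "userN"/"passN" naming scheme are hypothetical, not the customer's actual script or data), a test harness can generate one distinct credential per virtual user:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: build one unique credential pair per virtual user,
// so no two simulated logins ever share an account and the security
// library never blocks a second login on the same ID.
public class CredentialPool {

    public static List<String[]> build(int virtualUsers) {
        List<String[]> pool = new ArrayList<>();
        for (int i = 1; i <= virtualUsers; i++) {
            // Each virtual user gets its own account name and password.
            pool.add(new String[] { "user" + i, "pass" + i });
        }
        return pool;
    }

    public static void main(String[] args) {
        List<String[]> pool = build(100);
        System.out.println(pool.size() + " credentials generated");
        System.out.println("first: " + pool.get(0)[0]
                + ", last: " + pool.get(99)[0]);
    }
}
```

In a real load-testing tool the pool would come from a parameter file rather than a loop (tools such as LoadRunner and SilkPerformer support data parameterization for exactly this reason), but the principle is the same: the test must vary anything the application treats as exclusive to a single user.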
This time WebSphere performed like a champ. There was no performance bottleneck, and the performance curve that we now saw looked more like what I had expected in the first place. There were still some minor delays, but the response times were much more in line with other, untuned customer applications I had seen.
So what was wrong here? Why did this company have to spend an enormous amount of money on an expensive IBM consultant just to point out that their tests weren't measuring what they thought they measured? And why were we working under such stressful, difficult circumstances, at the last possible moment, with a vendor relationship on the line?
What it came down to was a matter of process. Our customer did not have a proper process in place for performance testing. They did not know how to go about discovering performance problems so that they could be eliminated. The value that this company placed on performance testing was demonstrated by the fact that the performance tests were scheduled for after hours, and were done on borrowed hardware. Also, the fact that this problem was not discovered until less than a week before the planned deployment date of the application showed the priority that performance testing had among other development activities; it was an "afterthought," not a critical, ongoing part of development.
I have repeatedly seen large, expensive systems fail, and thousands or millions of dollars lost, because of this attitude. As a wise man once said, "failing to plan is planning to fail." The book you hold in your hand can help you avoid such failures. It offers concise, easy-to-follow explanations of the different kinds of performance problems that large-scale web applications face. More important, it provides you with a process and methodology for testing your systems in order to detect and fix such problems before they become project-killers.
The authors of this book are all respected IBM consultants and developers, with years of collective experience in helping solve customer problems. They've dealt with the foibles of application servers, customer application code, network configuration issues, and a myriad of other performance-stealing problems. They convey their experiences and recommendations in a laid-back, easy-to-understand way that doesn't require you to have a Ph.D. in stochastic modeling. I believe their greatest contribution to the world is a process for injecting performance testing into all stages of the development process, making it, appropriately, a key part of web site development.
If you are building a large web site using J2EE technologies, or even just a small departmental application, buy this book. Performance problems can creep into applications of all sizes, and the time you will save by following the advice given here will easily repay the purchase price of this book many times over. I've come to rely on the authors for advice in this area, and I'm sure you will too.
-Kyle Brown
Senior Technical Staff Member
IBM Software Services for WebSphere