- SQL Server Reference Guide
- Introduction
- SQL Server Reference Guide Overview
- Table of Contents
- Microsoft SQL Server Defined
- SQL Server Editions
- SQL Server Access
- InformIT Articles and Sample Chapters
- Online Resources
- Microsoft SQL Server Features
- SQL Server Books Online
- Clustering Services
- Data Transformation Services (DTS) Overview
- Replication Services
- Database Mirroring
- Natural Language Processing (NLP)
- Analysis Services
- Microsoft SQL Server Reporting Services
- XML Overview
- Notification Services for the DBA
- Full-Text Search
- SQL Server 2005 - Service Broker
- Using SQL Server as a Web Service
- SQL Server Encryption Options Overview
- SQL Server 2008 Overview
- SQL Server 2008 R2 Overview
- SQL Azure
- The Utility Control Point and Data Application Component, Part 1
- The Utility Control Point and Data Application Component, Part 2
- Microsoft SQL Server Administration
- The DBA Survival Guide: The 10 Minute SQL Server Overview
- Preparing (or Tuning) a Windows System for SQL Server, Part 1
- Preparing (or Tuning) a Windows System for SQL Server, Part 2
- Installing SQL Server
- Upgrading SQL Server
- SQL Server 2000 Management Tools
- SQL Server 2005 Management Tools
- SQL Server 2008 Management Tools
- SQL Azure Tools
- Automating Tasks with SQL Server Agent
- Run Operating System Commands in SQL Agent using PowerShell
- Automating Tasks Without SQL Server Agent
- Storage – SQL Server I/O
- Service Packs, Hotfixes and Cumulative Upgrades
- Tracking SQL Server Information with Error and Event Logs
- Change Management
- SQL Server Metadata, Part One
- SQL Server Metadata, Part Two
- Monitoring - SQL Server 2005 Dynamic Views and Functions
- Monitoring - Performance Monitor
- Unattended Performance Monitoring for SQL Server
- Monitoring - User-Defined Performance Counters
- Monitoring: SQL Server Activity Monitor
- SQL Server Instances
- DBCC Commands
- SQL Server and Mail
- Database Maintenance Checklist
- The Maintenance Wizard: SQL Server 2000 and Earlier
- The Maintenance Wizard: SQL Server 2005 (SP2) and Later
- The Web Assistant Wizard
- Creating Web Pages from SQL Server
- SQL Server Security
- Securing the SQL Server Platform, Part 1
- Securing the SQL Server Platform, Part 2
- SQL Server Security: Users and other Principals
- SQL Server Security – Roles
- SQL Server Security: Objects (Securables)
- Security: Using the Command Line
- SQL Server Security - Encrypting Connections
- SQL Server Security: Encrypting Data
- SQL Server Security Audit
- High Availability - SQL Server Clustering
- SQL Server Configuration, Part 1
- SQL Server Configuration, Part 2
- Database Configuration Options
- 32- vs 64-bit Computing for SQL Server
- SQL Server and Memory
- Performance Tuning: Introduction to Indexes
- Statistical Indexes
- Backup and Recovery
- Backup and Recovery Examples, Part One
- Backup and Recovery Examples, Part Two: Transferring Databases to Another System (Even Without Backups)
- SQL Profiler - Reverse Engineering An Application
- SQL Trace
- SQL Server Alerts
- Files and Filegroups
- Partitioning
- Full-Text Indexes
- Read-Only Data
- SQL Server Locks
- Monitoring Locking and Deadlocking
- Controlling Locks in SQL Server
- SQL Server Policy-Based Management, Part One
- SQL Server Policy-Based Management, Part Two
- SQL Server Policy-Based Management, Part Three
- Microsoft SQL Server Programming
- An Outline for Development
- Database
- Database Services
- Database Objects: Databases
- Database Objects: Tables
- Database Objects: Table Relationships
- Database Objects: Keys
- Database Objects: Constraints
- Database Objects: Data Types
- Database Objects: Views
- Database Objects: Stored Procedures
- Database Objects: Indexes
- Database Objects: User Defined Functions
- Database Objects: Triggers
- Database Design: Requirements, Entities, and Attributes
- Business Process Model Notation (BPMN) and the Data Professional
- Business Questions for Database Design, Part One
- Business Questions for Database Design, Part Two
- Database Design: Finalizing Requirements and Defining Relationships
- Database Design: Creating an Entity Relationship Diagram
- Database Design: The Logical ERD
- Database Design: Adjusting The Model
- Database Design: Normalizing the Model
- Creating The Physical Model
- Database Design: Changing Attributes to Columns
- Database Design: Creating The Physical Database
- Database Design Example: Curriculum Vitae
- NULLs
- The SQL Server Sample Databases
- The SQL Server Sample Databases: pubs
- The SQL Server Sample Databases: NorthWind
- The SQL Server Sample Databases: AdventureWorks
- The SQL Server Sample Databases: AdventureWorks Derivatives
- UniversalDB: The Demo and Testing Database, Part 1
- UniversalDB: The Demo and Testing Database, Part 2
- UniversalDB: The Demo and Testing Database, Part 3
- UniversalDB: The Demo and Testing Database, Part 4
- Getting Started with Transact-SQL
- Transact-SQL: Data Definition Language (DDL) Basics
- Transact-SQL: Limiting Results
- Transact-SQL: More Operators
- Transact-SQL: Ordering and Aggregating Data
- Transact-SQL: Subqueries
- Transact-SQL: Joins
- Transact-SQL: Complex Joins - Building a View with Multiple JOINs
- Transact-SQL: Inserts, Updates, and Deletes
- An Introduction to the CLR in SQL Server 2005
- Design Elements Part 1: Programming Flow Overview, Code Format and Commenting your Code
- Design Elements Part 2: Controlling SQL's Scope
- Design Elements Part 3: Error Handling
- Design Elements Part 4: Variables
- Design Elements Part 5: Where Does The Code Live?
- Design Elements Part 6: Math Operators and Functions
- Design Elements Part 7: Statistical Functions
- Design Elements Part 8: Summarization Statistical Algorithms
- Design Elements Part 9: Representing Data with Statistical Algorithms
- Design Elements Part 10: Interpreting the Data—Regression
- Design Elements Part 11: String Manipulation
- Design Elements Part 12: Loops
- Design Elements Part 13: Recursion
- Design Elements Part 14: Arrays
- Design Elements Part 15: Event-Driven Programming Vs. Scheduled Processes
- Design Elements Part 16: Event-Driven Programming
- Design Elements Part 17: Program Flow
- Forming Queries Part 1: Design
- Forming Queries Part 2: Query Basics
- Forming Queries Part 3: Query Optimization
- Forming Queries Part 4: SET Options
- Forming Queries Part 5: Table Optimization Hints
- Using SQL Server Templates
- Transact-SQL Unit Testing
- Index Tuning Wizard
- Unicode and SQL Server
- SQL Server Development Tools
- The SQL Server Transact-SQL Debugger
- The Transact-SQL Debugger, Part 2
- Basic Troubleshooting for Transact-SQL Code
- An Introduction to Spatial Data in SQL Server 2008
- Performance Tuning
- Performance Tuning SQL Server: Tools and Processes
- Performance Tuning SQL Server: Tools Overview
- Creating a Performance Tuning Audit - Defining Components
- Creating a Performance Tuning Audit - Evaluation Part One
- Creating a Performance Tuning Audit - Evaluation Part Two
- Creating a Performance Tuning Audit - Interpretation
- Creating a Performance Tuning Audit - Developing an Action Plan
- Understanding SQL Server Query Plans
- Performance Tuning: Implementing Indexes
- Performance Monitoring Tools: Windows 2008 (and Higher) Server Utilities, Part 1
- Performance Monitoring Tools: Windows 2008 (and Higher) Server Utilities, Part 2
- Performance Monitoring Tools: Windows System Monitor
- Performance Monitoring Tools: Logging with System Monitor
- Performance Monitoring Tools: User Defined Counters
- General Transact-SQL (T-SQL) Performance Tuning, Part 1
- General Transact-SQL (T-SQL) Performance Tuning, Part 2
- General Transact-SQL (T-SQL) Performance Tuning, Part 3
- Performance Monitoring Tools: An Introduction to SQL Profiler
- Performance Tuning: Introduction to Indexes
- Performance Monitoring Tools: SQL Server 2000 Index Tuning Wizard
- Performance Monitoring Tools: SQL Server 2005 Database Tuning Advisor
- Performance Monitoring Tools: SQL Server Management Studio Reports
- Performance Monitoring Tools: SQL Server 2008 Activity Monitor
- The SQL Server 2008 Management Data Warehouse and Data Collector
- Performance Monitoring Tools: Evaluating Wait States with PowerShell and Excel
- Practical Applications
- Choosing the Back End
- The DBA's Toolbox, Part 1
- The DBA's Toolbox, Part 2
- Scripting Solutions for SQL Server
- Building a SQL Server Lab
- Using Graphics Files with SQL Server
- Enterprise Resource Planning
- Customer Relationship Management (CRM)
- Building a Reporting Data Server
- Building a Database Documenter, Part 1
- Building a Database Documenter, Part 2
- Data Management Objects
- Data Management Objects: The Server Object
- Data Management Objects: Server Object Methods
- Data Management Objects: Collections and the Database Object
- Data Management Objects: Database Information
- Data Management Objects: Database Control
- Data Management Objects: Database Maintenance
- Data Management Objects: Logging the Process
- Data Management Objects: Running SQL Statements
- Data Management Objects: Multiple Row Returns
- Data Management Objects: Other Database Objects
- Data Management Objects: Security
- Data Management Objects: Scripting
- PowerShell and SQL Server - Overview
- PowerShell and SQL Server - Objects and Providers
- PowerShell and SQL Server - A Script Framework
- PowerShell and SQL Server - Logging the Process
- PowerShell and SQL Server - Reading a Control File
- PowerShell and SQL Server - SQL Server Access
- PowerShell and SQL Server - Web Pages from a SQL Query
- PowerShell and SQL Server - Scrubbing the Event Logs
- SQL Server 2008 PowerShell Provider
- SQL Server I/O: Importing and Exporting Data
- SQL Server I/O: XML in Database Terms
- SQL Server I/O: Creating XML Output
- SQL Server I/O: Reading XML Documents
- SQL Server I/O: Using XML Control Mechanisms
- SQL Server I/O: Creating Hierarchies
- SQL Server I/O: Using HTTP with SQL Server XML
- SQL Server I/O: Using HTTP with SQL Server XML Templates
- SQL Server I/O: Remote Queries
- SQL Server I/O: Working with Text Files
- Using Microsoft SQL Server on Handheld Devices
- Front-Ends 101: Microsoft Access
- Comparing Two SQL Server Databases
- English Query - Part 1
- English Query - Part 2
- English Query - Part 3
- English Query - Part 4
- English Query - Part 5
- RSS Feeds from SQL Server
- Using SQL Server Agent to Monitor Backups
- Reporting Services - Creating a Maintenance Report
- SQL Server Chargeback Strategies, Part 1
- SQL Server Chargeback Strategies, Part 2
- SQL Server Replication Example
- Creating a Master Agent and Alert Server
- The SQL Server Central Management System: Definition
- The SQL Server Central Management System: Base Tables
- The SQL Server Central Management System: Execution of Server Information (Part 1)
- The SQL Server Central Management System: Execution of Server Information (Part 2)
- The SQL Server Central Management System: Collecting Performance Metrics
- The SQL Server Central Management System: Centralizing Agent Jobs, Events and Scripts
- The SQL Server Central Management System: Reporting the Data and Project Summary
- Time Tracking for SQL Server Operations
- Migrating Departmental Data Stores to SQL Server
- Migrating Departmental Data Stores to SQL Server: Model the System
- Migrating Departmental Data Stores to SQL Server: Model the System, Continued
- Migrating Departmental Data Stores to SQL Server: Decide on the Destination
- Migrating Departmental Data Stores to SQL Server: Design the ETL
- Migrating Departmental Data Stores to SQL Server: Design the ETL, Continued
- Migrating Departmental Data Stores to SQL Server: Attach the Front End, Test, and Monitor
- Tracking SQL Server Timed Events, Part 1
- Tracking SQL Server Timed Events, Part 2
- Patterns and Practices for the Data Professional
- Managing Vendor Databases
- Consolidation Options
- Connecting to a SQL Azure Database from Microsoft Access
- SharePoint 2007 and SQL Server, Part One
- SharePoint 2007 and SQL Server, Part Two
- SharePoint 2007 and SQL Server, Part Three
- Querying Multiple Data Sources from a Single Location (Distributed Queries)
- Importing and Exporting Data for SQL Azure
- Working on Distributed Teams
- Professional Development
- Becoming a DBA
- Certification
- DBA Levels
- Becoming a Data Professional
- SQL Server Professional Development Plan, Part 1
- SQL Server Professional Development Plan, Part 2
- SQL Server Professional Development Plan, Part 3
- Evaluating Technical Options
- System Sizing
- Creating a Disaster Recovery Plan
- Anatomy of a Disaster (Response Plan)
- Database Troubleshooting
- Conducting an Effective Code Review
- Developing an Exit Strategy
- Data Retention Strategy
- Keeping Your DBA/Developer Job in Troubled Times
- The SQL Server Runbook
- Creating and Maintaining a SQL Server Configuration History, Part 1
- Creating and Maintaining a SQL Server Configuration History, Part 2
- Creating an Application Profile, Part 1
- Creating an Application Profile, Part 2
- How to Attend a Technical Conference
- Tips for Maximizing Your IT Budget This Year
- The Importance of Blue-Sky Planning
- Application Architecture Assessments
- Transact-SQL Code Reviews, Part One
- Transact-SQL Code Reviews, Part Two
- Cloud Computing (Distributed Computing) Paradigms
- NoSQL for the SQL Server Professional, Part One
- NoSQL for the SQL Server Professional, Part Two
- Object-Role Modeling (ORM) for the Database Professional
- Business Intelligence
- BI Explained
- Developing a Data Dictionary
- BI Security
- Gathering BI Requirements
- Source System Extracts and Transforms
- ETL Mechanisms
- Business Intelligence Landscapes
- Business Intelligence Layouts and the Build or Buy Decision
- A Single Version of the Truth
- The Operational Data Store (ODS)
- Data Marts – Combining and Transforming Data
- Designing Data Elements
- The Enterprise Data Warehouse — Aggregations and the Star Schema
- On-Line Analytical Processing (OLAP)
- Data Mining
- Key Performance Indicators
- BI Presentation - Client Tools
- BI Presentation - Portals
- Implementing ETL - Introduction to SQL Server 2005 Integration Services
- Building a Business Intelligence Solution, Part 1
- Building a Business Intelligence Solution, Part 2
- Building a Business Intelligence Solution, Part 3
- Tips and Troubleshooting
- SQL Server and Microsoft Excel Integration
- Tips for the SQL Server Tools: SQL Server 2000
- Tips for the SQL Server Tools – SQL Server 2005
- Transaction Log Troubles
- SQL Server Connection Problems
- Orphaned Database Users
- Additional Resources
- Tools and Downloads
- Utilities (Free)
- Tool Review (Free): DBDesignerFork
- Aqua Data Studio
- Microsoft SQL Server Best Practices Analyzer
- Utilities (Cost)
- Quest Software's TOAD for SQL Server
- Quest Software's Spotlight on SQL Server
- SQL Server on Microsoft's Virtual PC
- Red Gate SQL Bundle
- Microsoft's Visio for Database Folks
- Quest Capacity Manager
- SQL Server Help
- Visual Studio Team Edition for Database Professionals
- Microsoft Assessment and Planning Solution Accelerator
- Aggregating Server Data from the MAPS Tool
If you've been in the technology industry for any length of time, you've probably seen a project or two that ended up just a little past the limits of its design. You may even have been unfortunate enough to be part of one of those projects. What I'm talking about is when a project is designed to fill a certain need, but isn't designed to be flexible enough to extend beyond its initial purpose.
In my past as a consultant, I saw this happen time and time again. Most of the time the IT staff felt that the problem wasn't their fault, but in fact it was. It is incumbent on us as data professionals to design a system that not only fits the current goal, but is as adaptable as possible given current technology, talent, and budget. We need to think about the future and anticipate as many of the organization's needs as we can.
Of course, this clairvoyance only goes so far. We can't be expected to design, on a small budget, a single departmental system that can scale into a multi-purpose system serving an enterprise. But as the professional, you should press the people who request your designs to find out what the ultimate purpose of the system will be. And don't take "this is all we will ever want this system to do" as an answer; that answer is almost never true. You'll need to plumb the corners and ask questions like "What other users might possibly be interested in this data? Where are they? What would they do with it?"
This overview deals with the data and platform design decisions for creating a reporting system. In another article I've covered Reporting Services, which is available as an add-on for SQL Server 2000 and is built into SQL Server 2005. You can use the information here not only for a Reporting Services application, but more broadly as a platform for that application and any other.
Always Plan for Reporting
There are some design decisions you can make almost every time as a "safe bet." It's a given that most systems accept input, manipulate and store data, and produce output. And it's the output part where you can have a big impact on how you design your system.
Interestingly, this is the area where I have seen the most glaring problems in a system's design. In a typical organization, the IT staff will be asked to design or implement a new system to handle some function. At the outset of the design, the primary focus from the user's side is the user interface: they want to know how the workflow will be laid out, which buttons to click, and generally how they will handle their day-to-day tasks on the system. On the IT side, the initial focus is often the business logic. Since the IT group might not be as familiar with what that particular set of users does every day, they try to understand the rules the users follow. After the initial phase, the users are often not as involved as they were at the start. The IT group, meanwhile, moves on to the details of the system, which usually includes the data design. Reporting is often added at the end of the process, and its design discovery is rarely as thorough as it was for the user interface screens.
This is a mistake. The users will quickly get familiar with the screen layouts in an application, and any corrections there are usually not difficult to work around or change. But soon, as the data piles up, the users will begin to want reports from the system that they did not originally anticipate, and they will run those reports at a frequency that the IT design group did not anticipate either.
The result of this process is that the system becomes overloaded, locks on the database begin to affect input, and things slow down. The IT staff looks at the symptoms, but the real problem is that the reports were never "baked in": reporting was never treated as a function in its own right.
So the first fix is to do just that — elevate the reporting function during your design. Make sure that the users understand just how much reporting they are liable to do from the system, and have them think that part of the process through.
Designing Reporting Into the System
To begin, you'll need to account for reporting in your system. It needs to be one of the central parts of your system design, even if you're putting together a small application.
In my articles on Business Intelligence, I talk a lot about developing a layout for your enterprise that can support intelligent analysis. The first level of these layout decisions is to ensure that your data is clean and that you are following all the basics of good design. As soon as users are able to enter and trust the data in the system, they will begin to want to report on it, even if the data never ends up on paper. In those articles I go on to describe the rest of the steps for creating an analytical system, but in this overview we'll stop at the reporting system. In fact, for many applications, intelligent report design can cover what users suspect is a need for a full Business Intelligence system.
So how do you design a reporting system into an application? You start with the data itself. Take a look at a typical table layout for most applications. Within those structures you'll see data that changes often, and other data that remains static. That's often the first place to start. By breaking out those fields, you can handle the change schedule of the data you will later want to report on. By separating the static data from the rapidly changing data, you gain quite a few advantages.
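To make that split concrete, here's a minimal sketch in Transact-SQL. The tables and columns are hypothetical, invented for illustration; the point is simply that the volatile columns live in a narrow table of their own:

```sql
-- Slowly changing profile data; safe to index heavily for reporting.
CREATE TABLE dbo.Customer
(
    CustomerID int          NOT NULL PRIMARY KEY,
    FirstName  nvarchar(50) NOT NULL,
    LastName   nvarchar(50) NOT NULL,
    Region     nvarchar(30) NOT NULL
);

-- Rapidly changing data; kept narrow so updates lock and log less.
CREATE TABLE dbo.CustomerActivity
(
    CustomerID    int      NOT NULL
        REFERENCES dbo.Customer (CustomerID),
    LastOrderDate datetime NOT NULL,
    OrderCount    int      NOT NULL,
    CONSTRAINT PK_CustomerActivity PRIMARY KEY (CustomerID)
);
```

A report that only needs names and regions now reads dbo.Customer without ever touching the hot table.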
The first advantage is that you automatically deal with many locking issues. Since some fields don't change very often, putting them in another table keeps their locks away from the tables holding the data that does change quickly.
Another advantage is that you can design better indexes. Indexes on quickly changing data take longer to update. By placing data that doesn't change as often in another table, you can index it more heavily (only where needed, of course) and keep the indexes you must have on the quickly changing data smaller.
The final advantage of splitting the data this way is that you can handle imports and exports better. This brings up the next strategy for report data design.
At the very outset of the system design, you should think about making a copy of any data useful for reporting. I'll discuss a few distribution methods in a moment, but the important part is to treat the reporting data as a separate entity from the On-Line Transaction Processing (OLTP) data, even if they share a source. OLTP systems are concerned with getting data into the system; reporting is concerned with getting the data out. One is INSERT-heavy, the other is SELECT-heavy. This affects components all the way down to the disk controller. If you keep the tables and data apart, you can optimize those components separately.
This has coding implications as well. The end application, whether it is written in C# or as a web page, should be designed so that the reporting function makes its own data connection, even if it points to the same server as the OLTP function. Even better, make that a configurable option, so that the administrator or even the user can set the server used for reports.
Selecting the Layout
Even if the system is small, you should think about the layout of the components logically rather than physically. That means you should determine the requirements, access methods, and properties of the data for each component (data entry, data manipulation, and reporting) separately during your design.
Your system should be architected so that any time you need to, you can add a reporting server. That server might be small at first, but by treating the report function separately, you should be able to move it from one system to another with little downtime.
From the architecture side, the most basic step is to carefully evaluate your data layout and separate it into tables that can be stored on separate filegroups. That's the first method you can use to move your reporting onto separate devices. You instantly gain the ability to use separate drives for reporting, which frees the drives you are using for OLTP activity to work unencumbered by heavy reporting loads. Don't forget to place at least the tempdb database, log files, data files, and indexes on separate physical drives as well.
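Here's the general shape of that filegroup work, as a sketch; the database name, filegroup name, drive path, and table are all placeholders:

```sql
-- Add a filegroup on a drive dedicated to reporting I/O.
ALTER DATABASE SalesDB ADD FILEGROUP REPORTFG;

ALTER DATABASE SalesDB
ADD FILE
(
    NAME = 'SalesDB_Report',
    FILENAME = 'R:\SQLData\SalesDB_Report.ndf',  -- separate physical drive
    SIZE = 500MB,
    FILEGROWTH = 100MB
)
TO FILEGROUP REPORTFG;

-- Place reporting tables on the new filegroup.
CREATE TABLE dbo.SalesHistory
(
    SaleID   int      NOT NULL PRIMARY KEY,
    SaleDate datetime NOT NULL,
    Amount   money    NOT NULL
)
ON REPORTFG;
```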
The next level of architecture design has a great deal to do with how you plan to move the data from the OLTP system to the reporting system. Again, this is something you should plan for even when the system is small. You don't have to implement these features right away, but they should be part of the design documentation, so that the poor soul who has to extend your design is able to do so.
Programmatic Methods
Once the application grows and you begin to notice load and locking issues on your primary OLTP system, the first method you can use to offload the reporting function is a program, stored procedure, CLR routine, or batch job that runs periodically to copy data from the relevant OLTP tables to a reporting server, or even just to another instance or database on the same server.
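A minimal sketch of that periodic copy, written as a stored procedure you could schedule with SQL Server Agent; the database, tables, control table, and the EntryDate tracking column are all assumptions for illustration:

```sql
CREATE PROCEDURE dbo.usp_RefreshReportSales
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @LastRun datetime;

    -- A one-row control table records the last successful copy.
    SELECT @LastRun = LastRunTime
    FROM ReportDB.dbo.RefreshControl;

    -- Copy rows added since the last run. Rows that can be
    -- updated in place would need a MERGE instead of an INSERT.
    INSERT INTO ReportDB.dbo.SalesHistory (SaleID, SaleDate, Amount)
    SELECT SaleID, SaleDate, Amount
    FROM dbo.Sales
    WHERE EntryDate > @LastRun;

    UPDATE ReportDB.dbo.RefreshControl
    SET LastRunTime = GETDATE();
END;
```

Schedule the procedure as a SQL Server Agent job step at whatever latency the users will accept.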
The advantage of this process is that it can be inexpensive to implement. You don't have to buy any other hardware, and you can "starve" the other instance or database for resources so that reporting slows down but the OLTP activity doesn't. You can also use this method to transfer data to another server entirely, which avoids any contention for the same server hardware.
There are numerous disadvantages with this method, however. Since it's a manually-designed process, there are error points that are bound to pop up. You will need to document the process heavily so that everyone understands what is happening.
It's also a manual process to change. When the needs for reporting grow or extend, you may have to recode your entire process to accommodate the change, which might introduce yet another set of errors.
If you don't buy more hardware, you're still using the same resources for an increased load. Something has to give, and the tradeoff is that either the reporting has to be slower or the OLTP has to be slower. Nothing is free.
Finally, if you're using a timed or batched method, the data might be "stale," or old. Most users believe they need the most current data for a report, but this isn't always true. Make sure you press them on this point: could the data be three hours old, or even older? If they say "no," find out exactly which elements of the data have to be current, and batch those more often. You can often make a report on even the most fragmented table very responsive if you slice it up properly.
Triggers
Let's assume the users tell you that the data they report on must be exactly the same as what went into the OLTP system only seconds before. In this case you can use a trigger on the columns of data that make up the report.
There are some serious concerns here. You should never put a lot of triggers on the database, because if you do you'll cause the same kind of load and locking issues you're trying to solve. Triggers also make errors hard to track down, especially if you nest them more than one level deep.
You can use triggers effectively, however. If you trigger only the last part of the process, perhaps when the user clicks the "submit" button, you can approximate a batch-type process. The big issue here is the latency of the data. You want the data to be current enough that the users will trust and use it, but not so aggressively synchronized that it negatively impacts the OLTP system. Most of the time, getting the data into the system is more important than getting it out.
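As a sketch of that narrow approach, here's a hypothetical AFTER INSERT trigger that copies only finalized rows to a reporting table; the Orders table, ReportDB database, and status column are invented for illustration:

```sql
CREATE TRIGGER trg_Orders_Report
ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Push only submitted orders, keeping the trigger's work small.
    INSERT INTO ReportDB.dbo.OrdersReport (OrderID, CustomerID, OrderTotal)
    SELECT OrderID, CustomerID, OrderTotal
    FROM inserted
    WHERE OrderStatus = 'Submitted';
END;
```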
Replication
A more painless solution (most of the time, anyway) is to use SQL Server's replication feature, built into almost every edition of SQL Server 2000 and 2005. In fact, even the MSDE (2000) and Express (2005) free editions can act as replication subscribers, and in many cases they make a great solution for a small reporting server.
I've covered replication in another tutorial, and you can read more about how to set it up here.
For reporting, I normally set up one-way, read-only transactional replication. On the reporting server side, I include any tables that store user preferences and other data that needs to be written to from the reporting component. That way, when users run reports, not only do they take no locks on the current OLTP data, they don't write to the OLTP system at all.
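For orientation, the publisher-side setup follows this general pattern; it assumes a distributor is already configured, and the database, publication, article, and server names are placeholders:

```sql
-- Enable the OLTP database for publishing.
EXEC sp_replicationdboption
    @dbname = N'SalesDB', @optname = N'publish', @value = N'true';

USE SalesDB;
GO

-- Create a transactional publication.
EXEC sp_addpublication
    @publication = N'SalesReporting',
    @repl_freq   = N'continuous',
    @status      = N'active';

-- Publish the table the reports need.
EXEC sp_addarticle
    @publication   = N'SalesReporting',
    @article       = N'Sales',
    @source_object = N'Sales';

-- Push the data to the reporting server.
EXEC sp_addsubscription
    @publication       = N'SalesReporting',
    @subscriber        = N'REPORTSRV',
    @destination_db    = N'ReportDB',
    @subscription_type = N'Push';
```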
Once again, the application needs to be flexible enough to point to various servers for the reporting feature.
Database Mirroring
Under certain circumstances, you need to be able to report on most or even all of an OLTP system's data. In this case, you can set up a read-only copy of the database on another server using SQL Server 2005's Database Mirroring feature. I've covered Database Mirroring in more depth here.
Even when you use this approach, the mirrored data is always read-only, so I use a second database and three-part names in views and stored procedures. Here's a simple scenario of that process:
Let's assume I have a database, D1, with a lot of data I want to report on. I set up that database to be mirrored, and then create other tables in a second database, D2, on the second server. So now the second server has a read-only database called D1 and a read-write database called D2. Assuming I need the "name" field from table T2 in database D2, and the "Status" field from table T1 in database D1, I just make a view that references the mirrored database. Since the databases are on the same server, there is almost no performance hit.
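Here's what that view might look like; the join column is an assumption, since the scenario doesn't say how T1 and T2 relate:

```sql
USE D2;
GO

-- The read-write D2 hosts the view; three-part names reach the read-only D1.
CREATE VIEW dbo.v_Report
AS
SELECT t2.[name],
       t1.[Status]
FROM D2.dbo.T2 AS t2
JOIN D1.dbo.T1 AS t1
    ON t1.ID = t2.ID;  -- hypothetical join key
GO
```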
Selecting the Hardware
If you've designed your system properly, the hardware is a tuning feature. As I mentioned earlier, one of the most important components is the storage subsystem. You should lay out that storage properly, as I've described in this tutorial.
Another important component in a reporting system is memory. The more the better, because the query plans are reused in memory, and reporting is all about queries.
I have more advice about configuration here, and more about performance tuning here.
Don't Forget About the Network
Finally, you need to consider the network. Breaking out the reporting server is great, but you're still sending the same amount of data across the wire. Even worse, because your reporting is more efficient, users are liable to put more traffic on the wire. In addition, it may have been the network that was the reporting bottleneck to begin with.
You should always make sure your reporting system is on a different network segment, if possible, from your OLTP and maintenance traffic.
InformIT Articles and Sample Chapters
If you're using Reporting Services on SQL Server 2000, check out this article.
Online Resources
Need to do some more tuning on your Reporting Services system? Check this article out.