- SQL Server Reference Guide
- Introduction
- SQL Server Reference Guide Overview
- Table of Contents
- Microsoft SQL Server Defined
- SQL Server Editions
- SQL Server Access
- Informit Articles and Sample Chapters
- Online Resources
- Microsoft SQL Server Features
- SQL Server Books Online
- Clustering Services
- Data Transformation Services (DTS) Overview
- Replication Services
- Database Mirroring
- Natural Language Processing (NLP)
- Analysis Services
- Microsoft SQL Server Reporting Services
- XML Overview
- Notification Services for the DBA
- Full-Text Search
- SQL Server 2005 - Service Broker
- Using SQL Server as a Web Service
- SQL Server Encryption Options Overview
- SQL Server 2008 Overview
- SQL Server 2008 R2 Overview
- SQL Azure
- The Utility Control Point and Data Application Component, Part 1
- The Utility Control Point and Data Application Component, Part 2
- Microsoft SQL Server Administration
- The DBA Survival Guide: The 10 Minute SQL Server Overview
- Preparing (or Tuning) a Windows System for SQL Server, Part 1
- Preparing (or Tuning) a Windows System for SQL Server, Part 2
- Installing SQL Server
- Upgrading SQL Server
- SQL Server 2000 Management Tools
- SQL Server 2005 Management Tools
- SQL Server 2008 Management Tools
- SQL Azure Tools
- Automating Tasks with SQL Server Agent
- Run Operating System Commands in SQL Agent using PowerShell
- Automating Tasks Without SQL Server Agent
- Storage – SQL Server I/O
- Service Packs, Hotfixes and Cumulative Upgrades
- Tracking SQL Server Information with Error and Event Logs
- Change Management
- SQL Server Metadata, Part One
- SQL Server Metadata, Part Two
- Monitoring - SQL Server 2005 Dynamic Views and Functions
- Monitoring - Performance Monitor
- Unattended Performance Monitoring for SQL Server
- Monitoring - User-Defined Performance Counters
- Monitoring: SQL Server Activity Monitor
- SQL Server Instances
- DBCC Commands
- SQL Server and Mail
- Database Maintenance Checklist
- The Maintenance Wizard: SQL Server 2000 and Earlier
- The Maintenance Wizard: SQL Server 2005 (SP2) and Later
- The Web Assistant Wizard
- Creating Web Pages from SQL Server
- SQL Server Security
- Securing the SQL Server Platform, Part 1
- Securing the SQL Server Platform, Part 2
- SQL Server Security: Users and other Principals
- SQL Server Security – Roles
- SQL Server Security: Objects (Securables)
- Security: Using the Command Line
- SQL Server Security - Encrypting Connections
- SQL Server Security: Encrypting Data
- SQL Server Security Audit
- High Availability - SQL Server Clustering
- SQL Server Configuration, Part 1
- SQL Server Configuration, Part 2
- Database Configuration Options
- 32- vs 64-bit Computing for SQL Server
- SQL Server and Memory
- Performance Tuning: Introduction to Indexes
- Statistical Indexes
- Backup and Recovery
- Backup and Recovery Examples, Part One
- Backup and Recovery Examples, Part Two: Transferring Databases to Another System (Even Without Backups)
- SQL Profiler - Reverse Engineering An Application
- SQL Trace
- SQL Server Alerts
- Files and Filegroups
- Partitioning
- Full-Text Indexes
- Read-Only Data
- SQL Server Locks
- Monitoring Locking and Deadlocking
- Controlling Locks in SQL Server
- SQL Server Policy-Based Management, Part One
- SQL Server Policy-Based Management, Part Two
- SQL Server Policy-Based Management, Part Three
- Microsoft SQL Server Programming
- An Outline for Development
- Database
- Database Services
- Database Objects: Databases
- Database Objects: Tables
- Database Objects: Table Relationships
- Database Objects: Keys
- Database Objects: Constraints
- Database Objects: Data Types
- Database Objects: Views
- Database Objects: Stored Procedures
- Database Objects: Indexes
- Database Objects: User Defined Functions
- Database Objects: Triggers
- Database Design: Requirements, Entities, and Attributes
- Business Process Model Notation (BPMN) and the Data Professional
- Business Questions for Database Design, Part One
- Business Questions for Database Design, Part Two
- Database Design: Finalizing Requirements and Defining Relationships
- Database Design: Creating an Entity Relationship Diagram
- Database Design: The Logical ERD
- Database Design: Adjusting The Model
- Database Design: Normalizing the Model
- Creating The Physical Model
- Database Design: Changing Attributes to Columns
- Database Design: Creating The Physical Database
- Database Design Example: Curriculum Vitae
- NULLs
- The SQL Server Sample Databases
- The SQL Server Sample Databases: pubs
- The SQL Server Sample Databases: NorthWind
- The SQL Server Sample Databases: AdventureWorks
- The SQL Server Sample Databases: AdventureWorks Derivatives
- UniversalDB: The Demo and Testing Database, Part 1
- UniversalDB: The Demo and Testing Database, Part 2
- UniversalDB: The Demo and Testing Database, Part 3
- UniversalDB: The Demo and Testing Database, Part 4
- Getting Started with Transact-SQL
- Transact-SQL: Data Definition Language (DDL) Basics
- Transact-SQL: Limiting Results
- Transact-SQL: More Operators
- Transact-SQL: Ordering and Aggregating Data
- Transact-SQL: Subqueries
- Transact-SQL: Joins
- Transact-SQL: Complex Joins - Building a View with Multiple JOINs
- Transact-SQL: Inserts, Updates, and Deletes
- An Introduction to the CLR in SQL Server 2005
- Design Elements Part 1: Programming Flow Overview, Code Format and Commenting your Code
- Design Elements Part 2: Controlling SQL's Scope
- Design Elements Part 3: Error Handling
- Design Elements Part 4: Variables
- Design Elements Part 5: Where Does The Code Live?
- Design Elements Part 6: Math Operators and Functions
- Design Elements Part 7: Statistical Functions
- Design Elements Part 8: Summarization Statistical Algorithms
- Design Elements Part 9: Representing Data with Statistical Algorithms
- Design Elements Part 10: Interpreting the Data—Regression
- Design Elements Part 11: String Manipulation
- Design Elements Part 12: Loops
- Design Elements Part 13: Recursion
- Design Elements Part 14: Arrays
- Design Elements Part 15: Event-Driven Programming Vs. Scheduled Processes
- Design Elements Part 16: Event-Driven Programming
- Design Elements Part 17: Program Flow
- Forming Queries Part 1: Design
- Forming Queries Part 2: Query Basics
- Forming Queries Part 3: Query Optimization
- Forming Queries Part 4: SET Options
- Forming Queries Part 5: Table Optimization Hints
- Using SQL Server Templates
- Transact-SQL Unit Testing
- Index Tuning Wizard
- Unicode and SQL Server
- SQL Server Development Tools
- The SQL Server Transact-SQL Debugger
- The Transact-SQL Debugger, Part 2
- Basic Troubleshooting for Transact-SQL Code
- An Introduction to Spatial Data in SQL Server 2008
- Performance Tuning
- Performance Tuning SQL Server: Tools and Processes
- Performance Tuning SQL Server: Tools Overview
- Creating a Performance Tuning Audit - Defining Components
- Creating a Performance Tuning Audit - Evaluation Part One
- Creating a Performance Tuning Audit - Evaluation Part Two
- Creating a Performance Tuning Audit - Interpretation
- Creating a Performance Tuning Audit - Developing an Action Plan
- Understanding SQL Server Query Plans
- Performance Tuning: Implementing Indexes
- Performance Monitoring Tools: Windows 2008 (and Higher) Server Utilities, Part 1
- Performance Monitoring Tools: Windows 2008 (and Higher) Server Utilities, Part 2
- Performance Monitoring Tools: Windows System Monitor
- Performance Monitoring Tools: Logging with System Monitor
- Performance Monitoring Tools: User Defined Counters
- General Transact-SQL (T-SQL) Performance Tuning, Part 1
- General Transact-SQL (T-SQL) Performance Tuning, Part 2
- General Transact-SQL (T-SQL) Performance Tuning, Part 3
- Performance Monitoring Tools: An Introduction to SQL Profiler
- Performance Tuning: Introduction to Indexes
- Performance Monitoring Tools: SQL Server 2000 Index Tuning Wizard
- Performance Monitoring Tools: SQL Server 2005 Database Tuning Advisor
- Performance Monitoring Tools: SQL Server Management Studio Reports
- Performance Monitoring Tools: SQL Server 2008 Activity Monitor
- The SQL Server 2008 Management Data Warehouse and Data Collector
- Performance Monitoring Tools: Evaluating Wait States with PowerShell and Excel
- Practical Applications
- Choosing the Back End
- The DBA's Toolbox, Part 1
- The DBA's Toolbox, Part 2
- Scripting Solutions for SQL Server
- Building a SQL Server Lab
- Using Graphics Files with SQL Server
- Enterprise Resource Planning
- Customer Relationship Management (CRM)
- Building a Reporting Data Server
- Building a Database Documenter, Part 1
- Building a Database Documenter, Part 2
- Data Management Objects
- Data Management Objects: The Server Object
- Data Management Objects: Server Object Methods
- Data Management Objects: Collections and the Database Object
- Data Management Objects: Database Information
- Data Management Objects: Database Control
- Data Management Objects: Database Maintenance
- Data Management Objects: Logging the Process
- Data Management Objects: Running SQL Statements
- Data Management Objects: Multiple Row Returns
- Data Management Objects: Other Database Objects
- Data Management Objects: Security
- Data Management Objects: Scripting
- PowerShell and SQL Server - Overview
- PowerShell and SQL Server - Objects and Providers
- PowerShell and SQL Server - A Script Framework
- PowerShell and SQL Server - Logging the Process
- PowerShell and SQL Server - Reading a Control File
- PowerShell and SQL Server - SQL Server Access
- PowerShell and SQL Server - Web Pages from a SQL Query
- PowerShell and SQL Server - Scrubbing the Event Logs
- SQL Server 2008 PowerShell Provider
- SQL Server I/O: Importing and Exporting Data
- SQL Server I/O: XML in Database Terms
- SQL Server I/O: Creating XML Output
- SQL Server I/O: Reading XML Documents
- SQL Server I/O: Using XML Control Mechanisms
- SQL Server I/O: Creating Hierarchies
- SQL Server I/O: Using HTTP with SQL Server XML
- SQL Server I/O: Using HTTP with SQL Server XML Templates
- SQL Server I/O: Remote Queries
- SQL Server I/O: Working with Text Files
- Using Microsoft SQL Server on Handheld Devices
- Front-Ends 101: Microsoft Access
- Comparing Two SQL Server Databases
- English Query - Part 1
- English Query - Part 2
- English Query - Part 3
- English Query - Part 4
- English Query - Part 5
- RSS Feeds from SQL Server
- Using SQL Server Agent to Monitor Backups
- Reporting Services - Creating a Maintenance Report
- SQL Server Chargeback Strategies, Part 1
- SQL Server Chargeback Strategies, Part 2
- SQL Server Replication Example
- Creating a Master Agent and Alert Server
- The SQL Server Central Management System: Definition
- The SQL Server Central Management System: Base Tables
- The SQL Server Central Management System: Execution of Server Information (Part 1)
- The SQL Server Central Management System: Execution of Server Information (Part 2)
- The SQL Server Central Management System: Collecting Performance Metrics
- The SQL Server Central Management System: Centralizing Agent Jobs, Events and Scripts
- The SQL Server Central Management System: Reporting the Data and Project Summary
- Time Tracking for SQL Server Operations
- Migrating Departmental Data Stores to SQL Server
- Migrating Departmental Data Stores to SQL Server: Model the System
- Migrating Departmental Data Stores to SQL Server: Model the System, Continued
- Migrating Departmental Data Stores to SQL Server: Decide on the Destination
- Migrating Departmental Data Stores to SQL Server: Design the ETL
- Migrating Departmental Data Stores to SQL Server: Design the ETL, Continued
- Migrating Departmental Data Stores to SQL Server: Attach the Front End, Test, and Monitor
- Tracking SQL Server Timed Events, Part 1
- Tracking SQL Server Timed Events, Part 2
- Patterns and Practices for the Data Professional
- Managing Vendor Databases
- Consolidation Options
- Connecting to a SQL Azure Database from Microsoft Access
- SharePoint 2007 and SQL Server, Part One
- SharePoint 2007 and SQL Server, Part Two
- SharePoint 2007 and SQL Server, Part Three
- Querying Multiple Data Sources from a Single Location (Distributed Queries)
- Importing and Exporting Data for SQL Azure
- Working on Distributed Teams
- Professional Development
- Becoming a DBA
- Certification
- DBA Levels
- Becoming a Data Professional
- SQL Server Professional Development Plan, Part 1
- SQL Server Professional Development Plan, Part 2
- SQL Server Professional Development Plan, Part 3
- Evaluating Technical Options
- System Sizing
- Creating a Disaster Recovery Plan
- Anatomy of a Disaster (Response Plan)
- Database Troubleshooting
- Conducting an Effective Code Review
- Developing an Exit Strategy
- Data Retention Strategy
- Keeping Your DBA/Developer Job in Troubled Times
- The SQL Server Runbook
- Creating and Maintaining a SQL Server Configuration History, Part 1
- Creating and Maintaining a SQL Server Configuration History, Part 2
- Creating an Application Profile, Part 1
- Creating an Application Profile, Part 2
- How to Attend a Technical Conference
- Tips for Maximizing Your IT Budget This Year
- The Importance of Blue-Sky Planning
- Application Architecture Assessments
- Transact-SQL Code Reviews, Part One
- Transact-SQL Code Reviews, Part Two
- Cloud Computing (Distributed Computing) Paradigms
- NoSQL for the SQL Server Professional, Part One
- NoSQL for the SQL Server Professional, Part Two
- Object-Role Modeling (ORM) for the Database Professional
- Business Intelligence
- BI Explained
- Developing a Data Dictionary
- BI Security
- Gathering BI Requirements
- Source System Extracts and Transforms
- ETL Mechanisms
- Business Intelligence Landscapes
- Business Intelligence Layouts and the Build or Buy Decision
- A Single Version of the Truth
- The Operational Data Store (ODS)
- Data Marts – Combining and Transforming Data
- Designing Data Elements
- The Enterprise Data Warehouse — Aggregations and the Star Schema
- On-Line Analytical Processing (OLAP)
- Data Mining
- Key Performance Indicators
- BI Presentation - Client Tools
- BI Presentation - Portals
- Implementing ETL - Introduction to SQL Server 2005 Integration Services
- Building a Business Intelligence Solution, Part 1
- Building a Business Intelligence Solution, Part 2
- Building a Business Intelligence Solution, Part 3
- Tips and Troubleshooting
- SQL Server and Microsoft Excel Integration
- Tips for the SQL Server Tools: SQL Server 2000
- Tips for the SQL Server Tools – SQL Server 2005
- Transaction Log Troubles
- SQL Server Connection Problems
- Orphaned Database Users
- Additional Resources
- Tools and Downloads
- Utilities (Free)
- Tool Review (Free): DBDesignerFork
- Aqua Data Studio
- Microsoft SQL Server Best Practices Analyzer
- Utilities (Cost)
- Quest Software's TOAD for SQL Server
- Quest Software's Spotlight on SQL Server
- SQL Server on Microsoft's Virtual PC
- Red Gate SQL Bundle
- Microsoft's Visio for Database Folks
- Quest Capacity Manager
- SQL Server Help
- Visual Studio Team Edition for Database Professionals
- Microsoft Assessment and Planning Solution Accelerator
- Aggregating Server Data from the MAPS Tool
Years ago I worked on a mainframe system. Although that technology was mature even when I worked with it, in the early days “the computer” (if a company even had one) was a huge affair, with a purpose-built room and a limited set of applications. When someone used an application, they normally entered data on a terminal, a simple keyboard and textual screen. To see the results, they often sent them to a print job that spooled out to a printer in a locked room somewhere, and they picked the printout up or had it delivered to their office. As you can imagine, this was an expensive arrangement: not only the cost of the application itself, but also the power, network cables, printers and other hardware, not to mention the people involved.
Built right into the operating system of many of these mainframes were several log entries showing not only who logged on to the system, but how long they stayed connected and which applications they ran, along with a way to tie all that information back to how much CPU, memory and storage they used. This information was used to “charge back” the cost of operating the system to the user’s department.
Things have certainly changed. We all have computers now, with even the most humble laptop having far more power than those mainframes of the past. We rarely print, since most of the information is not only entered into an application but viewed there as well.
And yet some things remain the same. Even with “open source” software, technology isn’t free — there’s a cost associated with using it. We still need special equipment, facilities and staff to run the IT departments where we work. And many of our applications have moved away from the desktop into what we’re now calling “the cloud,” which has remarkable similarities to the mainframe days of old.
Business users are now far more computer-savvy. They have combined spreadsheets, portals, shared workspaces and the like which they want to access and create at will. But again, nothing is free — even if the user can simply click a button to create a new SharePoint site, a little more memory is used on the server, another database is created underneath it, there’s more maintenance time needed, backup licenses are increased and so on. At some point, the IT department has to add hardware, licenses and staff to handle the increased load on the servers.
And who pays this cost? Normally the entire business does, out of profits. This is often referred to as a “tax” model — everyone is taxed for the same roads, whether they drive on them or not. Somewhere on a balance sheet, the cost for buildings, electricity, and yes, even IT is recorded. The IT manager plans the budget amounts and brings them to a business committee for approval. Since no one gets all the budget they ask for, services or goods are cut. And yet demand from the business increases. The IT manager, stuck between rising demand from the business and stagnant or even shrinking budgets, looks for a way to charge users for the amount of IT resources they actually consume. This is called a “toll” model: only the people who drive on a road are charged for it, which is exactly the arrangement the IT manager is looking for.
Consider also the “hosting” provider, whose customers create demand that the provider doesn’t control and can’t predict. Since they don’t have each user’s business to pay the “tax,” they must resort to a “toll” model.
So that brings us to the discussion of chargeback, just as we had in the mainframe days. Since technology has changed dramatically since then, this raises some interesting questions — some about the technical implementation of chargeback, and some about the political or business decisions that go along with it.
When to Implement a Chargeback System
Businesses commonly use a “spread” model for their IT costs. They take the complete IT budget, and divide that by the number of employees in the company, coming up with a “per seat” cost for IT. If the business absorbs this number as a standard “cost of doing business” budget calculation, this is the “tax” model I mentioned earlier. They may, however, charge each department’s user count against that per-user number, decreasing that department’s profit. This is more akin to the “tolls” model, and is in fact a chargeback system. Many colleges use this model — here is a link to an example of that kind of charge and calculation.
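The “spread” calculation above is simple arithmetic, and the difference between the tax and toll variants is only in who absorbs the result. Here is a minimal sketch with entirely hypothetical budget and headcount figures; the department names and numbers are made up for illustration:

```python
# "Spread" (per-seat) IT cost allocation, with hypothetical figures.
# Tax model: the business absorbs the total as a cost of doing business.
# Toll-style spread: each department is charged headcount * per-seat rate.

it_budget = 1_200_000.00       # total annual IT budget (hypothetical)
total_employees = 400

per_seat = it_budget / total_employees   # per-employee annual IT cost

departments = {"Finance": 35, "Manufacturing": 300, "Management": 65}

# Charge each department by its headcount against the per-seat rate.
charges = {dept: heads * per_seat for dept, heads in departments.items()}

for dept, charge in charges.items():
    print(f"{dept}: {charge:,.2f}")
```

Note that the toll-style charges always sum back to the full budget; the model reallocates the cost, it doesn’t reduce it.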
If you’re in one of the situations I described earlier, you might consider a chargeback system for your enterprise. The basic premise is that you have a budget reason driving the decision — you are asked to provide more resources to specific business units and you don’t have the budget backing to implement those resources. Using a chargeback system, you can ask each department to “pay its own way” — especially those that use more IT resources.
For instance, on a farm or in a factory, some employees may never directly use the computing system or have any demand for more IT resources. Management and finance functions, on the other hand, use those resources disproportionately. The managers held to a profit number at a single farm or manufacturing location may not want that number charged for IT resources they do not directly use.
But there is another use for a chargeback system, even if you don’t implement the charging part. You can use a chargeback system to track the use of your systems, and then provide that to the business so that they understand the true cost/benefit ratio a particular business unit provides.
Understand that a chargeback system isn’t free either — it has design-time costs, increases manpower requirements, consumes additional resources, and the monitoring it enacts will have an impact on performance. You have to weigh the cost of implementing a chargeback system against the benefits it provides.
In many cases I’ve seen a chargeback system used to justify an increase in IT resources, and then dropped once those resources were procured. The system can be “resurrected” whenever a new justification is needed.
So you can implement the chargeback system as a continuous process, or one that you start, stop and restart whenever you need it. You can charge the departments directly or pass along the information to the business so that they can see which departments or applications are using the most IT resources — something they should consider anyway.
The Chargeback Strategy Methodology
Once you’ve made the decision to implement a chargeback system, you begin its design by asking what you want to charge for.
A very pertinent question is whether this is a data platform issue alone. After all, there are multiple layers in modern applications, from the cost of the desktop, network and server hardware all the way through the software costs of the licenses that run on that hardware and the personnel required to implement and support all of the layers back to the data platform.
But there is a strong argument for implementing a chargeback system at the data layer. No database exists for itself — it always has some sort of application (even if it is only a script) that uses the data it contains. It also uses all types of resources within a server and network to answer application requests. And since the application is the ultimate user of the database, those requests are more easily tracked and accounted for. The database is also an expensive component within the application system, requiring hardware, software and personnel. Finally, the database uses the file system to store the data for the application, which is also easily tracked.
This brings up an interesting point. A chargeback system is largely a tracking exercise. Using the methodology in this article, you will choose what to track, how to track it, and how to report on the tracking.
The second part of the chargeback system is to determine what to charge for each element you track. This is highly variable, and depends a great deal on the spread of “taxes” versus “tolls” that you want to charge back to the business department. There is some level of cost the business must carry to have IT in the building — things like power, the management staff, and even the tracking exercise itself. These are elements that are common to the business function just like utilities, phones and other parts of infrastructure of the business.
The process below will help you determine which level of tracking you want to provide; then you will work with the business to determine the cost of each component, summing those per measured department or user. Combining these two parts gives you elements to which you can assign a cost, divided by time or unit of use. This is similar to a Return On Assets (ROA) calculation, in which you consider the cost of operating the application and divide it by the number of users and their use of the system. I’ll provide a simple model implementation of this methodology, and a more complex version, in a moment for you to use as a guide.
If you use this model, every department ends up with a “per unit” rather than “per seat” cost for IT. There are several factors to consider in this cost model:
- Hardware Cost (including initial purchase and maintenance)
- Software Licenses and Maintenance
- Personnel
- Power
- Other infrastructure requirements (extended security or fire suppression systems)
- High Availability
- Disaster Response
These costs are not “fixed.” It is assumed that you will depreciate the cost of hardware over time and buy new hardware; there will be consolidation of resources, personnel costs will change over time, and so on. Normally you “smooth” these costs into a single number that is adjusted every two to three years, creating a new chargeback rate. But you do need to break some of the costs, such as the database server, down into a variable number that can be tracked. After all, it’s difficult to break down how much power, licensing and fire suppression is used for a single request!
So the first part of the methodology is to record these general costs, along with any others that are particular to your situation. Depending on the method you choose for assigning these costs, you’ll amortize the cost over a unit of component measurement (such as CPU or memory) or create an absolute value for that cost. I’ll explain this decision further in the examples I’ll show you in the next installment of this article.
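The two approaches in that last step can be sketched in a few lines. This is a toy model with made-up cost and usage figures, not a recommended rate card: it shows amortizing the smoothed general costs over a measured component unit (CPU-seconds here) versus carrying them as an absolute per-department value.

```python
# Hypothetical smoothed annual costs, per the cost-factor list above.
annual_fixed_costs = {
    "hardware": 50_000.0,            # purchase amortized plus maintenance
    "licenses": 30_000.0,
    "personnel": 150_000.0,
    "power_and_facilities": 20_000.0,
}
total_fixed = sum(annual_fixed_costs.values())

# Option A: amortize the total over a measured component unit.
# Here the unit is CPU-seconds consumed across all tracked databases.
total_cpu_seconds = 5_000_000
rate_per_cpu_second = total_fixed / total_cpu_seconds

# A department that consumed 400,000 CPU-seconds this year is charged:
dept_cpu_seconds = 400_000
charge_amortized = dept_cpu_seconds * rate_per_cpu_second

# Option B: treat the cost as an absolute value, split evenly
# across (say) four departments regardless of measured use.
charge_absolute = total_fixed / 4
```

Option A rewards light users and penalizes heavy ones; Option B is really just the “tax” model restated per department. Most real systems mix the two, amortizing what can’t be measured and metering what can.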
Determining the Components to Track
The next step is to break down those cost areas into the components involved for each. For the costs that do not vary for a single request, you can simply amortize the amount per year, and fold it in to the final calculation. For the costs that do vary per request, you need to detail the components that are used.
For the server there are four major components involved in the application call to the database:
- CPU
- Memory
- I/O
- Network
The CPU element can be tracked at many levels — at the server, or down at the database level. Most database chargeback schemes track to the database level, since you may have multiple instances of SQL Server installed on a single server, and you’ll want to charge each application user only for their use of the CPU element, not for things like backup or maintenance time.
Memory is another element that can be tracked per user and request. It is added to the mix of the variable components to be tracked. Once again, the database memory used is the best metric to track.
I/O, or the disk subsystem, actually has two sub-components: the amount of storage, and the transfer of data back and forth. You can track both at the database level. If the application is storage-intensive in a forward-growth pattern, track the file size growth. If the application grows the data only slightly, or adds and removes data in a consistent pattern, track only the I/O transfers (reads and writes). If it does both, track both elements.
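The choice between the two I/O sub-components can be reduced to a comparison of two deltas. The sample values and the 2% growth threshold below are hypothetical, purely to illustrate the decision; your own cutoff would come from observing the application:

```python
# Hypothetical monthly samples for one database:
# file sizes in MB, and cumulative (reads, writes) at month start and end.
size_samples_mb = [10_240, 10_360, 10_500, 10_680]
reads_writes = [(1_500_000, 900_000), (1_650_000, 980_000)]

# Storage sub-component: net file growth over the period.
growth_mb = size_samples_mb[-1] - size_samples_mb[0]

# Transfer sub-component: read + write operations over the period.
io_transfers = (reads_writes[-1][0] - reads_writes[0][0]) + \
               (reads_writes[-1][1] - reads_writes[0][1])

# Judgment-call rule (not a standard): if the database grew more than
# 2% this month, bill on growth; otherwise bill on transfers.
metric_to_bill = ("storage_growth"
                  if growth_mb > 0.02 * size_samples_mb[0]
                  else "io_transfers")
```

An application that does both (steady growth and heavy churn) would simply carry both metrics into the rate calculation, as the text above notes.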
The network transfer is probably the most difficult metric to track, since this depends on how the traffic is generated and routed. Although it is possible to track at the database level, it requires a great deal of instrumenting. Most of the time this metric is not used in chargeback systems directly, but is part of the amortized cost.
You don’t have to track each and every one of these metrics. In the simple model I’ll show you in a moment, I’ll track only the CPU and Memory use per application. The rest of the costs will simply be divided into the calculation to create the cost for the application.
You may, however, decide to put a very fine point on the tracking, so that you have a very accurate cost model. If you decide to go this route, make sure that the planning, implementation and maintenance of the tracking process is worth the effort.
Determining the Granularity of the Component Tracking
The next part of the methodology is to decide which level of detail you want to track and how often you want to track it. For instance, for the I/O file growth, you may want to take a weekly or monthly measurement and simply compare the start and end values. If you measure the CPU use, you will probably want to measure far more often, even if you report on it monthly.
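The granularity decision shapes the arithmetic you do at report time. A sketch, with invented sample values: slowly-changing metrics need only a start/end delta, while volatile metrics need frequent samples that are then summarized for the reporting period.

```python
# Coarse granularity: file size sampled twice (start and end of month).
# A simple delta is enough for a slowly-growing metric.
file_size_start_mb, file_size_end_mb = 8_192, 8_452
monthly_growth_mb = file_size_end_mb - file_size_start_mb

# Fine granularity: CPU% sampled frequently (say every few minutes),
# then summarized to one number for the monthly report.
cpu_pct_samples = [12, 35, 22, 48, 18, 30, 40, 35]
avg_cpu_pct = sum(cpu_pct_samples) / len(cpu_pct_samples)
```

The trade-off is the usual one: finer sampling gives a fairer charge but costs more storage and monitoring overhead, which itself feeds back into the chargeback rate.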
Determining the Owner of the Tracking Component
Next, you will need to find out if you can track the component back to the user or application. This is not always possible, such as the considerations for High Availability or power. If the component does not have a single “owner,” that makes the component a candidate to be included only in the amortized cost value.
In many cases you can track the user or at least the application back to the calling transaction, given a few conditions:
- The application name is available (set by the developer of the application)
- The user is using a login to the database server, not an application or common login
- The transactions are not involved in a “middle tier” system that mixes the calls between applications to the same data source
Most of the time the application you’re tracking is hooked directly to a database, so all calls to that database come from a single department or set of users, and tracking down the user is not difficult.
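The first condition in the list above, the application name, is worth a closer look because it costs the developer almost nothing to satisfy. This sketch only composes a SQL Server connection string (no live connection is made); the server, database and application names are hypothetical:

```python
# Sketch: carrying an application name in the connection string so the
# server can attribute each session to an application. SQL Server surfaces
# this value as program_name in sys.dm_exec_sessions, which a chargeback
# tracking process can group on.

def build_connection_string(server: str, database: str, app_name: str) -> str:
    """Build a SqlClient-style connection string tagged with an app name."""
    return (f"Server={server};Database={database};"
            f"Integrated Security=SSPI;Application Name={app_name}")

conn_str = build_connection_string("SQLPROD01", "Payroll", "PayrollEntry")
print(conn_str)
```

If every application sets a distinct name this way, the middle-tier problem in the third bullet is the only case left where calls cannot be attributed cleanly.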
Selecting Tools and Processes for Tracking
The simplest part of the process is selecting the tools you need to track the calls to the database and each component’s use. You have multiple tools to choose from, and in this article I’ll stick with those available in the operating system and SQL Server, focusing on SQL Server 2008, which has enhanced tracking capabilities, although many of the features I’ll reference are available in earlier versions as well.
In many ways the chargeback system is similar to Performance Tuning, and it is even simpler to create and implement. You may in fact be able to pull all of the metrics you need from the tracking you’re already doing for Performance Tuning.
For instance, assume that you’re tracking I/O and you’re using the Management Data Warehouse feature in SQL Server 2008. This query will pull the read and write activity for each database from the monitoring database that the system already provides:
SELECT database_name
     , SUM(num_of_reads) AS 'Reads'
     , SUM(num_of_writes) AS 'Writes'
     , SUM(num_of_reads) + SUM(num_of_writes) AS 'TotalIO'
FROM snapshots.io_virtual_file_stats
GROUP BY database_name
ORDER BY TotalIO DESC
But even if you are not using a defined monitoring system for performance, you can use other features to track database use. Note that you will probably rely on a mix of tools and processes to collect the tracking data, so you should read and understand each of these before you develop your solution. In the next installment of this tutorial I’ll show you how to implement many of these features.
The Windows Server Operating System
If your application is tied directly to a database, you can use the Windows operating system to track the use of the system.
The first option you have is the Windows System Resource Monitor (WSRM), available in the Enterprise or Datacenter editions of the operating system. If you’re not familiar with this tool, check this link to learn how to implement it.
To use this feature to track memory, CPU and I/O requests for SQL Server, use the wsrmc command-line tool with this query once you’ve turned on the monitoring, to see the data it has collected for SQL Server:
wsrmc /get:acc /where:"[process name] exactly equals 'sqlservr.exe'" /groupby:"command line" \\<servername>
The “servername” variable is the name of your SQL Server system. Although the WSRM feature does not use SQL Server as a storage engine, you can import the data from that command into tables with the same column names that it reports, using the documentation provided at that reference I just mentioned. From there, you can query the data this way:
SELECT [Process Id], [Creation Time],
    Max([Policy Set Time]) AS 'Policy Set Time',
    Max([Time Stamp]) AS 'Time Stamp',
    Max([Process Name]) AS 'Process Name',
    Max([Process Matching Criteria]) AS 'Process Matching Criteria',
    Max([Policy Name]) AS 'Policy Name',
    Max([Executable Path]) AS 'Executable Path',
    Max([User]) AS 'User',
    Max([Domain]) AS 'Domain',
    Max([Command Line]) AS 'Command Line',
    (Max([Elapsed Time]) - Min([Elapsed Time])) AS 'Elapsed Time',
    (Max([Kernel Mode Time]) - Min([Kernel Mode Time])) AS 'Kernel Mode Time',
    (Max([User Mode Time]) - Min([User Mode Time])) AS 'User Mode Time',
    (Max([Total CPU Time]) - Min([Total CPU Time])) AS 'Total CPU Time',
    Avg([Thread Count]) AS 'Thread Count',
    Max([Session Id]) AS 'Session Id',
    Max([Peak Virtual Size]) AS 'Peak Virtual Size',
    Avg([Virtual Size]) AS 'Virtual Size',
    (Max([Page Fault Count]) - Min([Page Fault Count])) AS 'Page Fault Count',
    Avg([Private Page Count]) AS 'Private Page Count',
    Max([Peak Working Set Size]) AS 'Peak Working Set Size',
    Avg([Working Set Size]) AS 'Working Set Size',
    Avg([Page File Usage]) AS 'Page File Usage',
    Max([Peak Page File Usage]) AS 'Peak Page File Usage',
    (Max([Read Operation Count]) - Min([Read Operation Count])) AS 'Read Operation Count',
    (Max([Read Transfer Count]) - Min([Read Transfer Count])) AS 'Read Transfer Count',
    (Max([Write Operation Count]) - Min([Write Operation Count])) AS 'Write Operation Count',
    (Max([Write Transfer Count]) - Min([Write Transfer Count])) AS 'Write Transfer Count',
    (Max([Other Operation Count]) - Min([Other Operation Count])) AS 'Other Operation Count',
    (Max([Other Transfer Count]) - Min([Other Transfer Count])) AS 'Other Transfer Count',
    Avg([Quota Non Paged Pool Usage]) AS 'Quota Non Paged Pool Usage',
    Avg([Quota Paged Pool Usage]) AS 'Quota Paged Pool Usage',
    Max([Quota Peak Non Paged Pool Usage]) AS 'Quota Peak Non Paged Pool Usage',
    Max([Quota Peak Paged Pool Usage]) AS 'Quota Peak Paged Pool Usage'
FROM <Accounting Raw data source>
WHERE (NOT ([Creation Time] IS NULL)
    AND [Time Stamp] >= '<Scope Start Date>'
    AND [Time Stamp] < '<Scope End Date>')
GROUP BY [Creation Time], [Process Id], [Policy Name], [Policy Set Time], [Process Matching Criteria]
You can also use the Windows System Monitor (sometimes incorrectly referred to as “Perfmon”) to track SQL Server access. This tool gives you finer-grained control over the reporting, down to the Instances and databases on the system, but less visibility into the users and processes. Still, for a simple example this process works well.
If you’re not familiar with this tool, you can read more about it here. I recommend that you track the data into a SQL Server Instance, separate from the one you’re monitoring. The objects and counters that are relevant for chargeback tracking are as follows:
| Object | Counter | Description |
|---|---|---|
| SQLServer:Databases | Active Transactions | Number of active transactions for the database |
| SQLServer:Databases | Data File(s) Size (KB) | Cumulative size (in kilobytes) of all the data files in the database |
| SQLServer:General Statistics | User Connections | Counts the number of users currently connected to SQL Server |
| SQLServer:Transactions | Transactions | The number of currently active transactions |
SQL Server System Views and Dynamic Management Views
You can also run queries from SQL Server directly to access the same System Monitor counters in the table above. For older systems, such as SQL Server 2000, you can use the sysperfinfo system table:
SELECT * FROM master.dbo.sysperfinfo;
GO
For SQL Server 2005 and higher, you can use the new sys.dm_os_performance_counters Dynamic Management View to get the objects and counters:
SELECT * FROM sys.dm_os_performance_counters;
GO
In both cases, you can output the results of the query to a table in another database, and then query that for the historic data for the chargeback.
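As a sketch of that approach — the database and table names here are my own placeholders, not part of the product — you could capture the chargeback-related counters into a history table on a schedule, for example from a SQL Server Agent job:

```sql
-- Assumes a separate monitoring database named ChargebackDB (hypothetical name).
-- Run on a schedule to accumulate counter history for later chargeback reporting.
USE ChargebackDB;
GO
IF OBJECT_ID('dbo.CounterHistory') IS NULL
    CREATE TABLE dbo.CounterHistory (
        capture_time  DATETIME     NOT NULL DEFAULT GETDATE(),
        object_name   NCHAR(128)   NOT NULL,
        counter_name  NCHAR(128)   NOT NULL,
        instance_name NCHAR(128)   NULL,
        cntr_value    BIGINT       NOT NULL
    );
GO
-- Snapshot only the counters from the table above.
INSERT INTO dbo.CounterHistory (object_name, counter_name, instance_name, cntr_value)
SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN (N'Active Transactions',
                       N'Data File(s) Size (KB)',
                       N'User Connections',
                       N'Transactions');
GO
```

Querying dbo.CounterHistory by capture_time then gives you the historic view described above.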
These tables and views are still only hitting the Windows System Monitor counters. If you’re implementing the chargeback solution in SQL Server using Transact-SQL queries, you should use other Dynamic Management Views (DMV), functions and tables to find not only the measurements, but the user or application data as well. Here are the meta-data sources you can use to find data on CPU, I/O, memory and network use for a chargeback system:
| Source | Type | Description |
|---|---|---|
| sys.dm_db_file_space_usage | DMV | Gives you space usage information for each file in the database |
| sys.database_files | Catalog View | Shows space used by each file |
| sys.dm_io_virtual_file_stats | DMF | I/O read and write information |
| sys.dm_exec_sessions | DMV | Shows information about all active user connections and internal tasks, including client and program names, login times, memory and I/O use |
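As a taste of how these sources combine — this is a sketch of my own, not the implementation from this series — you can attribute CPU, I/O and memory to each login with sys.dm_exec_sessions:

```sql
-- Resource use per login and application for current user sessions.
-- In a real chargeback system you would snapshot these values to a history table.
SELECT login_name,
       [program_name],
       COUNT(*)              AS session_count,
       SUM(cpu_time)         AS total_cpu_ms,
       SUM(reads)            AS total_reads,
       SUM(writes)           AS total_writes,
       SUM(memory_usage) * 8 AS memory_kb   -- memory_usage is reported in 8-KB pages
FROM sys.dm_exec_sessions
WHERE is_user_process = 1
GROUP BY login_name, [program_name]
ORDER BY total_cpu_ms DESC;
```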
Once again, I’ll show you how I implement a combination of these views for a tracking feature in SQL Server for both a simple and a complex example of chargeback in the next tutorial in this series.
SQL Server Profiler and SQL Trace
The SQL Server Profiler tool can “watch” the activity on the server and record the information to a file called a trace file. It can also store the data directly in a monitoring database, or you can export the data from the trace file to the database at a later time. SQL Trace is the command version of this graphical tool. I’ve explained how to use the SQL Server Profiler in another tutorial that you can find here. You can find more on SQL Trace here.
In both cases, the pertinent event classes to watch for your chargeback system are as follows:
| Event Class | Description |
|---|---|
| Audit Login | Tracks user logins |
| Audit Logout | Tracks user logouts |
| SQL:BatchStarting | Shows the start of a SQL batch |
| SQL:BatchCompleted | Shows the completion of a SQL batch |
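Once you have a trace file, you can load it into a table for chargeback queries with the fn_trace_gettable function. The file path below is just an example of my own:

```sql
-- Load a Profiler/SQL Trace file into a rowset, then summarize batches per login.
-- 'C:\Traces\chargeback.trc' is an example path; substitute your own trace file.
SELECT LoginName,
       ApplicationName,
       COUNT(*)      AS batch_count,
       SUM(Duration) AS total_duration,
       SUM(Reads)    AS total_reads,
       SUM(Writes)   AS total_writes
FROM fn_trace_gettable('C:\Traces\chargeback.trc', DEFAULT)
WHERE EventClass = 12   -- SQL:BatchCompleted
GROUP BY LoginName, ApplicationName;
```

You can also use SELECT ... INTO with this function to persist the trace data in your monitoring database.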
SQL Server Audit
Beginning in SQL Server 2008, Microsoft includes a new eventing infrastructure called Extended Events. While you can use that feature directly to create your chargeback system, there is another feature called SQL Server Audit that is built on Extended Events. SQL Server Audit is useful because you can use it to track objects all the way down to SELECT statements and the like.
To use this feature, you create a SQL Server Audit Object or Database Audit Object, depending on what you want to track. Server and Database Audit Objects that are interesting for a chargeback solution include:
| Object Type | Object Name | Description |
|---|---|---|
| Server | SUCCESSFUL_LOGIN_GROUP | Indicates that a principal has successfully logged in to SQL Server |
| Server | LOGOUT_GROUP | Indicates that a principal has logged out of SQL Server |
| Database | SELECT | This event is raised whenever a SELECT is issued |
| Database | UPDATE | This event is raised whenever an UPDATE is issued |
| Database | INSERT | This event is raised whenever an INSERT is issued |
| Database | DELETE | This event is raised whenever a DELETE is issued |
| Database | EXECUTE | This event is raised whenever an EXECUTE is issued |
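As an illustration — the audit name, database name and file path here are example values of my own, not from this series — wiring up an audit that records SELECT activity in a database looks like this:

```sql
-- Create a server audit that writes to a file, then a database audit specification
-- that records SELECT activity. All names and the file path are example values.
USE master;
GO
CREATE SERVER AUDIT ChargebackAudit
    TO FILE (FILEPATH = 'C:\Audits\');
GO
ALTER SERVER AUDIT ChargebackAudit WITH (STATE = ON);
GO
USE SalesDB;  -- example database
GO
CREATE DATABASE AUDIT SPECIFICATION ChargebackSelects
    FOR SERVER AUDIT ChargebackAudit
    ADD (SELECT ON DATABASE::SalesDB BY public)
    WITH (STATE = ON);
GO
```

You can then read the captured events back with the sys.fn_get_audit_file function and summarize them by principal for the chargeback report.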
SQL Server Data Collector
Earlier I referenced this new SQL Server feature, and I’ve written an article on the Data Collector here. I won’t cover it again in this tutorial, since it really is a method to automate the collection of Windows System Counters, Transact-SQL statements and Profiler trace events into a single database. But that very behavior makes this feature an ideal candidate for your chargeback system. By implementing custom collectors, you can track everything in one place, automatically, with a rolling data archival schedule built right in, all to a central location. Not only that, you can export the custom collectors to other systems to make a repeatable methodology for your process. This is everything you’re looking for in a chargeback system.
The catch is that the Data Collector is a SQL Server 2008 Enterprise feature only — you have to have both the right version and the right edition to make that work. But if you have that environment, this is the route you should follow to create your system.
Reporting the Tracking Results
Regardless of which tool or mix of tools you use to track the data you want to charge back to the organization, you need some way of getting that data to the business.
One of the simplest methods you can use is to export the data, using either SQL Server Integration Services or Reporting Services, into an Excel spreadsheet. The beauty of this approach is that many business budgets are done in Excel anyway, so delivering the data this way fits into the departments’ current processes.
You can also create a report using either HTML with custom code or by using Reporting Services that displays the information along with the charges, posted to a location the appropriate parties can access.
Another method is to export the data into another format that the users can import into their own systems.
In any case, you want to make sure you archive this data, so that the department can periodically review the system’s use and its associated cost for future budget forecasting. This helps the business evaluate the true cost of operations.
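To turn the tracked measurements into actual charges, one simple pattern is a rate table joined to the usage history. This sketch assumes hypothetical dbo.UsageHistory and dbo.ChargeRates tables of my own design; adjust it to whatever tracking tables you build:

```sql
-- Hypothetical schema: dbo.UsageHistory (department, metric, amount, capture_date)
-- and dbo.ChargeRates (metric, rate_per_unit). The date range is an example.
SELECT u.department,
       u.metric,
       SUM(u.amount)                   AS total_usage,
       SUM(u.amount) * r.rate_per_unit AS charge
FROM dbo.UsageHistory AS u
JOIN dbo.ChargeRates  AS r
    ON r.metric = u.metric
WHERE u.capture_date >= '20250101' AND u.capture_date < '20250201'
GROUP BY u.department, u.metric, r.rate_per_unit
ORDER BY u.department;
```

A query like this is also a natural data source for the Reporting Services or Excel exports described above.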
Building your chargeback system is not a “cookie-cutter” approach, using a single template or tool that fits all situations. It’s a process that you and your business contacts will work out over time that fits the needs you have to track and bill for your system resources. In the next tutorial I’ll show you some examples of the measurements, collection methods and reporting examples for a chargeback system.
InformIT Articles and Sample Chapters
In The IT Utility Model — Part 1, Sun Microsystems describes chargeback systems.
Books and eBooks
The work referenced above is Consolidation in the Data Center: Simplifying IT Environments to Reduce Total Cost of Ownership.
Online Resources
In Planning for Consolidation with Microsoft SQL Server 2000, my good friend Allen Hirt collaborated on a whitepaper on consolidation strategies which includes some great stored procedures for chargeback.