- SQL Server Management Studio
- SQL Server 2005 Infrastructure Management
- Operations Management Tools
- SQL Server 2005 Remote Management Features
- SQL Server Monitoring
- Managing Very Large Databases
- SQL Server Replication Enhancements
- High Availability for the Masses
- General Data Availability
- Indexed View
- Common Language Runtime and Database Administrator
- Business Intelligence and the Database Administrator
- Summary
SQL Server Monitoring
Server monitoring is a challenge for SQL Server administrators. On average, SQL Server DBAs are responsible for twice as many servers as their peers using other platforms. The DBA also has a number of tasks to accomplish. How do DBAs get everything done in a day? They use the scripting languages just mentioned to automate their processes.
In an ideal world, databases would be self-monitoring. The database itself would kill a hanging process automatically. The creation of a new database would cause the automatic creation of backup jobs. The database would do what’s needed to keep query performance above a certain accepted level. Unfortunately, none of the database providers are even close to providing this functionality. That’s why DBAs are still needed.
There are two kinds of monitoring scenarios:
- Reactive monitoring deals with the resolution of existing issues and those that crop up unexpectedly.
- Proactive monitoring is the process of looking at a current server state and making estimates and plans for changes to the underlying objects. The objective is to prevent issues, increase scalability, and maintain availability.
This section divides the scenarios into reactive and proactive monitoring. Reactive technologies are discussed in more detail because that area has seen the most significant changes.
Reactive Monitoring
Reactive tools have only one true purpose: to fix a problem as quickly as possible. SQL Server 2005 has a number of new tools for looking into the server to diagnose and resolve problems. Finding the root cause of a problem and fixing the issue requires that the server provide information—a sort of health report on what’s happening. Often, an issue appears on a server, and the DBA doesn’t have anything to use for a diagnosis. (It’s like when you bring your car to your mechanic with a complaint about a funny noise. Unless it’s something loud and horrible, the sound disappears the minute the mechanic gets behind the wheel. You’re left with that "I’m not crazy!" feeling.) With SQL Server 2005, the new Dynamic Management Views don’t just provide a way to look under the hood. They provide historical aggregation data for processes running under the hood. A DBA trying to solve a problem will have at least some basic clues to start the investigation.
Database Catalog Views
To better protect SQL Server databases, the system catalogs have been locked down. They can’t be changed directly, and they aren’t universally visible. Instead, SQL Server provides a new set of read-only SQL views that expose the system catalog metadata. The following is a list of the available catalog views. You query a catalog view with an ordinary SELECT statement, filtering on the columns you need. The naming convention for the catalog views is user-friendly.
Partition Function Catalog Views
Data Spaces and Fulltext Catalog Views
Common Language Runtime (CLR) Assembly Catalog Views
Scalar Types Catalog Views
Objects Catalog Views
Messages (for errors) Catalog Views
Service Broker Catalog Views
HTTP Endpoints Catalog Views
Server-Wide Configuration Catalog Views
Databases and Files Catalog Views
Schemas Catalog Views
Security Catalog Views
Database Mirroring Catalog Views
XML Schemas (XML type system) Catalog Views
Linked Servers Catalog Views
Extended Properties Catalog Views
Here's a simple example of querying the database-level catalog view for principals:

SELECT type_desc, name, default_schema_name
FROM sys.database_principals
WHERE type_desc = 'SQL_USER';
SQL Server Profiler
SQL Server Profiler provides a mechanism for capturing workloads on a server and replaying them. Profiler uses something called a trace file to capture and replay queries on the server. (You can learn more about the trace file in SQL Server Books Online. Reading this topic is well worth the time.) A number of improvements, including client trace and CPU-level trace elements, have been included. Profiler can sort and review stored procedures and all the executions on a server. SQL Server Profiler lets you do the following:
- Step through problem queries to find the cause of the problem.
- Find and diagnose slow-running queries.
- Capture the series of SQL statements that lead to a problem. The saved trace can then be used to replicate the problem on a test server where the problem can be diagnosed.
- Monitor the performance of SQL Server to tune workloads.
The data used by Profiler is captured in a trace file. The trace file records the elements you select, including which queries are executed, how long they take, and on which server. With SQL Server 2005, client connection information is now available. This added functionality provides a means for watching the events unfold in the round-trip of client-to-server execution.
Although SQL Server Profiler does incur some overhead for capturing traces, this overhead is minor. SQL Server Profiler trace files can be saved, exported, shared, and benchmarked. The new Database Tuning Advisor can even use a trace to provide feedback for tuning databases.
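A saved trace file can also be loaded into a rowset for ad hoc analysis with the built-in fn_trace_gettable function. A minimal sketch (the file path here is hypothetical):

```sql
-- Load a saved Profiler trace into a relational rowset.
-- 'C:\Traces\workload.trc' is a hypothetical path; 'default' tells the
-- function to read any rollover files that belong to the same trace.
SELECT TextData, Duration, CPU, Reads, Writes, StartTime
FROM fn_trace_gettable('C:\Traces\workload.trc', default)
WHERE EventClass = 12;  -- 12 = SQL:BatchCompleted
```

Once the trace is in rowset form, you can sort, aggregate, or insert it into a table for benchmarking.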
Microsoft SQL Server 2005 introduces several enhancements to SQL Server Profiler:
- Rollover trace files. Profiler now can replay one or more collected rollover trace files continuously and in order.
- New extensibility standard. Profiler uses an XML-based definition. This new definition allows Profiler to more easily capture events from other types of servers and programming interfaces.
- Profiling of Analysis Services. Profiler now supports capturing events raised by Analysis Services. An administrator needs to identify performance problems with OLAP queries that are issued by a particular user account. The administrator configures SQL Server Profiler to capture login events and the associated server process identifier (SPID) for that session. The administrator also configures Profiler to capture all Query Begin and Query End events that have the same SPID recorded. Using Start Time, End Time, and Duration, the administrator can determine the queries’ timing.
- Save trace results as XML. Trace results can be saved in an XML format in addition to the standard save formats of ANSI, UNICODE, and OEM.
- Aggregate view. Users can choose an aggregate option and select a key for aggregation. This lets users see a view that shows the column on which the aggregation was performed, along with a count of the number of rows that make up the aggregate value.
- Correlation of trace events to performance monitor counters. Profiler can correlate Performance Monitor counters with SQL or Analysis Services events.
Exportable Showplan and Deadlock Traces
Nothing is worse for a DBA than having to find missing data or discover why the server is hanging on a transaction. As the DBA starts to investigate the database crime scene, the ability to forensically study a showplan, or to watch a deadlock happen on a trace replay, is critical. SQL Server 2005 introduces enhancements to showplan and deadlock traces that give administrators additional ways to tune database servers. Many times, issues that are blamed on the database are actually due to poorly written stored procedures or batch files that create contention. (I’ll spare you the lecture on writing efficient apps.) The DBA is often called to the rescue in cases where a deadlock event seems to be occurring. A deadlock occurs when two transactions each hold a lock the other needs; SQL Server resolves the standoff by choosing one transaction as the victim and rolling it back. In SQL Server 2005, new features are provided for deadlock detection:
- Deadlock occurrences collected through trace events are represented graphically. The graphical representation shows deadlock cycles or chains, providing you with a simpler and more intuitive method for analyzing deadlock occurrences than information collected from the trace flags used by earlier versions of SQL Server.
- Showplan results are saved in an XML format, which can later be loaded for graphical display in Query Editor.
The ability to save showplan results in an XML format provides a number of benefits for performance tuning. Showplans can be saved, transferred to another location, and viewed without the need for an underlying database. Administrators can use an exported showplan to help identify discrepancies between different in-house or remote databases. From a proactive monitoring perspective, administrators can collect baseline data on a server and later compare that data to servers as they grow or as performance characteristics change.
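As a quick sketch, you can capture a showplan in XML form directly from a query session; the resulting XML can be saved to a file and opened graphically later:

```sql
-- Ask the server for the XML showplan instead of executing the statement.
-- SET SHOWPLAN_XML must be the only statement in its batch.
SET SHOWPLAN_XML ON;
GO
SELECT name, type_desc
FROM sys.objects
WHERE type_desc = 'USER_TABLE';
GO
SET SHOWPLAN_XML OFF;
GO
```

The XML emitted for the SELECT statement is the portable showplan that can be shared or archived for baseline comparisons.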
Dynamic Management Views
SQL Server 2005 provides more than 80 new Dynamic Management Views (DMVs). DMVs are a completely new approach to troubleshooting SQL Server database issues. They are divided into groups from the server level down to the database level. Special views are provided for checking on .NET assemblies, SQL Service Broker, security, and others. DMVs include not only current data, but also aggregated historical data.
SQL Server 2005 turns on a default trace, which provides a means for finding out what happened when an error occurs. You can think of the default trace as a black box recorder. DMVs use the default trace. This is where DMVs are really interesting: Administrators can review the default trace after an unplanned incident and see what happened. Moreover, when you first log on to SQL Server, the summary reports found in the main window are created from the historical data.
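A minimal sketch of opening the black box: fn_trace_getinfo locates the default trace file, and fn_trace_gettable reads it. (The file path below is the typical default install location; substitute the path that the first query actually returns.)

```sql
-- Property 2 of each trace is its file path; the default trace is
-- usually trace ID 1, but verify on your server.
SELECT traceid, property, value
FROM fn_trace_getinfo(default);

-- Read recent events from the default trace file reported above.
SELECT StartTime, EventClass, TextData, HostName, ApplicationName
FROM fn_trace_gettable(
    'C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\LOG\log.trc',
    default)
ORDER BY StartTime DESC;
```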
DMVs are actually database views. As such, they can be found in the System Views folder under each database. (The prefix for the views is dm_.) DMVs are organized into five general categories, according to the environmental factors they report on:
- DMVs with the name dm_exec_* provide information about execution of user modules and connections.
- DMVs following the convention dm_os_* report on memory, locks, and execution scheduling.
- DMVs using the name dm_trans_* provide insight into transactions and isolation.
- DMVs for monitoring disk I/O are named dm_io_*.
- DMVs named dm_db_* provide database-level data.
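For instance, a simple proactive check against the dm_os_* category lists the waits the server has accumulated since startup (the TOP 10 cutoff is arbitrary):

```sql
-- Aggregated historical data: the top wait types since the server started.
SELECT TOP 10
    wait_type,
    waiting_tasks_count,
    wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
```

A recurring dominant wait type in this view is often the first clue in a custom monitoring tool.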
A DBA would be remiss not to understand these categories and build custom monitoring tools. Figure 3-6 shows the schema change report found on the Summary tab of SQL Server Management Studio. The Summary tab has 13 interesting and useful reports. Not only do these reports use the default trace and DMVs, but they actually run the Reporting Services APIs to display the data. The report shown in Figure 3-6 is interactive, allowing the user to drill into subareas of the report and probe for more information.
Figure 3-6 SQL Server Dynamic Management View report in SQL Server Management Studio.
When your server acts up and freezes, you can combine the DMVs with another new feature: the Dedicated Administrator Connection (DAC). With the DAC and DMVs, you can find the offending process or job and kill it without restarting the server.
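A sketch of that rescue scenario, assuming the offending session turns out to be SPID 53 (a hypothetical value):

```sql
-- Connect through the DAC first: sqlcmd -S YourServer -E -A
-- (or prefix the server name with ADMIN: in a Database Engine query window).

-- Find blocked or suspended requests.
SELECT session_id, status, command, wait_type, blocking_session_id
FROM sys.dm_exec_requests
WHERE blocking_session_id <> 0 OR status = 'suspended';

-- Terminate the offender without restarting the server.
KILL 53;
```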
Proactive Monitoring
In moving to a more proactive role for DBAs, organizations can prevent future issues with applications. The DBA can provide insight into hardware configurations, disk and storage needs, and performance calculations for applications. A seamless interface between development and administration improves overall application quality, because DBAs are often better versed in Transact-SQL than developers and more comfortable designing database objects. The application design process should also take the backup/recovery strategy into account, should be implemented as part of the overall database management strategy, and should conform to current regulatory and privacy policies. Why would backup and recovery be part of an application design process?
SQL Server 2005 has two new database technologies that take advantage of specific database designs. Let’s look at the partitioning feature to see how this works. The new partitioning feature works best when the database objects are designed with a partitioning scheme in mind. Moreover, the queries that peruse the data are more scalable because they more efficiently execute disk reads and file I/O. It’s essential to divide table data into smaller partitions as the database grows. The use of physical file groups across possibly hundreds of disks requires specialized knowledge—which is usually the domain of the administrator. And while the application developer needs to do nothing, the database design needs to have appropriate primary and foreign key relationships that make managing the data over the long term as easy as possible. For database partitioning to work correctly, the data partitioning scheme and functionality must follow the cardinal relationships found in the data, such that moving data in and out of the partitioning system is efficient. The new partitioning and snapshot isolation functionality are covered later in this chapter.
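As a sketch of a design that has partitioning in mind from the start (all object names here are hypothetical), a date-based partition function and scheme might look like this:

```sql
-- Boundary values split rows by order year.
CREATE PARTITION FUNCTION pfOrderYear (datetime)
AS RANGE RIGHT FOR VALUES ('2004-01-01', '2005-01-01');

-- Map each partition to a filegroup; a real deployment would spread
-- these across separate filegroups and disks.
CREATE PARTITION SCHEME psOrderYear
AS PARTITION pfOrderYear ALL TO ([PRIMARY]);

-- The table is created directly on the partition scheme. Note that the
-- partitioning column must be part of the clustered key.
CREATE TABLE Orders
(
    OrderID   int      NOT NULL,
    OrderDate datetime NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY (OrderID, OrderDate)
) ON psOrderYear (OrderDate);
```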
The current breed of tools for tuning SQL Server is the work of Microsoft Research, whose DMX team has provided a much-improved Database Tuning Advisor (DTA).
Database Tuning Advisor
In SQL Server 2005, the DTA replaces the Index Tuning Wizard. The DTA improves the quality of tuning recommendations, increases the scalability of tuning operations, and simplifies the user experience. The DTA includes the following improvements to database tuning:
- Time-bound tuning. Provides recommendations for a user-specified amount of time. In earlier versions, a user could specify only one of three tuning modes: fast, medium, or thorough.
- Multidatabase tuning. Tunes from workloads that reference tables in multiple databases. In earlier versions, the wizard tuned tables belonging to only one database.
- Event tuning enhancements. Enhanced event parsing capability allows the wizard to tune events containing table-valued user-defined functions and events referencing temporary tables.
- Selective creation of indexes. Users can selectively create a subset of the indexes recommended by the DTA.
- XML output. In addition to the Transact-SQL scripts and text analysis reports that are the standard output of the DTA, users can generate output in XML format.
- Data partition tuning. Recommends appropriate data partitions based on a workload.
- Scalability enhancements. Enhanced parsing capabilities, statistics creation, query optimization, workload compression, and memory management are designed to increase the scalability of tuning operations. The advisor also uses multiple connections to perform work in parallel on a server where multiple processors are present.
The DTA runs as its own executable outside the SQL Server process space. The executable is called DTAShell.exe. The DTA is focused on the database’s physical aspects. If the physical structures are optimized, the query processor will be that much more efficient. The physical performance structures include clustered indexes, nonclustered indexes, indexed views, and partitions.
Current Activity Window (SQL Server Management Studio)
The Current Activity window in SQL Server Management Studio graphically displays information about the following:
- Current user connections and locks
- Process number, status, locks, and commands that active users are running
- Objects that are locked and the kinds of locks that are present
If you are the system administrator for the database, you can view additional information about a selected process or terminate it. The Current Activity window is limited to the database level. If you want to monitor all the databases on a server at once in an aggregated manner, you have to use a third-party tool or develop one using SQL Server Management Objects (SMO).
Event Notifications and Reactive Monitoring
An event notification is a new kind of database object. It executes in response to various DDL statements and trace events. When a notification executes, it sends an XML-formatted message to a Service Broker service. An event notification is similar to a trigger in that it runs in response to an event. Unlike a trigger, however, an event notification is decoupled from the event source; the event message can be consumed asynchronously by receiving messages from the queue for the service. You can use event notifications to react to database schema changes or any other changes related to database objects. If you take advantage of the Service Broker queuing and delivery support, event notifications can be a powerful ally. Event notifications are worthy of an entire chapter. I recommend that you read about them on Books Online before getting started. It is important, though, to understand the difference between event notifications and triggers. Table 3-1 illustrates the major differences.
Table 3-1 Comparison of Event Notifications and Triggers
| Trigger | Event Notification |
| --- | --- |
| Responds to DML and DDL events. | Responds to DDL statements and a subset of SQL Server trace events. |
| Runs SQL or CLR code. | Doesn't run any code; it simply sends an XML message to a Service Broker service. However, the service can be designed to activate a stored procedure that processes the message. |
| The event must be consumed synchronously. | The event is consumed asynchronously. |
| The event consumer is tightly coupled to the event producer. | The event consumer is decoupled from the event producer. |
| Events must be consumed on the local server. | Events can be consumed on a remote server. |
| Executes in the scope of the enclosing transaction. | Does not execute in the scope of the enclosing transaction. |
| ROLLBACK is supported. | ROLLBACK is not supported. |
| DML trigger names are schema-scoped; DDL trigger names are database- or server-scoped. | Event notification names are scoped by server, database, assembly, or a specific object within a database. |
| The DML trigger has the same owner as the table for the trigger. | Event notification on an object may have a different owner than the object that the notification monitors. |
| Metadata: select * from sys.triggers and select * from sys.server_triggers | Metadata: select * from sys.event_notifications and select * from sys.server_event_notifications |
| EXECUTE AS is supported. | EXECUTE AS is not supported for the event notification. |
| DDL trigger event information is available as XML through the new EventData() built-in function. | Sends XML-formatted event information to a Service Broker service. The XML has the same schema as that emitted by the EventData() built-in function. |
As you can see, the differences between the two mechanisms are significant. In many cases, triggers are the best choice, but event notifications have an important place. In organizations with server farms, where changes to system structures need to be coupled with operating system changes, event notifications can be hooked up to a Windows application via WMI. This is advanced functionality, and you should research and understand it well before implementing it.
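A minimal sketch of wiring up an event notification for schema changes (the queue, service, and notification names are hypothetical; the contract name is the one SQL Server defines for event notifications):

```sql
-- A queue and service to receive the XML event messages.
CREATE QUEUE DDLAuditQueue;

CREATE SERVICE DDLAuditService
ON QUEUE DDLAuditQueue
([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]);

-- Fire whenever a table is created, altered, or dropped in this database.
CREATE EVENT NOTIFICATION CaptureTableChanges
ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
TO SERVICE 'DDLAuditService', 'current database';

-- Later, consume the queued messages asynchronously.
RECEIVE TOP (1) CONVERT(xml, message_body)
FROM DDLAuditQueue;
```

The decoupling in Table 3-1 is visible here: the DDL statement completes regardless of when, or whether, anyone reads the queue.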
Database Mail
In SQL Server 2005, a new off-by-default technology has been built on the SQL Service Broker. Database Mail replaces SQL Server 2000’s SQLMail. Database Mail provides mail-sending capabilities that can be invoked via a user interface or a command-line/code interface. It uses SQL Server Service Broker queuing to send messages.
Database Mail is found in the Management folder of Object Explorer for each server instance. It is not included in SQL Server Express, so you can’t host a spam server! You configure Database Mail using the Database Mail Configuration Wizard. Figure 3-7 shows this wizard’s start page to give you an idea of the parameters you must supply. Notice that Database Mail uses the SMTP mail protocol. Before you can use Database Mail, you have to enable SMTP on the server and change the default security settings in Windows. Accomplish this by either continuing through the wizard or going to the Surface Area Configuration (SAC) tool and turning on Database Mail.
Figure 3-7 Database Mail Configuration Wizard.
Database Mail can be used in a number of scenarios. Here are a couple cases:
- A database administrator has set up a special routine that sends him or her an e-mail when the backup job has successfully or unsuccessfully completed.
- A DBA has set up a routine that sends an e-mail when server processor utilization exceeds 75 percent.
Database Mail is a send-only facility; it cannot receive mail and act on it. Overall, Database Mail is much safer and easier to manage than the mail capabilities provided in earlier versions of SQL Server.
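The backup-notification scenario above boils down to a single call to sp_send_dbmail after the backup step (the profile name and address are hypothetical, and a Database Mail profile must already be configured):

```sql
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = 'DBA Notifications',   -- hypothetical mail profile
    @recipients   = 'dba@example.com',     -- hypothetical address
    @subject      = 'Nightly backup completed',
    @body         = 'The nightly backup finished successfully.';
```

Because the message is queued through Service Broker, the calling job finishes immediately rather than waiting on the mail server.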