31 SQL Server Tips That Can Save Your Butt
Want to get the most out of SQL Server 2000? Here are 31 short-and-sweet tips to make your life just a little bit easier.
Tips 1-10
Full Text Search Needs a Single Unique Index
You can set up a full-text index using system stored procedures (including sp_fulltext_database, sp_fulltext_table, and so on), or through SQL Enterprise Manager. To create a full-text index on a table using Enterprise Manager, select Full-Text Indexing from the Tools menu. This loads the Full-Text Wizard, which leads you through the steps for enabling a full-text index on the table.
It is important to note that full-text search requires a single-column unique index on the table to work. A composite primary key consisting of two or more columns will not work. If a table does not have a single-column unique index, you need to add one; an identity column works well for this. If the table has more than one qualifying unique index, use the smallest, narrowest index available to get the best performance. For example, if a table has both a unique GUID column and a unique integer column, use the index on the integer column at 4 bytes per key instead of the GUID at 16 bytes each.
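If you prefer to script the setup rather than run the wizard, the system stored procedures mentioned above can do the same work. The following is a minimal sketch against the pubs sample database; the catalog name PubsCatalog is an arbitrary example, and UPKCL_titleidind is assumed to be the single-column unique index (the primary key) on the titles table.

--Enable full-text indexing in the current database (run in pubs)
EXEC sp_fulltext_database 'enable'
GO
--Create a full-text catalog (PubsCatalog is just an example name)
EXEC sp_fulltext_catalog 'PubsCatalog', 'create'
GO
--Register the titles table, naming its single-column unique index
EXEC sp_fulltext_table 'titles', 'create', 'PubsCatalog', 'UPKCL_titleidind'
--Add the column(s) to index, then activate the table and populate the catalog
EXEC sp_fulltext_column 'titles', 'notes', 'add'
EXEC sp_fulltext_table 'titles', 'activate'
EXEC sp_fulltext_catalog 'PubsCatalog', 'start_full'
GO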
Handling Special Tags Used Inside XML Templates
XML templates are handy. You don't want to use URL queries when you have many lines of T-SQL code to execute. However, you do need to be smart about handling the special tags used inside these templates. Enclose your sql:query statements and sql:param values between <![CDATA[ and ]]> (known as a CDATA section) to avoid having to manually encode any special characters.
This makes things much easier because it instructs the parser to treat comparison characters such as < as a less-than sign and not as an indicator of the start of an XML element. For reference, the special characters you need to encode outside of a CDATA section are <, >, &, ', and ". Convert them to the strings (known as entities) &lt;, &gt;, &amp;, &apos;, and &quot; when you need to use them in element content or attribute values.
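As a sketch, a template file along these lines keeps the T-SQL readable without any entity encoding (the root element name, parameter, and query are only placeholders):

<root xmlns:sql="urn:schemas-microsoft-com:xml-sql">
  <sql:header>
    <sql:param name="minprice">10</sql:param>
  </sql:header>
  <sql:query>
    <![CDATA[
      SELECT title, price
      FROM titles
      WHERE price > @minprice AND type <> 'business'
      FOR XML AUTO
    ]]>
  </sql:query>
</root>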
XML Parsers Are Case Sensitive
One of SQL Server 2000's major enhancements is the inclusion of native XML support, enabling developers to execute queries that return results as XML-formatted data rather than standard rowsets. One important rule to keep in mind before delving into this much-awaited feature is that XML parsers are case sensitive with respect to element and attribute names. When running your XML data through a parser after you get it back from a SQL Server 2000 query, remember that in the mind of the parser, <Myelement> is not the same as <myelement>, nor is Myattribute the same as myattribute.
Viewing Locking Activity with SQL Enterprise Manager
Although locking provides isolation for transactions and helps ensure their integrity, it can also have a significant impact on system performance. Keep transactions as short, concise, and non-interfering as possible. One of your goals should be to define transactions to minimize locking performance problems.
As you view and monitor locking behavior, you sometimes need to see the output more directly, or you might not like the way the information is presented. In that case, you can use SQL Server Enterprise Manager to display locking information. To see this output in Enterprise Manager, expand the server, expand the Management folder, expand the Current Activity item, and click either Locks/Process ID or Locks/Object to display the locking information in SQL Server.
To see more information when viewing the lock activity in Enterprise Manager, be sure to go to the EM View menu and choose the Detail option. EM will then display detailed information about the locks, beyond just the process ID or object name. The information displayed includes the lock type, lock mode, lock status, and index involved.
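If you prefer a query-based view of the same information, the sp_lock system procedure (a standard alternative not covered by this tip) returns the current locks along with their type, mode, and status:

--List all current locks, or pass a spid to limit the output to one connection
EXEC sp_lock
EXEC sp_lock 53   -- 53 is only an example spid; use sp_who to find active spids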
Choosing the Optimal Method to Defragment Data and Indexes
There are several methods you can use when you need to defragment your data and indexes. One method available is the DBCC INDEXDEFRAG command. DBCC INDEXDEFRAG removes the internal fragmentation in an index, but it does not hold locks long term while it runs and doesn't lock the entire table. As a result, it can be run online and will not block concurrently running queries or updates.
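A minimal sketch of the command, run against the pubs sample database (the index name assumes the primary key index on the titles table):

--Check the current fragmentation first, if you like
DBCC SHOWCONTIG ('titles')
GO
--Defragment a single index online: database, table, and index
DBCC INDEXDEFRAG (pubs, titles, UPKCL_titleidind)
GO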
Alternatively, you can rebuild indexes by manually running a series of DROP INDEX and CREATE INDEX commands. However, this can be a tedious process that runs the risk of an index not getting rebuilt if it's missing from the SQL script. Also, if you run out of space while rebuilding an index, the CREATE INDEX command fails, leaving you without that index on the table.
A better way of rebuilding all indexes is to use the DBCC DBREINDEX command. Using DBCC DBREINDEX keeps you from having to specify all the indexes to drop and re-create on a table (if you specify just the table name, it automatically rebuilds all indexes). In addition, if DBCC DBREINDEX fails while processing for some reason (out of space, out of locks, and so on), the rebuild is rolled back and the original indexes are left in place.
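For example, the following sketch rebuilds every index on the pubs..titles table (passing an empty string for the index name means all indexes; the fill factor of 90 is arbitrary):

--Rebuild all indexes on the titles table with a 90 percent fill factor
DBCC DBREINDEX ('pubs.dbo.titles', '', 90)
GO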
Using AWE with SQL Server 2000 to Allocate More Memory
When running the Enterprise Edition of SQL Server 2000 on either the Windows 2000 Advanced Server or Windows 2000 Datacenter Server platforms, you can allocate more than the default maximum of 4GB of memory by enabling the Windows 2000 Address Windowing Extensions (AWE) API. When this option is enabled, a SQL Server instance can access up to 8GB of physical memory on Advanced Server and up to 64GB on Datacenter Server.
Although standard 32-bit addressing supports up to only 4GB of physical memory, the AWE API allows the additional memory to be acquired as nonpaged memory. The memory manager can then dynamically map views of the nonpaged memory into the 32-bit address space.
You must be careful when using this extension because nonpaged memory cannot be swapped out. SQL Server allocates the entire chunk requested and does not release it back to the operating system until SQL Server is shut down. Other applications or other instances of SQL Server running on the same machine might not be able to get the memory they need.
Also keep in mind that when using AWE with SQL Server 2000, SQL Server can no longer allocate RAM dynamically. By default, it grabs all available memory, leaving only 128MB for Windows and other applications. For this reason, you also need to configure the max server memory option to limit the amount of memory that SQL Server allocates, being sure to leave enough memory for Windows and any other applications running on the server (usually at least 500MB).
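Both settings are server-wide sp_configure options. A sketch for a server with 8GB of RAM might look like the following; the 6144MB cap is only an example, so size it for your own workload:

--'awe enabled' is an advanced option, so expose the advanced options first
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'awe enabled', 1
RECONFIGURE
GO
--Cap SQL Server at 6GB so Windows and other processes keep the rest
EXEC sp_configure 'max server memory', 6144
RECONFIGURE
GO
--The AWE setting takes effect only after the SQL Server service is restarted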
Cascading Referential Integrity: A New Feature
SQL Server 2000 added a new feature that allows you to define cascading actions on your foreign key constraint. When defining the constraints on a table, you can use the ON UPDATE CASCADE or the ON DELETE CASCADE clauses, which cause changes to the primary key of a table to cascade to the related foreign key tables.
To illustrate the usefulness of this, consider a pair of related tables: employee and department. The employee table has a foreign key on the dept column that references the dept column of the department table. If it was created with ON UPDATE CASCADE, any changes to the dept column in the department table would cascade to the employee table. Therefore, if dept 20 has 5,000 employees, and you change the dept number to 200 to comply with a business rule, all 5,000 employee records are automatically updated as well. If ON DELETE CASCADE were specified for the table, then deleting dept 20 would result in the deletion of all 5,000 employees!
This is a powerful and dangerous feature. If you plan to use the cascade feature, you should work closely with the application developers to ensure that checks and balances are in place to prevent accidental deletion of data. It is also important to note the potential overhead generated by cascading referential integrity.
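A minimal sketch of the employee/department example follows; the table and column definitions beyond the dept columns are hypothetical and exist only to make the script self-contained.

CREATE TABLE department (
    dept      int         NOT NULL PRIMARY KEY,
    dept_name varchar(50) NOT NULL
)
GO
CREATE TABLE employee (
    emp_id   int         NOT NULL PRIMARY KEY,
    emp_name varchar(50) NOT NULL,
    dept     int         NOT NULL
        CONSTRAINT FK_employee_department
        FOREIGN KEY REFERENCES department (dept)
        ON UPDATE CASCADE
        --add ON DELETE CASCADE only if deleting a department should delete its employees
)
GO
--Renumbering dept 20 to 200 cascades the change to every matching employee row
UPDATE department SET dept = 200 WHERE dept = 20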
Using the inserted and deleted Tables For Testing Purposes
In most trigger situations, you need to know what changes were made as part of the data modification. You can find this information in the inserted and deleted tables. For the AFTER trigger, these tables are actually views of the rows in the transaction log that were modified by the statement. With the new INSTEAD OF trigger, the inserted and deleted tables are actually temporary tables that are created on-the-fly. The tables have identical column structures and names to the tables that were modified.
To be able to see the contents of these tables for testing purposes, create a copy of the table, and then create a trigger on that copy. You can then perform data modification statements and view the contents of the inserted and deleted tables without the modification actually taking place. The following listing creates a copy of the titles table and then creates a trigger on the copy.
--Create a copy of the titles table in the Pubs database
SELECT * INTO titles_copy FROM titles
GO
--add an AFTER trigger to this table for testing purposes
CREATE TRIGGER tc_tr ON titles_copy FOR INSERT, UPDATE, DELETE
AS
PRINT 'Inserted:'
SELECT title_id, type, price FROM inserted
PRINT 'Deleted:'
SELECT title_id, type, price FROM deleted
ROLLBACK TRANSACTION
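With the copy and trigger in place, any data modification against titles_copy displays the affected rows from the inserted and deleted tables and is then rolled back. For example:

--The trigger prints the inserted and deleted rows, then its ROLLBACK undoes the change
UPDATE titles_copy SET price = price * 1.10 WHERE type = 'business'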
Clearing the syscacheobjects Table
Because query plans in SQL Server 2000 are re-entrant, it's typical that no more than one copy of an execution plan for a stored procedure is in cache memory. However, sometimes multiple query plans can be created and exist in procedure cache at the same time. One of the more likely causes is when users run the same procedure with different settings for specific session options.
How does SQL Server know what plans are currently in memory and what settings were in effect when they were created? This information is contained in the syscacheobjects table in the master database. syscacheobjects keeps track of all the currently compiled plans in the procedure cache.
A large number of entries can exist in the syscacheobjects table. To clear the procedure cache buffers, and consequently the syscacheobjects table, you can issue the DBCC FREEPROCCACHE command, which removes all cached plans from memory.
Alternatively, you can use the undocumented command, DBCC FLUSHPROCINDB(dbid), to flush all procedure query plans for the specified database from memory. Needless to say, you shouldn't execute these commands in a production environment because they can impact the performance of the production applications running at the time.
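As a sketch (run this only on a test server), you can inspect the cache and then clear it; the dbid passed to DBCC FLUSHPROCINDB is just an example value.

--See which procedure plans are cached and how often they have been reused
SELECT cacheobjtype, objtype, usecounts, sql
FROM master.dbo.syscacheobjects
WHERE objtype = 'Proc'
GO
--Remove every plan from the procedure cache
DBCC FREEPROCCACHE
GO
--Undocumented: flush only the plans for one database
--(replace 5 with the value returned by SELECT DB_ID('pubs'))
DBCC FLUSHPROCINDB(5)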
Protect Source Code of Stored Procedures Using the WITH ENCRYPTION Option
To protect the source code of your stored procedures and keep its contents from prying eyes, you can create a procedure using the WITH ENCRYPTION option. When this option is specified, the source code stored in the syscomments table is encrypted. If you use encryption when creating your stored procedures, be aware that while SQL Server can internally decrypt the source code, no mechanisms exist for the user or for any of the end user tools to decrypt the stored procedure text for display or editing.
With this in mind, make sure that you store a copy of the source code for those procedures in a file in case you need to edit or re-create them. Also, if you use the WITH ENCRYPTION option, you can no longer use the Transact-SQL Debugger on the encrypted stored procedure. Don't use the WITH ENCRYPTION option unless you have a good reason to hide the stored procedure code.
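As a quick sketch (the procedure name and body are hypothetical), the option goes in the procedure header, and afterward tools such as sp_helptext can no longer display the source:

CREATE PROCEDURE dbo.usp_TitlesByType
    @type varchar(12)
WITH ENCRYPTION
AS
SELECT title_id, title, price
FROM titles
WHERE type = @type
GO
--Returns a message that the text is encrypted instead of showing the source
EXEC sp_helptext 'usp_TitlesByType'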