Collecting and Analyzing Historical Data
Capturing a few events using Profiler can be helpful for one-off troubleshooting or tuning efforts. But if analytical applications are mission-critical, you don't want to miss errors, processing or backup failures, and other events that could occur while you're not actively monitoring the server. A couple of options are available for ongoing collection of trace events.
SSAS has a default trace called Flight Recorder that starts as soon as the Analysis Services service starts. By default, Flight Recorder doesn't collect many events, and it's recycled every hour or when the trace file reaches 10MB in size. You can find the Flight Recorder files in the log directory of your Analysis Services installation. Modify the trace definition to include the events you want to capture, as well as the length of time to keep the file before it's recycled. You can also write a simple program that copies trace files to a different location before they're recycled, so that historical events aren't lost. Another alternative is to start a trace with the desired events and columns from ascmd.exe. You can automate this by scheduling a nightly job that starts the trace, specifying the stop time and file location in the trace definition.
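For example, a nightly trace with a built-in stop time can be defined as an XMLA Create command and passed to ascmd.exe by the scheduled job. The sketch below is illustrative only: the trace name, file path, and stop time are placeholders, and the event and column IDs (EventID 10 is Query End in Analysis Services) should be verified against the ASSL reference for your version.

```xml
<!-- Illustrative XMLA trace definition; name, path, date, and IDs are placeholders. -->
<Create xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <ObjectDefinition>
    <Trace>
      <ID>NightlyTrace</ID>
      <Name>NightlyTrace</Name>
      <LogFileName>C:\Traces\NightlyTrace.trc</LogFileName>
      <LogFileRollover>true</LogFileRollover>
      <StopTime>2012-06-02T06:00:00</StopTime> <!-- trace stops itself here -->
      <Events>
        <Event>
          <EventID>10</EventID> <!-- Query End -->
          <Columns>
            <ColumnID>1</ColumnID>  <!-- verify column IDs in the ASSL reference -->
            <ColumnID>13</ColumnID>
          </Columns>
        </Event>
      </Events>
    </Trace>
  </ObjectDefinition>
</Create>
```

A scheduled job could then start the trace with something like `ascmd -S <server> -i nightlytrace.xmla`; because the stop time is part of the trace definition, no separate step is needed to end it.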
So far, so good. Let's say we have the trace files for the last month; how do we make them useful? Opening each trace file to search for a series of events is tedious. We can save traces into a SQL Server relational database table, but doing so through Profiler's graphical interface is very slow: it writes one row at a time. Although you can configure Profiler to write trace events directly to a table as they occur, this technique can cause severe performance problems on a busy Analysis Services server, so it's not really an option.
Do you like reading SQL Server traces using system-supplied functions such as fn_trace_gettable? Too bad. Microsoft doesn't provide any Transact-SQL functions for reading Analysis Services traces. Fortunately, we can convert a trace file into a text file by using PowerShell (Profiler doesn't offer an option to save Analysis Services trace output as text); then we import the resulting file into a table. Another option is to write a custom .NET program that reads trace-file data into a table. Once the trace data is in a SQL Server table, you can aggregate it to find out who was querying or processing Analysis Services objects at any time, who ran the slowest queries, how much data was transferred by synchronization jobs, and so forth.
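To illustrate the kind of aggregation that becomes possible once trace rows land in a table, here is a minimal Python sketch. It assumes the trace has already been converted to rows (as the text above describes) and uses an in-memory SQLite table as a stand-in for the SQL Server table; the column names (EventClass, NTUserName, TextData, Duration) mirror common Profiler columns, and the sample rows are invented for illustration.

```python
import sqlite3

# Invented sample rows standing in for an imported Analysis Services trace:
# (EventClass, NTUserName, TextData, Duration in milliseconds).
rows = [
    ("Query End", "alice", "SELECT ... FROM [Sales]", 5200),
    ("Query End", "bob",   "SELECT ... FROM [Sales]", 150),
    ("Query End", "alice", "SELECT ... FROM [Inventory]", 900),
]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE trace_events "
    "(EventClass TEXT, NTUserName TEXT, TextData TEXT, Duration INTEGER)"
)
conn.executemany("INSERT INTO trace_events VALUES (?, ?, ?, ?)", rows)

# Who ran the slowest queries? Group by user, take the longest duration each.
slowest = conn.execute(
    "SELECT NTUserName, MAX(Duration) AS MaxDuration "
    "FROM trace_events WHERE EventClass = 'Query End' "
    "GROUP BY NTUserName ORDER BY MaxDuration DESC"
).fetchall()
print(slowest)  # [('alice', 5200), ('bob', 150)]
```

The same GROUP BY pattern, run as Transact-SQL against the real table, answers the other questions mentioned above, such as totaling bytes transferred by synchronization jobs.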
Summary
SQL Server Profiler is an essential tool for monitoring and troubleshooting Analysis Services issues. This series has provided an overview of Profiler events and columns and explained a methodology for monitoring query, processing, and backup events. I also explained Profiler's functionality for replaying previously captured traces and provided some guidelines for collecting trace data for historical analysis.
To find more information about monitoring, tuning, and troubleshooting Analysis Services, I recommend following the SQL Server Customer Advisory Team (SQLCAT) and reading the Analysis Services 2008 R2 operations guide.