Baselining queries

The problem

During the last few months I have done a lot of performance tuning. Thrown in at the deep end, I did my best and boldly changed existing indexes. Our server was suffering from too much IO, so it didn't seem right to just add one new index after another. However, doing this for the first time, I happened to change an index that made one query perform better but made another query I had tuned a few weeks earlier perform worse. That seemed like a typical beginner's mistake, so I looked for a way to avoid it in the future and came up with the following solution:

My solution

I store all of the queries I have tuned along with their baseline reads. A scheduled job then runs these queries daily and logs the performance results to a table, so I can finally see whether any of these tuned queries has changed for the worse. In the end I produce a daily report like this…

[Screenshot: query results report]

This works best on a preproduction server where the data doesn't change as fast as in production, so the number of rows returned and the plans stay comparable. That's also the reason why I measure the size of the result set. You can of course run this on your production machine as well, but you may have to do slightly more work reviewing the results.

The solution explained

I start with two tables. The first table, “Query”, stores the queries to be measured along with their baseline values. The second table, “QueryTestResult”, stores the results of the individual query runs. I prefer to keep my DBA tables in a separate database, but you are free to put these tables anywhere you like. I also wrote a view on top of these tables to facilitate my report… but that's just good practice and not compulsory. Out of laziness I left the German column names of this view in the sample code. Please apply the following script to create these tables.

[Script: tables-for-baselining-queries]
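In a nutshell, and in case the script is not at hand, the two tables (plus a simplified, English-named version of the view) could look roughly like this; all names here are my own choices, not gospel:

```sql
-- Minimal sketch of the two tables (column names are illustrative).
CREATE TABLE dbo.Query
(
    QueryId       INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    QueryName     NVARCHAR(200)     NOT NULL,
    QueryText     NVARCHAR(MAX)     NOT NULL,   -- the statement to replay via sp_executesql
    BaselineReads BIGINT            NOT NULL,   -- logical reads measured when the query was tuned
    BaselineRows  BIGINT            NULL,       -- result-set size at baseline time
    IsActive      BIT               NOT NULL DEFAULT (1)
);

CREATE TABLE dbo.QueryTestResult
(
    QueryTestResultId INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    QueryId           INT               NOT NULL REFERENCES dbo.Query (QueryId),
    TestDate          DATETIME2(0)      NOT NULL DEFAULT (SYSDATETIME()),
    LogicalReads      BIGINT            NOT NULL,  -- from sys.dm_exec_query_stats
    ElapsedTimeMs     BIGINT            NULL,
    RowsReturned      BIGINT            NULL
);
GO

-- Reporting view: the latest runs next to the stored baseline.
CREATE VIEW dbo.vQueryTestReport
AS
SELECT q.QueryName,
       q.BaselineReads,
       r.TestDate,
       r.LogicalReads,
       r.RowsReturned,
       r.LogicalReads - q.BaselineReads AS ReadsDelta
FROM dbo.Query AS q
JOIN dbo.QueryTestResult AS r ON r.QueryId = q.QueryId;
GO
```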

For the performance test itself I have written a stored procedure. This consists of three simple steps:

  1. Collect the queries to be performance tested
  2. Iterate through them using a cursor
    1. Execute the query with sp_executesql
    2. Gather query performance statistics from sys.dm_exec_query_stats and write them to the table “QueryTestResult”. In order to get the results of “my query”, I filter on the SQL text and the last_execution_time.

At first I thought it would be quite difficult to filter out the results of just the tested query, and I considered adding a unique identifier to the query text. However, that would have produced a new query plan on every run, filling up the plan cache, which is not that sexy. After filtering on last_execution_time this turned out to be a non-issue, as even the production server is not so busy that the same query gets executed several times within such a short window.

[Script: pr_testqueryperformance]
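Stripped down to its essentials, the procedure could look roughly like this. The object names match the table sketch above, and the text/time filter is deliberately naïve:

```sql
-- Minimal sketch of the test procedure; it mirrors the steps listed above.
CREATE PROCEDURE dbo.pr_TestQueryPerformance
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @QueryId   INT,
            @QueryText NVARCHAR(MAX),
            @StartTime DATETIME2(3);

    -- Step 1: collect the queries to be tested.
    DECLARE query_cursor CURSOR LOCAL FAST_FORWARD FOR
        SELECT QueryId, QueryText
        FROM dbo.Query
        WHERE IsActive = 1;

    OPEN query_cursor;
    FETCH NEXT FROM query_cursor INTO @QueryId, @QueryText;

    -- Step 2: iterate through the queries and execute each one.
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @StartTime = SYSDATETIME();

        EXEC sys.sp_executesql @QueryText;

        -- Step 3: pick up the statistics of the run just triggered by
        -- matching on the statement text and the last execution time.
        -- (Naïve match: wildcards in the query text would need escaping.)
        INSERT INTO dbo.QueryTestResult (QueryId, LogicalReads, ElapsedTimeMs, RowsReturned)
        SELECT TOP (1)
               @QueryId,
               qs.last_logical_reads,
               qs.last_elapsed_time / 1000,   -- microseconds -> milliseconds
               qs.last_rows                   -- requires SQL Server 2008 R2 or later
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        WHERE st.text LIKE '%' + LEFT(@QueryText, 100) + '%'
          AND qs.last_execution_time >= @StartTime
        ORDER BY qs.last_execution_time DESC;

        FETCH NEXT FROM query_cursor INTO @QueryId, @QueryText;
    END;

    CLOSE query_cursor;
    DEALLOCATE query_cursor;
END;
GO
```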

Last but not least I needed a way to execute the performance test regularly. This was simply a matter of adding a SQL Server Agent job that executes the procedure. Here is a script to generate such a job for your convenience.

[Script: agent-job-for-performance-testing]
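The generated job boils down to something like the following; the job name, the target database and the 06:00 schedule are just examples:

```sql
-- Sketch of a daily Agent job running the test procedure.
USE msdb;
GO
EXEC dbo.sp_add_job
     @job_name = N'Baseline query performance test';

EXEC dbo.sp_add_jobstep
     @job_name      = N'Baseline query performance test',
     @step_name     = N'Run pr_TestQueryPerformance',
     @subsystem     = N'TSQL',
     @database_name = N'DBA',   -- the database holding the baseline tables
     @command       = N'EXEC dbo.pr_TestQueryPerformance;';

EXEC dbo.sp_add_jobschedule
     @job_name          = N'Baseline query performance test',
     @name              = N'Daily at 06:00',
     @freq_type         = 4,    -- daily
     @freq_interval     = 1,
     @active_start_time = 60000;

EXEC dbo.sp_add_jobserver
     @job_name    = N'Baseline query performance test',
     @server_name = N'(LOCAL)';
GO
```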

Based on the view you should be able to set up reporting/notifications. I did this using a report in Cubeware C8 Cockpit. Of course you could simply send out an e-mail from the Agent job or use an SSRS report. Since I was originally a BI specialist, the reporting portion is not much of an issue for me, but I can believe there are more classic DBAs who don't create reports that often. If you want me to provide an SSRS report, please let me know.
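For anyone rolling their own notification, a simple check on top of the view (as sketched above) could look like this; the 20 % tolerance is an arbitrary example:

```sql
-- Flag queries whose latest run regressed past a tolerance
-- (view and column names from the sketch above; 20 % is arbitrary).
SELECT QueryName, BaselineReads, LogicalReads, RowsReturned, TestDate
FROM dbo.vQueryTestReport
WHERE TestDate >= DATEADD(DAY, -1, SYSDATETIME())
  AND LogicalReads > BaselineReads * 1.2;
```

The result set could then be mailed out from the Agent job, e.g. via sp_send_dbmail's @query parameter.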

My experience

For me this is a solution that works quite well. At the moment I gladly review the one or two flagged queries per day that I have to look at in order to be sure I didn't break anything already tuned by changing indexes. One lesson learned (apart from this routine itself) was to introduce a preproduction server for testing performance tuning first, as the Dev and Test environments don't have a workload decent enough for performance tuning.

I am sure there are a lot of things that could be improved in this procedure. For example, I have been thinking of adding the query hash and plan hash and comparing the execution plans based on that information. However, I haven't tested whether this really works, so at the moment I do the comparison based on reads. With a dynamic query that returns 10 rows one day and 250 rows the next, it is of course harder to apply any simple logic. Still, it's good enough for me to look at the metrics, take a quick peek at the captured execution plan, and decide that the query is good to go.
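Should I ever get around to it, capturing the hashes would just mean selecting two more columns from the DMV, roughly like this:

```sql
-- Both columns exist in sys.dm_exec_query_stats from SQL Server 2008 on;
-- comparing them across runs would flag plan changes, not just read changes.
SELECT qs.query_hash,
       qs.query_plan_hash,
       qs.last_logical_reads
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE N'%<tuned query text>%';   -- placeholder filter
```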

What do you think, dear reader? I am looking forward to your comments on this blog post :-).

2 thoughts on “Baselining queries”

  1. Hi Shane,

    Thank you. I was pretty surprised that you mentioned German: I thought I had thoroughly translated everything into English… but no, there are some German “remains”. Nevertheless I hope the column names are mostly self-explanatory.

    All the best
    Martin
