Parallelism, SQL CLR, and Performance Tuning

SQL Server is a broad platform. With every release, those of us who work with SQL Server find ourselves needing to pick and choose which aspects we intend to invest time in learning thoroughly and which we will spend only sparse bits and pieces of our lives becoming acquainted with. We even need to decide which aspects we are going to ignore. Some aspects are battlegrounds where ignorance, lack of interest, and fervent devotion fight it out (PowerShell). Others lie in waste in our wake as we collectively march forward toward Denali (Notification Services).

For me, SQL CLR has been one of those peripheral aspects of Microsoft SQL Server that I have tended to ignore since its integration into SQL Server 2005. I remember many conversations with peers that always tended to fall back into "Why Would I Want Code In My Database?" territory.

I can't tell you how many times I have seen installation documentation for products I've integrated into organizations that require Microsoft SQL Server, or have had conversations with technical representatives of the companies whose products we use, that blatantly insist we NOT touch any of the advanced settings on the SQL Server instances that host their databases, including parallelism. What they basically mean is: "We know what we're doing. Microsoft has these values set the way they are for a reason. No touchie. You don't know SQL Server like we do."

I've worked with the products these companies produce for over a dozen years now, and the simple truth is that while they may know what their product does and how it's built (note I didn't say that it runs effectively and efficiently), they usually have no clue about the way their applications make use of the database layer, what the limitations, strengths, and intricacies of the database platform are, or how to utilize and tune them. Properly. Default values exist because you need to have a value; there is no such thing as a NULL default value. A default exists to cover (perhaps) the majority of standard situations. It's not to be confused with a best practice.

In the days of single- or dual-core processors, parallelism was not the issue it is now, with the plethora of virtual CPUs we find inside our instances, whether physical or virtual. Many of the applications supported by SQL Server 2000 were simply upgraded to SQL Server 2005 or 2008 without any consideration of the hardware advancements that came with the move to multi-core technologies. CXPACKET and LATCH_EX waits are in abundance, and the first thing DBAs want to jump in and do is alter the Max Degree of Parallelism or Cost Threshold for Parallelism setting. I know enough to know that this is no global panacea for parallelism issues; it's a hatchet-and-saw approach to a precision surgery need.
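Before reaching for either of those instance-wide knobs, it's worth actually measuring. A minimal sketch of the kind of diagnosis I mean, using the standard wait-stats DMV and sp_configure (nothing here is a recommendation to change anything):

```sql
-- How much cumulative wait time is parallelism-related?
-- sys.dm_os_wait_stats accumulates since the last instance restart
-- (or since the stats were manually cleared), so read it as a trend.
SELECT wait_type,
       wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type IN (N'CXPACKET', N'LATCH_EX')
ORDER BY wait_time_ms DESC;

-- Inspect the current global settings before considering any change.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism';
EXEC sp_configure 'cost threshold for parallelism';
```

High CXPACKET waits alone don't prove MAXDOP is wrong; correlate them with the specific queries and their execution plans before altering a server-wide default.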

I know a significant number of performance tuning techniques, strategies, and practices, and I employ them. I know how to read execution plans and diagnose pain points. I know how to use the Dynamic Management Views to diagnose the ills of any SQL Server that gets in my way. I can also say that this is still not enough knowledge. Just yesterday I had a conversation with a coworker regarding the perceived inappropriateness of blocking out time on my Outlook calendar daily for reading blogs, newsletters, and continued technical training; that this was sending a bad impression to some staff. How does one expect to succeed in a technical career without constant and consistent knowledge-gathering? You can never know enough. I think ultimately this is what separates those who succeed in their careers from those who only exist in their jobs.

Perhaps more appropriately: the craving for Knowledge.

That is why, when I saw that Adam Machanic was offering a two-day session in New York City on CLR and parallelism topics, I jumped all over it. I had wanted to sit in on his parallelism pre-con at the PASS Community Summit in 2010 but had scheduling conflicts that prevented me from doing so. Now I get a second chance. Adam has refined his content since then, removed the aspects he felt were not as compelling, and added content in the areas that were of more value. He always focuses on performance in his presentations, and does a fine job in each presentation I've seen. I'm going to New York, taking time away from work and family and dealing with the humiliating and uncomfortable experience that is air travel since 2001, because I know that I'll get an amazing amount of knowledge out of this training, and because I have identifiable production application needs for the performance tuning techniques that Adam will be presenting in this session.

That is why, even as someone who provides training opportunities in his own right, I'm going to be sitting on the other side of the podium for two days. Because there is always more to learn from those who have chosen their path through a technology (SQL Server or otherwise) differently from your own. The information for Adam's session can be found here. Hope to see you in New York next month.