Top 13 SQL Server Mistakes and Missteps – #13 Being a Visible DBA
#13 Being A Visible DBA At The Wrong Time
I’ve given presentations where I state that one of the qualities of a good DBA is being a visible one. The days of the Database Administrator who sits in the smallest cubicle next to the noisy refrigerator and is generally avoided by the rest of the office are long gone. Today’s DBA needs to get in front of the customers and be involved in purchasing and architectural decisions. They need to be visible. There is a very important caveat to that, however: if things are running smoothly, the DBA should never be visible on the SQL Servers they support. Tracing, maintenance, and query activity should not make a noticeable impact on the end users of the databases and supported applications. If DBAs are doing their job correctly, they should be maintaining or improving their supported systems’ status quo without lumbering around like a college sophomore coming into their dorm room at 3 am on a Friday morning. No, they should be more like a high school senior (who for the sake of this discussion is over the age of 18 and a legal consenting adult) attempting to sneak back into their house after being out partying with the college sophomore on a Friday morning at 3 am.
Administrative work does sometimes cause overhead. I am not going to say that you can be a good DBA and not leave a performance footprint. Every action on a server leaves a mark of some sort. The issue is ensuring that these footprints are small and timed appropriately. This is another reason why it’s so critical to understand the data, the business purposes, and the use of the systems you support. You need to be aware of the low, high, and critical periods of time when these systems are serving your customers. You can do this with monitoring software such as SQL Sentry Performance Advisor, Quest Spotlight, or others. You can also do this with native tools such as Windows Performance Monitor and SQL Server Profiler. Among other things, this analysis and these tools will show you important counters and activity from a disk I/O, CPU, and memory standpoint. You’ll be able to tell how many connections are active on the server, where they are coming from, and what they are doing. What this analysis will not tell you is how important this data, and these connections, are to the vitality of the business. That is why you need an understanding of the data and its usage. The two of these processes together will provide you with insight into when you can (and when you shouldn’t) perform more-intrusive administrative tasks.
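If you don’t have third-party tooling handy, even a quick pass over the Dynamic Management Views will show you who is on the server right now. Here’s a minimal sketch using sys.dm_exec_sessions and sys.dm_exec_requests; the column picks are mine, not any product’s canned report:

-- Who is connected and what are they doing right now?
-- Lightweight: reads only the DMVs, no tracing required.
SELECT
    s.session_id,
    s.login_name,
    s.host_name,
    s.program_name,
    r.status,
    r.command,
    r.cpu_time,
    r.reads,
    r.writes
FROM sys.dm_exec_sessions AS s
LEFT JOIN sys.dm_exec_requests AS r
    ON r.session_id = s.session_id
WHERE s.is_user_process = 1;   -- skip SQL Server's own internal sessions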
Understanding the details of the administrative processes is also of vital importance. What do I mean by that? Take, for example, the following Dynamic Management Function (DMF) calls to sys.dm_db_index_physical_stats, which allow you to determine current index fragmentation levels for the sake of identifying which indexes require rebuilds or reorganizations:
SELECT * FROM sys.dm_db_index_physical_stats(NULL, NULL, NULL, NULL, 'detailed');
SELECT * FROM sys.dm_db_index_physical_stats(NULL, NULL, NULL, NULL, 'limited');
The parameters passed into the DMF are, in order:
- database_id
- object_id
- index_id
- partition_number
- mode (the type of scan to perform)
By leaving the first four parameters NULL, I’ve instructed the engine to scan all databases, all objects, all indexes, and all partitions.
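You don’t have to go instance-wide, though. Supplying those parameters scopes the scan down. Here’s a sketch against a hypothetical AdventureWorks database (swap in your own database name):

-- Scope the scan to one database instead of every database on the instance.
-- 'AdventureWorks' is a placeholder; DB_ID() resolves the name to an id.
SELECT object_id,
       index_id,
       partition_number,
       avg_fragmentation_in_percent,
       page_count
FROM sys.dm_db_index_physical_stats(DB_ID(N'AdventureWorks'), NULL, NULL, NULL, 'limited');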
Of course, it goes without saying that both of my original queries suck. Why? Well, I’ll be discussing the use of SELECT * later in this series, but you should never use SELECT *. SELECT * is like packing all the clothes you own into ten suitcases for a three-day business trip, then burning nine of them once you leave baggage claim. (Doing that kind of shit at the airport may land you in jail for a very long time.) You’ve paid all those excess luggage fees and done all the heavy hauling when all you needed was two changes of clothes and three pairs of tighty-whiteys. But I digress…
What I want to illustrate is that, all things being equal, there is impact because of a parameter decision. Both queries provide you with enough information to make an informed decision on the state of fragmentation, but by passing 'detailed' for the mode parameter, which determines the type of scan to perform, you’ve chosen to traverse every page of each index, leaf level included, rather than just the intermediate levels above the leaf. The cost between the two can be dramatically different depending on your environment. Obviously, timing is also important. You can’t get around performing index fragmentation remediation; the critical component is when to run it. This is where your analysis of activity levels and criticality pays off. Scheduling maintenance in the low-activity/low-criticality intersection is what you want to shoot for. Keep in mind that I am only using index maintenance as an example. This policy holds true for any situation where you as the DBA need to perform administrative tasks or wish to take a proactive glance at the SQL instance.
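To put that into practice, here’s one way to turn a cheap 'limited' scan into a worklist for your maintenance window. The 5%/30% thresholds are the commonly cited starting points, not gospel, so tune them for your environment:

-- Build a rough rebuild/reorganize worklist for the current database.
-- Thresholds (5% / 30%) are common rules of thumb; adjust as needed.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       CASE
           WHEN ips.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
           WHEN ips.avg_fragmentation_in_percent > 5  THEN 'REORGANIZE'
           ELSE 'LEAVE ALONE'
       END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'limited') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id
WHERE ips.index_id > 0        -- skip heaps
  AND ips.page_count > 100;   -- ignore trivially small indexes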
Next Up: #12 – SELECT *