Dynamic Management Procedures – Log Metrics
Yes, Dynamic Management PROCEDURES (DMPs)
… before you even ask…
No, they’re not MS_Shipped = 1.
The Dynamic Management Objects (Dynamic Management Views and Dynamic Management Functions) in Microsoft SQL Server provide a wealth of metadata. However, they tread lightly in many areas. They don't stray too close to the INFORMATION_SCHEMA views; likewise, those views (all about objects and their properties) provide little, if any, performance-related information. The DMOs also don't provide much in the way of information about things that live outside the engine's own catalogs, such as SQL Agent jobs and backup history in msdb. For the past seven years I've been focused on what there is to be mined from the DMOs and have longed for more. Instead of waiting for Microsoft, I think it's time to create a few of my own. As I don't want to tread on hallowed ground and cause confusion, I'm going to steer clear of views and functions (plus I will be relying on temp tables in the scripts, which rules out views immediately).
I do want to stay close to the naming convention used in the standard DMOs. Therefore, in this post and in the posts to come in this series, I will be using the naming convention dbo.sp_ford_dm_*, where * is a logical name for the results of the stored procedure call. You'll see that, like the DMOs, I'll be using the dm_* convention for naming objects. At the same time, adding _ford_ to the names should avoid any collisions should Microsoft have plans of its own in the works (and I'm not saying they do by any means).
Before anyone says it, I'll raise the spectre first: I am creating these with the sp_ prefix and I am creating them in the master database. Put down the flaming torches and troll masks. Many of these DMPs (Dynamic Management Procedures) will be tapping into system catalog views across multiple databases as well as scanning the user databases. It's the logical choice.
dbo.sp_ford_dm_log_metrics
As part of my series on SQL Server Administration Mistakes and Missteps I discussed improper autogrowth settings and touched upon the nature of how this affects the VLFs (Virtual Log Files) that make up the transaction log proper in SQL Server. What I didn’t do, however, is provide you with any code to identify the VLF count and other relevant metrics surrounding the transaction log. Well that’s about to change.
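(If all you're after is the raw VLF count for a single database, the quick check below is the essence of it: the number of rows returned by DBCC LOGINFO is the VLF count for the current database. This is a minimal sketch using the 2005 - 2008R2 result shape; SQL Server 2012 adds a RecoveryUnitId column, as covered later in this post.)

-- One row per VLF for the current database (column layout for SQL Server 2005 - 2008R2)
CREATE TABLE #vlfs
    (
      [FileID] INT ,
      [FileSize] BIGINT ,
      [StartOffset] BIGINT ,
      [FSeqNo] BIGINT ,
      [Status] BIGINT ,
      [Parity] BIGINT ,
      [CreateLSN] NUMERIC(38)
    );

INSERT INTO #vlfs
    EXECUTE ('DBCC LOGINFO');

SELECT COUNT(*) AS [vlf_count]
FROM #vlfs;

DROP TABLE #vlfs;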
This DMP will provide you with enough information to ascertain:
- The physical location of each database’s transaction log.
- The unfortunate situation where you have multiple transaction logs for a single database.
- VLF (Virtual Log File) count.
- Recovery Model for each database.
- Current log consumption in megabytes.
- Log size in megabytes.
- When running under BULK LOGGED or FULL recovery, the largest transaction log backup in megabytes in your backup history.
- Inadequate flushing/purging of backup history (which can affect backup performance).
- Situations where your physical log size is excessive or inadequate based upon your backup frequency, logged activity, and recovery model.
- Whether you're using a poor autogrowth setting, ahem, a "percent value."
- Autogrowth units (a quick standalone spot-check for these growth settings follows this list).
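If you just want a quick look at those growth settings without installing the full procedure yet, a minimal standalone query against sys.master_files (the same catalog view the DMP leans on) would look something like this:

SELECT  DB_NAME([database_id]) AS [database_name] ,
        [name] AS [logical_file_name] ,
        [is_percent_growth] ,
        CASE [is_percent_growth]
          WHEN 1 THEN CAST([growth] AS VARCHAR(3)) + '%'
          ELSE CAST([growth] * 8 / 1024 AS VARCHAR(15)) + 'mb'
        END AS [growth_units]
FROM    sys.master_files
WHERE   [type_desc] = 'LOG'
ORDER BY [is_percent_growth] DESC;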
Below is the DDL for creating the dbo.sp_ford_dm_log_metrics DMP. Results are returned ordered by VLF count, highest first.
Since schemas are prone to change from version to version of Microsoft SQL Server, the DMP code needs to take into consideration which version of SQL Server you're running against, as the diligent and handsome James Lean mentioned in the comments below. (Which prompted extra work on my part to update this post.) Nice catch, Mr. Lean!
SQL Server Versions 2005 – 2008R2
USE master;
GO

CREATE PROCEDURE dbo.sp_ford_dm_log_metrics
AS
--=======================================================
-- FORDDMPs (Dynamic Management Procedures)
-- dbo.sp_ford_dm_log_metrics
-- Created by Tim Ford, SQL Server MVP
-- thesqlagentman.com, sqlcruise.com,
-- @sqlagentman on Twitter
-- Use permitted with full attribution to creator
-- Present Log File Properties Ordered by VLF Count DESC
-- *** VALID ONLY FOR SQL SERVER VERSIONS 2005 - 2008R2 ***
--=======================================================
CREATE TABLE #DBCC_LogInfo
    (
      [FileID] INT ,
      [FileSize] BIGINT ,
      [StartOffset] BIGINT ,
      [FSeqNo] BIGINT ,
      [STATUS] BIGINT ,
      [Parity] BIGINT ,
      [CreateLSN] NUMERIC(38)
    );

CREATE TABLE #VLF_Counts
    (
      [database_name] sysname ,
      [file_id] INT ,
      [vlf_count] INT
    );

CREATE TABLE #Log_Space
    (
      [physical_name] NVARCHAR(260) ,
      [SpaceUsed_mb] BIGINT
    );

CREATE TABLE #Log_Backup_History
    (
      [physical_name] NVARCHAR(260) ,
      [days_in_history] INT ,
      [max_backup_size_mb] BIGINT
    );

INSERT INTO #Log_Backup_History
        ( [physical_name] ,
          [days_in_history] ,
          [max_backup_size_mb]
        )
        SELECT  MF.[physical_name] ,
                DATEDIFF(d, MIN(BS.[backup_finish_date]), GETDATE()) AS [days_metadata] ,
                MAX(CEILING(BF.[backup_size] / 1024 / 1024)) AS [max_backup_file_size_mb]
        FROM    sys.databases D
                INNER JOIN sys.master_files MF ON D.[database_id] = MF.[database_id]
                LEFT JOIN msdb.dbo.[backupset] BS ON D.[name] = BS.[database_name]
                LEFT JOIN msdb.dbo.[backupfile] BF ON BF.[backup_set_id] = BS.[backup_set_id]
        WHERE   BS.[type] = 'L'
                AND MF.[type_desc] = 'LOG'
                AND D.[recovery_model_desc] != 'SIMPLE'
        GROUP BY MF.[physical_name];

EXEC sp_MSforeachdb N'USE [?];
    INSERT INTO #DBCC_LogInfo
            ( [FileID] ,
              [FileSize] ,
              [StartOffset] ,
              [FSeqNo] ,
              [Status] ,
              [Parity] ,
              [CreateLSN]
            )
            EXECUTE sp_executesql N''DBCC LogInfo([?])'';

    INSERT INTO #VLF_Counts
            SELECT  DB_NAME() ,
                    [FileID] ,
                    COUNT([FileID])
            FROM    #DBCC_LogInfo
            GROUP BY [FileID];

    TRUNCATE TABLE #DBCC_LogInfo;

    INSERT INTO #Log_Space
            ( [physical_name] ,
              [SpaceUsed_mb]
            )
            SELECT  [physical_name] ,
                    CAST(FILEPROPERTY([name], ''SpaceUsed'') AS INT) * 8 / 1024 AS [SpaceUsed_mb]
            FROM    sys.database_files
            WHERE   [type_desc] = ''LOG'';';

SELECT  V.[database_name] ,
        MF.[name] AS [logical_file_name] ,
        MF.[physical_name] AS [physical_file_name] ,
        V.[vlf_count] ,
        D.[recovery_model_desc] AS [recovery_model] ,
        L.[SpaceUsed_mb] AS [log_used_mb] ,
        MF.[size] * 8 / 1024 AS [log_size_mb] ,
        LH.[max_backup_size_mb] ,
        CASE D.[recovery_model_desc]
          WHEN 'SIMPLE' THEN NULL
          ELSE ( MF.[size] * 8 / 1024 ) - ISNULL(LH.[max_backup_size_mb], 0)
        END AS [file_excess_mb] ,
        MF.[is_percent_growth] AS [is_pct_growth] ,
        CASE MF.[is_percent_growth]
          WHEN 1 THEN CAST(MF.[growth] AS VARCHAR(3)) + '%'
          WHEN 0 THEN CAST(MF.[growth] * 8 / 1024 AS VARCHAR(15)) + 'mb'
        END AS [growth_units] ,
        LH.[days_in_history] AS [backup_days_in_sample]
FROM    #VLF_Counts V
        INNER JOIN sys.databases D ON V.[database_name] = D.[name]
        INNER JOIN sys.master_files MF ON D.[database_id] = MF.[database_id]
                                          AND V.[file_id] = MF.[file_id]
        INNER JOIN #Log_Space L ON MF.[physical_name] = L.[physical_name]
        LEFT JOIN #Log_Backup_History LH ON MF.[physical_name] = LH.[physical_name]
WHERE   MF.[type_desc] = 'LOG'
ORDER BY V.[vlf_count] DESC;

--==============================================
-- Cleanup
--==============================================
IF EXISTS ( SELECT [name] FROM tempdb.sys.tables WHERE [name] LIKE '#DBCC_LogInfo%' )
    DROP TABLE #DBCC_LogInfo;
IF EXISTS ( SELECT [name] FROM tempdb.sys.tables WHERE [name] LIKE '#VLF_Counts%' )
    DROP TABLE #VLF_Counts;
IF EXISTS ( SELECT [name] FROM tempdb.sys.tables WHERE [name] LIKE '#Log_Space%' )
    DROP TABLE #Log_Space;
IF EXISTS ( SELECT [name] FROM tempdb.sys.tables WHERE [name] LIKE '#Log_Backup_History%' )
    DROP TABLE #Log_Backup_History;
GO
SQL Server Version 2012
USE master;
GO

CREATE PROCEDURE dbo.sp_ford_dm_log_metrics
AS
--=======================================================
-- FORDDMPs (Dynamic Management Procedures)
-- dbo.sp_ford_dm_log_metrics
-- Created by Tim Ford, SQL Server MVP
-- thesqlagentman.com, sqlcruise.com,
-- @sqlagentman on Twitter
-- Use permitted with full attribution to creator
-- Present Log File Properties Ordered by VLF Count DESC
-- *** THIS VERSION VALID FOR 2012 SQL SERVER ONLY ***
--=======================================================
CREATE TABLE #DBCC_LogInfo
    (
      [RecoveryUnitId] INT ,
      [FileID] INT ,
      [FileSize] BIGINT ,
      [StartOffset] BIGINT ,
      [FSeqNo] BIGINT ,
      [STATUS] BIGINT ,
      [Parity] BIGINT ,
      [CreateLSN] NUMERIC(38)
    );

CREATE TABLE #VLF_Counts
    (
      [database_name] sysname ,
      [file_id] INT ,
      [vlf_count] INT
    );

CREATE TABLE #Log_Space
    (
      [physical_name] NVARCHAR(260) ,
      [SpaceUsed_mb] BIGINT
    );

CREATE TABLE #Log_Backup_History
    (
      [physical_name] NVARCHAR(260) ,
      [days_in_history] INT ,
      [max_backup_size_mb] BIGINT
    );

INSERT INTO #Log_Backup_History
        ( [physical_name] ,
          [days_in_history] ,
          [max_backup_size_mb]
        )
        SELECT  MF.[physical_name] ,
                DATEDIFF(d, MIN(BS.[backup_finish_date]), GETDATE()) AS [days_metadata] ,
                MAX(CEILING(BF.[backup_size] / 1024 / 1024)) AS [max_backup_file_size_mb]
        FROM    sys.databases D
                INNER JOIN sys.master_files MF ON D.[database_id] = MF.[database_id]
                LEFT JOIN msdb.dbo.[backupset] BS ON D.[name] = BS.[database_name]
                LEFT JOIN msdb.dbo.[backupfile] BF ON BF.[backup_set_id] = BS.[backup_set_id]
        WHERE   BS.[type] = 'L'
                AND MF.[type_desc] = 'LOG'
                AND D.[recovery_model_desc] != 'SIMPLE'
        GROUP BY MF.[physical_name];

EXEC sp_MSforeachdb N'USE [?];
    INSERT INTO #DBCC_LogInfo
            ( [RecoveryUnitId] ,
              [FileID] ,
              [FileSize] ,
              [StartOffset] ,
              [FSeqNo] ,
              [Status] ,
              [Parity] ,
              [CreateLSN]
            )
            EXECUTE sp_executesql N''DBCC LogInfo([?])'';

    INSERT INTO #VLF_Counts
            SELECT  DB_NAME() ,
                    [FileID] ,
                    COUNT([FileID])
            FROM    #DBCC_LogInfo
            GROUP BY [FileID];

    TRUNCATE TABLE #DBCC_LogInfo;

    INSERT INTO #Log_Space
            ( [physical_name] ,
              [SpaceUsed_mb]
            )
            SELECT  [physical_name] ,
                    CAST(FILEPROPERTY([name], ''SpaceUsed'') AS INT) * 8 / 1024 AS [SpaceUsed_mb]
            FROM    sys.database_files
            WHERE   [type_desc] = ''LOG'';';

SELECT  V.[database_name] ,
        MF.[name] AS [logical_file_name] ,
        MF.[physical_name] AS [physical_file_name] ,
        V.[vlf_count] ,
        D.[recovery_model_desc] AS [recovery_model] ,
        L.[SpaceUsed_mb] AS [log_used_mb] ,
        MF.[size] * 8 / 1024 AS [log_size_mb] ,
        LH.[max_backup_size_mb] ,
        CASE D.[recovery_model_desc]
          WHEN 'SIMPLE' THEN NULL
          ELSE ( MF.[size] * 8 / 1024 ) - ISNULL(LH.[max_backup_size_mb], 0)
        END AS [file_excess_mb] ,
        MF.[is_percent_growth] AS [is_pct_growth] ,
        CASE MF.[is_percent_growth]
          WHEN 1 THEN CAST(MF.[growth] AS VARCHAR(3)) + '%'
          WHEN 0 THEN CAST(MF.[growth] * 8 / 1024 AS VARCHAR(15)) + 'mb'
        END AS [growth_units] ,
        LH.[days_in_history] AS [backup_days_in_sample]
FROM    #VLF_Counts V
        INNER JOIN sys.databases D ON V.[database_name] = D.[name]
        INNER JOIN sys.master_files MF ON D.[database_id] = MF.[database_id]
                                          AND V.[file_id] = MF.[file_id]
        INNER JOIN #Log_Space L ON MF.[physical_name] = L.[physical_name]
        LEFT JOIN #Log_Backup_History LH ON MF.[physical_name] = LH.[physical_name]
WHERE   MF.[type_desc] = 'LOG'
ORDER BY V.[vlf_count] DESC;

--==============================================
-- Cleanup
--==============================================
IF EXISTS ( SELECT [name] FROM tempdb.sys.tables WHERE [name] LIKE '#DBCC_LogInfo%' )
    DROP TABLE #DBCC_LogInfo;
IF EXISTS ( SELECT [name] FROM tempdb.sys.tables WHERE [name] LIKE '#VLF_Counts%' )
    DROP TABLE #VLF_Counts;
IF EXISTS ( SELECT [name] FROM tempdb.sys.tables WHERE [name] LIKE '#Log_Space%' )
    DROP TABLE #Log_Space;
IF EXISTS ( SELECT [name] FROM tempdb.sys.tables WHERE [name] LIKE '#Log_Backup_History%' )
    DROP TABLE #Log_Backup_History;
GO
What the Results Look Like
Calling the DMP is as you would expect. It currently requires no parameters, though I expect future upgrades to allow for dynamic sorting and filtering of databases.
EXECUTE dbo.sp_ford_dm_log_metrics;
On a "bad" instance your results may look something like this. (Mine did on a nasty test SQL Server I just "discovered" recently):
I've changed the names in this case to protect the blatantly guilty (and this DBA as well). As you can see, there are issues from multiple angles: high VLF counts, poor auto-growth choices, and auto-growth as a percentage. I can also tell from looking at this that the logs are not being backed up and that we have test databases running under FULL recovery with no associated log backups occurring. Based upon this information I was able to correct most issues. You'll notice I still need to deal with a couple of situations where we have databases with large transaction logs and only a few VLFs, as well as that tempdb issue (but it's a test instance and I have better things to concern myself with):
(Note that the max_backup_size_mb and file_excess_mb columns will report NULL values when running under SIMPLE recovery, for logical reasons.)
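For anyone wanting to follow along with the sort of correction involved: the usual remedy for a bloated VLF count is to back up the log, shrink the log file, then grow it back to its working size in one sizeable step with a fixed-size autogrowth. A rough sketch, using a hypothetical database named YourDatabase with a log file logically named YourDatabase_log and a 4GB target (adjust names and sizes to your own environment):

USE YourDatabase;
GO
-- Shrink the existing, VLF-heavy log file (take a log backup first under FULL/BULK_LOGGED)
DBCC SHRINKFILE (YourDatabase_log, 1);
GO
-- Grow it back to its working size in a single step and set a fixed-size growth increment
ALTER DATABASE YourDatabase
    MODIFY FILE ( NAME = YourDatabase_log, SIZE = 4096MB, FILEGROWTH = 512MB );
GO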
The following results are against one of my healthier instances (again, pertinent information changed to protect THIS GUY) and show moderate VLF counts and no percentage growth settings. The auto-growth sizes are fitting for the usage patterns on this server as well. In this case you can see I've also performed some "surgery" on these log files, as evidenced by the negative file_excess_mb values they report. This will occur where the database's log historically encountered a large, aberrant activity period that was not indicative of normal load. After correcting the physical log file for this situation, msdb.dbo.backupfile still retains that information until it falls outside my purging process for backup history, which keeps 30 days. One thing I discovered in the course of writing this post was that the purge script was not installed on this SQL instance. That's since been corrected. You learn something every day!
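For completeness, the purge I'm referring to is nothing exotic. It's not my exact script, but a scheduled job step along these lines, calling msdb's sp_delete_backuphistory with a 30-day cutoff (my retention of choice; adjust to taste), is the general idea:

DECLARE @oldest_date DATETIME;
SET @oldest_date = DATEADD(d, -30, GETDATE());

-- Remove backup history older than 30 days from the msdb history tables
EXECUTE msdb.dbo.sp_delete_backuphistory @oldest_date = @oldest_date;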
What’s Next
Expect to see future posts on Dynamic Management Procedures covering backup history, compression, SQL Agent jobs, and the like. Stay tuned. This is going to be fun. It’s not every day you get to build your own Dynamic Management “Objects”.
Great script Tim, looking forward to seeing what else you have in store!
Unfortunately it doesn’t work on SQL 2012 as DBCC LOGINFO returns an extra column RecoveryUnitId, so the insert into the temp table fails.
Nice catch James! I’ve updated the post to include different flavors of SQL versions.
“diligent and handsome” – I’ll take that, thanks! 😉
Rather than maintaining two separate scripts, if you create the temp table using the 2012 version (including the extra column) you can then just add the following straight after the CREATE TABLE statement, which drops the column for pre-2012 versions:
IF @@MICROSOFTVERSION / 0x01000000 < 11
ALTER TABLE #DBCC_LogInfo
DROP COLUMN [RecoveryUnitId];
The single script should then work for all current versions.
I weighed this idea against the separate-script-per-version option and went with the same approach that Microsoft uses. I had originally reworked the script to create a different temp table based upon version, but I really like the simplicity of what you propose here. Dropping the unnecessary column if the version is < 11 is really simple and clean. Looking back on my decision, I may rework this and repost with changes. Thanks, James.
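For anyone who wants to try that single-script route right away, here is an untested sketch of the change. Everything else stays as posted in the 2012 version above, except that the INSERT inside the sp_MSforeachdb call should drop its explicit column list so it matches whichever shape the temp table ends up with:

CREATE TABLE #DBCC_LogInfo
    (
      [RecoveryUnitId] INT ,
      [FileID] INT ,
      [FileSize] BIGINT ,
      [StartOffset] BIGINT ,
      [FSeqNo] BIGINT ,
      [STATUS] BIGINT ,
      [Parity] BIGINT ,
      [CreateLSN] NUMERIC(38)
    );

-- Pre-2012 builds of DBCC LOGINFO return no RecoveryUnitId column, so remove it
IF @@MICROSOFTVERSION / 0x01000000 < 11
    ALTER TABLE #DBCC_LogInfo
        DROP COLUMN [RecoveryUnitId];

-- ...and later, inside the sp_MSforeachdb string, insert without a column list:
-- INSERT INTO #DBCC_LogInfo
-- EXECUTE sp_executesql N''DBCC LogInfo([?])'';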
Great script!
thanks!