SQLAgentMan Contest #1
I originally planned this post to just be a vent about a vendor doing something evil to their own database, hosted on a shared instance of SQL Server I administer. Then I thought I might be able to make a contest out of it… So, what caused me to come up with this lame stunt to drive traffic to my blog? Well… you tell me!
I just completed an upgrade in our test environment of a commercially-developed product that is used to <REDACTED TO PROTECT THE INNOCENT>. The vendor’s standard process is to run a series of *.sql scripts against their database via a batch file, based upon certain parameter values hard-coded into that batch file. In most circumstances I strongly suggest that you do not allow this process in your environment, even with a trusted software vendor. As a matter of fact, the only time I advocate doing so is when:
- This is a dedicated database environment for only their product.
- You’ve taken a backup (full or differential, depending upon size and activity) of every user database on the instance
- You’ve also taken a full backup of your master and msdb databases (a minimal sketch of this routine follows the list)
- Your resume is current as of this morning.
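Here’s a minimal sketch of that pre-upgrade backup routine. VendorDb and the X:\Backups path are hypothetical; substitute your own names and a path that actually exists:

-- Full backups before letting any vendor script near the instance.
BACKUP DATABASE VendorDb TO DISK = N'X:\Backups\VendorDb_preupgrade.bak' WITH INIT;
BACKUP DATABASE master TO DISK = N'X:\Backups\master_preupgrade.bak' WITH INIT;
BACKUP DATABASE msdb TO DISK = N'X:\Backups\msdb_preupgrade.bak' WITH INIT;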
So, what do I recommend? Simply tell the vendors they can’t update their products, ever. It makes your life far easier and really helps with time management. Failing that, ask the vendor to supply you with the scripts ahead of time so you can review them and verify they’re not doing anything that is an affront to proper database management, their upbringing, or good citizenship, and then offer to run the scripts manually in lieu of executing the batch file. If the vendor insists on using the batch file approach, perform your review, confirm that their rights are sufficient (and assert that, most likely, “proper rights” does not mean System Administrator), and then allow the vendor to execute the batch file.
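As a rough sketch of what “sufficient but not sysadmin” can look like (VendorLogin and VendorDb are hypothetical names, not anything the vendor actually shipped):

-- A result of 1 means the vendor's login is a sysadmin: far more than it needs.
SELECT IS_SRVROLEMEMBER('sysadmin', 'VendorLogin');

-- Scope the vendor to db_owner in their own database and nothing more.
USE VendorDb;
CREATE USER VendorLogin FOR LOGIN VendorLogin;
EXEC sp_addrolemember 'db_owner', 'VendorLogin';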
Why do I advocate this level of micro-management on the database instance?
- I am an ___hole.
- I enjoy taking time out of my schedule to Q/A code that is already under an SLA with another entity we’re paying obscene amounts of cash (plus 30 pints of fetal panda blood a month)?
- I have a bias against most software vendors I’ve had interaction with.
- Hard-coding parameterized security credentials in a batch file is wrong on a scale greater than spandex shorts on a shirtless Abe Vigoda. (Bing “Abe Vigoda,” kids, if you don’t know who I’m talking about.)
- It saves headaches in the long run.
While the first bullet is correct a fair amount of the time, it is moments like today when you reap the benefits of the additional work put into reviewing code that otherwise would have been run against your RDBMS without oversight by the Database Administrator. While I am not going to tell you right now what the offending line of code I ran across was, I will tell you a little something about what the calls coming in from users would have been like over the next few days. Comment with what you think the offending line of code was and you’ll win a mystery prize of untold worth. I’ll choose a single winner from all correct answers submitted on the blog and received by Midnight, July 10, 2009. The winner will be notified by email.
Here are the “calls” I could have received:
- “I’m getting a timeout when I try to connect to my database.”
- “My application is running really slow.”
I know, not much to go on, but many of you out there probably already know the SINGLE SQL COMMAND CAUSING THIS. I will not judge you on syntax, never fear.
Remember, this is for a mystery prize of untold worth, delivered to your front door. So, what was I able to successfully avoid running today that would have spelled nasty performance and periodic timeouts against the specific SQL Server database that was upgraded an hour ago?
Execute sp_changedbowner ?
DBCC SHRINKDATABASE?
DROP DATABASE master;
My guesses:
exec dbo.sp_configure 'c2 audit mode', 1
or
exec dbo.sp_configure 'max degree of parallelism', 1
DBCC FREEPROCCACHE;
Or better yet:
ALTER DATABASE [dbname] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
(Forgetting to set it back to MULTI_USER)
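And if you do forget, the antidote is a one-liner (same placeholder database name):

ALTER DATABASE [dbname] SET MULTI_USER;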
ALTER DATABASE BadVendor SET CHANGE_TRACKING = ON
-- Brent, I think you are giving
-- vendors too much credit 🙂
dbcc indexdefrag([database])
Drop Login 'sa';
🙂
I agree with dba4life, you *are* giving the vendors too much credit. In many cases their depth of database knowledge is so shallow they couldn’t spell DBA if you spotted them two letters and gave them 26 guesses.
sp_configure ‘max degree of parallelism’, 1? Wow.
Alter Database [dbname] SET Auto_close ON with no_wait
Or… Alter database [dbname] Modify File (name='xyz', filegrowth = 1024kb)
Tim & DBA4Life – yeah, you’re probably right there. I thought about what I’d do to slow a server down in a subtle way, and those were the first things that came to my own fiendish mind.
Jeremy’s auto_close is awesome. I’d say it’s either that or auto_shrink.
You guys are probably right about the auto_close or auto_shrink. Great answers!
Dang too late to the party. My vote’s for the auto_close command.
BEGIN TRAN on the master database, with the COMMIT in some piece of code that never gets called, racking up the trans logs and slowing everything down to a crawl
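If that turned out to be the culprit, a quick way to spot it is to ask SQL Server for the oldest open transaction in the database (master here, per the guess above):

-- Reports the oldest active transaction, if any, in the named database.
DBCC OPENTRAN ('master');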
I like all of the answers thus far (except MAXDOP 1… a lot of the ISV apps I touch actually benefit from that, because their horrid code and lack of indexing let one query hurt all the others).
My first reaction was auto-close or auto-shrink, but some of what Tim said threw me off that trail and has me confused…
Auto Shrink will hurt perf (especially in SQL 2K, and in 2K5/8 without instant file initialization, though it will still hurt some even with it; plus it’s bad for a host of other reasons). I have never experienced a shrink causing corruption, but I cringe when I see it scheduled, and cringe even more when I see the backup happening after it instead of before (not that I want to see shrinking happening at all).
Auto Close -> Not sure if that would need backups beforehand? I have never enabled it, only disabled it, on 3 instances in my 10 years. It was causing strange, intermittent performance issues…
I am overthinking this one trying to come up with something better, but I guess for now I have to put my money on Auto Shrink.
I like Josef’s BEGIN TRAN, as that could definitely cause some issues if done in Master or Tempdb (or even the user database), but I wouldn’t do a bunch of backups before issuing a BEGIN TRAN (though I would if messing with Master).
I don’t even know if I would back up the system databases before turning auto shrink on for a user DB. Or maybe I am misreading Tim, and on second thought the backups refer to letting vendors run “stuff”… Hmm, I guess I am sticking with auto-shrink.
Mike, I don’t think I mentioned “corruption” anywhere…
No, you didn’t. I was just thinking about reasons to have a backup before doing whatever the command was. Then I re-read and realized you were talking about the backups in the context of what to do before letting a vendor script run (if you are going to do so at all).
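For anyone who wants to check their own instances for the two leading guesses, here’s a minimal sketch (SQL 2005 and later; [dbname] is a placeholder):

-- Flag any database with either option enabled.
SELECT name, is_auto_close_on, is_auto_shrink_on
FROM sys.databases
WHERE is_auto_close_on = 1 OR is_auto_shrink_on = 1;

-- And turn them back off.
ALTER DATABASE [dbname] SET AUTO_CLOSE OFF;
ALTER DATABASE [dbname] SET AUTO_SHRINK OFF;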
[…] Tim Ford’s First Contest – he ran a contest asking readers to guess what stunt a vendor tried to pull. He’s since posted the answer, but first read the comments on the contest post. I got a huge laugh out of these. […]
[…] Project managers buy systems without talking to a DBA or an architect first. When the check is written before the requirements are, inevitably the solution doesn’t meet all of the requirements. The systems violate security standards, run as SA, don’t work with current versions of SQL Server, and worse. For examples, check out the comments on Tim Ford’s contest about vendor problems. […]
My choice… as I have seen it done 🙂
WAITFOR DELAY @iDelay