T-SQL Tuesday – A Day in the Life

Woke Early, Fueled Up, Let’s Go

Backstory

This month’s T-SQL Tuesday challenge was from Erin Stellato (blog|twitter) of SQLskills. She asked us to document a given day last week and then craft our posts around what happened, in order to provide an idea of what a typical Day in the Life of a SQL Server Professional is like. I expect it to shake out that, when looking at the full scope of submissions, we will find the expected similarities – but will also conclude that the roles we feel are overwhelmingly homogeneous are in fact quite different.

My workplace is quite different from the typical DBA’s. I don’t have a cubicle. I don’t have a long commute. I have amazing coffee in the office. I don’t have to hear every detail about how Ken on the Storage Team had

AN AWESOME CHEESEBURGER LAST NIGHT WHEN I WAS OUT BUYING A NEW HAND SAW.

No…

I wake up.

I go downstairs.

I fire up my office.

Grind and brew the perfect cup of coffee (always grind right before you brew).

Remember I forgot to put on pants.

Go back upstairs and put on pants (okay, shorts).

Then I sit down, check morning reports and email, and make myself valuable to those who put their faith in my skills.

Yes, I’m a telecommuter. Loud and proud in some capacity since 2000.

8:00am:  The Typical Start to an Atypical Day

This day started out as any other day typically does. The first 15 minutes or so involved re-writing a script to automate a log shipping process – a script I made the mistake of not saving before putting the laptop into hibernation the night before. I’ve been dealing with RAM issues on my laptop – unless you’re a SQL Professional, you think 4GB of RAM is sufficient for a DBA’s laptop – and Management Studio apparently decided to shut itself down to recover free memory.

By the time that was done, Reporting Services delivered my Coffee Reports. What are these? These are the reports fed from a database I use to keep track of my SQL Server instances and their associated databases, job outcomes, space concerns, backup situations, DBCC CHECKDB statuses, and the many other metrics an Enterprise DBA needs in order to keep a handle on dozens of instances and thousands of databases. The Coffee Reports give me an understanding of the emergent things I need to address first thing in the morning – there’s a rough sketch of the idea after the list below.

Things like:

  • Anticipated data file growths to pre-manage.
  • Space concerns on disk.
  • Job failures.
  • DBCC failures.
  • New databases created in the environment.
  • Backup concerns.
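
To give a flavor of what’s behind those bullets, here is a minimal, hypothetical sketch of the kind of query a Coffee Report might run against the tracking database. The table and column names are purely illustrative – they are not my actual schema – but the idea is the same: surface only the exceptions that need attention this morning.

    -- Hypothetical sketch: table and column names are illustrative, not the real Coffee Reports schema.
    SELECT  s.ServerName,
            d.DatabaseName,
            d.LastGoodCheckDbDate,
            d.LastFullBackupDate,
            d.DataFileUsedPercent
    FROM    dbo.MonitoredDatabases AS d
            INNER JOIN dbo.MonitoredServers AS s
                ON s.ServerID = d.ServerID
    WHERE   d.LastGoodCheckDbDate < DATEADD(DAY, -7, SYSDATETIME())    -- stale or failed DBCC CHECKDB
         OR d.LastFullBackupDate  < DATEADD(DAY, -1, SYSDATETIME())    -- backup concerns
         OR d.DataFileUsedPercent > 85                                 -- growth to pre-manage
    ORDER BY s.ServerName, d.DatabaseName;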

 


Do you want to learn more about how to get the most out of your day without working harder? The Coffee Reports are just a start and will be a part of the sessions I present at SQL Connections Fall 2012.


Once those items are addressed it’s time to move on to whatever project work and customer requests are in the environment.  On this day all these reports were reviewed by 8:30 am.  The standard issues were in check.  I could jump right into taking care of project issues and helping my customers…

8:30am:  Hey!  Email over Here!  Hi!  Oh, Instant Message Over Here… HOWDY!

Nothing gets in the way of getting things accomplished more than interruptions and meetings. That being said, I should not complain too much, as the scale of the distractions I face working remotely can be easily moderated by reducing the number of times I check email. I don’t have those aforementioned cubicle conversations to shut out. I can close Outlook and only check it each hour (a tip I highly recommend).

So: finished with the initial check at 8:30am, and through the administrative mumbo and the associated jumbo by 9:30am.

9:30am: Ongoing Storage Corruption Issues

Other than an enterprise storage failure that, we were being told, impacted the Oracle side of our environment far worse than our SQL environment, things were light on the customer side of the equation at the moment. I’ve been trying to carve out time to standardize our backup and maintenance solutions using Ola Hallengren’s code, clean up old logins, make updates to my daily reporting process, tweak our SQL Sentry installs, and do all those other things DBAs need to do – but usually have to shelve – in order to assist in the day-to-day effort of keeping data available, efficient, secure, and accurate.
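
As for that standardization effort: most of it is simply a matter of wrapping Ola Hallengren’s maintenance procedures in scheduled jobs. Something along these lines – the backup directory and retention window here are placeholders, not our actual values:

    -- Nightly full backups of all user databases via Ola Hallengren's DatabaseBackup procedure.
    -- The path and retention below are placeholders for illustration.
    EXECUTE dbo.DatabaseBackup
        @Databases   = 'USER_DATABASES',
        @Directory   = N'\\BackupShare\SQL',  -- placeholder UNC path
        @BackupType  = 'FULL',
        @Verify      = 'Y',
        @CheckSum    = 'Y',
        @CleanupTime = 168;                   -- placeholder retention, in hours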

In regard to the corruption in our storage space and its effects on the SQL side of life, we had perhaps four issues where DBCC CHECKDB didn’t necessarily report corruption, but it did fail because it could not read a sector on disk in a given database. All those issues save two were resolved on the first day the incident was reported – the first morning, even. The remaining two involved taking the time to migrate our SCOM databases, which were running on old hardware on an ancient Windows 2003 O/S, to a new VM at my insistence – this would bring them onto a newer version of SQL Server, on an O/S with proper disk partition alignment, and would accomplish the virtualization I had been hoping to get done for some time now. For the other resolved items we forwent a page restore from the last good backup and simply restored the most recent backups for those databases, given the nature of the data being collected.

Our last remaining issue was on a 950GB database for dictation. Here we were faced with a situation where full backups would not run due to the corruption, but log backups were running fine. The initial course of action was to schedule and attempt a page restore for the single page that could not be read from disk during DBCC CHECKDB. This failed.
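
For context, a single-page restore along the lines of what we attempted looks roughly like the following. The database name, file:page ID, and backup paths are all placeholders here:

    -- Placeholders throughout: database name, file:page ID, and paths are illustrative.
    -- 1. Restore just the damaged page from the last good full backup.
    RESTORE DATABASE DictationDB
        PAGE = '1:123456'                 -- the page DBCC CHECKDB could not read
        FROM DISK = N'X:\Backups\DictationDB_FULL.bak'
        WITH NORECOVERY;

    -- 2. Roll the page forward with the subsequent log backups.
    RESTORE LOG DictationDB
        FROM DISK = N'X:\Backups\DictationDB_LOG_1.trn'
        WITH NORECOVERY;

    -- 3. Take a fresh log backup and restore it WITH RECOVERY to bring the page current.
    BACKUP LOG DictationDB TO DISK = N'X:\Backups\DictationDB_LOG_final.trn';
    RESTORE LOG DictationDB
        FROM DISK = N'X:\Backups\DictationDB_LOG_final.trn'
        WITH RECOVERY;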

The next option I proposed was to restore the last good database backup from prior to the bad DBCC CHECKDB results and then apply log shipping to keep it consistent with production. The initial process to get it current took 11 hours, and we’ve been ready to go since then. It will now just be a matter of executing the downtime to take a tail-of-the-log backup, apply it to the standby database, detach the old production database, recover the standby database, and then rename it to match the naming convention of the old (corrupted) database – with minimal downtime to the end users.
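
When we do get the window, the cut-over itself is roughly the following sequence. Database names and paths are placeholders, and the mechanics of quiescing connections during the window are left out of this sketch:

    -- Placeholders throughout; real names, paths, and connection handling differ.
    USE master;

    -- 1. Take the tail-of-the-log backup from the corrupted production database.
    BACKUP LOG DictationDB
        TO DISK = N'X:\Backups\DictationDB_tail.trn'
        WITH NO_TRUNCATE;

    -- 2. Apply the tail to the standby copy, leaving it in a restoring state for the moment.
    RESTORE LOG DictationDB_Standby
        FROM DISK = N'X:\Backups\DictationDB_tail.trn'
        WITH NORECOVERY;

    -- 3. Detach (set aside) the old, corrupted production database.
    EXEC sp_detach_db @dbname = N'DictationDB';

    -- 4. Recover the standby so it can accept connections...
    RESTORE DATABASE DictationDB_Standby WITH RECOVERY;

    -- 5. ...and rename it to the production name so the applications reconnect unchanged.
    ALTER DATABASE DictationDB_Standby MODIFY NAME = DictationDB;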

9:30am – 9:45am: Everything Runs Better with Fresh Ingredients – Particularly the DBA

After checking on this restore process I felt it was a good time to take 15 minutes to visit the vending machine and get a bite to eat.

You can't get this from the work commissary.

Just because I have the luxury of taking 15 minutes to make a scramble doesn’t mean I don’t fall into the same old traps those office dwellers find themselves in, however…

9:45am:  Squeaky Wheels; Spinning Wheels

Breakfast was served with a side of random support issues and troubleshooting: rights in test overwritten after a user requested a refresh of test from production (well, I didn’t want you to do that), diagnostics on the ongoing SAN corruption issues, a user wanting to make schema changes through the GUI in a development database and getting the rejection because we just don’t allow that anymore, and attempting to resolve what I am seeing as persistent slowness associated with the overhead of my VPN software. Pretty boring, vanilla work that would make even baseball sound interesting if I delved into it more here.

12:30pm: Yay Lunch!

Nope. Need to determine some capacity requirements for a new VM to replace a damaged/compromised SQL Server… and run a DBCC CHECKDB and a DBCC CHECKALLOC. Let’s try this again.

1:00pm: Yay Lunch!

Emergent Meeting. I need to talk = I can’t eat. Let’s try this again…

1:30pm: Yay Lunch!

A request to help out with… ACCESS!
Unfortunately, yes. We have to support this thing which we SQL DBAs built from the ashes of Oracle PL/SQL Pages, for which we received glory, and which we now carry like Jacob Marley’s chains.

2:30pm: Yay Lunch!

Finally. Gee, this plate of celery and peanut butter sure doesn’t taste like victory.

2:45pm: Pool Time

In light of all the work done outside of business hours since the corruption was discovered – and in anticipation of the after-hours work tonight – I took some time to do this.

(Full disclosure was given to Management, of course, or else I’d not be mentioning it here.)

4:30pm: Turn this Mutha Out… or Not

At 6:00pm we would be performing our cut-over to the standby database on our corrupted dictation system, and then I could call it a day. Being a DBA, I’m cautious to the point of paranoia sometimes, so I wanted to get in and watch the environment considerably earlier than I probably needed to. Dripping swimsuit and all, I watched until just prior to the cut-over – when we were told to delay it.

6:30pm: Blog it Out

I don’t blog as often as I would like. Since I had already committed the time, I figured it would be good to do so. Falling into the most dangerous trap of all when it comes to telecommuting, I completely blurred the lines between the conventional end of the work day and the start of personal time…

8:30pm: Let the Day Begin, the Day Begin, Let the Day Start

Time for some stress-relief…
Oh, and for those keeping track at home, I’m still log shipping and waiting to do that recovery… but that’s a different post.


Do you want to get more SQL Server training with a side of fun, networking, and rejuvenation? Then join me on a SQL Cruise!