Just shy of three weeks ago I ranted about issues surrounding a third-party application running against SQL Server: we had to ship a database off to the vendor so they could execute a stored procedure we could not successfully complete in our production environment. That post, Just How Long is Too Long, now has further developments that make the initial issues seem utterly sensible by comparison. The crux of the initial post was that it took the vendor two weeks to run this stored procedure. The crux of this one: it was all wronged-up and den’ some.
Two weeks passed before we were finally told that the database was zipped up and ready for us to download. After about a half dozen failed attempts at restoring the backup file, or even just reading its header, I finally asked a question I already knew the answer to, but still needed to hear with my own ears…
Queasy DBA: “Did you by chance upgrade the database to SQL 2008 from SQL 2005?”
Vendor Support Contact: “Yes”
Queasy DBA: “Why?”
My New Nominee for Worst Person on Earth: “I am only doing what <REDACTED, NAME OF VENDOR PRESIDENT> told me to do.”
Yes, they took it upon themselves to upgrade the database to a version of SQL Server inconsistent with where this database is being hosted, and we have no shared SQL 2008 instance we can port this db to. You cannot restore a SQL 2008 DB back to a SQL 2005 instance. (For the record, you cannot restore a SQL 2008 DB running in 9.0 compatibility mode to a SQL 2005 instance either.) They gave us back a brick.
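For anyone else burned by a vendor-returned backup, a cheap sanity check before attempting a full restore is to read just the backup header; the version fields tell you immediately whether the file was taken on a newer engine than yours. A minimal sketch, assuming a hypothetical file path:

```sql
-- Read only the backup header; no data is restored.
-- In the output, SoftwareVersionMajor = 9 means SQL Server 2005,
-- 10 means SQL Server 2008. (The file path below is hypothetical.)
RESTORE HEADERONLY
FROM DISK = N'D:\Restores\vendor_db.bak';
```

If SoftwareVersionMajor comes back as 10, a SQL 2005 instance will refuse the restore outright, compatibility mode or not — which is exactly the brick described above.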
After the requisite apologies and excuses, the support staff kicked off the process once again on a restored copy of our backup using SQL 2005 in their environment. This time it took only three days, running over a weekend. This leads me to believe that the initial process was not truly run over the two-week period, though to this day they will not admit otherwise. They posted a zip file to the FTP server for us to download, and we were able to unzip it and reinstall it. This leads to the next, and still currently unresolved, issue: we were unable to connect to the database from the application. The vendor support staff, once again citing that he was only doing what he was told to do by the President of the company, had upgraded the database to the most-current version of their product, thus rendering the current application unusable for our company. We’re now thrust into a situation where we need to rush through an application upgrade in order to use the application at all.
Does anyone know what the opposite of that over-used word, Awesome, is? I ask because this final issue is so full of Anti-Awesome that it may require its own new term. The initial reason for engaging the vendor was that this product causes massive, prolonged I/O and CPU issues for one of our shared SQL nodes. The vendor has determined that they cannot properly fix the database, so they have instructed our application team to have the DBA (moi) restore two copies of it. The application will be used to delete (not truncate) all data in one of the databases so it can be repopulated and used for production. They want me to leave the other database up and intact to use for archive. Because of their lack of knowledge on how to solve the issue, or lack of willingness to invest the time to do so, we now need to incur additional administration time, additional storage space, and additional backup tape and disk space (as well as backup system capacity) to maintain two shizzy databases instead of one.
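That "delete, not truncate" parenthetical matters more than it might look. A rough sketch of the difference, using a hypothetical table name:

```sql
-- DELETE is fully logged row by row, so clearing a large table this way
-- generates massive transaction-log and I/O activity:
DELETE FROM dbo.SomeHugeTable;

-- TRUNCATE simply deallocates the table's pages with minimal logging,
-- making it far cheaper for emptying a table outright. (It does require
-- that no foreign keys reference the table.)
TRUNCATE TABLE dbo.SomeHugeTable;
```

So the very operation the vendor prescribes to work around their I/O problem is itself the expensive, fully logged variant.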
The good news (for me) is that this is all going down while I am at Devlink in Nashville. The bad news is that it even needs to be done at all. It’s quite a shame, too: the users tell me this application is fantastic and handles all their needs with ease, in spite of the problems the DBA is shouldering. However, at some point an organization needs to ask "just how much is too much?" I’ll be drafting requirements for the vendor to meet; if they can’t meet them, we will take our business elsewhere. That is a great thing about a free market.
Horror story closed? Doubt it.