Recently, Tim Ford (@SQLAgentman) threw down a challenge to a few members of the SQL Server community. His challenge, called Give Me A Coconut and Six Months, asked: what would you do professionally if you were stranded on a deserted island and could devote all of your time to the task? While I was not initially tagged by him to participate, I'm sure he wouldn't mind if I played in the same sandbox. (Right, Tim? *grin*)
I share responsibility for a Teradata environment with another DBA. Generally speaking, a single full-time DBA should be able to handle this particular environment with ease if it were the only thing they were responsible for. The trouble is that we are both 95% or more committed as developers and mentors to the others on the team.
If I were able to isolate myself in my Cave and be left alone to work, here are some of the things I would like to get done:
Proper Development/Test/Production Environment
We currently have the luxury of using both a production Teradata system and a disaster recovery/development Teradata system. However, long before the DR system came online the developers were used to developing on the production system. This meant that development databases had to be created and named separately from their production counterparts, and developers also had the ability to create objects within their own user accounts. (For those of you not familiar with Teradata, users are simply databases with passwords. They can have permanent space assigned and can own any object, such as tables, views, macros, and stored procedures.)
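To make that concrete for non-Teradata readers, here is a minimal sketch of the DDL involved; the user name, owner database, and space sizes are all hypothetical, not anything from our environment:

    CREATE USER dev_jsmith FROM dev_users AS
        PASSWORD = temp_pw_123,
        PERM = 5e9,     -- permanent space is what lets a user own tables
        SPOOL = 20e9;

    -- Because a user is effectively a database with a password,
    -- it can own objects of its own:
    CREATE TABLE dev_jsmith.scratch_sales AS prod_db.sales WITH NO DATA;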
To this day old habits die hard. Developers are still working on the production system because it is easier and they have easy access to production data for testing. The other DBA and I have been working hard to break the habit (through lots of database restores from production to the development system). For the most part we have been successful when it comes to new development; it is the support of the legacy environments that has been more difficult.
Furthermore, we have gone so far as to begin work on a methodology to establish a true development and test environment on the DR system. This environment introduces new concepts such as doing ETL through views instead of directly against the tables, and separating business views from views built for business intelligence tools (where shortcuts and other "tricks" are used to fool the tool). We have also laid out the separation of duties that would be required: Librarian, UNIX batch administrator, Data Modeler, and DBA functions. That's not to say those functions don't exist today; it is just that the lines are blurred and the focus is on putting the customer first. Instead, there would be a more rigorous turnover process and proper use of source code version control for all ETL and DDL. We have most of this down on paper now, and a little bit of the groundwork done, but haven't had the time to roll it out.
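The ETL-through-views idea is easier to see in DDL. A hypothetical sketch: ETL jobs reference a one-to-one view layer rather than the physical tables, so the DBA can relocate a table by repointing a view instead of touching every job. The database and column names here are made up for illustration:

    -- ETL jobs insert into and select from etl_views.customer,
    -- never edw_tables.customer directly.
    REPLACE VIEW etl_views.customer AS
        SELECT customer_id, customer_name, update_ts
        FROM edw_tables.customer;

    -- If the table later moves, only the view definition changes:
    REPLACE VIEW etl_views.customer AS
        SELECT customer_id, customer_name, update_ts
        FROM edw_tables_2.customer;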
Backups
Our current backup strategy is good, but I think it could be better. Once we have implemented the process described above, I think it could go a long way toward helping our disaster recovery plan. With ETL being done through views, I can move databases and tables around at will. I have considered organizing databases and tables into groupings by disaster recovery level. In the event of a disaster I could simply start with my most critical database and restore additional databases as needed should the crisis continue.
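In Teradata's ARC command language, that tiered approach might look something like the sketch below; the system, database, and archive file names are hypothetical, and a real script would have more options:

    LOGON prod_system/dbc,secret;
    /* Tier 1: the most critical database comes back first */
    RESTORE DATA TABLES (finance_db) ALL,
        RELEASE LOCK,
        FILE = TIER1;
    /* Tier 2 runs only if the outage continues */
    RESTORE DATA TABLES (marketing_db) ALL,
        RELEASE LOCK,
        FILE = TIER2;
    LOGOFF;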
The other challenge is that we currently do not have the luxury of a replication process, so in order to provide developers with current data we have to rely on backups and restores between the environments. This has cluttered our archive and recovery schedule, because no one wants to wait 30 minutes for a tape scan of a larger six-database backup only to spend 5 minutes restoring the actual data needed. The quick workaround was to begin backing up individual databases.
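A single-database refresh of development from production can then be scripted along these lines; again, this is a hedged ARC sketch with made-up names, not our actual job:

    /* On the production system: archive just the one database needed */
    LOGON prod_system/dbc,secret;
    ARCHIVE DATA TABLES (sales_db) ALL,
        RELEASE LOCK,
        FILE = SALESARC;
    LOGOFF;

    /* On the DR/development system: copy it in under the dev name */
    LOGON dev_system/dbc,secret;
    COPY DATA TABLES (dev_sales_db) (FROM (sales_db)),
        RELEASE LOCK,
        FILE = SALESARC;
    LOGOFF;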
Needless to say, there is quite a bit of time and effort required here to find the best balance between efficient recovery time for development and a suitable disaster recovery plan. In lieu of replication, I dream of being able to back up to a SAN and then having the SAN backed up to tape.
Maintenance Scripts
I've started to get some routine maintenance done automatically through scripts, but once you get started you realize there are a lot of things that you could automate. A slippery slope, so to speak. Among the things I have automated are a statistics collection process, perm space warnings (there is no automatic perm growth in Teradata unless you write your own), dormant table identification, and identification of secondary indexes that are not being used frequently. But there are several other things that would be of value to automate. These include tracking table growth (replacing an inherited database growth process), identifying the top 10 worst-performing workloads for the week by workload type (e.g. ETL, ad hoc, BI tools) using DBQL, and a process to disable inactive users and drop dormant ones.
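To give a flavor of what these scripts look like, here are two hedged examples against the DBC dictionary tables; the 85% threshold and the seven-day window are arbitrary choices for illustration, not values from our production scripts:

    /* Perm space warning: databases over 85% of their MaxPerm */
    SELECT DatabaseName,
           SUM(CurrentPerm) AS current_perm,
           SUM(MaxPerm)     AS max_perm,
           CAST(SUM(CurrentPerm) * 100.0
                / NULLIFZERO(SUM(MaxPerm)) AS DECIMAL(5,1)) AS pct_used
    FROM DBC.DiskSpace
    GROUP BY DatabaseName
    HAVING SUM(CurrentPerm) * 100.0 / NULLIFZERO(SUM(MaxPerm)) > 85
    ORDER BY pct_used DESC;

    /* DBQL: ten heaviest queries by CPU over the last week */
    SELECT TOP 10
           UserName,
           QueryID,
           AMPCPUTime,
           TotalIOCount
    FROM DBC.DBQLogTbl
    WHERE StartTime >= CURRENT_TIMESTAMP - INTERVAL '7' DAY
    ORDER BY AMPCPUTime DESC;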
The work here is probably a good solid month's worth of effort, and I'm sure that once I got started down this path there would be quite a few other things worth spending time on professionally. So a month might be enough to satisfy this, but many of these things would make a good blog series of "How-To's", so maybe I should ask for three months. What would you do if you had A Coconut and Six Months, hypothetically speaking of course?