Tuesday, November 15, 2011

SQL Server in Healthcare

I have to give huge kudos to Damu Venkatesan for creating the SQL PASS Virtual Chapter on SQL Server and Healthcare.  This was my topic for SQLSaturday #53, and it was the participants and the organizer, Bill Fellows, who mentioned me to Damu and made this happen. So, for any skeptics out there, networking does work!

I love this topic!  Healthcare IT is such an exciting field to be a part of. I've witnessed phenomenal changes in my company.  We've virtualized the heck out of our systems and implemented one of the largest EHR infrastructures in the country.  I wish I could say I was part of the EHR implementation, but sadly that has fallen to the Epic\InterSystems team with their esoteric batch of Caché DBAs.  They can no doubt write their own paychecks now. Working for a major hospital system isn't all fun and games. With growth come growing pains, and I've seen both the good and bad sides of how a company changes as it grows.

I feel privileged to be the inaugural speaker for the SQL Server and Healthcare Virtual Chapter.  My presentation will discuss why the chapter exists in the first place. I don't mean why Damu thought of it specifically, but why SQL Server and Healthcare and not something like SQL Server and Banking or SQL Server and the Restaurant Industry.  I'm hoping some of the answers will surprise you and give you some insight into what a crazy environment this is and has yet to become.

Please join me on November 17th at 1 PM EST.  The link is at http://healthcare.sqlpass.org/.  Feel free to email me any questions ahead of time at spshaw70@gmail.com or send me specific items you would like me to discuss.  I'm looking forward to the presentation and I want it to be an enjoyable experience for everyone.

Monday, October 31, 2011

Post SQLSaturday 101 Thoughts

I like writing a quick blog post after a speaking engagement. I probably should write it sooner in order to capture everything while it is fresh in my memory, but I guess later is better than never.

To begin with, I can't say enough about the people involved in organizing these events. Bill Fellows (@billinkc) did another wonderful job, and so did all the other volunteers.  Yet again I wasn't able to stay for the entire conference.  My wife caught the fall flu and I had to head home to pick up the kids from a Halloween party.  Since I had to leave so suddenly, I want to take this moment to thank them for allowing me to speak and for putting on a great event.

As I mentioned in my previous blog post, there were a lot of speakers who were speaking for the first time. I thought that was the most exciting part.  I have a love-hate relationship with putting myself out of my comfort zone, and sometimes I wonder why I do it.  I think I know the answer: by reaching a little farther than we think we can go, we are able to grow and learn from our experiences.  It was great seeing others doing this and succeeding.  Despite what the new speakers may think - they did succeed. Each of them succeeded just by the act of doing it.

I can pretty much graph my emotions leading up to and after a presentation.  I experience nervousness, panic, confidence, enjoyment, depression, and satisfaction.  What I do know is I never do as badly as I think I did, and I never do as well as I wish I had. You can't please everyone, but my goal is always to try to please more than half the audience. I should probably aim higher, but I try to be realistic. As I speak more and more, my expectations rise.

SQLSaturdays are especially suited for new speakers and for speakers in the mid-range of their speaking careers. The best part - the audience is always appreciative and helpful. Maybe the free admission helps take the pressure off the speaker and lowers the audience's expectations. Either way, I enjoy the idea of SQLSaturdays and I only wish I had the time and resources to attend more of them. So thanks to everyone who attended my sessions (don't hesitate to contact me), and thank you to Bill and the other volunteers for putting on a great show.  It was as fun as always.  Although the people who put it together may not be too anxious for the next one, I am already looking forward to next year. See you there!

Monday, October 10, 2011

SQLSaturday #101 Kansas City

Wow! I'm thrilled to say the least.  SQLSaturday #101 is fast approaching, and I'm looking forward to a nice road trip across Missouri and forgetting work for a while.  This SQLSaturday is a special event for me. There has been a lot of talk and worry about the timing of the event (the date is right after the PASS Summit and competes with another Midwest event, SQLSaturday #87 in Louisville), but that hasn't negatively affected the content.
Call me an optimist, but I don't think the arrangement could have been better. This will be a true community event in which some wonderful SQL professionals get a first chance at speaking.  I received my first speaking opportunity almost one year ago at SQLSaturday #53 in Kansas City, so the experience is fresh in my memory and I know what they'll go through and how the emotions will fluctuate from nervousness to intense satisfaction.  The one thing I know for certain is each new speaker will be hooked.  It's impossible to speak once and forget. Each new speaker will be another wonderful addition to future lineups in community events to come.

I'm especially excited about three new speakers who will represent Saint Louis.  John Dempsey (@jjdemps) has two presentations, Sanil Mhatre (@SQLSuperGuru) has one presentation, and Travis Whitley has one presentation.  I encourage attendees to support them and give them a large audience.  My first SQLSaturday presentation had 5 people (it was on SQL Server and Healthcare, so I expected a small but dedicated audience)!

I'm glad to hear that despite the timing the attendance will be high.  This tells me the interest in SQL learning is very high and there is plenty of demand.  This demand is a win-win for the community, for speakers, and for sponsors.  Last year I unfortunately had to speak and leave, but this year, after my two presentations, I look forward to spending the rest of the afternoon as an attendee.  It will be fun to get a full day of learning and then sit back and relax.

In closing I just want to say enjoy the PASS Summit. I'll be closely following the Twitter feeds and I really like the pictures posted.  I'll be living vicariously through you and wishing I was there.  Also enjoy SQLSaturday #87; it will be a great event and will definitely be on my "need-to-submit" bucket list.  But I really want to take a moment to thank Bill Fellows (@billinkc), who worked hard under a difficult situation to get SQLSaturday #101 up and running.  He's provided a great opportunity for speakers and his work has helped significantly expand the SQL community.  I can't wait to be there!

Thursday, September 8, 2011

SQL Denali Format Function

Maybe I'm going crazy, or maybe I'm paranoid even though I may think no one is following me, but I'm having a heck of a time with the new FORMAT function in SQL Denali.  Try this for me:


DECLARE @d DATETIME = GETDATE();

SELECT FORMAT( @d, 'dd', 'en-US' ) AS Result;
SELECT FORMAT( @d, 'd/m/y', 'en-US' ) AS Result;
SELECT FORMAT( @d, 'dd/mm/yyyy', 'en-US' ) AS Result;

My results are the following:

The rub of the matter is my date is not 8/28/2011.  I ran this on 9/8/2011. What I did do is run it at 8:28 PM.  Apparently the format function is grabbing the time value from getdate() where I expected part of the date.
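My current suspicion (and I'd welcome confirmation) is that the .NET-style format specifiers are case-sensitive: lowercase 'mm' means minutes, while uppercase 'MM' means months, which would explain where the 28 came from. If that's right, this should return what I expected:

DECLARE @d DATETIME = GETDATE();
SELECT FORMAT( @d, 'dd/MM/yyyy', 'en-US' ) AS Result; -- MM = months; mm = minutes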

I always assume it is not a bug but that I'm doing something wrong instead. I would appreciate comments from anyone who can correct the problem for me. Just so you know, I'm running this on Denali CTP3.

Aaron Bertrand has a most excellent blog post on the FORMAT function, but he doesn't use the getdate function.  I'm open to suggestions, but please stop looking in my windows at night.

Thursday, August 4, 2011

7 out of 10 DBAs Say Statistics are Not Accurate

To make a long story short, we (my ace team at Mercy) recently discovered our statistics maintenance job was not up to snuff and was actually ignoring non-clustered indexes and heaps. This was not a good thing.  As we were finding the root cause, some team members created this procedure. The output is the actual rows, sampled rows, and last updated value for statistics in each database where the table row count exceeds a threshold (100,000 rows in the version below).  Of course you can configure the row count however you wish.   For extra credit you can create an SSIS job which will execute this on every server in your organization and send you an email. Here's the link on how to do it.

http://www.sqlservercentral.com/articles/Integration+Services+(SSIS)/61621/

Here's the procedure. I give you the code "as is".  Pardon the inline comments...
______________________________________________________________________

USE [master]
GO
SET QUOTED_IDENTIFIER OFF  --the double-quoted strings below are parsed as literals, so this must be OFF when the proc is created
GO

CREATE PROCEDURE [dbo].[sp_stats_alerts]
as
--gets stats info from sql 2005 and above
Set nocount on
SET FMTONLY OFF

Declare @dbname varchar(100);
Declare @sql nvarchar(4000);
Declare @rowcount bigint;

Create table #tblDbs (
rowNo int
,dbname varchar(100)
)

--#stats tables holds results from DBCC SHOW_STATISTICS() WITH STAT_HEADER
--creating table for sql2005 then below checking for current version of the server then if it is > sql2005
--then adding 2 extra columns for sql 2008 and above
create table #stats   (
[Servername] varchar(128) default @@servername  --128 instead of 25 so longer instance names fit
,[dbname] varchar(100) DEFAULT db_name()
,    [Name] VARCHAR(500)
,   [Updated] VARCHAR(500)
,   [Rows] VARCHAR(500)
,   [Rows Sampled] VARCHAR(500)
,   [Steps] VARCHAR(500)
,   [Density] VARCHAR(500)
,   [Average key length] VARCHAR(500)
,   [String index] VARCHAR(500)
)


--select * from #stats
if ((select left(cast(SERVERPROPERTY('productversion') as varchar(20)),1) )<>9)
alter table #stats add     [Filter Expression] VARCHAR(500), [Unfiltered Rows] VARCHAR(500);


--#dbccQueries table holds dbcc queries which are later executed with an exec() statement
Create Table #dbccQueries
(
RowNo bigint
,dbname varchar(100)
,querytext varchar(1000)
)


insert into #tblDbs select ROW_NUMBER() over (order by name) rowno,  name  from sys.databases where name  not in ('master','tempdb','model','msdb')

--for single database stats uncomment below and comment above statement
--insert into #tblDbs select ROW_NUMBER() over (order by name) rowno,  name  from sys.databases where name   in ('dba')

set @rowcount =@@ROWCOUNT
while (@rowcount >0)   --1
BEGIN  --while1
select @dbname = dbname from #tblDbs where rowNo=@rowcount
--select @dbname


 set @sql=
"USE " + @dbname + "; " +
"select row_number() over (order by a.name) row_num,db_name() dbname,'DBCC SHOW_STATISTICS ( ''' + c.name + '.' + a.name + ''',''' +  b.name + ''' ) with NO_INFOMSGS, stat_header '
from sys.tables a                    
inner join sys.indexes b  on a.object_id = b.object_id                  
inner join sys.schemas c on a.schema_id = c.schema_id
where b.name is not null                
 --where a.name =  @lname
 "
              --select @sql
insert into #dbccqueries exec(@sql)

             
 set @sql=
 "Declare @sqlsub varchar(2000);"  +
 "Declare @rowcountsub bigint;" +
 "USE " + @dbname + "; " +
           
             "select @rowcountsub = count(*) from #dbccqueries
              while (@rowcountsub>0) --2
              Begin --while 2
select @sqlsub= querytext from #dbccQueries where RowNo=@rowcountsub
--select @sqlsub
"
if ((select left(cast(SERVERPROPERTY('productversion') as varchar(20)),1) )=9)
set @sql=@sql + "insert into #stats([Name],[Updated],[Rows],[Rows Sampled] ,[Steps],[Density],[Average key length],[String index]) exec (@sqlsub)"
else
set @sql= @sql +"insert into #stats([Name],[Updated],[Rows],[Rows Sampled] ,[Steps],[Density],[Average key length],[String index],[Filter Expression],[Unfiltered Rows]) exec (@sqlsub)"
set @sql=@sql + "set @rowcountsub = @rowcountsub - 1
           
              End --while 2
              "
--select @sql            
  exec (@sql)
  --select * from #dbccqueries
  truncate table #dbccqueries

set @rowcount = @rowcount -1
End  --while 1

--select * from #dbccQueries order by 2,1
--select * from #tblDbs
PRINT @@servername
PRINT ''
select [Servername] Instance,
dbname as [Database],
Name   as [Stat Name],
[rows] as [Actual Rows],
[Rows Sampled]  as [Sampled Rows],
Updated as [Last Updated]
 from #stats
where [Rows] is not null
and cast([Rows Sampled] as bigint) < cast([Rows] as bigint)  --cast: these columns are varchar, so compare numerically
and cast([Rows] as bigint) > 100000  --row count threshold; adjust as needed

drop table #tblDbs
drop table #dbccqueries
drop Table #stats
______________________________________________________________________
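Once the procedure is created, running it is as simple as:

EXEC master.dbo.sp_stats_alerts;

Because it lives in master with the sp_ prefix, you can call it from any database context.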

Kudos to Travis Whitley and Ravi Rangineni for pulling this together on short notice.

Tuesday, July 19, 2011

Red Gate's Exceptional DBA Award

Along with 4 other DBAs, I received notice of my esteemed spot as a finalist for Red Gate's Exceptional DBA Award.  It reminds me of the time I watched a show on the Discovery Channel where they filmed Navy SEAL training. One of the best lines in the show is when the instructors ask the candidates what second place is and they all reply "First Loser! Sir!".  Well, there are times when being second place, or third, or fourth, or even fifth isn't all that bad.

Don't get me wrong. I'd love nothing more than to win, both for the honor and acknowledgement of years of hard work and for the opportunity to go to PASS, which I've never been able to attend. I met tons of great people at SQLRally and it would be great to see them again and meet new friends.  I also spent an interesting time of my life in Seattle. I lived downtown, worked as a concrete finisher, and basically starved. But it was a great place and I'd love to see how it has changed.

There's some fantastic competition, which makes being a finalist all the more exciting. Colin Stasiuk is extremely active in the community and already has a wonderful following that is hard to compete against.  There is also Tom Hill, who I actually had the pleasure of working with at Anheuser-Busch; he also worked with many of my friends at Monsanto, and they all have good things to say about him. I wish all the candidates the best of luck.

As for me, please go to the Red Gate website and take a look.  Hopefully you'll see a word or phrase or something familiar in my experiences which may help win your vote.  Oh, also, if you vote for me and I end up winning - talk to me and we'll have a beer at PASS. :)

VOTE FOR PEDRO (OR SCOTT)

Tuesday, July 12, 2011

The Skinny on SnapManager for SQL Server

Maybe, but not quite yet

I've been implementing SnapManager for SQL Server in our production environment for about 8 months. Every new system coming online during that time is configured to use SMSQL as the standard backup and recovery tool, and we've also managed to retrofit some existing systems to use it. All in all we have roughly 50 servers running and managed by SMSQL.

During our virtualization project we, meaning the DBA team, were basically told we would use the tool. Our storage was all moving to NetApp, and since lots of dollars changed hands the storage team wanted to squeeze the most out of what NetApp had to offer. We had online conference calls with NetApp discussing the tool, and we had a NetApp engineer onsite to outline how the tool works and how the systems should be configured.

I have to say I was intrigued by the snapshot technology, and I still believe it is a great way to back up databases. Early on, though, I had my reservations. Configuration was difficult (I have an article discussing SMSQL configuration at http://www.mssqltips.com/tip.asp?tip=2294), and I could tell right away that using the SMSQL management console to manage a lot of backups was going to be a problem. Still, I learned it and we implemented it, but as of today I'm a bit older and wiser, and we've recently gone back and revisited our approach.  Let's look at some recent discoveries.

You will begin to have difficulty managing an environment where the servers running SMSQL exceed twenty
(Screenshot: the dreaded hourglass, which you'll see often in large SMSQL environments.)
This is of course a rough estimate. I think of it as the MS Access rule.  You wouldn't normally want more than 10 people using a single MS Access database; likewise, you aren't going to want more than 20 servers listed in the SMSQL management console.  The application bogs down to an unbearably slow pace.  We have about 50 servers listed, and if we close the console and reopen it we won't see the server list again for 30 minutes.

Problems between SnapDrive and virtual machines cause random errors
SMSQL is simply an interface calling SnapDrive. There are known stability problems between SnapDrive and vMotion.  We constantly face errors where SMSQL cannot see the LUNs or the drives are invalid.  This doesn't mean the system is down - it just means SMSQL has lost its way. Unfortunately, backups will not occur.  The worst problem is you may fix this one day, but two days later the same problem will occur on the same server.

Disconnect between SMSQL and SQL Server
SMSQL does not actually "talk" to SQL Server. The only time the two ever meet is when you first configure a job and SMSQL creates and schedules a SQL Server job. The job is a command-line executable which calls the service running on the database server. The list of databases and other backup instructions is hardcoded in the command line. The problem here is if you add or remove a database the job has to be manually updated.  But just because you manually update the job doesn't mean the SMSQL console configuration will recognize the change. Basically, you have to wait until all user databases are created prior to configuring SMSQL on the server. If a database is added or removed, you have to manually reconfigure SMSQL.  Doing this can be a problem for even small shops. Also keep in mind that you may have a successful SQL job but the snapshot backup might still fail. You should not use job success or failure notification as your primary means of determining whether or not the backup was successful.

Configuration requirements make retrofitting servers impractical
SMSQL has strict LUN requirements. I'm not going to discuss them here, but let's just say if you have a large number of servers already in place, then reconfiguring them to use SMSQL may not be worth the effort. We have around 200 production servers, and we estimated that reconfiguring them to use SMSQL would take in excess of a thousand man-hours. That's just not going to happen.

Man hours wasted
With only 50 servers using SMSQL, we have one DBA devoted full time to backup and recovery. Prior to SMSQL we had half of one DBA's time covering 300 servers.  This is due to the random nature of the errors. The backup failure list never shortens because the same problems occur over and over, and NetApp does not yet have a resolution. Throw the application slowness on top of this and we have one over-worked DBA.

So, what's the answer?  We have already decided as an organization to begin moving systems off of SMSQL and back to Idera's SQLSafe.  We just cannot risk our ability to recover our systems. I also want to point out that the NetApp engineer we talked to about these problems has been honest and understanding. So far, NetApp has shown a great desire to put forth the effort to produce a better product. I hope to work with NetApp engineers and walk them through our real-world scenarios.  My hope is they will gain a better understanding of what DBAs require to successfully manage backup and recovery for large SQL Server environments. We still plan to keep all new systems configured as if they will run SMSQL. Maybe one day a useful tool will be made to manage a very useful technology.

Tuesday, June 28, 2011

Microsoft SQL Server's UCP (Utility Control Point)

I truly wanted to love this feature (http://www.microsoft.com/sqlserver/2008/en/us/r2-multi-server.aspx) but, alas, it was not meant to be.  I've been sorely jilted, but maybe you'll find it more useful, so let's start the story of this tragic love affair from the beginning.

Setting up UCP is straightforward. You will need a copy of SQL Server 2008 R2 Enterprise Edition to run the application. Once you have the correct version, it's all a matter of opening up SSMS and clicking on the Utility Explorer:

From there you'll get your standard "do not show this page again" splash screen:


The first thing you need to do is create a Utility Control Point. This is where you'll need to spend the money on Enterprise Edition. Most likely the server you choose will be your primary Central Management Server. In our case it was, and it was also our MDW server.


From here you will specify an account. We used our SQL Server Agent account.  You will then run a validation. The utility requires a number of different validations; some highlights include version 10.50 or higher, the SQL Server edition, checks around a database named sysutility_mdw, and that collection sets must be stopped.


Since the UCP server and any subsequent servers we enrolled were already used in an MDW architecture, the collection set validation caused us some real headaches. Simply stopping the collection set jobs and\or disabling the collection service was not enough to make the validation succeed. You have to manually stop each collection set through SSMS.
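If you'd rather script it than click through each collection set, something along these lines should also work (a sketch using the standard msdb collector procedures; test it before trusting it):

USE msdb;

DECLARE @setId int, @setName sysname;

-- walk every running collection set and stop it
DECLARE cs CURSOR LOCAL FAST_FORWARD FOR
    SELECT collection_set_id, name
    FROM dbo.syscollector_collection_sets
    WHERE is_running = 1;

OPEN cs;
FETCH NEXT FROM cs INTO @setId, @setName;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT 'Stopping collection set: ' + @setName;
    EXEC dbo.sp_syscollector_stop_collection_set @collection_set_id = @setId;
    FETCH NEXT FROM cs INTO @setId, @setName;
END
CLOSE cs;
DEALLOCATE cs;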


Once everything validated we could move on and enroll a server.  You return to the original screen and select the option to enroll a server. The process is basically the same, so I won't bore you with the details and the excessive screenshots.

So now we were ready to hit the ground running and see what we could do with the tool. This is where we ran into the limitations. It's not a long list, but for us they were show-stoppers.

You can only manage a maximum of 25 servers
For us this ended it. A 25-instance maximum for a multi-instance management application! 25 wouldn't even cover 1/3 of the number of servers we would like to manage.  I think that's a malevolent limitation considering you need to purchase Enterprise Edition to even use the tool.

The database name sysutility_mdw is non-configurable
As I mentioned before, we implement MDW on a grand scale. Our MDW database is called...well...MDW. If we had happened to call it sysutility_mdw, then installing the UCP would have required us to wipe it out. When enrolling an instance in UCP that is already using MDW, the UCP enrollment routine will change all the collections to go to sysutility_mdw.  We could change the collection sets to send data, including UCP data, to our MDW database, but then the UCP utility on the manager would not see the MDW database.  It's basically all or nothing. Everything must go to a database called sysutility_mdw.  This is frustrating because the database created for MDW and the database created for UCP are basically the exact same database. Why Microsoft couldn't allow you to call the database whatever you want is beyond me.  My guess is one Microsoft team worked on MDW and another worked on UCP and they didn't like each other.

You cannot configure the 15-minute upload interval
This wasn't too big of a deal for us, but it is frustrating. You can schedule the data collection on enrolled instances, but you cannot change the 15-minute upload interval on the UCP server. Why? Only the boys in blue know for sure.

Default policies are limited and there is no integration with Policy Manager
UCP policies basically cover CPU usage and database file storage, and from what I can tell there is no way to add policies to the UCP dashboard. This is what you are stuck with out of the box:


This might be a cool dashboard to impress a manager, but as a useful tool for managing your critical systems it's worthless.

One other thing to keep in mind before I end.  When you designate the account running the UCP collection set, by default all the cache files will go into that account's temporary user directory on the C: drive. I suggest you change this. Also, do not manually run and upload the UCP collection set job. This throws it out of whack and you'll get primary key violations.  To resolve the error you'll need to delete all the cache files in the cache directory.

Links to all these problems are here:
http://support.microsoft.com/kb/980497
http://technet.microsoft.com/en-us/library/ee210570.aspx

On a positive note, I think UCP has potential and I'll keep it in mind as versions progress. For now I cannot recommend it as a serious tool for multi-instance management, and I would suggest your time is better spent on other technologies.

Sunday, June 19, 2011

Useful Features in SSMS for Managing a Large SQL Server Environment

I really enjoyed presenting at SQLRally about the challenges DBAs face in large SQL environments.  My evaluations were not as stellar as I had hoped (I actually ended up pretty much setting the average bar), but all the comments about my presentation were spot-on.  Mostly they were that the presentation covered too much and was too general.  I had actually said this prior to my presentation, so maybe I'm partly to blame for planting the seed. The past is the past, and that presentation really was designed as an outline to provide fuel for thought - and future blog posts and articles!  This is where this post comes in.

Countless articles have already been written about SSMS features. SSMS has certainly come a long way since the days of Enterprise Manager.  One way Microsoft has advanced the tool is in the realm of managing a large number of instances. I think people overlook some of the features developed in SSMS 2005 and continued into SSMS 2008. I've been managing large environments for years and I'm still discovering new tips and tricks that make my job more efficient. Here are a few notable points.  Again, the details of these can be gathered in other technical postings and some I may elaborate on in future posts, but here's the gist:

1) Use CMS (Central Management Server)
This is a no-brainer for a large environment. Out of all the changes to SSMS, I truly believe this is the single greatest one.  Simply put, CMS allows you to designate a SQL Server as a repository for all your registered servers.  Why is this important?  Because of CMS, all DBAs on a team now have a single, standard list of all the registered SQL Servers in the enterprise. Additionally, in the age of VMs we add a server almost every week (I built 3 last week), and now all I have to do is register it in the CMS and everyone sees it. I used to have to send out a notification telling everyone to register the server and assume they did. Many times people forgot, and then I had a DBA who didn't realize we manage a server. This can prove a bit awkward for after-hours support calls.   There are other advantages to using CMS that I'll get into in later posts.
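As a side benefit, the CMS registration list lives in msdb on the CMS instance itself, so you can query it. A quick sketch (the view names below are the SQL Server 2008 shared-registration views):

-- run this on the Central Management Server
SELECT g.name AS group_name, s.server_name
FROM msdb.dbo.sysmanagement_shared_registered_servers AS s
JOIN msdb.dbo.sysmanagement_shared_server_groups AS g
    ON s.server_group_id = g.server_group_id
ORDER BY g.name, s.server_name;

This is handy for feeding the server list to scripts or reports outside of SSMS.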

2) Using Groups to Add Servers to Multiple Categories
In a large database environment information comes to you in many forms, depending on where the information originates. For example, a non-technical user may inquire about a database problem but have no idea of anything beyond the application name listed as an icon on his desktop.  You also may get requests from internal IT folks, like the server or storage teams, who know nothing about the database but do know the server name.  The solution for this in SSMS is to create multiple groups.  You cannot put the same server name multiple times in one group, but you can add the same name to different groups.  Here are my groupings - your mileage may vary:


Here's my reasoning. "ALL SERVERS" is my list of all the servers in the environment. If someone comes to me with a server name, I look here to see if we support it. If it's not listed then we don't support it.

Under "APPLICATIONS" is a list of more groups with names like "SHAREPOINT", "XEROX", and "LENEL".  Each group is the general name of the application and under that group is the registered server.  This allows me to quickly find the server if a user refers to the system by the application name.  the "DEPARTMENT" group I don't use too much but it allows me to separate out internal systems from external systems. In this group there are other groups listed such as "CITRIX TEAM", "MESSAGING TEAM", and "SERVER TEAM".  Each group has their own list of registered servers.

"LOCATION" is helpful if you have a geographically distributed environment. If  a user knows no other information except for where their server is physically located then this group allows me to have a fighting chance of tracking it down.

"NONPROD" and "PRODUCTION" help me separate our servers for monitoring and support. The screenshot doesn't show it but under "PRODUCTION" I have groups named "SQL SERVER 2000", "SQL SERVER 2005" and "SQL SERVER 2008" which is used for Policy Management.

Finally, there is "POC" which stands for "proof of concept". These are servers created by various teams for testing or product selection.  I keep these separate because they tend to (hopefully) only stick around for a short amount of time but I want to keep track of them so I can remind the users that they eventually need to go away to free up licenses.

3) Use Templates (and Soon-to-Arrive Snippets)
Templates, I've found, are a rarely used feature, and I've had difficulty getting my team to use them.  Templates in SSMS lack a lot of functionality, but I think they're a good start.  Here is a link to a Tim Ford article at MSSQLTips thoroughly explaining templates in SSMS: http://www.mssqltips.com/tip.asp?tip=1465.  Right now templates aren't great for collaboration, but they do allow you to somewhat standardize your scripts and easily access them. I keep a list of a few key scripts and pass them out to my team when I update any of them.  I know Microsoft has plans to expand the idea using snippets.  Snippets will be similar to templates but allow for a centralized source for code.  Aaron Bertrand has a good summary on his blog here: http://sqlblog.com/blogs/aaron_bertrand/archive/2010/11/23/sql-server-11-denali-the-new-vs-shell.aspx
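To give a taste of why templates are worth the trouble, here's a trivial one of my own making; the <name, type, default> tokens are the SSMS template parameter syntax, and Ctrl+Shift+M prompts you to fill them in:

-- Template: list sessions for a given login
SELECT session_id, login_time, host_name, program_name
FROM sys.dm_exec_sessions
WHERE login_name = N'<login_name, sysname, sa>';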

These 3 simple items will do wonders to help you manage your large environment.  They also help keep your team in sync.  The Central Management Server is the groundwork for implementing additional services. In another blog post I'll talk more about using services like Policy Management, MDW (Management Data Warehouse), and Multi-Instance Management to help monitor large, complex environments.

Wednesday, May 18, 2011

When Databases Get Depressed

I received this support ticket today.  If I had gotten there sooner I might have been able to prevent this tragedy.

Sunday, May 15, 2011

SQLRally Behind Us

Despite some inclement Orlando weather causing flight delays, I would say my first SQLRally was a success. The information I learned from the presentations, along with the people I met and the experience I gained presenting, made for a 4-day whirlwind adventure.

DBAs new to SQL Server need to understand just how special the SQL Server community truly is compared to other communities. It isn't just a matter of having more knowledgeable presenters or more swag to give away; it is their helpfulness and hard work focused on spreading knowledge and advice.  It is the SQL Server community's willingness to take chances on new speakers (like me) and give us an opportunity to tell our stories. It would be easy for only a few big names to dominate the discussion - but they don't. It is great to have them there (Ed Wilson of the Scripting Guys comes to mind, as does Kevin Kline's blog), but it is also good to see new faces with different perspectives.

After the SQLRally I am both energized and exhausted. I still have lots to do. I'll be working on some VM articles as well as continuing to work on the book revision (T-SQL).  I will also be posting blogs, and maybe some articles, based on the great feedback I received from people who attended my presentation. I still regret I wasn't able to answer every question and talk to everyone who needed to talk.

For now, though, I want to give my sincere thanks to all who voted for my presentation and to everyone who attended, as well as those who said they wanted to attend but couldn't. I had a fantastic time, and it is a good feeling to finally be able to give back to a community that has given so much.

Friday, April 8, 2011

SQLRally Day Two

I'm guessing day two will be a difficult day after a night enjoying the sights and sounds of Orlando. I plan on trying to pace myself, but I've never been too good at doing so in the past.  These are the sessions I intend to participate in:

8:30 -  I plan to attend Kendra Little's presentation "Big and Tall: When to Partition".  The reason is two-fold. I'd like to see Kendra present and I learned yesterday I'll be heading up a project to configure a 10 TB SQL database which will contain PACS data. I see partitioning in my future.
9:40 - "SQL Server Troubleshooting via DMV's" by Steve Schneider.  I'm always excited about learning the ins and outs of DMVs. I hate to admit it but I don't use them enough. Bad DBA!
11:15 - Since my presentation is about managing a large number of servers, I have to go to John Sterrett's presentation "Evaluate Your Daily Checklist Against 100+ Instances of SQL Server While You Get a Cup of Coffee".  My only regret is that John's presentation isn't before mine, so I can't steal any of his material. Just kidding, John.
1:50 - After a lunch of dead cow and fried bacon I'll attend Thomas Grohser's "SQL Server Storage Engine Under the Hood: How SQL Server performs IO".  I rarely have time on my own to dive deep into the database engine black box so I hope to take this opportunity to have the material forcibly injected into my skull.
3:00 - Ed Wilson's "Windows PowerShell Best Practices for SQL DBA's". This should be a good complement to Aaron's presentation. After both sessions I intend to be able to write 1,000 lines of PowerShell code Friday night in my hotel room.
4:35 - Lastly I'll be at Jeremiah Peschka's "Fundamentals of SQL Server Internals".  I'm due for a refresher course especially since none of the previous stuff I've learned has managed to be retained in my brain.

That's it. Now that I put it all down in writing, it seems like one heck of a whirlwind of knowledge. It'll be fun. I'm already stocking up on Four Loko!  Hope to see you there!

Thursday, April 7, 2011

I've Been Very Selfish

I've been so concerned with cranking out my SQLRally presentation (yeah, that's right - it's not completed yet; I like to live on the edge) that I haven't thought about the other sessions I plan on attending. Since I'm new to the SQL Server community, I'll base my decisions not only on the content of the presentation but also on the speaker.  I enjoy public speaking and I look forward to learning from others who have been doing it much longer than I have.

Thursday, May 12.
8:00 AM - I'll be attending Aaron Nelson's presentation "The Dirty Dozen: Powershell Scripts for the Busy DBA".  Next to VMs, I believe PowerShell is the "thing" right now. Also, I'm a DBA AND I'm busy, so it makes perfect sense.
9:40 - All the presenters are new names for me so I'm going with the most relevant topic.  I'll attend Eddie Wuerch's "Troubleshooting Performance Problems by Reading the Waits". This will be a good tie-in to my attempt at getting Ignite licenses purchased.
11:15 -  Looks like I'll be at Wesley Brown's presentation on "Understanding Storage Systems and SQL Server". Maybe he can help me lift the lid on the black box.
1:50 - Sorry folks, as much as I would love to see the other presentations, I can't be in 2 places at once. Obviously <smirk> the Enterprise DBA presentation is a must-see.
3:00 - By this time I'll be wearing a bit thin.  After absorbing tons of knowledge, eating lunch, and pontificating I have a feeling I'll be hitting a wall. I'll attend Stacia Misner's "Data Visualizations in SSRS 2008 R2" and I'm sure to feel much better.
4:35 - As the final coup de grâce I'll attend Aaron Bertrand's "What's New in SQL 2011 (Denali)".  The reason is both Aaron and the topic. I'm in the middle of revising Kathi Kellenberger's classic tale about T-SQL (Beginning T-SQL 2008) for 2011. I want to make absolutely sure I don't miss anything.

That's it for day one. One last thing before I talk about day two tomorrow. I'll be attending Grant Fritchey's "Query Performance Tuning from Start to Finish" pre-conference seminar because you can never get enough performance tuning training!

Tuesday, April 5, 2011

Where Have All the Good BAs Gone?

Something I rarely see discussed in technical circles is the role of a Business Analyst.  Now, what's a Business Analyst?  That's the tricky part. If you aren't in a large corporation that has multiple departments as "customers" or that handles a portfolio of vendors, then you may not have come across one.  A BA is the liaison between the technical team and the customer\vendor.  They are part IT and part project manager.

It's not an enviable role. They perform functions like software upgrades, troubleshooting application errors, and handling customer questions about performance.  They tend to be tied to a specific set of applications. They have to be communicative, technical, patient, and self-starting. Unfortunately, many of them are none of these, and when they're not it creates havoc for a DBA.

When a BA goes rogue, it basically means you end up being involved in poorly managed upgrade projects, you have to perform all the initial troubleshooting footwork for an application you know little about, customers contact you directly for minor problems that have nothing to do with the databases...I think you get the point. But then you might ask, "shouldn't you be doing these yourself anyway?"  The truth of the matter is it's a resource issue. As a DBA in a large company you may be supporting 300+ different applications and thousands of customers.  You have to have a few good BAs to help filter the noise.

I've come to believe the primary problem is hiring managers don't stress the technical requirements of a BA enough.  When a newly hired BA doesn't even know how to configure an ODBC connection, then we're in for a world of hurt. The other problem is they don't get paid enough. You aren't going to get highly technical people moving into a BA role if you make them take a 40% pay cut.

Still, when you see and work with a good BA, you know it. They tend to give you clear, documented action items. They do all the preliminary research so that you know, once they come to you, it's a database issue and not a security, network, or storage problem.  They have your back when vendors try to question the motives of the DBA team and when they inevitably try to gain access to database systems without your knowledge.  A good BA is invaluable to a company and a DBA team. I just wish there were more of them.

Comments? I'd like to hear your experiences!

Wednesday, February 2, 2011

MSSQLTip Article Part Deux

Here is a link to my most recent MSSQLTip article for February. I'm still discussing the ins\outs of virtualizing SQL and using SnapManager for SQL from NetApp. Enjoy!

http://www.mssqltips.com/tip.asp?tip=2294&home

Monday, January 31, 2011

PASS SQLRally

Wow!  My presentation on the Enterprise SQL DBA was voted into the first PASS SQLRally in Orlando, FL. This will be a wide-ranging presentation discussing how best to manage a large number of SQL Servers (300+).  I will also discuss things like team dynamics, working with vendors, working with customers, and the politics of large corporations. The presentation dates are May 12-13.

Now comes the first lesson in corporate politics: how to get funding for a trip...

Thursday, January 6, 2011

MSSQLTip Article

Here's a link to my most recent MSSQLTip article.
http://www.mssqltips.com/tip.asp?tip=2220&home

I've had a lot of good responses to it. This is a tough and confusing topic for DBAs (including me!). I expect to write more to help generate more discussion and good advice.

Tuesday, January 4, 2011

Etiquette for the New Year

It's difficult as a DBA to come back from an extended holiday vacation.  You go to work and find your email full of non-critical but critically annoying monitoring event notifications, messages from business analysts asking about project updates, infrastructure folks looking to have SQL Server installed immediately, and a lot of drive-bys. So, what are some good pointers?  Here are a few I've come up with:
  1. Think before executing that drop statement. New Year revelries can dull the brain and lead to problems.
  2. Make sure backups are working. Though always a good thing, you want to be extra vigilant and correct backup failures immediately, because they might have been occurring for a while if you've been away (a quick check script is at the end of this post).
  3. Be nice to people. No sense in spreading the tension. No one else wants to come back to the office either, so you might as well not make things worse for them.
  4. Breathe deeply. Take your time. Organize your inbox into what is most critical and work your way down the list.
  5. Don't expect a whole lot from anyone the first day back. Take your time building your expectations. Others won't want to work overtime on Monday and hopefully they won't expect it from you.
  6. (Optional) Liquid lunches are perfectly acceptable on the first day back from the New Year holiday. If your boss complains just tell them I said so.
These pointers may be good anytime, but put them in bold for the first day back from vacation. Be careful out there! You only get one day to set an example for the rest of the year.
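For pointer number 2, here's the kind of quick backup check I mean (a sketch against msdb; adjust the filter for your own shop):

-- most recent backup of any type, per database
SELECT d.name AS database_name,
       MAX(b.backup_finish_date) AS last_backup
FROM master.sys.databases AS d
LEFT JOIN msdb.dbo.backupset AS b
    ON b.database_name = d.name
WHERE d.name <> 'tempdb'
GROUP BY d.name
ORDER BY last_backup;  -- NULLs (never backed up) sort to the top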