Albuquerque SQL Saturday

I’m excited to announce that I’ll be presenting “Fast and Furious Dynamic SQL” at SQL Saturday #358 in Albuquerque on February 7th, 2015. This will be my first time as a PASS presenter, so please save your best heckling and rotten vegetables for the occasion.

The session will cover various tips and tricks that are generally related to running sp_executesql from within a stored procedure. It will focus on performance, security, and maintainability.

Viewing the Plan Cache

I’m working on a script and wanted to know how SQL was caching the query plan. So I wrote this little snippet. The only real value I’m adding here is getting the XQuery correct to pull out the SQL itself.

-- Clear the plan cache first (dev only!)
DBCC FREEPROCCACHE;

-- Run your query here

-- Then pull the cached plan, including the SQL text, back out
SELECT cp.usecounts,
        cp.objtype,
        qp.query_plan,
        qp.query_plan.value(N'(//@StatementText)[1]', N'NVARCHAR(MAX)') AS statement_text
    FROM sys.dm_exec_cached_plans AS cp
        CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp;

Of course, I don’t recommend running DBCC FREEPROCCACHE in a production environment. But in a dev environment, it makes things easy.
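If flushing the whole cache feels too heavy-handed even in dev, a gentler sketch is to filter the cache down to just the query you care about by matching on its text. This assumes your query contains some recognizable marker string (the literal below is hypothetical):

```sql
-- Find cached plans whose SQL text matches a pattern,
-- instead of flushing the entire plan cache.
SELECT st.text,
       cp.usecounts,
       qp.query_plan
  FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
  WHERE st.text LIKE N'%MyDistinctiveLiteral%'          -- hypothetical marker
    AND st.text NOT LIKE N'%dm_exec_cached_plans%';     -- exclude this query itself
```

The NOT LIKE clause is there because this diagnostic query ends up in the cache too.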

Robinson’s Law

I coined this a while back. I just came across it again, and thought I’d share it. How many of us know this to be true?

“An otherwise inexplicable bug will become easily apparent to the programmer who caused the bug and has subsequently been searching in vain for the bug only and immediately after he announces to his peers that he cannot find the bug, admits defeat, and asks for help.” – Robinson’s Law

Pages Used

One of the items in my toolbox is this simple query that returns the number of pages used by different tables in the database. The nice thing is that it gives one simple number that includes indexes, LOB data – basically the whole table.

SELECT s.name AS schema_name,
       o.name AS table_name,
       SUM(ddps.used_page_count) AS used_pages
  FROM sys.dm_db_partition_stats AS ddps
    INNER JOIN sys.objects AS o
      ON ddps.object_id = o.object_id
    INNER JOIN sys.schemas AS s
      ON o.schema_id = s.schema_id
  WHERE o.is_ms_shipped = 0
  GROUP BY s.name, o.name
  ORDER BY SUM(ddps.used_page_count) DESC;

I just used this on a database today to find that there are some leftover tables from a migration a while back that are eating up a considerable amount of space. Time to run some DROP TABLE scripts!
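For context, each page is 8 KB, so it’s easy to turn a page count into megabytes. A quick sketch for sizing up a single suspect table (the table name here is hypothetical):

```sql
-- used_page_count is in 8 KB pages; 128 pages = 1 MB.
SELECT SUM(used_page_count) AS used_pages,
       SUM(used_page_count) * 8 / 1024.0 AS used_mb
  FROM sys.dm_db_partition_stats
  WHERE object_id = OBJECT_ID(N'dbo.LeftoverMigrationTable'); -- hypothetical
```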

Data Overflow

Up until a couple of weeks ago, my impression of Stack Overflow was simply that of a site that showed up in my Google search results when I was investigating coding issues. Sometimes I found the answer I needed there, sometimes not. And then, during one of the sessions at PASS Summit 14 – was it one of Kendra Little's? – the presenter mentioned that Stack Exchange makes their data available as an anonymized dump, hosted by the Internet Archive. It’s quite a large chunk of data – I’m grabbing just the Stack Overflow portion now and intend to use it for some presentations that are in the works.

Now versus UtcNow

This is a minor topic, but it was such a drastic difference when I found out about it, that it’s definitely worth sharing. Consider the following code:

 var begin = System.DateTime.UtcNow;
 for (var i = 0; i < 10000000; i++)
 {
     var foo = System.DateTime.Now;
 }
 Console.WriteLine((System.DateTime.UtcNow - begin).TotalMilliseconds);

The result on my machine was 9150 milliseconds. Now try this:

 var begin = System.DateTime.UtcNow;
 for (var i = 0; i < 10000000; i++)
 {
     var foo = System.DateTime.UtcNow;
 }
 Console.WriteLine((System.DateTime.UtcNow - begin).TotalMilliseconds);

The result was 152 milliseconds. Really. Not kidding. Yes, there are better ways of measuring, but that one worked well enough in this case. Apparently, the framework uses UTC natively, and then converts to local times when it matters. And that conversion comes with a huge cost!

Now you know. And knowing is half the battle.

/* Why didn’t I think of this years ago? */

So, I attended the PASS Summit last week in Seattle. If you’ve never been, I highly recommend it. Especially if it’s in Seattle. Yes, the character of the town is basically dark, rainy, and uphill both ways. But the coffee is good, the seafood is better (especially for a midwesterner like me), and the place is just simply one of the nicer downtowns I’ve seen.

But the PASS Summit itself is also worth it, Seattle or not. You pick up a lot of great stuff, much of it at a very deep level. Adam Machanic’s pre-con on Tuesday, for instance, was such a deep dive into parallelism that my eyes were glazed over by the end of it.

And then there are the little things. The stuff that makes you want to kick yourself for not thinking of it before.

I write a lot of dynamic SQL. I have tons of stored procedures that include this little gem:

DECLARE @stmt NVARCHAR(MAX) = N'';

And then hundreds of lines of conditionals and string concatenation and parameter substitution, followed by sp_executesql.

But what had never occurred to me was this:

DECLARE @stmt NVARCHAR(MAX) = N'/* dbo.NameOfCallingStoredProcedure */ ';


And now when it’s six months later, and I get a call from a DBA who’s trying to troubleshoot a query plan that’s gone sideways, neither of us has to guess what’s actually generating that query – the name of the calling procedure is sitting right there at the front of the statement text.
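Putting it together, here’s a minimal sketch of the pattern. The procedure, table, and parameter names here are all hypothetical; the point is just the comment at the front of the string:

```sql
CREATE PROCEDURE dbo.GetOrders          -- hypothetical procedure
    @CustomerId INT = NULL
AS
BEGIN
    -- The comment tags every cached plan with its source.
    DECLARE @stmt NVARCHAR(MAX) = N'/* dbo.GetOrders */ ';

    SET @stmt += N'SELECT OrderId, OrderDate FROM dbo.Orders WHERE 1 = 1';

    IF @CustomerId IS NOT NULL
        SET @stmt += N' AND CustomerId = @CustomerId';

    EXEC sys.sp_executesql @stmt,
        N'@CustomerId INT',
        @CustomerId = @CustomerId;
END;
```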

It’s the little things, sometimes, that make the big difference.

By the way, this came from Jeremiah Peschka’s AD-302 session, Dynamic SQL: Build Fast, Flexible Queries.

Wide open until you see God, then brake

And so it begins…

Who am I? Professionally speaking, a developer. A software engineer. I write code. I’ve been doing it for years, in a few different languages. I’m sure I’ll bring up various pieces of my experience as time goes on, but these days I write some of the core code – the plumbing, if you will – that makes up the Salesforce Marketing Cloud. It’s a pretty big shop, and I’d like to think I’ve learned a few useful things along the way. So I’ll try to share some of those things here.

That means that the main point of this blog is to talk about code – specifically, code that acts on big data. It’s going to be primarily C# and Transact-SQL, because that’s how I spend most of my time these days. But not all of it.

Anyway, if it interests you, great! Thanks for spending a little time here. I’d appreciate your feedback.

As for the title of this post, well, that’s how I do things. Checkers or wreckers, baby. Godspeed!