T-SQL Tuesday #104

My thanks to Bert Wagner for his chosen T-SQL Tuesday topic, Code You Would Hate To Live Without. It was just enough of an excuse to dust off the cobwebs here and get back to posting.

Anyway, since half of my time is spent in C#, I thought I’d venture into that world for my response. I’ll share a couple of common extensions that I include in most of my projects. Extensions, as their name implies, extend the functionality of existing objects. Here is a code snippet with a couple of extensions I typically add:

namespace myproj.Extension
{
  public static class Extensions
  {
    // Returns true if val matches any of the supplied values.
    public static bool In<T>(this T val, params T[] values) where T : struct
    {
      return ((System.Collections.Generic.IList<T>)values).Contains(val);
    }

    public static object ToDbNull(this object val)
    {
      return val ?? System.DBNull.Value;
    }

    public static object FromDbNull(this object val)
    {
      return val == System.DBNull.Value ? null : val;
    }
  }
}

The first method enables me to easily search enumerations for a given value. For example, if I’ve defined this enumeration:

namespace myRacingProject.Enum
{
  public enum Series
  {
    None = 0,
    Indycar = 1,
    IndyLights = 2,
    ProMazda = 3,
    Usf2000 = 4
  }
}

Then I could use the extension like this:

if (mySeries.In(Enum.Series.ProMazda, Enum.Series.Usf2000)) myChassis = "Tatuus";

As for the other two methods, well… When is a null not a null? When it’s a System.DBNull.Value, of course! SQL Server pros who have spent any time in the .NET Framework will recognize this awkwardness:

var p = new System.Data.SqlClient.SqlParameter("@myParam", System.Data.SqlDbType.Int);
p.Value = (object)myVar ?? System.DBNull.Value;

With the extension, the second line becomes:

p.Value = myVar.ToDbNull();

Similarly, when reading, this:

var myInt = (int?)(myDataRow[myIndex] == System.DBNull.Value ? null : myDataRow[myIndex]);

Becomes this:

var myInt = (int?)myDataRow[myIndex].FromDbNull();

They’re not earth-shattering improvements, but my real point is that extensions are an often-overlooked feature that can improve your quality of life as a developer. Anytime you find yourself writing the same bit of code over and over, especially if that bit is rather unsightly, you might consider making it an extension.

Want to know more? Here ya go: https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/extension-methods

#PASS24HOP

In the past couple of years, I’ve had the opportunity to speak at a few SQLSaturdays, and I’ve presented to my local chapter, IndyPASS, a couple of times. A few weeks ago I gained a new experience, one that I had no idea would be so timely: I presented one of my sessions to YooperPASS, a new chapter in the Upper Peninsula of Michigan. It was my first online session for PASS. I’ve presented online before at Salesforce, but that was to an internal audience. This was a new session, presented for the first time, to a bunch of strangers. Fortunately, the YooperPASS folks are very welcoming, and I think it went very well.

I mentioned this was timely, though, didn’t I? Well, that’s because I’ll be giving my second ever PASS virtual session on Thursday, the 20th. I’m a part of the 24 Hours of PASS: Summit Preview 2017! The same session I gave to YooperPASS, A Guided Tour of the SqlClient Namespace, will also be part of this webinar event. Because I’ll be presenting this session to a much wider audience now, I’d like to expand on my approach to this session.

First, I call it a “Guided Tour” because I’m cherry-picking the areas that I think deserve special attention. A more comprehensive session would be impractical, so I had to cut things off somewhere. What made the cut are four topics with valuable, specific takeaways. A good technical session should give attendees something today that they can put into practice tomorrow, and I’d like to think I accomplished that with these topics.

Second, my choice of topics also gives me a chance to get across a broader message about how to scale ADO.NET code. Performance is always a focus of mine, and my experience has taught me that there is frequently a trade-off between application code and the database. One piece of code may perform better than another under a microscope, but its effect on overall system performance may actually be worse. In this session, I try to draw attention to this trade-off.
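To make that concrete, here’s a hypothetical sketch of the kind of trade-off I mean (the connection string, the dbo.Orders table, and the IDs are placeholders, not code from the session). Filtering rows in application code can look perfectly fine in isolation, but it drags every row across the network; pushing the predicate into the query lets SQL Server do that work where the data lives:

// Hypothetical illustration only: connection string, table, and IDs are placeholders.
var connectionString = "Server=.;Database=myDb;Integrated Security=true;";
var customerId = 42;

// Option 1: pull every row and filter in application code. It looks harmless under a
// microscope, but every row crosses the network and gets materialized in the app.
var all = new System.Collections.Generic.List<int>();
using (var cn = new System.Data.SqlClient.SqlConnection(connectionString))
using (var cmd = new System.Data.SqlClient.SqlCommand(
  "SELECT OrderId, CustomerId FROM dbo.Orders", cn))
{
  cn.Open();
  using (var rdr = cmd.ExecuteReader())
  {
    while (rdr.Read())
    {
      if (rdr.GetInt32(1) == customerId) all.Add(rdr.GetInt32(0));
    }
  }
}

// Option 2: push the predicate to SQL Server with a parameter. Less data moves, and the
// optimizer can use an index on CustomerId.
var filtered = new System.Collections.Generic.List<int>();
using (var cn = new System.Data.SqlClient.SqlConnection(connectionString))
using (var cmd = new System.Data.SqlClient.SqlCommand(
  "SELECT OrderId FROM dbo.Orders WHERE CustomerId = @customerId", cn))
{
  cmd.Parameters.Add("@customerId", System.Data.SqlDbType.Int).Value = customerId;
  cn.Open();
  using (var rdr = cmd.ExecuteReader())
  {
    while (rdr.Read()) filtered.Add(rdr.GetInt32(0));
  }
}

For a handful of rows you’d never notice the difference; at scale, the second version is far kinder to the network and the server.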

Finally, this is my attempt at an introductory session on an often-overlooked topic. Let’s face it, ADO.NET has been around a while. Because of that, I think the information available on it is inconsistent. StackOverflow didn’t exist until long after ADO.NET was introduced. Yes, there are some fine older resources on ADO.NET, but enterprises have changed a lot since many of them were written. Most current resources focus on Entity Framework, which is arguably just an abstraction layer over ADO.NET. So my session is intended to close the gap.

Anyway, I’m looking forward to being a part of this 24 Hours of PASS. If you tune into my hour, I hope it’s worth your time.

Godspeed,

J

Threading

A while back, I wrote an app that spawned a collection of threads to run some work in parallel, using the resources in the System.Threading namespace of the .NET Framework. Some time after that, I worked on another app that also had a threading component. This second app was reviewed by another developer from outside my immediate circle. He asked, “Why didn’t you use the System.Threading.Tasks namespace?” Uhh… because I didn’t know it existed?

That namespace was introduced in .NET Framework 4 – not exactly recent history – but I had somehow missed it for quite a long time. There are a few reasons for that, but the one I’d like to focus on here is a trap that I think catches many developers at one time or another: we think we have it all figured out. While we are, to some degree, practical mathematicians – professionals who assemble algorithms to meet requirements – we are also creators. Our code is our art. And oftentimes, we don’t have the humility necessary to accept the possibility that our art isn’t beautiful. So we shy away from having the right people check our work.

This reminds me of an old saying: If you’re the smartest person in the room, then you’re in the wrong room.*

Now, this is not a commentary on my current team. I work with some really smart people, and I’m very grateful for that. But while my teammate may be one of the best PHP or Node.js coders I know, that doesn’t necessarily translate to an expertise with the .NET Framework. The true test is this – no matter how smart they are, if they’re not catching my mistakes, then I’m not being held accountable.

Lesson 1: Make sure someone’s catching your mistakes. If they’re not, then do you really think the reason is that you’re not making any?

So, back to the two apps… After the other developer’s feedback, I reworked the second one prior to release, and it passed its code reviews. The first app, meanwhile, developed some bad behavior in production. There was definitely a race condition of some sort, but I couldn’t seem to nail down where it was. I made a couple of adjustments to the code, but nothing seemed to bite. Of course, I couldn’t reproduce it in testing either.

Finally, I ripped out the threading code entirely and replaced it with nearly identical code based on System.Threading.Tasks. I was concerned about the risk of introducing more bugs, about the fact that I was still unable to reproduce the problem, and about how long it had been a problem, so I tried to remain as faithful to the original design as possible. And, yeah, honestly, I crossed my fingers.
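For a rough idea of the shape of that change, here’s a minimal before-and-after sketch; workItems and doWork are placeholders, not the actual app code:

// Sketch only: workItems and doWork stand in for the app's real work.
var workItems = new[] { 1, 2, 3, 4 };
System.Action<int> doWork = item => System.Console.WriteLine(item);

// Before: fan out the work on manually created threads, then join them all.
var threads = new System.Collections.Generic.List<System.Threading.Thread>();
foreach (var item in workItems)
{
  var t = new System.Threading.Thread(() => doWork(item));
  threads.Add(t);
  t.Start();
}
foreach (var t in threads) t.Join();

// After: the same fan-out/fan-in using System.Threading.Tasks.
var tasks = new System.Collections.Generic.List<System.Threading.Tasks.Task>();
foreach (var item in workItems)
{
  tasks.Add(System.Threading.Tasks.Task.Run(() => doWork(item)));
}
System.Threading.Tasks.Task.WaitAll(tasks.ToArray());

Among other benefits, tasks run on the thread pool and surface exceptions through the Task object, which removes some of the sharp edges of managing raw threads yourself.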

Once this new version was released, the problem was gone.

Lesson 2: System.Threading.Tasks really is better than System.Threading.

I’ll never know exactly what fixed the problem. I could keep researching it, but at this point the cost isn’t worth the benefit. My takeaway was that the new stuff simply works better. Whether that’s because it’s easier to use the right way (and harder to use the wrong way), or because its internals are less buggy, or some combination thereof, the end result is the same. I hope that’s old news to anyone reading this, but I wanted to share my experience just in case.

* I was unable to identify with certainty the source of this phrase. The leading candidate I found was 1962 Nobel Laureate James Watson.

Now versus UtcNow

This is a minor topic, but the difference was so drastic when I found out about it that it’s definitely worth sharing. Consider the following code:

var begin = System.DateTime.UtcNow;
for (var i = 0; i < 10000000; i++)
{
  var foo = System.DateTime.Now;
}
System.Console.WriteLine(System.DateTime.UtcNow.Subtract(begin).TotalMilliseconds);
System.Console.ReadLine();

The result on my machine was 9150 milliseconds. Now try this:

var begin = System.DateTime.UtcNow;
for (var i = 0; i < 10000000; i++)
{
  var foo = System.DateTime.UtcNow;
}
System.Console.WriteLine(System.DateTime.UtcNow.Subtract(begin).TotalMilliseconds);
System.Console.ReadLine();

The result was 152 milliseconds. Really. Not kidding. Yes, there are better ways of measuring, but that one worked well enough in this case. Apparently, the framework uses UTC natively, and then converts to local times when it matters. And that conversion comes with a huge cost!
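If you’d rather not eyeball DateTime subtraction, a Stopwatch-based version of the same rough measurement might look like this (it’s still just a sketch, and the numbers will vary by machine):

var sw = System.Diagnostics.Stopwatch.StartNew();
for (var i = 0; i < 10000000; i++)
{
  var foo = System.DateTime.UtcNow;  // swap in System.DateTime.Now to compare
}
sw.Stop();
System.Console.WriteLine(sw.ElapsedMilliseconds);
System.Console.ReadLine();

The practical upshot is the same either way: keep timestamps in UTC internally, and call ToLocalTime() only when a person actually needs to read them.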

Now you know. And knowing is half the battle.