T-SQL Tuesday #106

This is a response to T-SQL Tuesday #106, Trigger Headaches or Happiness, by Steve Jones.

I can recall only one time in at least the past decade that I’ve found triggers to be useful. It involves data migration.

The problem: You have a massive, high-activity table. Let’s call this Table_A. You need to make significant changes to it. For example, the clustered index needs to change. How do you accomplish this?

The solution: Create a script that does the following (a minimal T-SQL sketch of the key steps follows the list):

  1. Create a second table, which we’ll call Table_B. This one will eventually become the new Table_A, so design it with your changes in mind.
  2. Create a third table, which we’ll call Table_C. This one is like Table_A as it is now, except that it includes an additional identity column, and is clustered on that column. Assuming there’s an existing clustered index on Table_A, recreate that as a non-clustered index on Table_C. Depending on how Table_A is updated, you may need additional columns to track what updates occur.
  3. Create a trigger on Table_A. This trigger duplicates all changes in Table_A to Table_C.
  4. Looping via a suitable batch size for your environment, write all rows from Table_A to Table_B.
  5. Looping again, write all rows from Table_C to Table_B (taking into account the appropriate insert/update/delete logic for your situation). Note where you stopped with Table_C, the “high water mark” for that identity column.
  6. Call sp_rename to change Table_A to Table_D, then again to change Table_B to Table_A.
  7. From the high water mark, write from Table_C to the newly-renamed Table_A.
  8. My favorite part: Drop Table_C, Table_D, and the trigger.
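
To make the moving parts concrete, here’s a minimal sketch of Steps 3, 4, and 6, assuming Table_A is clustered on an Id column and has a single Payload column. The Id and Payload columns, the ChangeType tracking column, the trigger name, and the batch size are all placeholders for illustration; your schema, indexes, and merge logic will differ.

-- Step 3: duplicate every change on Table_A into Table_C.
-- For an UPDATE, both pseudo-tables are populated, so the change is
-- captured as a delete/insert pair for Step 5 to reconcile.
CREATE TRIGGER trg_TableA_Capture ON dbo.Table_A
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
  SET NOCOUNT ON;
  INSERT INTO dbo.Table_C (Id, Payload, ChangeType)
  SELECT d.Id, d.Payload, 'D' FROM deleted AS d;
  INSERT INTO dbo.Table_C (Id, Payload, ChangeType)
  SELECT i.Id, i.Payload, 'I' FROM inserted AS i;
END;
GO

-- Step 4: copy Table_A to Table_B in batches, keyed on the clustered index.
DECLARE @lastId int = 0, @batch int = 10000, @rows int = 1;
WHILE @rows > 0
BEGIN
  INSERT INTO dbo.Table_B (Id, Payload)
  SELECT TOP (@batch) a.Id, a.Payload
  FROM dbo.Table_A AS a
  WHERE a.Id > @lastId
  ORDER BY a.Id;
  SET @rows = @@ROWCOUNT;
  IF @rows > 0 SELECT @lastId = MAX(Id) FROM dbo.Table_B;
END;
GO

-- Step 6: the brief window where the table doesn't exist.
BEGIN TRANSACTION;
EXEC sp_rename 'dbo.Table_A', 'Table_D';
EXEC sp_rename 'dbo.Table_B', 'Table_A';
COMMIT;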

There are caveats to this method, of course, but they have been acceptable in my situations. They are:

  • The table is unavailable (non-existent, really) between the two renames in Step 6. This is an extremely brief window, but it does need to occur. Also, creating the trigger requires a brief lock on the table, which may present a problem.
  • Step 7 is present so that changes made between Steps 5 and 6 are carried over to the new table. However, that carry-over happens after the new table is already active following Step 6, meaning that the following scenario is possible:
    • A row is updated in Table_A, and the change is carried over to Table_C.
    • The renames occur.
    • The same row is updated in the new Table_A.
    • The second change is overwritten by the first change.

If the table you wish to migrate has a considerable number of updates and deletes, then this solution may present a data integrity problem. However, for insert-heavy tables, it has proven to work very well. Also, a second trigger on the new table and some additional logic could circumvent the second issue I described, depending on your situation.

So there you go – a use for triggers.

Godspeed!

Running PowerShell files in SQL Operations Studio

As I’ve used SQL Operations Studio more and more, I’ve also finally been using PowerShell in more situations. Given that I like the editor and that there’s a built-in terminal, I’ve been running PowerShell in my Ops Studio instance. But for a while I didn’t have a slick way of running an entire PowerShell file in the terminal. Usually, I’d just Ctrl+A/Ctrl+C/Ctrl+V, which is a bit awkward.

But among all the other ways you can customize Ops Studio, you have a lot of control over the key mappings. One way to edit these mappings is to pull up the Command Palette (Ctrl+Shift+P) and start typing “key”, and you’ll see “Preferences: Open Keyboard Shortcuts”. You’ll also see it mentions the Ctrl+K/Ctrl+S shortcut. This will bring you to the basic Keyboard Shortcuts window, where you’ll need to click “keybindings.json”. Either way, just like Ops Studio’s overall settings (and VSCode’s, for that matter), you get a JSON file you can now tweak. Actually, two of them, with the defaults on the left and your own settings on the right.

Here’s the mapping I’ve added:

{
  "key": "shift+f5",
  "command": "workbench.action.terminal.runActiveFile",
  "when": "editorTextFocus && editorLangId == powershell"
}

The effect of this is that pressing Shift+F5 while focus is on your PowerShell script file will cause Ops Studio to run that file in the terminal window. As an old SSMS and Visual Studio user, F5 seemed natural to me, and I noticed that Shift+F5 wasn’t already taken.

Note that this runs the file, not necessarily what you have in the window. So you may want to precede this with Ctrl+S while you’re working.

My gratitude to ck (twitter|blog) for pointing me in the right direction here. When it comes to either Ops Studio or PowerShell these days, I just assume he’s smarter than me.

Tigers in the Rain

This is the first in what may become a series of anecdotal posts about what I’ve learned as a coach and how I think it translates to becoming a good manager…

After the match, my brother told me he knew how it was going to go when he saw them in warm-ups. The other team, he said, looked like they didn’t want to be there, shuffling around half-heartedly in the drizzle. In contrast, our girls were all smiles.

They’d been there before.

During the previous season, the TC Tigers hosted Shelbyville, a larger nearby school and a fierce rival. The rain came down in buckets that night. It was their last season on the grass, and the field held up surprisingly well. So did the girls, who beat the Golden Bears 3-1.

I’d never seen a match played in those conditions… until that night at the 2017 Sectional championship. At one point, they had to call the match on account of lightning, and the teams were sent off to the locker rooms to wait it out. But it eventually resumed, and so did the dogfight. Heritage Christian may have come into the game as heavy favorites, but our girls held their own, keeping the score 0-0 through regulation and two overtimes.

Then they won the shootout. And the hardware.

When he shared his observation with me afterward, I realized the team had achieved something else as well. Not only could they play well in the rain, they believed they could play well in the rain. And I also realized that it’s up to me to keep that belief alive.

We’re preparing them for a new season now. Only seven of the girls on the current roster were part of that Shelbyville game. That doesn’t matter, though. I’ve brought up those rain matches a couple of times in the off-season, when I’ve had a mix of those who were there and those who weren’t. I’d get them going, and then let the veterans reminisce about what it was like to play in those matches and to come out on top. And let the rookies soak it in.

The girls are going to win some and lose some. That’s the nature of the game. But if it rains during a match this season… well, I almost feel sorry for the other team. Are they that much better than every other team in the rain? Maybe not. But they believe they are that much better – even the girls who weren’t at those two matches.

A development team won’t ever be asked to write code outside in the rain. But they’ll have their own rain matches. A production outage, a performance issue, an angry customer… who knows what the situation will be? But there will inevitably be something – some bugaboo that they’ll be able to overcome once or twice. It’s up to the manager to spot those successes and capitalize on them. It’s up to the manager to nudge the team toward embracing them and allowing them to become part of its identity. It might take nothing more than a “Remember when…” at a team lunch. Whatever it takes, though, the important thing is to gently – imperceptibly – encourage them to believe in themselves, and in each other. Especially when the rain comes.

tsqllint

My PASS local group, IndyPASS, has its monthly meeting tonight. At my insistence, first-time presenter Nathan Boyd is showing off a SQL tool called tsqllint. Nathan, a coworker of mine at Salesforce, is the leading developer behind this GitHub project.

A lint (or linter), if you didn’t know, “analyzes source code to flag programming errors, bugs, stylistic errors, and suspicious constructs” (Wikipedia). This one is designed specifically for T-SQL, is highly configurable, and includes a Visual Studio Code extension. What more could you want, right? If you want cleaner T-SQL code out of your developers, with less hassle on the part of your reviewers, it’s definitely worth your time.

If you’re in the area, keep in mind there’s a location change tonight. While IndyPASS usually meets at Virtusa, 1401 North Meridian (formerly Apparatus), this month’s meeting is at Moser Consulting in Castleton. As usual, doors open at 5:30pm, and we’ll turn it over to Nathan by about 6:15pm.

T-SQL Tuesday #104

My thanks to Bert Wagner and his chosen topic for T-SQL Tuesday, Code You Would Hate To Live Without. It was just enough of an excuse to dust off the cobwebs here and get back to posting.

Anyway, since half of my time is spent in C#, I thought I’d venture into that world for my response. I’ll share a few common extensions that I include in most of my projects. Extensions, as their name implies, extend the functionality of existing objects. Here is a code snippet with a few extensions I typically add:

namespace myproj.Extension
{
  public static class Extensions
  {
    public static bool In<T>(this T val, params T[] values) where T : struct
    {
      return ((System.Collections.Generic.IList<T>)values).Contains(val);
    }

    public static object ToDbNull(this object val)
    {
      return val ?? System.DBNull.Value;
    }

    public static object FromDbNull(this object val)
    {
      return val == System.DBNull.Value ? null : val;
    }
  }
}

The first method enables me to easily search enumerations for a given value. For example, if I’ve defined this enumeration:

namespace myRacingProject.Enum
{
  public enum Series
  {
    None = 0,
    Indycar = 1,
    IndyLights = 2,
    ProMazda = 3,
    Usf2000 = 4
  }
}

Then I could use the extension like this:

if (mySeries.In(Enum.Series.ProMazda, Enum.Series.Usf2000)) myChassis = "Tatuus";

As for the other two methods, well… When is a null not a null? When it’s a System.DBNull.Value, of course! SQL Server pros who have spent any time in the .NET Framework will recognize this awkwardness:

var p = new System.Data.SqlClient.SqlParameter("@myParam", System.Data.SqlDbType.Int);
p.Value = (object)myVar ?? System.DBNull.Value;

With the extension, the second line becomes:

p.Value = myVar.ToDbNull();

Similarly, when reading, this:

var myInt = (int?)(myDataRow[myIndex] == System.DBNull.Value ? null : myDataRow[myIndex]);

Becomes this:

var myInt = (int?)myDataRow[myIndex].FromDbNull();

They’re not earth-shattering improvements, but my real point is that extensions are an often-overlooked feature that can improve your quality of life as a developer. Anytime you find yourself writing the same bit of code over and over, especially if that bit is rather unsightly, you might consider making it an extension.

Want to know more? Here ya go: https://docs.microsoft.com/en-us/dotnet/csharp/programming-guide/classes-and-structs/extension-methods

The 2017 Summit

As I adjust poorly to the three-hour time difference in Seattle this morning, I thought I’d share a few highlights of this week that are personal to me:

The Panel Sessions

I had a hand in bringing three panel sessions to the Summit this year, on Data Visualization, PowerShell, and Data Systems at Scale. I’m really looking forward to the discussions that emerge from these. If you plan on attending them, keep in mind that we’re using the #visualization, #powershellhelp, and #scale channels on sqlcommunity.slack.com as a way to take questions. Feel free to join the conversation there.

My Own Survey

There are plenty of ways to provide feedback to the people behind the Summit. But as I become more and more involved as a program committee manager, I want to give attendees a more direct, personal, and open-ended way to share their opinions. I will caution you, though – this is my own initiative and not part of PASS. I intend to use this feedback for the benefit of PASS and the Summit, but I make no promises about what I share and with whom. You’ll just have to trust me. And if you don’t? Hey, don’t fill out the survey! But if you want to help me help make the Summit better, here it is: Take My Survey

And thanks for your help!

My Own Session

It’s been almost an afterthought, but yeah, I’m presenting a session at the Summit this year. I didn’t quite plan on it, but I’m glad it’s happening. It’s targeted at developers, and – like nearly every session I present – it’s focused on making apps faster. So, if you’re interested in app performance, and you’re still around for the very last session of the conference, come see me talk about the .NET Framework’s SqlClient namespace in room 2AB!

PASS Local Groups

I’m speaking tomorrow evening at Michiana PASS. What’s my topic? Performance, of course! Actually, it’s about querying partitioned tables. Quickly. Because that’s how I roll.

But that’s not my point today.

Like any good technical community, the success of PASS doesn’t come from the top. Local Groups are the heart of PASS. If SQL Server is relevant to your job, become a part of your PASS Local Group. Or start one, if necessary. Share what you know, encourage your peers, and contribute to the community.

And if public speaking is something you wish you could do better, presenting in front of your Local Group is a safe way to dip your toe in those waters. I present a technical session about once a year to IndyPASS. In fact, I’ll be back in September. I’m also now starting to branch out to other Local Groups, like Michiana, to help them grow. As much as I enjoy taking part in SQLSaturdays, I think I might like this more.

Godspeed,

J

#PASS24HOP

In the past couple of years, I’ve had the opportunity to speak at a few SQLSaturdays, and presented to my local chapter, IndyPASS, a couple of times. I gained a new experience a few weeks ago, one that I had no idea would be so timely. I presented one of my sessions to YooperPASS, a new chapter in the Upper Peninsula of Michigan. This session was my first online session for PASS. I’ve presented online before at Salesforce, but that was to an internal audience. This was a new session, presented for the first time, to a bunch of strangers. Fortunately, the YooperPASS folks are very welcoming, and I think it went very well.

I mentioned this was timely, though, didn’t I? Well, that’s because I’ll be giving my second ever PASS virtual session on Thursday, the 20th. I’m a part of the 24 Hours of PASS: Summit Preview 2017! The same session I gave to YooperPASS, A Guided Tour of the SqlClient Namespace, will also be part of this webinar event. Because I’ll be presenting this session to a much wider audience now, I’d like to expand on my approach to this session.

First, I call it a “Guided Tour” because I’m cherry-picking those areas that I think deserve some special attention. A more comprehensive session would be impractical. So I had to cut things off somewhere. What made the cut are four topics that have valuable specific takeaways. A good technical session should give attendees something today they can put into practice tomorrow. I’d like to think I accomplished that with these topics.

Second, my choice of topics also gives me a chance to get across a broader message about how to scale ADO.NET code. Performance is always a focus of mine, and my experience has taught me that there is frequently a trade-off between application code and the database. One piece of code may perform better than another under a microscope, but its effect on overall system performance may actually be worse. In this session, I try to draw attention to this trade-off.

Finally, this is my attempt at an introductory session on an often-overlooked topic. Let’s face it, ADO.NET has been around a while. Because of that, I think the information available on it is inconsistent. Stack Overflow didn’t exist until long after ADO.NET was introduced. Yes, there are some fine older resources on ADO.NET, but enterprises have changed a lot since many of them were written. Most current resources focus on Entity Framework, which is arguably just an abstraction layer over ADO.NET. So my session is intended to close the gap.

Anyway, I’m looking forward to being a part of this 24 Hours of PASS. If you tune into my hour, I hope it’s worth your time.

Godspeed,

J

T-SQL Tuesday, Late to the DevOps Party

This is what I get for slacking off on my blogging…

So, I totally missed this month’s T-SQL Tuesday. Which is a shame, because it’s an area I have a lot to talk about. Grant Fritchey hosted T-SQL Tuesday #91, which is all about DevOps. There were a lot of good posts. I liked Rob Farley’s comment: “DevOps is the idea of having a development story which improves operations.” Andy Yun had a nice take, closing with “This is what I believe DevOps is all about. The tools and processes being pioneered today help all of us build better, more stable software, which is better for all of us.”

My own personal experience with what I think of as DevOps took a big leap about a year and a half ago when I moved from a product group in my organization to a development-oriented team inside our operations group. That transition both confirmed and challenged an observation I’d made years ago.

My observation was that organizations are all on a track between two extremes. At one extreme is the legendary “two guys in a garage” story. At the other is a Fortune 500 conglomerate doing business in tightly-regulated industries. In that garage, Woz was free to design and innovate at will, and what he produced is now the stuff of legend. Apple, on the other hand, delivers a new product only after a monumental expense of time and resources. They still produce remarkable products – one cannot argue with the massive success of the iPhone and its successors. But how those products come to market is a far cry from that Homebrew Computer Club meeting in ’76.

The critical piece of this observation is that all organizations move from the former end of the extreme to the latter. They may move at different rates, even coming to a stop for any length of time, but they never go the other direction. Let me repeat that: They never go the other direction.

Let’s think about database backups as an example. Garage organizations may not do any backups at all. Then something gets lost, and perhaps a weekly full backup task is scheduled. The system grows and taking a simple full backup once a week no longer scales properly, so a better schedule is created, with more particulars about what is backed up, when, and how. This process keeps getting refined, until, one day, there is a standard procedure for performing backups, including off-site storage, regular testing of the restoration process, and all kinds of other aspects that operations people in Fortune 500 companies have to think about.
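
For instance, that “weekly full backup task” stage might be nothing more than an agent job wrapping a single statement like this sketch (the database name, path, and options are placeholder assumptions, not a recommendation):

-- A hypothetical weekly full-backup job step; name and path are placeholders.
BACKUP DATABASE MyAppDb
TO DISK = N'D:\Backups\MyAppDb_Full.bak'
WITH INIT, CHECKSUM, COMPRESSION;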

Many of us are quick to claim that the Garage is better. We love the folklore around how Apple (and other companies like it) got started. We love to reminisce about those times when we turned caffeine into code and all was right with the world. There’s just something about being a coder. But that’s not universally true. Would your organization be better off without a good backup restoration procedure?

The problem is that – good or bad – the processes pile on. Bad or obsolete procedures never get removed, just marginalized until they become an indistinguishable part of the Fortune 500’s ecosystem.

I help administer a database that does not need to be backed up. Yes, that’s correct. It does not need to be backed up. Ever. It’s a data warehouse, of sorts, that is populated by a convoluted ETL process originating from log files. Due to both size and the diminishing value of old data, we only keep data in this database for a short time. If we were to lose the database, we would recreate the structures, and then “re-dump” the log files into the ETL source folder, and let the process churn through them again. That method of recovery would only take marginally more time than restoring from backup. By the way, for this database, the backup process itself is a serious drain on system resources.

Getting our operations group to STOP backing up this database was an adventure, because of all of the procedures in place to ensure that every database was properly backed up. At one point, because it was causing a production issue, I disabled the backup job (thereby curing the immediate issue). For my quick thinking, I was rewarded with having my SQL Agent rights revoked.

Anyway… the moral of my story is that the situation gets more complex over time. Once again, it never gets simpler. At least, that was my theory.

Enter DevOps.

To me, this is precisely the point of DevOps: The ability to go backwards on this track.

Now, here’s the hard part. To do this – to move towards the Garage end of the track – takes people who understand what is important and what isn’t. You can’t trim excess process at whim. You have to know what should get trimmed and what shouldn’t. Because you’re removing processes that protect the organization and limit risk, you need smart and organized people who will do the right thing without a process directing them.

DevOps is all about the people. It takes good communication. In his T-SQL Tuesday post, John Morehouse emphasized it with his call to “Start talking.” David Alcock joked about Dev and Ops going to “couples therapy” (and did indeed make a good point). But it’s not just good communication. It’s good people. For DevOps to work, DevOps people must be smart about what they do and don’t do. And then, if they’re successful, they’ll do what I didn’t think was possible – move from the Fortune 500 end of that track back towards the Garage.

As long as they don’t get their SQL Agent rights revoked in the process.

What are you known for?

As I mentioned before, I’ll be touching on a few marketing topics over the course of the next several weeks. Taking a top-down approach, I’m starting with the question, “What are you known for?”

I hang around the IndyCar paddock whenever I can, which isn’t nearly as often as I’d like. At the St. Pete race one year, I remember listening to one of the team PR people talking about how to get into her line of work. I think it might have actually been a Pirelli World Challenge team rep, Kelly Brouillet. She said something like, “Do one thing really well. If I have a go-to person for one thing, that’s worth a lot more to me than someone who knows a little about a lot of stuff.”

She’s right. As I’ve progressed in my career, I’ve noted who the experts are. I tried to do something in PowerShell a little while back, and turned to Mike Fal when I got stuck. When I’m trying to figure out the right direction to go with a columnstore issue, I look up what Niko Neugebauer has to say on it. The reason I was listening to Kelly that day in St. Pete is because she’s a lot more savvy about marketing and PR in the racing world than I am.

Brent Ozar, one of those high on my experts list, had a session at the Summit last year, on the Professional Development track. He called it a “500-Level Guide to Career Internals” but it was really a wonderfully personal “How did I get here?” talk. One of the things he stressed was a consistency in message:

  • His Twitter account says: “I make Microsoft SQL Server faster and more reliable.”
  • His blog says: “I co-founded Brent Ozar Unlimited, a boutique consulting firm that helps you make SQL Server faster and more reliable.”
  • His 2016 PASS Summit bio says: “Brent Ozar loves to make SQL Server faster and more reliable.”

See a trend? Being known for making SQL Server faster and more reliable is Brent’s desired brand identity. BrentOzar.com is where I go when I have general SQL Server performance or reliability questions. It’s often the first place I go. Once I gain a little insight there, I might dig deeper elsewhere. Brent is very good about pointing people to experts in the field, who have a deeper but more narrow focus than he does. As for myself, you’ll find the phrase “fast and efficient SQL code” peppered throughout my online activity. I’m competent at quite a few things, but I’d like to think I really shine at that.

As the communications director for the IndyCar Ministry, it’s my role to get the message out that we are “dedicated to providing spiritual support and counseling to the IndyCar Series drivers, teams, and staff.” But what are we known for? That’s a tough question, actually, and one I’ve been wrestling with. “Being there when needed,” has come to mind. I’ve often joked that IndyCar is a traveling circus. The paddock is very close-knit, and rightfully so. Having chaplains who are inside that paddock, not just logistically, but emotionally as well – that means a lot. It makes the difference between whether or not someone will open up about a problem in their life. In any case, I’m still working on this one. In part, it’s actually what prompted this post.

In a similar capacity, I help Blue River Soccer “provide recreational, competitive, and educational soccer opportunities for the youth of Shelby County, Indiana.” Our tagline, which plays on what we’re known for, is “Ignite the dream. Launch the evolution. Respect the match.” We recognize that soccer is still in a transition period, at least in our area. So we set our sights accordingly. We try to give kids a passion for the game, give them the fundamentals, and set them up to move the needle forward – be better prepared to pass the game on to their own kids. Frankly, we’re not one of those associations with thousands of kids, paid coaches, and a big-time travel program. We’re known for being a place to which parents can feel comfortable bringing their kids, regardless of what they know about the game coming in.

It all comes back to that question. If you’re promoting something – yourself, your company, your product or services – you have to answer that first. So… what are you known for?