The list of extensions for SQL Operations Studio is still pretty small. But since it’s a fork of VSCode, any given VSCode extension just might work fine in Ops Studio. If there’s one you want to try, here’s what you can do.
- Go to the VSCode Marketplace and find the extension you want. In my case, I’m adding the tsqllint extension that I mentioned in a previous post.
- Under the Resources heading on the extension’s Marketplace page, you’ll see a “Download Extension” link. This downloads a .vsix file that you’ll want to save to your local system.
- In Ops Studio, under the File menu, click “Install Extension from VSIX Package”. Pick the file you downloaded. Once it’s installed, Ops Studio will prompt you to Reload for the new extension to take effect.
And that’s all there is to it! Once again, I’m indebted to ck (twitter|blog) for finding this.
If you try this with an extension and run into a problem, I’d like to hear what it was. I’m curious about which extensions don’t carry over well, and why they have a problem.
This is a response to T-SQL Tuesday #106, Trigger Headaches or Happiness, by Steve Jones.
I can only recall one time in the past decade (at least) that I’ve found triggers to be useful. It involves data migration.
The problem: You have a massive, high-activity table. Let’s call this Table_A. You need to make significant changes to it. For example, the clustered index needs to change. How do you accomplish this?
The solution: Create a script that does the following:
1. Create a second table, which we’ll call Table_B. This one will eventually become the new Table_A, so design it with your changes in mind.
2. Create a third table, which we’ll call Table_C. This one is like Table_A as it is now, except that it includes an additional identity column, and is clustered on that column. Assuming there’s an existing clustered index on Table_A, recreate it as a non-clustered index on Table_C. Depending on how Table_A is updated, you may need additional columns to track what updates occur.
3. Create a trigger on Table_A. This trigger duplicates all changes in Table_A to Table_C.
4. Looping with a suitable batch size for your environment, write all rows from Table_A to Table_B.
5. Looping again, write all rows from Table_C to Table_B (taking into account the appropriate insert/update/delete logic for your situation). Note where you stopped with Table_C, the “high water mark” for that identity column.
6. Call sp_rename to change Table_A to Table_D, then again to change Table_B to Table_A.
7. From the high water mark, write from Table_C to the newly-renamed Table_A.
8. My favorite part: drop Table_C, Table_D, and the trigger.
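To make the shape of the script concrete, here’s a minimal T-SQL sketch of Steps 2–4 and 6. All table names, columns, and the batch size are hypothetical, and the trigger only handles inserts; the catch-up passes in Steps 5 and 7, plus any update/delete logic, are left out.

```sql
-- Hypothetical schema: Table_A (Id int, Payload varchar(100)), clustered on Id.

-- Step 2: change-capture copy of Table_A, clustered on a new identity column
CREATE TABLE dbo.Table_C
(
    CaptureId  bigint IDENTITY(1,1) NOT NULL,
    Id         int          NOT NULL,
    Payload    varchar(100) NULL,
    CONSTRAINT PK_Table_C PRIMARY KEY CLUSTERED (CaptureId)
);
GO

-- Step 3: duplicate changes on Table_A into Table_C
-- (insert-only sketch; update/delete handling depends on your situation)
CREATE TRIGGER dbo.tr_Table_A_Capture
ON dbo.Table_A
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT dbo.Table_C (Id, Payload)
    SELECT Id, Payload
    FROM   inserted;
END;
GO

-- Step 4: backfill Table_B from Table_A in batches
DECLARE @BatchSize int = 10000,
        @LastId    int = 0,
        @MaxId     int;

SELECT @MaxId = MAX(Id) FROM dbo.Table_A;

WHILE @LastId < @MaxId
BEGIN
    INSERT dbo.Table_B (Id, Payload)
    SELECT Id, Payload
    FROM   dbo.Table_A
    WHERE  Id >  @LastId
    AND    Id <= @LastId + @BatchSize;

    SET @LastId += @BatchSize;
END;
GO

-- Step 6: the swap; the table is briefly "missing" between these two renames
EXEC sp_rename 'dbo.Table_A', 'Table_D';
EXEC sp_rename 'dbo.Table_B', 'Table_A';
```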
There are caveats to this method, of course, but they have been acceptable in my situations:
- The table is unavailable (non-existent, really) between the two renames in Step 6. This is an extremely brief window, but it does need to occur. Also, creating the trigger in Step 3 requires a schema modification lock on the table; on a high-activity table, even that brief lock may present a problem.
- Step 7 exists so that changes made between Steps 5 and 6 are carried over to the new table. However, because Step 7 runs after the new table is already live, the following lost-update scenario is possible:
- A row is updated in Table_A, and the change is carried over to Table_C.
- The renames occur.
- The same row is updated in the new Table_A.
- The second change is overwritten by the first change.
If the table you wish to migrate has a considerable number of updates and deletes, then this solution may present a data integrity problem. However, for insert-heavy tables, it has proven to work very well. Also, a second trigger on the new table and some additional logic could circumvent the second issue I described, depending on your situation.
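That second-trigger idea might be sketched like this: a trigger on the new Table_A records which keys are touched after the swap, and the Step 7 catch-up skips those keys. This is purely a hypothetical sketch (the change-tracking table, trigger, and column names are invented), and it assumes the rows carry a single key column.

```sql
-- Hypothetical: record keys modified on the new Table_A after the swap
CREATE TABLE dbo.Table_A_PostSwapKeys
(
    Id int NOT NULL,
    CONSTRAINT PK_Table_A_PostSwapKeys PRIMARY KEY CLUSTERED (Id)
);
GO

CREATE TRIGGER dbo.tr_Table_A_PostSwap
ON dbo.Table_A
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT dbo.Table_A_PostSwapKeys (Id)
    SELECT Id FROM inserted
    UNION
    SELECT Id FROM deleted
    EXCEPT
    SELECT Id FROM dbo.Table_A_PostSwapKeys;
END;
GO

-- Step 7, amended: skip rows already changed on the new table
DECLARE @HighWaterMark bigint;  -- the value noted at the end of Step 5

INSERT dbo.Table_A (Id, Payload)
SELECT c.Id, c.Payload
FROM   dbo.Table_C AS c
WHERE  c.CaptureId > @HighWaterMark
AND NOT EXISTS (SELECT 1
                FROM  dbo.Table_A_PostSwapKeys AS k
                WHERE k.Id = c.Id);
```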
So there you go – a use for triggers.