"how to properly manage database deployment with ssdt and visual studio 2012 database projects?"


i've been working on this myself, and i can tell you it's not easy.

first, to address the reply by jt - you cannot dismiss "versions", even with the declarative update mechanics that ssdt has. ssdt does a "pretty decent" job (provided you know all the switches and gotchas) of moving any source schema to any target schema, and it's true that this doesn't require versioning per se, but it has no idea how to manage "data motion" (at least not that i can see!). so, just like dbproj, you're left to your own devices in pre/post scripts. because the data motion scripts depend on a known start and end schema state, you cannot avoid versioning the db. the "data motion" scripts, therefore, must be applied to a versioned snapshot of the schema, which means you cannot arbitrarily update a db from v1 to v8 and expect the data motion scripts for v2 through v8 to just work (presumably, you wouldn't need a v1 data motion script).

sadly, i can't see any mechanism in ssdt publishing that allows me to handle this scenario in an integrated way. that means you'll have to add your own scaffolding.

the first trick is to track versions within the database (and ssdt project). i started using a trick in dbproj, and brought it over to ssdt, and after doing some research, it turns out that others are using this too. you can apply a db extended property to the database itself (call it "buildversion" or "appversion" or something like that), and apply the version value to it. you can then capture this extended property in the ssdt project itself, and ssdt will add it as a script (you can then check the publish option that includes extended properties). i then use sqlcmd variables to identify the source and target versions being applied in the current pass. once you identify the delta of versions between the source (project snapshot) and target (target db about to be updated), you can find all the snapshots that need to be applied. sadly, this is tricky to do from inside the ssdt deployment, and you'll probably have to move it to the build or deployment pipeline (we use tfs automated deployments and have custom actions to do this).
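as a minimal sketch of the extended-property trick described above (the property name "AppVersion", the version values, and the $(DeployVersion) sqlcmd variable are all illustrative names of my own, not anything ssdt mandates):

```sql
-- recorded in the ssdt project itself, so publishing keeps it in sync
-- (remember to enable the publish option that deploys extended properties):
EXEC sp_addextendedproperty
    @name  = N'AppVersion',
    @value = N'2.0.0';
GO

-- in a pre-deployment script, read the target db's current version and
-- compare it to the version being published via a sqlcmd variable:
DECLARE @current nvarchar(128);
SELECT @current = CONVERT(nvarchar(128), value)
FROM sys.extended_properties
WHERE class = 0          -- class 0 = the database itself
  AND name  = N'AppVersion';

-- note: plain string comparison only works if your version strings are
-- zero-padded or otherwise sort correctly; parse them properly in real use.
IF @current IS NULL OR @current < N'$(DeployVersion)'
    PRINT N'data motion scripts newer than ' + ISNULL(@current, N'(none)') + N' need to run';
```

the version delta itself is still easier to compute in the build/deployment pipeline, as noted above; this just shows where the value lives.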

the next hurdle is to keep snapshots of the schema with their associated data motion scripts. in this case, it helps to make the scripts as idempotent as possible (meaning, you can rerun the scripts without any ill side-effects). it helps to split scripts that can safely be rerun from scripts that must be executed one time only. we're doing the same thing with static reference data (dictionary or lookup tables) - in other words, we have a library of merge scripts (one per table) that keep the reference data in sync, and these scripts are included in the post-deployment scripts (via the sqlcmd :r command). the important thing to note here is that you must execute them in the correct order in case any of these reference tables have fk references to each other. we include them in the main post-deploy script in order, and it helps that we created a tool that generates these scripts for us - it also resolves dependency order. we run this generation tool at the close of a "version" to capture the current state of the static reference data.

all your other data motion scripts are basically going to be special-case and most likely will be single-use only. in that case, you can do one of two things: you can guard them with an if statement against the db build/app version, or you can wipe out the one-time scripts after creating each snapshot package.
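a hypothetical example of both patterns just described - an idempotent merge script for one reference table, and a one-time script guarded by the tracked version (all table, column, and property names here are illustrative):

```sql
-- idempotent reference-data sync, safe to rerun on every deployment,
-- typically pulled into post-deploy via  :r .\ReferenceData\OrderStatus.sql
MERGE dbo.OrderStatus AS tgt
USING (VALUES
    (1, N'Pending'),
    (2, N'Shipped'),
    (3, N'Cancelled')
) AS src (StatusId, StatusName)
    ON tgt.StatusId = src.StatusId
WHEN MATCHED AND tgt.StatusName <> src.StatusName THEN
    UPDATE SET StatusName = src.StatusName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (StatusId, StatusName) VALUES (src.StatusId, src.StatusName)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
GO

-- single-use data motion, guarded against the version extended property
-- so rerunning the deployment doesn't re-apply it:
IF (SELECT CONVERT(nvarchar(128), value)
    FROM sys.extended_properties
    WHERE class = 0 AND name = N'AppVersion') = N'1.0.0'
BEGIN
    -- the one-time v1 -> v2 migration work would go here
    PRINT N'running v2 data motion';
END
```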

it helps to remember that ssdt will disable fk check constraints and only re-enable them after the post-deployment scripts run. this gives you a chance to populate new non-null fields, for example (by the way, you have to enable the option to generate temporary "smart" defaults for non-null columns to make this work). however, fk check constraints are only disabled for tables that ssdt is recreating because of a schema change. for other cases, you are responsible for ensuring that data motion scripts run in the proper order to avoid constraint violations (or you have to manually disable/re-enable the constraints in your scripts).
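for the tables ssdt is not rebuilding, the manual disable/re-enable looks roughly like this (a sketch only - the table and constraint names are invented for illustration):

```sql
-- suspend the fk while data motion runs:
ALTER TABLE dbo.OrderLine NOCHECK CONSTRAINT FK_OrderLine_Order;

-- ...move/backfill data here...

-- re-enable; WITH CHECK re-validates existing rows so the
-- constraint comes back as trusted rather than just enabled:
ALTER TABLE dbo.OrderLine WITH CHECK CHECK CONSTRAINT FK_OrderLine_Order;
```

re-enabling without WITH CHECK is faster but leaves the constraint untrusted, which can hurt the query optimizer later.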

dacpac can help you here because a dacpac is essentially a snapshot. it contains several xml files describing the schema (similar to the build output of the project), but frozen in time at the moment you create it. you can then use sqlpackage.exe or the deploy provider to publish that package snapshot. i haven't quite figured out how to use the dacpac's built-in versioning, because it's more tied to "registered" data-tier applications, so we're stuck with our own versioning scheme, but we do put our own version info into the dacpac filename.
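publishing such a snapshot from the command line looks something like the following (the filename convention and the DeployVersion variable are our own scaffolding, not a dacpac feature; server and database names are placeholders):

```bat
REM publish a versioned dacpac snapshot, passing the version in as a
REM sqlcmd variable for the pre/post scripts to use:
sqlpackage.exe /Action:Publish ^
    /SourceFile:"MyDb.2.0.0.dacpac" ^
    /TargetServerName:"myserver" ^
    /TargetDatabaseName:"MyDb" ^
    /v:DeployVersion=2.0.0
```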

i wish i had a more conclusive and exhaustive example to provide, but we're still working out the issues here too.

one thing that really sucks about ssdt is that, unlike dbproj, it's currently not extensible. although it does a much better job than dbproj at a lot of different things, you can't override its default behavior unless you can find some way inside of pre/post scripts to get around a problem. one of the issues we're trying to resolve right now is that the default method of recreating a table for updates (create a new table, copy the data over, drop the old table, rename) really stinks when you have tens of millions of records.

update: i haven't seen this post in some time, but apparently it's been active lately, so i thought i'd add a couple of important notes: if you are using vs2012, the june 2013 release of ssdt now has a data comparison tool built in, and it also provides extensibility points - that is to say, you can now include build contributors and deployment plan modifiers in the project.

By arifin4web on January 19 2022
