dbops v0.5 – new features

Yet another release of the dbops PowerShell module is finally out in the wild! A fresh v0.5.1 is now available and can be downloaded from the PSGallery and GitHub. A brief list of features:

  • MySQL support
  • PostgreSQL support
  • Cross-platform ad-hoc query command: Invoke-DBOQuery
  • Full Linux & PS Core support
    • Compatible with .Net Standard 2.0
    • Module is now tested on both Windows and Linux
  • Missing connectivity libraries can now be downloaded and installed interactively
  • The core deployment library – DbUp – was upgraded to v4.2.0
  • Hundreds of new tests for every supported RDBMS:
    • SQL Server
    • Oracle
    • MySQL
    • PostgreSQL
  • New connection string parser that supports all database types and allows for deep customization

Ad-hoc queries via Invoke-DBOQuery

The new Invoke-DBOQuery function provides a cross-platform querying experience for all four supported RDBMS. Some of the most notable features:

  • Highly customizable connection parameters
  • Running queries from files or variables
  • JSON configuration files and native dbops configs are supported
  • Interactive console: -Interactive (or simply -i)

Some real-life examples:

# Get current database from a PostgreSQL server
Invoke-DBOQuery -Type PostgreSQL -Server localhost:5432 -Database postgres -Query 'SELECT current_database()' -As SingleValue
# Run a query against MySQL database
Invoke-DBOQuery -Type MySQL -Server localhost -Database mysql -Query 'SELECT Name from Users'
# Run multiple files from a folder in a specific SQL Server database
Get-ChildItem .\Scripts | Invoke-DBOQuery -SqlInstance 'SqlServer1:1443' -Database MyDB
# Connect to an Oracle database interactively as SYSDBA
Invoke-DBOQuery -Type Oracle -SqlInstance localhost -Interactive -Username sys -Password $mySecurePassword -ConnectionAttribute @{ 'DBA Privilege' = 'SYSDBA' }

MySQL and PostgreSQL support

The module now fully supports interactions with MySQL and PostgreSQL, while continuing to support SQL Server and Oracle. You can choose which connection library to use by specifying the -Type parameter.

# install a DBOps package to the MySQL server
Install-DBOPackage -Type MySQL -Server localhost -Database mydb -Path .\package.zip
# invoke a query against a remote PostgreSQL server
Invoke-DBOQuery -Type PostgreSQL -SqlInstance pgsql1:5432 -Database mydb -Query 'SELECT current_database();'
# change the default connection -Type (SQLServer) to PostgreSQL:
Set-DBODefaultSetting -Name rdbms.type -Value 'PostgreSQL'

Linux & PS Core support

The module is fully compatible with Linux (and probably macOS, though I have no way to verify) and can be used on PowerShell Core 6.1 and higher.
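
Getting started on a Linux box is, assuming PSGallery access, a minimal sketch that looks exactly the same as on Windows:

```powershell
# On PowerShell Core 6.1+ (Windows or Linux) – install from PSGallery and import
Install-Module dbops -Scope CurrentUser
Import-Module dbops
Get-Command -Module dbops   # list the exported commands
```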

All the existing unit and integration tests now run on both Linux and Windows.

Automating SQL Server patching

Howdy everyone, today I’m gonna talk about SQL Server patching.

While this operation is pretty straightforward and very rarely causes issues by itself, there is nothing less entertaining than going from server to server, RDP to RDP, launching the same executable over and over. Which opens an automation opportunity for us!

Let me introduce you to a new tool from the dbatools toolset: Update-DbaInstance. It will take care of SQL Server patching – and all the activities associated with it. It supports updating to the latest or a specific version of SQL Server, allows you to restart the computer after the upgrade is finished, and, most importantly, can be run against multiple servers in parallel.

Prerequisites

Elevated session

This command is designed to be run in an elevated session. It is a must for local installations, when you’re running the command against localhost, but, strangely enough, it’s also required when you’re running it against remote machines and your patch repository is on a network drive (which I’m guessing is the case 99% of the time). Why? See the next prerequisite:

PowerShell remoting and credentials

Unsurprisingly, performing remote patching through PowerShell requires PowerShell remoting to be configured in advance. There is one more requirement, though, for running sessions remotely: if your patch repository is a network folder, a default remoting session will not be able to access it. It would run into the infamous double-hop issue and fail to delegate your current credentials when accessing that folder.

The solution is to pass credentials to the remote server, which will authenticate you there properly. This is why specifying an explicit -Credential parameter will help you in such scenarios. In doing so, you are implicitly switching to the CredSSP authentication protocol, which allows you to securely pass credentials to the remote host.
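
As a sketch of that scenario (the server and share names here are placeholders, not from the original post):

```powershell
# Prompt for a credential once; Update-DbaInstance will use it to authenticate
# on the remote machines and to work around the double-hop issue via CredSSP.
$cred = Get-Credential
Update-DbaInstance -ComputerName Server1, Server2 -Path '\\fileshare\SQLUpdates' -Credential $cred
```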

Using CredSSP protocol without configuring it

The command will then attempt to configure the CredSSP protocol on both client and server – this is why an elevated session is required even on the client machine. Such dynamic connection configuration is only supported on PowerShell 3.0 or higher: both client and server should comply with this requirement.

Properly configured CredSSP with Credentials passed through the remote session

If, for any reason, the CredSSP configuration and/or connection has failed, the command will ask whether you want to try a less secure protocol (Default) to pass credentials to the remote server.

With all that said, if the server is able to reach the patch file without authentication, you won’t need to specify credentials at all.

Patch repository

You’ll need to have your SQL Server update files stored somewhere the servers can reach them: a network drive or a local folder. The command will find the relevant KB based on the execution parameters and look for the file name recursively in the specified path(s).

An important thing about the patch repository is that all the KB files should keep their original names, which in all cases follow a predefined file name template:

SQLServer*-KB*-*x*.exe

Having a file named this way in the folder ensures that the update command finds it among the other files, but keep in mind that folders with a huge number of files might slow down the search. You can specify this folder using the -Path parameter.
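
To sanity-check a repository against that template, a quick sketch (the share path is a placeholder):

```powershell
# List update files in the repository that match the expected KB naming template
Get-ChildItem -Path '\\fileshare\SQLUpdates' -Recurse -Filter 'SQLServer*-KB*-*x*.exe'

# The same wildcard works against an individual file name
'SQLServer2012-KB4025925-x64.exe' -like 'SQLServer*-KB*-*x*.exe'
```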

Executing the command

There are a few ways you can execute the command, and most of them are covered in the examples you can get by running Get-Help Update-DbaInstance -Examples.

Never forget about the power of Get-Help

Some of the features worth mentioning:

  • By default the command will try to update the server to the latest-and-greatest service pack and cumulative update
  • You can specify -Type to choose between installing service packs, cumulative updates or both
  • -Restart will restart the computer after the installation is successful. Very helpful when installing multiple updates at once – each of them will require a restart. It will also restart the computer prior to installing the updates in case a pending restart is already queued in the system.
  • -Version will allow you to target a specific version of SQL Server on the machine, as well as define the update level you want to reach. Some of the examples:
    • SQL2008SP4 – upgrade all SQL2008 instances to SP4
    • 2012SP3CU11 – upgrade all SQL2012 instances to SP3 CU11
    • 2016 – upgrade all SQL2016 instances to the latest version available
    • SP4 – upgrade all instances found on the computer to SP4, regardless of their version
  • Specifying more than one computer in -ComputerName will initiate a parallel update once all the update paths have been discovered and confirmed
  • This command is considered a high-risk command, which means it will ask you to confirm each update path before executing it. To disable that behavior, use -Confirm:$false
  • It can be a good idea to run the command with -WhatIf first to assess its impact.
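
Putting the parameters above together, a couple of sketches (computer names are placeholders):

```powershell
# Assess the impact first: show what would be installed, change nothing
Update-DbaInstance -ComputerName Server1 -WhatIf

# Bring all SQL2012 instances on two servers to SP3 CU11 in parallel,
# restarting when needed and skipping the per-update confirmation prompts
Update-DbaInstance -ComputerName Server1, Server2 -Version 2012SP3CU11 -Restart -Confirm:$false
```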

A simple execution of the command will go through the following phases:

  • Search for existing installations of SQL Server
  • Check if there are any eligible updates for each chosen installation
  • Configure CredSSP for remote connections
  • Search for the required SQL Server KB file(s) in the repository path defined in -Path
  • Ask user’s permission to continue with the installation
  • Extract and install the update(s): service pack, then cumulative update.
  • Restart the computer if needed and -Restart is specified
Update process in all its glory

Conclusion

This command is designed to turn a mundane patching process into a simple task that can now be performed often enough to keep up with the pace of SQL Server cumulative update releases. I hope you’ll find a use for it in the upcoming 2019.

Merry Christmas and Happy New Year!

Using dacpac packages to clone SQL Server databases

Many SQL Server DBAs know that there is a very convenient way of delivering a full copy of the database structure to a different SQL Server: it’s called dacpac, or a Data-tier Application package. You can extract a database at any time using the SQL Server Management Studio context menu: Tasks -> Extract Data-tier Application, which guides you through the extraction options and outputs a compressed package file containing all the necessary schema definitions, as well as table data. This package can then be deployed on a completely different server using the Deploy Data-tier Application context menu item.

What is not widely known, however, is how much fine-tuning a simple deployment might require, and how difficult it sometimes is to force the deployment operation to behave in a certain way. The good news is that there is now a convenient way of automating this process using the dacpac-related functions from the dbatools PowerShell module.

This article will explain how to:

  • Extract dacpac from a database
  • Choose whether you want to include data or not
  • Deploy the dacpac package on any version of SQL Server higher than 2008 SP2
  • Exclude object types from the deployment process
  • Generate deployment scripts
  • Create a Jenkins job that will take care of this whole process for you

Extracting dacpac package

Let’s use this simple script to extract dacpac from an existing database:


Param (
    $Server,
    $Database,
    $Path,
    [switch]$IncludeData
)
# Prepare extraction properties string
$exportProperties = "/p:IgnorePermissions=True /p:IgnoreUserLoginMappings=True" # Ignore permissions definition
if ($IncludeData) {
    $exportProperties += " /p:ExtractAllTableData=True" # Extract data
}
Export-DbaDacpac -SqlInstance $Server -Database $Database -Path $Path -ExtendedProperties $exportProperties

Export properties here are defined to control the extract process:

  • Ignore database permissions, as there is rarely a need to add permissions into the mix
  • Define an option to include data into the package

The Export-DbaDacpac function will take care of the extraction process and return all the details about the extraction, including the .Path property that points us to the extracted file:

output from Export-DbaDacpac

You can check all the other available properties (which are essentially command-line parameters) in the corresponding article: https://msdn.microsoft.com/en-us/library/hh550080.aspx

Deploying dacpac package

There is one important note about dacpac deployment: it requires a so-called publish profile. A publish profile is an XML file that contains the deployment parameters you want to enable during the deployment. You can create such a profile using one of the following options:

  • Run New-DbaPublishProfile, specifying the connection parameters. The resulting file will enable you to run the deployment, but it is highly recommended that you modify it according to your needs
  • From a Visual Studio SSDT project, select Project -> Publish. This opens a dialog that allows you to load, save and modify the publish profile.
  • Manually, using documentation from Publish parameters section of the same article
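
The first option is a one-liner; a sketch with placeholder connection details:

```powershell
# Generate a baseline publish profile for the target connection;
# review and adjust the resulting XML before using it in real deployments
New-DbaPublishProfile -SqlInstance localhost -Database MyDB -Path C:\temp
```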

The parameters that we’re going to use during deployment are as follows:

  • AllowIncompatiblePlatform – would enable me to deploy to an earlier version of SQL Server.
  • CreateNewDatabase – will re-create the database every time
  • ExcludeObjectTypes – semicolon-separated list of objects that will be excluded from the deployment.
  • ExcludeLogins and ExcludeUsers – somehow, these two ended up being independent parameters, not included in the previously mentioned ExcludeObjectTypes. Excludes logins and/or users from the deployment.
  • IgnorePermissions – ignores object permissions during the deployment.
  • DropObjectsNotInSource – boolean value, which is True by default, that controls whether existing objects in the database are going to be dropped if they are not present in the package. This is a very important parameter if you are planning to deploy dacpac to a non-empty database.
  • IgnoreRoleMembership – ignores role membership

This is an example of the Publish profile file that we’re going to use in this exercise:


<?xml version="1.0" ?>
<Project ToolsVersion="14.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <TargetConnectionString></TargetConnectionString>
    <ProfileVersionNumber>1</ProfileVersionNumber>
    <AllowIncompatiblePlatform>True</AllowIncompatiblePlatform>
    <CreateNewDatabase>False</CreateNewDatabase>
    <ExcludeObjectTypes>Permissions;RoleMembership</ExcludeObjectTypes>
    <ExcludeLogins>True</ExcludeLogins>
    <IgnorePermissions>True</IgnorePermissions>
    <DropObjectsNotInSource>False</DropObjectsNotInSource>
    <IgnoreUserSettingsObjects>True</IgnoreUserSettingsObjects>
    <IgnoreLoginSids>True</IgnoreLoginSids>
    <IgnoreRoleMembership>True</IgnoreRoleMembership>
  </PropertyGroup>
</Project>


Once we have a Publish profile ready we can start the deployment by using Publish-DbaDacpac function, specifying:

  • SqlInstance – target server
  • Database – target database
  • Path – path to the dacpac package
  • PublishXml – path to the publish profile

just like this

Other parameters of Publish-DbaDacpac that you might find useful:

  • GenerateDeploymentScript – will generate a deployment script prior to deployment
  • ScriptOnly – will not perform the deployment, generating a deployment script instead
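
For example, to review the would-be changes without touching the target database (paths are placeholders):

```powershell
# Produce the deployment T-SQL script only – nothing is deployed
Publish-DbaDacpac -SqlInstance localhost -Database MyDB -Path C:\temp\MyDB.dacpac -PublishXml .\publish.xml -ScriptOnly
```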

Full script

The script below extracts the dacpac from an existing database and deploys it to a different server using the publish.xml file we created above. Feel free to modify it to your liking!


[CmdletBinding()]
Param (
    $SourceServer,
    $TargetServer = $SourceServer,
    $SourceDatabaseName,
    $TargetDatabaseName,
    $Path,
    $PublishXml = '.\publish.xml',
    [switch]$IncludeData
)
#Stop on any error by default
$ErrorActionPreference = 'Stop'
# Construct export parameters
$exportProperties = "/p:IgnorePermissions=True /p:IgnoreUserLoginMappings=True"
if ($IncludeData) {
    $exportProperties += " /p:ExtractAllTableData=True"
}
#Export database to path
Write-Verbose "Starting the export from $SourceServer.$SourceDatabaseName to $Path"
$exportFile = Export-DbaDacpac -SqlInstance $SourceServer -Database $SourceDatabaseName -Path $Path -ExtendedProperties $exportProperties -EnableException
Write-Verbose "Export completed`: $exportFile"
#publish dacpac with defined publish xml file
Write-Verbose "Starting the publication from $($exportFile.Path) to $TargetServer.$TargetDatabaseName"
$xml = (Get-Item $PublishXml).FullName
Publish-DbaDacpac -SqlInstance $TargetServer -Database $TargetDatabaseName -PublishXml $xml -Path $exportFile.Path -EnableException
#remove dacpac file
if (Test-Path $exportFile.Path) {
    Write-Verbose "Removing dacpac file $($exportFile.Path)"
    Remove-Item $exportFile.Path
}

Creating a Jenkins job

In order to create an interface and a scheduler at the same time, I would definitely recommend making this scenario available on your Jenkins instance, the same way we did before in Refresh database & keep permissions – SQL Server Jenkins labs (Part 2). The parameters are going to be almost the same, so feel free to use the examples from the Jenkins labs to create a job that uses the dacpac deployment script instead.

sample Jenkins job

Conclusion

Dacpac packages are a convenient way of copying a full database when other methods are unavailable for whatever reason. There is some learning curve associated with them, but they can be a great help when used in an automated manner.

It is also a perfect workaround for copying a database when the source server has a higher SQL Server version than the target: this will work even between 2017 and 2008, as long as all the objects are compatible with the target database version.

It also works pretty well as an alternative to snapshot replication, being much less restrictive in its approach.

UPDATE: Updated the CreateNewDatabase parameter with a new description, apparently it recreates the database every time instead of just creating a new database if needed.

Refresh database & keep permissions – SQL Server Jenkins labs (Part 2)

Welcome back to the SQL Server Jenkins labs! In the first part of the labs we created a Jenkins job that restored a database based on historical data. This time, let’s try something very similar, but add some flavor to it.

We’re going to set up a job that copies a single database in its current state from one server to another. It is very similar to the previous lab exercise, except this one will take care of many more things at once. Let’s add into the mix most of the activities that accompany a database restore, such as:

  • Changing database owner after restore is completed;
  • Cleaning up existing permissions on the restored database;
  • Restoring previous permissions, in case the database is being replaced with a fresh copy.

On top of that, let’s make some of them optional, increasing the flexibility of the solution.

Writing the code

Once again, there is no reason not to use dbatools commands for this purpose – most of us wouldn’t want to reinvent the wheel. The scenario is going to be quite simple:

  1. Define parameters – connections, database names, paths, etc.
  2. Backup the source database to a network share with COPY ONLY
  3. Export permissions from the target database if needed
  4. Restore the target database
  5. Change database owner
  6. Remove all users from the database if needed
  7. Re-apply old permissions if needed

The restore process, when it comes to replacing an existing database, is a very delicate operation, so let’s make sure that the code stops whenever an error occurs. This is achieved by using the -EnableException switch of the dbatools functions, which throws an error instead of a simple warning (the default behavior). Let’s also define $ErrorActionPreference = 'Stop' to ensure that the script does not run any further once an error has occurred. Lastly, it is a good idea to save the existing permissions to disk in case a manual intervention is required.

Most of the output is hidden in the Verbose stream, which allows us to control the amount of output we want to see. Just add the -Verbose switch if you need to see more!

Here is the script we are going to use this time. It has slightly more action inside, but it still consists of the same building blocks, most of which are regular dbatools commands. Let’s name it Copy-Database.ps1:

[Cmdletbinding()]
Param (
    [Parameter(Mandatory)]
    [string]$SourceServer,
    [Parameter(Mandatory)]
    [string]$SourceDatabase,
    [string]$TargetServer,
    [Parameter(Mandatory)]
    [string]$TargetDatabase,
    [Parameter(Mandatory)]
    [string]$BackupFilePath,
    [string]$DBOwner,
    [switch]$KeepPermissions,
    [switch]$DropUsers
)

#Stop on any error by default
$previousErrorActionPreference = $ErrorActionPreference
$ErrorActionPreference = 'Stop'

#Temporary permissions will be stored here
$permissionsFile = ".\Permissions-$TargetDatabase.sql"

#If target server is not specified, it's assumed that it's going to be the same server
if (!$TargetServer) { $TargetServer = $SourceServer }

if (Test-Path $BackupFilePath) {
    #Removing old temporary backup if it still exists for some reason
    Write-Verbose "Removing old backup file $BackupFilePath"
    Remove-Item $BackupFilePath
}
#Run copy-only backup
Write-Verbose "Initiating database backup`: $SourceServer.$SourceDatabase to $BackupFilePath"
$backup = Backup-DbaDatabase -SqlInstance $SourceServer -Database $SourceDatabase -BackupFileName $BackupFilePath -CopyOnly -CompressBackup -Checksum -EnableException

if (!$backup.BackupComplete) {
    throw "Backup to $BackupFilePath was not completed successfully on $SourceServer.$SourceDatabase"
}

#Record and store permissions
if ($KeepPermissions -or $DropUsers) {
    $permissions = Export-DbaUser -SqlInstance $TargetServer -Database $TargetDatabase
    Write-Verbose "Exported permissions from $TargetServer.$TargetDatabase`: $permissions"
    Write-Verbose "Storing permissions of $TargetServer.$TargetDatabase in a file $permissionsFile"
    $permissions | Out-File -FilePath $permissionsFile
}

#Restore the database on top of existing DB
Write-Verbose "Initiating database restore`: $BackupFilePath to $TargetServer.$TargetDatabase"
$backup | Restore-DbaDatabase -SqlInstance $TargetServer -DatabaseName $TargetDatabase -WithReplace -ReplaceDbNameInFile -EnableException

#Update database owner, stop throwing errors at this point, as further actions are not critical
$ErrorActionPreference = $previousErrorActionPreference

if ($DBOwner) {
    Write-Verbose "Updating database owner to $DBOwner"
    Set-DbaDatabaseOwner -SqlInstance $TargetServer -Database $TargetDatabase -TargetLogin $DBOwner
}

#Drop users if requested
if ($DropUsers) {
    $users = Get-DbaDatabaseUser -SqlInstance $TargetServer -Database $TargetDatabase -ExcludeSystemUser
    foreach ($user in $users) {
        Write-Verbose "Dropping user $($user.Name) from $TargetServer.$TargetDatabase"
        try {
            $user.Drop()
        }
        catch {
            # No need to throw, maybe a user owns a schema of its own
            Write-Warning -Message $_
        }
    }
}

#Restore permissions
if ($KeepPermissions) {
    Write-Verbose "Restoring permissions of $TargetServer.$TargetDatabase from a file $permissionsFile"
    Invoke-DbaSqlQuery -SqlInstance $TargetServer -Database $TargetDatabase -Query $permissions
}

#Remove backup file
if (Test-Path $BackupFilePath) {
    Write-Verbose "Removing backup file $BackupFilePath"
    Remove-Item $BackupFilePath
}

#Remove permissions file
if (Test-Path $permissionsFile) {
    Write-Verbose "Removing permissions file $permissionsFile"
    Remove-Item $permissionsFile
}

Creating a new Jenkins job

Configuration check

Let’s reiterate on this and ensure that our Jenkins instance has all the necessary tools to implement the job we have in mind.

Plugins list

The following Jenkins plugins are going to be used in this job:

Agent setup

Ensure that the Jenkins executor(s) you intend to use run under an account that has been granted sysadmin privileges on both the source and target SQL Servers. Make sure to confine the job to those particular executor(s) using labels.

Also, let’s ensure that the executor node(s) have the latest version of dbatools installed:

Install-Module dbatools

Finally, let’s copy our Copy-Database.ps1 script to a local folder under the same exact path across all the Jenkins nodes, and ensure that the job executor has read/write access to that folder. If you want to use a remote folder (file share) instead, make sure that your execution policy is set to RemoteSigned on each node: Set-ExecutionPolicy -ExecutionPolicy RemoteSigned.

For advanced Jenkins users, I would recommend storing the code in a source control repository and retrieving it using the SCM section of the job definition.

Creating the job

Finally, it’s time to create the job! Start by clicking the ‘New Item’ button on the left-hand side of the Jenkins window:


Input a new project name and make sure to choose Freestyle project, which is the most generic Jenkins job type:


In the job configuration window, set the project to be parameterized (click the corresponding checkbox). Add four String parameters (server and database names for both source and destination) and two Boolean parameters (these will represent our optional switches):


The name of each added parameter will become an environment variable, so make sure not to use special characters in the parameter names.


Provide a descriptive definition for the parameters:

  • String parameter 1
    • Name: SourceServer
    • Default value:
    • Description: Full server name of the source instance
    • Trim the string: Yes
  • String parameter 2
    • Name: SourceDatabase
    • Default value:
    • Description: Name of the source database to copy
    • Trim the string: Yes
  • String parameter 3
    • Name: TargetServer
    • Default value:
    • Description: Full server name of the target instance
    • Trim the string: Yes
  • String parameter 4
    • Name: TargetDatabase
    • Default value:
    • Description: Name of the target database. Will be replaced if it exists.
    • Trim the string: Yes
  • Boolean parameter 1
    • Name: KeepPermissions
    • Default value: Unchecked
    • Description: Check if you want to re-apply all the previously existing permissions on top of the restored database
  • Boolean parameter 2
    • Name: DropUsers
    • Default value: Unchecked
    • Description: Check if you want to drop all the users from the target database upon restore.

If you want this project to run only on specific agent nodes, click Restrict where this project can be run and provide a space-separated list of labels in the field:


Now that we have our job parameterized and running only on specific nodes, the only thing left is to pass those parameters to the PowerShell script. In the Build section, click Add build step and select Windows PowerShell:


In the Command section, let’s put a call to our script file, which should already be accessible to the agent nodes, and pass all the parameters from the job:

& D:\Scripts\Copy-Database.ps1 -SourceServer $env:SourceServer `
  -TargetServer $env:TargetServer `
  -SourceDatabase $env:SourceDatabase `
  -TargetDatabase $env:TargetDatabase `
  -KeepPermissions:([bool]::Parse($env:KeepPermissions)) `
  -DropUsers:([bool]::Parse($env:DropUsers)) `
  -BackupFilePath "\\Some\Path\$($env:SourceDatabase).bak" `
  -DBOwner sa `
  -Verbose

Several things to note about this script:

  • $env:SourceServer is a call to an environment variable that was defined by the job parameter.
  • [bool]::Parse($env:KeepPermissions) is used to convert the job’s parameter to [bool]. All job parameters are essentially strings, and boolean parameters will have one of the following values: 'true' or 'false'. We’re simply converting them to [bool] using the static Parse method of the [bool] class.
  • -DropUsers:([bool]::Parse($env:DropUsers)) is used to pass a boolean value to a switch parameter, which requires the colon notation.
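
The conversion itself is plain .NET and easy to verify in any PowerShell console:

```powershell
[bool]::Parse('true')    # True
[bool]::Parse('false')   # False
# Note the difference from a plain cast: [bool]'false' evaluates to True,
# because any non-empty string casts to $true – hence the explicit Parse call
[bool]'false'            # True
```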

Save the job by pressing the Save button on the bottom of the screen.

Running the job

Let’s start the job now! Click Build with Parameters from the job overview screen:


Input all the parameters and start the job:


In the Build History section of the page, you will see the progress of the job. By clicking on it, you will be able to see the details, such as the Console Output:


Conclusion

There are multiple ways of introducing automation to your environment, and PowerShell is definitely one of them. Yet sometimes it’s hard for people to embrace automation without a simple UI that helps them take the first step.

Jenkins is one such tool: it enables anyone to use automated solutions with little to no knowledge of the code that is executed behind the curtain. It can be just a frontend for your PowerShell scripts, or it can become an ultimate tool in your hands that controls multiple aspects of your day-to-day tasks and integrates with your whole enterprise.

Cheers! And stay tuned – there is more to come!

Restore database job – SQL Server Jenkins labs (Part 1)

Hello there, and welcome to SQL Server Jenkins labs!

Some time ago I talked about using Jenkins as a robust scheduling tool, and now it is high time to follow up on that and demonstrate some of the solutions that can be achieved with a combination of PowerShell, dbatools and Jenkins.

The first example to go down that pipe is a simple backup restore job. The job will get the backup history from the source server, build a proper restore sequence and perform the restore on a server of your choosing. The restore step will also:

  • Change file paths of the database files to a default data/log directory
  • Rename database files to match the new database name
  • Optionally, perform a point-in-time recovery

Preparations

First, let’s ensure that our Jenkins instance has all the necessary tools to implement this job.

Plugins list

The following Jenkins plugins are going to be used in this job:

You can set up plugins by clicking Manage Jenkins in the left-hand side menu and choosing Manage Plugins from the list. On the Plugin Manager page, switch to the Available tab and use the Filter textbox to search for the plugins you want to install.


After you have selected the plugins, click Install without restart. Most of the plugins do not require a restart; however, some of them will not work properly until the whole Jenkins instance is restarted.

Agent setup

Ensure that the Jenkins executor(s) you intend to use run under an account that has been granted sysadmin privileges on both the source and target SQL Servers. Depending on your agent type, this can be configured in Manage Jenkins -> Manage Nodes (Windows service agent) or in the Services console on your server (Java agent).

If you have more than one agent, make sure that all of the agents are properly configured; otherwise, set up agent labels that allow you to confine the job to the properly configured executor(s).

Also, let’s ensure that the executor node(s) have the latest version of dbatools installed:

Install-Module dbatools

Creating a new Jenkins job

Start by clicking the ‘New Item’ button on the left-hand side of the Jenkins window:


Input a new project name and make sure to choose Freestyle project, which is the most generic Jenkins job type:


In the job configuration window, set the project to be parameterized (click the corresponding checkbox). Add 5 String parameters by clicking the Add Parameter button; these parameters would represent server and database names for both source and destination databases, as well as a restore time parameter for point-in-time recovery:


The name of each added parameter will become an environment variable, so make sure not to use special characters in the parameter names.


Provide a descriptive definition for the parameters:

  • String parameter 1
    • Name: SourceServer
    • Default value:
    • Description: Full server name of the source instance
    • Trim the string: Yes
  • String parameter 2
    • Name: SourceDatabase
    • Default value:
    • Description: Name of the source database to copy
    • Trim the string: Yes
  • String parameter 3
    • Name: TargetServer
    • Default value:
    • Description: Full server name of the target instance
    • Trim the string: Yes
  • String parameter 4
    • Name: TargetDatabase
    • Default value:
    • Description: Name of the target database. Will be replaced if it exists.
    • Trim the string: Yes
  • String parameter 5
    • Name: ToDate
    • Default value:
    • Description: Specify if you want the database to be restored to a certain time, or leave blank to retrieve the latest backup
    • Trim the string: Yes

If you want this project to run only on specific agent nodes, click Restrict where this project can be run and provide a space-separated list of labels in the field:

2018-08-03_11-00-22

Now that we have our job parameterized and running only on specific nodes, let’s add a PowerShell build step that will perform the actual restore. In the Build section of the job configuration page, click Add build step and select Windows PowerShell:

2018-08-03_12-00-12

In the Command textbox of the new Windows PowerShell item, insert the following code:

$p = @{
    SqlInstance = $env:TargetServer
    Database = $env:TargetDatabase
}
if ($env:ToDate) {
    $p += @{ RestoreTime = $env:ToDate}
}

Get-DbaBackupHistory -SqlInstance $env:SourceServer -Database $env:SourceDatabase | Restore-DbaDatabase @p -WithReplace -ReplaceDbNameInFile -TrustDbBackupHistory


This code calls two functions from the dbatools module:

  • Get-DbaBackupHistory – retrieves backup history from the source server. Parameters:
    • SqlInstance – source SQL Server name
    • Database – source database name, used to search for backup records
  • Restore-DbaDatabase – out of all the backup history records, restores the most relevant files, starting with a full backup and followed by diff and log backups. Parameters:
    • SqlInstance – target SQL Server name
    • Database – desired database name on the target server
    • RestoreTime – restores the database to the specified date/time. This parameter is added dynamically, based on whether the ToDate job parameter was specified.
    • WithReplace – replaces the target database if it exists
    • ReplaceDbNameInFile – replaces the part of the file name that contains the source database name with the new name
    • TrustDbBackupHistory – skips reading backup headers from the files, relying instead on the information from the source server

The parameters of the job are persisted through the execution as environment variables, which is why we have to use the $env:VariableName notation to access them.

Save the job by pressing the Save button at the bottom of the screen.

Running the job

Let’s start the job now! Click Build with Parameters from the job overview screen:

2018-08-03_14-45-02.jpg

Input all the parameters and start the job:

2018-08-03_14-47-07

In the Build History section of the page, you will see the progress of the job. Click on it to see the details, such as the Console Output:

2018-08-03_14-49-02

Conclusion

In this example we used very few moving parts, yet they enabled us to create a simple and effective tool that is going to save us hours of time when it comes to restoring databases.

Stay tuned for the future updates!

Corruption in system object possibly due to schema or catalog inconsistency

I’ve been playing with the database snapshot feature on one of my lab SQL 2016 servers when suddenly the snapshot operations started throwing some weird errors:

Msg 5901, Level 16, State 1, Line 1
One or more recovery units belonging to database 'ci_dbUp' failed to generate a checkpoint. This is typically caused by lack of system resources such as disk or memory, or in some cases due to database corruption. Examine previous entries in the error log for more detailed information on this failure.
Msg 1823, Level 16, State 2, Line 1
A database snapshot cannot be created because it failed to start.
Msg 211, Level 23, State 8, Line 1
Corruption in database ID 8, object ID 41 possibly due to schema or catalog inconsistency. Run DBCC CHECKCATALOG.

Well, hello there. To be honest, the server I was playing on was one of those servers that might easily lose a battle to a modern calculator; in other words, an old SCSI-drive “potato” server with a decent number of cores and non-existent disk performance. That, probably, is what triggered the corruption: I was making multiple attempts at snapshot creation -> restore from snapshot -> remove snapshot in a chain, with a restore operation running in parallel. Each snapshot operation (usually instantaneous) was taking about 30 seconds.

One of them apparently was fatal for the database. So, what’s the plan, Doc?

Diagnostics

Since it’s a pretty small database, I can go with a full-blown DBCC CHECKDB with EXTENDED_LOGICAL_CHECKS (you can consider running lighter checks first, though; see https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-checkdb-transact-sql):

DBCC CHECKDB ('ci_dbUp') WITH NO_INFOMSGS, ALL_ERRORMSGS, EXTENDED_LOGICAL_CHECKS
Msg 5901, Level 16, State 1, Line 1
One or more recovery units belonging to database 'ci_dbUp' failed to generate a checkpoint. This is typically caused by lack of system resources such as disk or memory, or in some cases due to database corruption. Examine previous entries in the error log for more detailed information on this failure.
Msg 1823, Level 16, State 2, Line 1
A database snapshot cannot be created because it failed to start.
Msg 1823, Level 16, State 8, Line 1
A database snapshot cannot be created because it failed to start.
Msg 7928, Level 16, State 1, Line 1
The database snapshot for online checks could not be created. Either the reason is given in a previous error or one of the underlying volumes does not support sparse files or alternate streams. Attempting to get exclusive access to run checks offline.
Msg 211, Level 23, State 8, Line 1
Corruption in database ID 8, object ID 41 possibly due to schema or catalog inconsistency. Run DBCC CHECKCATALOG.

DBCC CHECKCATALOG gives pretty much the same output.

SQL Server logs: same error number, memory dumps, no specifics.

I could try a restore (or a partial restore), but the database is not in a logged recovery model, which would have allowed me to go with a page restore. Also, the msdb.dbo.suspect_pages table shows zero suspect pages, so probably nothing can be done here short of a full restore to the previous night.

Backup source files

Before doing any kind of recovery, it’s always a good idea to create a backup of your database files. A regular backup probably won’t do – you generally want a physical copy of the files – but let’s take one anyway, just in case:

BACKUP DATABASE ci_dbUp TO DISK = 'd:\temp\ci_dbUp.bak'
Msg 5901, Level 16, State 1, Line 1 
One or more recovery units belonging to database 'ci_dbUp' failed to generate a checkpoint. This is typically caused by lack of system resources such as disk or memory, or in some cases due to database corruption. Examine previous entries in the error log for more detailed information on this failure.
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.
Msg 211, Level 23, State 8, Line 1
Corruption in database ID 8, object ID 41 possibly due to schema or catalog inconsistency. Run DBCC CHECKCATALOG.

Hmm, okay. How about a physical file copy? That, however, requires putting the database into OFFLINE mode, which means it probably won’t come back online willingly: databases are required to pass basic consistency checks before coming online, which obviously won’t happen this time. From then on, we would have to work with the database in emergency mode, which disallows any regular access to the database.

ALTER DATABASE ci_dbUp SET OFFLINE WITH ROLLBACK IMMEDIATE

…did not work!

Corruption in database ID 8, object ID 41 possibly due to schema or catalog inconsistency. Run DBCC CHECKCATALOG.

Alrighty then, let’s shut down the whole server and copy the database files while it’s offline. That, of course, worked, but now the database is in a ‘failed to recover’ state.

Surprisingly, SQL Server did not perform any consistency checks and simply put the database into the ‘Restoring’ mode:

The database 'ci_dbUp' is marked RESTORING and is in a state that does not allow recovery to be run.

Hmm, apparently, the re-do log failed to apply after the restart, which means it makes perfect sense to reconstruct the log file. Since I now have copies of the original files, I’ll work on the copies instead of the originals.

Attach the database without the log file

Let’s try to re-attach the files with the ATTACH_REBUILD_LOG parameter:

CREATE DATABASE ci_dbUp2 ON
( name = 'ci_dbUp', filename = 'I:\MSSQL13.MSSQLSERVER\ci_dbUp2.mdf'),
( name = 'ci_dbUp_log', filename = 'J:\MSSQL13.MSSQLSERVER\ci_dbUp2.ldf')
FOR ATTACH_REBUILD_LOG
Command(s) completed successfully.

Would you look at that! Apparently, there was some kind of stale transaction that somehow kept an internal lock on a system table and couldn’t be killed by regular means. The only option was to remove that transaction from the re-do log by rebuilding the transaction log file, which effectively left me without the latest and greatest transactions, but that’s still a pretty good result!

Now that the database has proven to be restorable without data loss inside the data file, we could still try running some DBCC repairs to see if they bring any better results. But at this point we have already recovered the database – all checks return no errors whatsoever; snapshots and backups work as expected.

Running DBCC CHECKDB with repair options

Since the original database is stuck in the restoring state, the only way to bring it online is to create an empty database with the exact same file structure and then substitute its files with the bad ones.

CREATE DATABASE [ci_dbUp_R]
 ON  PRIMARY 
( NAME = N'ci_dbUp', FILENAME = N'I:\MSSQL13.MSSQLSERVER\ci_dbUp_R.mdf' , SIZE = 32768KB , FILEGROWTH = 10%)
 LOG ON 
( NAME = N'ci_dbUp_log', FILENAME = N'J:\MSSQL13.MSSQLSERVER\ci_dbUp_R_log.ldf' , SIZE = 16384KB , FILEGROWTH = 10%)

Now, replace the files with the ones saved earlier:

ALTER DATABASE ci_dbUp_R SET OFFLINE WITH ROLLBACK IMMEDIATE
-- ...copy files...
ALTER DATABASE ci_dbUp_R SET ONLINE

Time for surgery! Let’s enable emergency mode for our database:

ALTER DATABASE ci_dbUp_R SET EMERGENCY

Now it looks like this:

emer

Confirming that the database is still in an inconsistent state:

DBCC CHECKDB ('ci_dbUp_R') WITH NO_INFOMSGS, ALL_ERRORMSGS, EXTENDED_LOGICAL_CHECKS

… and it returned no errors! Let’s try to bring it back online:

ALTER DATABASE ci_dbUp_R SET ONLINE

Command(s) completed successfully.

Awesome! We didn’t even need to run any repairs – re-attaching under a different name completely resolved the issue.

Database recovery is always fun, as long as it’s not a production database. Stay tuned!

Job schedule at a glance

I have often asked myself: could some of my SQL Server jobs be the reason for the database slowness users are currently experiencing? On multiple occasions that was indeed the case, and I had to reschedule some of them because too many were running at the same time. For example, re-indexing and full backups should really never run together, because that puts a huge strain on the server (and also makes your differential backups bigger). To get a clear picture of the jobs running on the server during the week, I created a report that helped me bring order (and justice) to my job schedules.

I present to you Job Schedule Matrix:

2018-01-12_8-55-02

This is an SSRS report that uses a matrix object to display the frequency of the jobs running in your environment. It comes with an ETL script that extracts data from your SQL Servers and uploads it to a database of your choice. It can also support other job types, as long as you write your own ETL script and specify an appropriate Server Type.

Download it from here: https://github.com/nvarscar/schedulematrix-report

It might be a little slow when you tell it to show all of your jobs from a multitude of servers, but otherwise it works relatively quickly. There is also a chart-based version that looks slightly uglier but is even faster overall.

Installation instructions are in the readme, and let me know if you have any problems with it!

PS: if you are using other server types (besides SQL Server), make sure to publish instance names even if the concept does not apply there, as this is what you will eventually see in the report body.

Deploying SQL Server code: the good, the bad, the ugly

I’ve been asked to run anywhere from 50 to 200 deployment scripts more than a few times throughout my career, and it has always been a pain to make sure you run them in the correct order against the proper database. I don’t know about you, but I have never been a fan of mundane labor, even when it comes to deployments to prod.

Obviously, there should already be solutions on the market that let you do this without any manual intervention. But it turns out there are not too many of them, and each has its own framework or even its own scripting requirements. Broadly, there are two methodologies to consider: model-based and script-based deployments.

Model-based approach

In this deployment method, change code is generated by comparing the current database with the database source code stored in a repository, producing transformation scripts for each individual deployment.
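
To make the compare-and-generate idea concrete, here is a deliberately tiny sketch. Real tools (SSDT, Redgate and friends) handle dependencies, data motion, indexes and much more; this toy reduces each schema to a dict of table name → column names, and the “deployment” is whatever statements move the current state toward the desired model. All table and column names below are made up for illustration.

```python
def diff_schema(current, desired):
    """Generate transformation statements that move `current` toward `desired`.

    Each schema is a dict: table name -> set of column names.
    """
    statements = []
    for table, columns in desired.items():
        if table not in current:
            # Table only exists in the model: create it from scratch
            cols = ", ".join(sorted(columns))
            statements.append(f"CREATE TABLE {table} ({cols})")
        else:
            # Table exists in both: add missing columns, drop extra ones
            for col in sorted(columns - current[table]):
                statements.append(f"ALTER TABLE {table} ADD COLUMN {col}")
            for col in sorted(current[table] - columns):
                statements.append(f"ALTER TABLE {table} DROP COLUMN {col}")
    for table in current:
        if table not in desired:
            # Anything outside the model gets removed - the dangerous part
            statements.append(f"DROP TABLE {table}")
    return statements

current = {"Users": {"Id", "Name"}}
desired = {"Users": {"Id", "Name", "Email"}, "Orders": {"Id", "UserId"}}
print(diff_schema(current, desired))
```

Note that the last loop is exactly the behavior described under “the bad” below: objects that are not part of the model are dropped, which is why mixing this methodology with ad-hoc changes is risky.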

The good:

  • Ensures that the database schema is 100% identical across all of your environments. Every deployment might contain different transformation scripts, but the resulting schema will always end up at its finest: all changes to the schema are enforced regardless of the database state.
  • Best possible integration with source control management tools – every object has its own script, easy to branch, easy to deploy – as long as you have the necessary deployment tools.
  • No need to write transformation scripts – they are generated automatically. That means no more sequential database upgrades from version A to version B: all of the changes are implemented organically within one single deployment.
  • Team-oriented: easy to integrate multiple changes at once, as each object is treated independently.
  • Can easily be automated using any of the automation providers: Jenkins, Bamboo, VSTS, you name it. No need to track versions; the deployment scenario is simple and straightforward. Third-party tools usually come with an impressive list of CI plugins and CLI modules.

The bad:

  • Unpredictable transformations. Automatically generated code can be efficient, or it can be anything but. Things like re-creating tables and indexes because of a modified object definition can easily happen.
  • No automatic rollback procedure is possible. Once the code is successfully deployed, there is no way back – the rollback will always be manual, be it a custom script or a restore from backup.
  • Once you start following this path, you should be wary of using any other methodology, because objects that exist outside of the database definition in your source control might be removed from the database during the next model-based deployment. Every team should be on the same page and use the same set of tools.
  • The development and deployment process is somewhat different from the classical script-based approach, and it will take time (and effort) for developers and administrators to adapt to the changes.

The ugly:

  • Black-box deployment – you no longer control which code runs against your database server.
  • Not all frameworks can handle data manipulation statements during deployments. Some let you source-control the entire table content (and enforce it on every deployment), while others only allow importing rows from a plain text file. You can always deploy DML scripts outside of the model-based framework, but then you have to support two independent frameworks.
  • While a strict deployment scenario brings an excellent level of uniformity to your environments, it also brings a lot of limitations, and the biggest of them is that you become dependent on 3rd-party tools and can only operate within the limits defined by those tools.

TL;DR

As a whole, this deployment method is really promising and reliable whenever it comes to managing multiple versions of the schema, especially when multiple teams work on different features at the same time. It helps to integrate all the changes together and allows for Agile deployments, as there is no need to maintain the order of deployments and little to no dependency on the destination database code. However, it introduces some limitations to the deployment process, and while that might not be a big deal for some shops, it can easily become a show-stopper in others.

Deployment tools

There are a number of tools built around this deployment method. Here are the most popular ones:

Script-based approach

This more traditional deployment model relies on a series of transformations to achieve a certain database state. The transformations are described in code and deployed in an order defined by the developer(s).

Deployments in this methodology are usually done in bundles, where one or more SQL scripts are grouped together and represent one or more changes to the database structure. The main issue with automating this kind of deployment is that the scripts must always run in the same exact order; the bundles must be deployed in a specific order as well, which requires the deployment framework to track the state of the database after each deployment.

The most common approach I see is to record the names of the scripts that have been deployed in a table inside the same database:

2017-11-20_16-41-57

All scripts are executed in alphabetical order. Once a script has been executed, it is recorded in the tracking table and will never be executed again, to avoid schema corruption. Another option is to write code that is re-runnable, but a defined order is still a necessity.
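
The journaling mechanism described above fits in a few lines. Here is a toy illustration in Python with SQLite standing in for the target database; the table name SchemaVersions is borrowed from DbUp’s default journal, but everything else (script names, columns) is made up, and real frameworks store more metadata such as checksums and deployment dates.

```python
import sqlite3

def deploy(conn, scripts):
    """Run scripts in alphabetical order, skipping any already journaled."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS SchemaVersions "
        "(ScriptName TEXT PRIMARY KEY, Deployed TEXT)"
    )
    done = {row[0] for row in conn.execute("SELECT ScriptName FROM SchemaVersions")}
    applied = []
    for name in sorted(scripts):           # alphabetical order defines the sequence
        if name in done:
            continue                       # never run the same script twice
        conn.executescript(scripts[name])  # apply the transformation
        conn.execute(
            "INSERT INTO SchemaVersions VALUES (?, datetime('now'))", (name,)
        )
        applied.append(name)
    conn.commit()
    return applied

conn = sqlite3.connect(":memory:")
scripts = {
    "001_create_users.sql": "CREATE TABLE Users (Id INTEGER PRIMARY KEY, Name TEXT);",
    "002_add_email.sql": "ALTER TABLE Users ADD COLUMN Email TEXT;",
}
print(deploy(conn, scripts))  # both scripts run on the first pass
print(deploy(conn, scripts))  # second pass is a no-op: []
```

The second call returning nothing is the whole point: the journal, not the developer, remembers the database state.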

The good:

  • Can be adopted pretty much at any moment, as it does not introduce any alterations to the existing manual deployment process.
  • All transformations are visible in the code and will be identical in every single environment.
  • Custom DML scenarios can be easily added to the deployment scenario.
  • Developers can use any means of developing the code.
  • Simple structure allows for deep customization (transaction handling, pre/post-scripts, variable replacement, etc.).
  • Rollback scripts can be a part of the deployment bundle.

The bad:

  • Requires manual packaging most of the time – grouping scripts into bundles and defining the execution order.
  • Any change made outside of the deployment framework has a chance to fail the deployment, as the scripts expect the database schema to be in a certain state.
  • If the database has more than one developer, there is a chance that two different scripts will contain conflicting statements.
  • More opportunities for malicious code to get into the database through deployment scripts. Pre-deployment code analysis might be a good idea.

The ugly:

  • You can never be 100% sure that the database schema is valid. Manual changes will only surface once a deployment has failed due to an inconsistency in the database schema.
  • All transformations have to be applied in due order. Even if pending changes are nullified further on by the commands in a next-in-line bundle, they still have to be applied.
  • A schema version table will be created in each and every database. Both developers and administrators should be aware of it, as it controls the deployment behavior.

TL;DR

The most attractive part of this methodology is that it doesn’t require any special development tools and, at the same time, does not introduce any limitations to the deployment scenario. Any deployment that involves running scripts can be integrated into a deployment bundle and deployed automatically, as long as you have the corresponding deployment tools. It is very straightforward and flexible, but comes with the caveat of more manual configuration/deployment steps compared to model-based deployments.

Deployment tools

It is surprising, but there are not that many deployment tools you can use out of the box for this kind of deployment. The reason is probably that the concept is very simple, and it can be easier to write a home-grown framework in your favorite scripting language than to adopt an existing solution. Here are some of them:

  • Flyway (free executable)
  • DbUp (open-source .Net library, requires wrapper code to initiate the deployment)
  • dbdeploy (free java library, initialized by an ant script)
  • MigrateDB (free executable, uses custom script format)
  • AutoPatch (open-source java application)

Post Scriptum

There are a few other resources that are definitely worth checking out if you want to learn more about automated deployments:


Managing SQL Server login passwords with #dbatools

Just had a fantastic use case for one of the dbatools commands, Get-DbaLogin.

Imagine that you need to change the SQL password for a single login on all of the production machines in your environment. You could write a query and run it against multiple servers, but for that you would need all of your production servers registered under the same Registered Servers folder in SSMS. If that’s not the case, you’d have to run it multiple times. Let’s see how easy it is in a PowerShell console:

#Using CMS - get all the servers with 'prod' in the Urn path
$servers = (Get-DbaRegisteredServer -SqlInstance MyCMSInstanceName | Where-Object { $_.Urn -match 'prod' }).ServerName
#Or manually: $servers = 'server1,server2,server3'.Split(',')
#Get login objects from each server
$logins = Get-DbaLogin -SqlInstance $servers -Login accountname
#Set passwords
foreach ($l in $logins) { $l.ChangePassword('MySecureP@ssw0rd'); $l.Alter(); Write-Host "Changed password on" $l.SqlInstance }
#Check status
$logins.Refresh()
$logins | Select-Object SqlInstance, DateLastModified, IsLocked

What if some of the accounts became locked because of apps constantly hitting the servers in the background? In this case, SSMS would force you to change the password when unlocking the account (or to disable the password policy). If we check the ChangePassword method, we will see several overloads, and among them ones that let us change the password and unlock the account at the same time:

$logins[0].ChangePassword

OverloadDefinitions
-------------------
void ChangePassword(string newPassword)
void ChangePassword(securestring newPassword)
void ChangePassword(string oldPassword, string newPassword)
void ChangePassword(securestring oldPassword, securestring newPassword)
void ChangePassword(string newPassword, bool unlock, bool mustChange)
void ChangePassword(securestring newPassword, bool unlock, bool mustChange)

Let’s ensure that all the accounts are unlocked and the password is not required to be changed:

foreach ($l in $logins) { $l.ChangePassword('MySecureP@ssw0rd', $true, $false); $l.Alter(); Write-Host "Changed password on" $l.SqlInstance }

Enjoy!

Dealing with hashed SQL passwords in Powershell

Let’s talk real quick about creating logins with hashed passwords in PowerShell. First things first: it is absolutely not necessary to calculate a hash if you just want to use a plain text password. Just use the following code:

#Create SMO connection to our SQL Server:
$server = New-Object Microsoft.SqlServer.Management.Smo.Server('myserver\myinstance')
#Create new Login object:
$newLogin = New-Object Microsoft.SqlServer.Management.Smo.Login($server, 'myLoginName')
$newLogin.LoginType = "SqlLogin"
$newLogin.Create('Pa$$w0rd!')

If we check the .Create method’s overload definitions, we see the following:

> $newLogin.Create

OverloadDefinitions
-------------------
void Create()
void Create(string password)
void Create(securestring password)
void Create(string password, Microsoft.SqlServer.Management.Smo.LoginCreateOptions options)
void Create(securestring password, Microsoft.SqlServer.Management.Smo.LoginCreateOptions options)
void ICreatable.Create()

Let’s go through each of them:

  • Create() – handles passwordless logins: Windows logins, and logins based on certificates and asymmetric keys
  • Create(string password) – takes care of the scenario above: plain text password
  • Create(securestring password) – will accept [securestring] object instead of [string]. One can always convert a string into a securestring using ConvertTo-SecureString $string -AsPlainText -Force

Let’s take a closer look at the last two overloads, which, along with the previously mentioned [string]/[securestring] password, use the Microsoft.SqlServer.Management.Smo.LoginCreateOptions class. Let’s check what kind of class it is:

> [Microsoft.SqlServer.Management.Smo.LoginCreateOptions]

IsPublic IsSerial Name                                     BaseType
-------- -------- ----                                     --------
True     True     LoginCreateOptions                       System.Enum

# Since it's enum, we can check its members
> [Microsoft.SqlServer.Management.Smo.LoginCreateOptions].GetMembers()|where { $_.isStatic -eq $true } | select Name

Name
----
None
IsHashed
MustChange

Turns out, we can pass a hashed value instead of a plain text password. But how do we generate the hash value? I did some research, and the algorithm doesn’t seem too complicated. This article describes a way to create a password hash using T-SQL, so why not replicate it in PowerShell? The only thing we need is a class that performs SHA-* hashing for us, and [Security.Cryptography.HashAlgorithm] is our choice here.
Now, according to the same article, the hashing algorithm has changed twice in the history of SQL Server. It’s totally up to you whether you want to support previous versions of SQL Server in your code, but to achieve maximum compatibility, I will recreate all three methods in the code below.
First, we need to know which algorithm to use by defining the $SqlMajorVersion variable:

if ($SqlMajorVersion -lt 11) { 
	$algorithm = 'SHA1' 
	$hashVersion = '0100'
}
else { 
	$algorithm = 'SHA512'
	$hashVersion = '0200'
}

As you can see, the hash algorithm was switched to SHA512 starting with SQL 2012 (v11). The first two bytes of the hash string point to the version of the hashing algorithm (0200 for SHA512) – this is how SQL Server knows which algorithm was used to create the hash.
Next, we need 4 bytes of salt to randomize the hashing algorithm. We will also convert it to a hex string, which will be used later in the output.

[byte[]]$byteSalt = $null
0 .. 3 | ForEach-Object { $byteSalt += Get-Random -Minimum 0 -Maximum 255 }
[string]$stringSalt = ""
$byteSalt | ForEach-Object { $stringSalt += ("{0:X}" -f $_).PadLeft(2, "0") }

It’s time to create our hash! The hash function accepts bytes, so our string should be converted to bytes using Unicode (UTF-16LE) encoding. The salt bytes are appended right after the password bytes to ensure that even empty-string hashes differ depending on the salt.

$plainPassword = 'MyV3rySecur3P@ssw0rd!'
#Get byte representation of the password string
$enc = [system.Text.Encoding]::Unicode
$data = $enc.GetBytes($plainPassword)
#Run hash algorithm
$hash = [Security.Cryptography.HashAlgorithm]::Create($algorithm)
$bytes = $hash.ComputeHash($data+$byteSalt)

All right! Now that we have our hashed bytes, let’s convert them to a hex string and get our password hash. First we add the algorithm marker, then the salt, then the hash itself and, finally (if we need it), an uppercase hash for SQL 2000 and older instances (ever wondered how SQL 2000 passwords can be case-insensitive?):

#Construct hex string
$hashString = "0x$hashVersion$stringSalt"
$bytes | ForEach-Object { $hashString += ("{0:X2}" -f $_).PadLeft(2, "0") }
#Add UPPERCASE hash for SQL 2000 and lower
if ($SqlMajorVersion -lt 9) {
	$data = $enc.GetBytes($plainPassword.ToUpper())
	$bytes = $hash.ComputeHash($data+$byteSalt)
	$bytes | ForEach-Object { $hashString += ("{0:X2}" -f $_).PadLeft(2, "0") }
}
$hashString

Now we can create our login using the hash string above!

$newLogin.Create($hashString, [Microsoft.SqlServer.Management.Smo.LoginCreateOptions]::IsHashed)

And that’s it! At the end of this post you can find the whole script aggregated into a function that accepts three parameters: the password, the SQL major version and (optionally) the salt.
This whole research was conducted after I decided that passing plain text passwords across the network is too pedestrian and started looking for other options. The function now supports the login creation process in the New-DbaLogin cmdlet, one of the many functions included in the dbatools PowerShell module.
Stay tuned!

function Get-PasswordHash {
<#
	.SYNOPSIS
	Generates a password hash for SQL Server login
	
	.DESCRIPTION
	Generates a hash string based on the plaintext or securestring password and a SQL Server version. Salt is optional
		
	.PARAMETER Password
	Either plain text or Securestring password
	
	.PARAMETER SqlMajorVersion
	Major version of the SQL Server. Defines the hash algorithm.
	
	.PARAMETER byteSalt
	Optional. Inserts custom salt into the hash instead of randomly generating new salt
		
	.NOTES
	Tags: Login, Internal
	Author: Kirill Kravtsov (@nvarscar)
	dbatools PowerShell module (https://dbatools.io, clemaire@gmail.com)
	Copyright (C) 2016 Chrissy LeMaire
	
	This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
	
	This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
	
	You should have received a copy of the GNU General Public License along with this program.  If not, see .

	
	.EXAMPLE
	Get-PasswordHash $securePassword 11
	
	Generates password hash for SQL 2012
	
	.EXAMPLE
	Get-PasswordHash $securePassword 9 $byte
	
	Generates password hash for SQL 2005 using custom salt from the $byte variable
	
#>
	Param (
		[object]$Password,
		$SqlMajorVersion,
		[byte[]]$byteSalt
	)
	#Choose hash algorithm
	if ($SqlMajorVersion -lt 11) { 
		$algorithm = 'SHA1' 
		$hashVersion = '0100'
	}
	else { 
		$algorithm = 'SHA512'
		$hashVersion = '0200'
	}
	
	#Generate salt	
	if (!$byteSalt) {
		0 .. 3 | ForEach-Object { $byteSalt += Get-Random -Minimum 0 -Maximum 255 }
	}
	
	#Convert salt to a hex string
	[string]$stringSalt = ""
	$byteSalt | ForEach-Object { $stringSalt += ("{0:X}" -f $_).PadLeft(2, "0") }
	
	#Extract password
	if ($Password.GetType().Name -eq 'SecureString') {
		$cred = New-Object System.Management.Automation.PSCredential -ArgumentList 'foo', $Password
		$plainPassword = $cred.GetNetworkCredential().Password
	}
	else {
		$plainPassword = $Password
	}
	#Get byte representation of the password string
	$enc = [system.Text.Encoding]::Unicode
	$data = $enc.GetBytes($plainPassword)
	#Run hash algorithm
	$hash = [Security.Cryptography.HashAlgorithm]::Create($algorithm)
	$bytes = $hash.ComputeHash($data+$byteSalt)
	#Construct hex string
	$hashString = "0x$hashVersion$stringSalt"
	$bytes | ForEach-Object { $hashString += ("{0:X2}" -f $_).PadLeft(2, "0") }
	#Add UPPERCASE hash for SQL 2000 and lower
	if ($SqlMajorVersion -lt 9) {
		$data = $enc.GetBytes($plainPassword.ToUpper())
		$bytes = $hash.ComputeHash($data+$byteSalt)
		$bytes | ForEach-Object { $hashString += ("{0:X2}" -f $_).PadLeft(2, "0") }
	}
	Return $hashString
}
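
For anyone who wants to cross-check the logic outside of PowerShell, the same construction can be expressed in a few lines of Python with hashlib. This is just a transcription of the algorithm above (version marker + salt + SHA hash of the UTF-16LE password bytes with the salt appended); whether it matches your server byte for byte still depends on the algorithm description referenced earlier. The function name and default version are my own choices for the sketch.

```python
import hashlib

def sql_password_hash(password: str, salt: bytes, major_version: int = 13) -> str:
    """Recreate the password hash string the PowerShell function above builds."""
    # SQL 2012 (v11) and later use SHA-512 with the 0200 marker; older use SHA-1 / 0100
    algo, marker = ("sha512", "0200") if major_version >= 11 else ("sha1", "0100")
    # SQL Server hashes the UTF-16LE password bytes with the salt appended
    data = password.encode("utf-16-le") + salt
    digest = hashlib.new(algo, data).hexdigest().upper()
    out = "0x" + marker + salt.hex().upper() + digest
    if major_version < 9:
        # SQL 2000 and older also store a hash of the UPPERCASE password,
        # which is what made those passwords case-insensitive
        upper = password.upper().encode("utf-16-le") + salt
        out += hashlib.new(algo, upper).hexdigest().upper()
    return out

h = sql_password_hash("MyV3rySecur3P@ssw0rd!", bytes([1, 2, 3, 4]))
print(h[:14])  # version marker + salt: 0x020001020304
```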