Corruption in system object possibly due to schema or catalog inconsistency

I’ve been playing with the database snapshot feature on one of my lab SQL 2016 servers when, all of a sudden, snapshot operations started throwing weird errors:

Msg 5901, Level 16, State 1, Line 1
 One or more recovery units belonging to database 'ci_dbUp' failed to generate a checkpoint. This is typically caused by lack of system resources such as disk or memory, or in some cases due to database corruption. Examine previous entries in the error log for more detailed information on this failure.
 Msg 1823, Level 16, State 2, Line 1
 A database snapshot cannot be created because it failed to start.
 Msg 211, Level 23, State 8, Line 1
 Corruption in database ID 8, object ID 41 possibly due to schema or catalog inconsistency. Run DBCC CHECKCATALOG.

Well, hello there. To be honest, the server I was playing on was one of those that could easily lose a battle to a modern calculator – in other words, an old “potato” box with SCSI drives, a decent number of cores, and non-existent disk performance. That is probably what triggered the corruption: I was running multiple iterations of snapshot creation -> restore from snapshot -> remove snapshot, with a restore operation running in parallel. Each snapshot operation (normally instantaneous) was taking about 30 seconds.

One of them apparently was fatal for the database. So, what’s the plan, Doc?

Diagnostics

Since it’s a pretty small database, I can go with full-blown DBCC CHECKDB with EXTENDED_LOGICAL_CHECKS (you can consider running lighter checks first though, see https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-checkdb-transact-sql):

DBCC CHECKDB ('ci_dbUp') WITH NO_INFOMSGS, ALL_ERRORMSGS, EXTENDED_LOGICAL_CHECKS
Msg 5901, Level 16, State 1, Line 1
One or more recovery units belonging to database 'ci_dbUp' failed to generate a checkpoint. This is typically caused by lack of system resources such as disk or memory, or in some cases due to database corruption. Examine previous entries in the error log for more detailed information on this failure.
Msg 1823, Level 16, State 2, Line 1
A database snapshot cannot be created because it failed to start.
Msg 1823, Level 16, State 8, Line 1
A database snapshot cannot be created because it failed to start.
Msg 7928, Level 16, State 1, Line 1
The database snapshot for online checks could not be created. Either the reason is given in a previous error or one of the underlying volumes does not support sparse files or alternate streams. Attempting to get exclusive access to run checks offline.
Msg 211, Level 23, State 8, Line 1
Corruption in database ID 8, object ID 41 possibly due to schema or catalog inconsistency. Run DBCC CHECKCATALOG.

DBCC CHECKCATALOG gives pretty much the same output.

SQL Server logs: same error number, memory dumps, no specifics.

I could try a restore (or a partial restore), but the database is not in the FULL recovery model, which would be required for a page restore. Also, the msdb.dbo.suspect_pages table shows zero suspect pages, so there is probably nothing to be done here short of a full restore to the previous night’s backup.

Backup source files

Before attempting any kind of recovery, it’s always a good idea to back up your database files. A regular backup probably won’t do – you generally want a physical copy of the files – but let’s try to take one anyway, just in case:

BACKUP DATABASE ci_dbUp TO DISK = 'd:\temp\ci_dbUp.bak'
Msg 5901, Level 16, State 1, Line 1 
One or more recovery units belonging to database 'ci_dbUp' failed to generate a checkpoint. This is typically caused by lack of system resources such as disk or memory, or in some cases due to database corruption. Examine previous entries in the error log for more detailed information on this failure.
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.
Msg 211, Level 23, State 8, Line 1
Corruption in database ID 8, object ID 41 possibly due to schema or catalog inconsistency. Run DBCC CHECKCATALOG.

Hmm, okay. How about a physical file copy? That, however, requires putting the database into OFFLINE mode, and it probably won’t come back online willingly: every database must pass basic consistency checks before coming back online, which obviously isn’t going to happen this time. From that point on we’d have to work with the database in EMERGENCY mode, which restricts access to members of the sysadmin role.

ALTER DATABASE ci_dbUp SET OFFLINE WITH ROLLBACK IMMEDIATE

…did not work!

Corruption in database ID 8, object ID 41 possibly due to schema or catalog inconsistency. Run DBCC CHECKCATALOG.

Alrighty then, let’s shut down the whole SQL Server instance and copy the database files while the service is stopped. That, of course, worked, but the database did not come back healthy after the restart.

Surprisingly, SQL Server did not perform any consistency checks and simply left the database in the ‘Restoring’ state:

The database 'ci_dbUp' is marked RESTORING and is in a state that does not allow recovery to be run.

Hmm – apparently, the redo log failed to apply after the restart, which means it makes perfect sense to try rebuilding the log file. Since I now have the original files copied, I will work on the copies instead of the originals.

Attach the database without the log file

Let’s try to re-attach the files with ATTACH_REBUILD_LOG parameter:

CREATE DATABASE ci_dbUp2 ON
( name = 'ci_dbUp', filename = 'I:\MSSQL13.MSSQLSERVER\ci_dbUp2.mdf'),
( name = 'ci_dbUp_log', filename = 'J:\MSSQL13.MSSQLSERVER\ci_dbUp2.ldf')
FOR ATTACH_REBUILD_LOG
Command(s) completed successfully.

Would you look at that! Apparently, there was some kind of stale transaction that held an internal lock on a system table and couldn’t be killed by regular means. The only option was to remove that transaction from the redo log by rebuilding the transaction log file, which effectively left me without the latest and greatest transactions – but that’s still a pretty good result!

Now that the data file has proven to be attachable without any data loss, we could go back and run some DBCC checks against the original files to see if they fare any better. But as far as this copy is concerned, the database is already recovered – all checks return no errors whatsoever, and snapshots and backups work as expected.

Running DBCC CHECKDB with repair options

Since the original database is stuck in the ‘Restoring’ state, the only way to bring those files online is to create an empty database with the exact same file structure and then substitute its files with the damaged ones.

CREATE DATABASE [ci_dbUp_R]
 ON  PRIMARY 
( NAME = N'ci_dbUp', FILENAME = N'I:\MSSQL13.MSSQLSERVER\ci_dbUp_R.mdf' , SIZE = 32768KB , FILEGROWTH = 10%)
 LOG ON 
( NAME = N'ci_dbUp_log', FILENAME = N'J:\MSSQL13.MSSQLSERVER\ci_dbUp_R_log.ldf' , SIZE = 16384KB , FILEGROWTH = 10%)

Now, replace the files with the ones I’ve saved before:

ALTER DATABASE ci_dbUp_R SET OFFLINE WITH ROLLBACK IMMEDIATE
-- ...copy files...
ALTER DATABASE ci_dbUp_R SET ONLINE

Time for surgery! Let’s enable emergency mode for our database:

ALTER DATABASE ci_dbUp_R SET EMERGENCY

Now it looks like this:

[Screenshot: ci_dbUp_R shown in EMERGENCY mode in Object Explorer]

Let’s check whether the database is still in an inconsistent state:

DBCC CHECKDB ('ci_dbUp_R') WITH NO_INFOMSGS, ALL_ERRORMSGS, EXTENDED_LOGICAL_CHECKS

… and it returned no errors! Let’s try to bring it back online:

ALTER DATABASE ci_dbUp_R SET ONLINE

Command(s) completed successfully.

Awesome! We didn’t even need to run the recovery – re-attaching under a different name has completely resolved the issue.

Database recovery is always fun, as long as it’s not the production one. Stay tuned!

Job schedule at a glance

I have often asked myself: can some of the SQL Server jobs I run be the reason for the database slowness users are currently experiencing? On multiple occasions that was indeed the case, and I had to reschedule some of them because too many were running at the same time. For example, re-indexing and full backups should really never run together, because that puts a huge strain on the server (and also makes your differential backups bigger). To get a clear picture of the jobs running on the server during the week, I created a report that helped me bring order (and justice) to my job schedules.

I present to you Job Schedule Matrix:

[Screenshot: the Job Schedule Matrix report]

This is an SSRS report that uses a matrix object to display the frequency of jobs running in your environment. It comes with an ETL script that extracts data from your SQL Servers and uploads it to the database of your choice. It can also support other job types, as long as you write your own ETL script and specify an appropriate Server Type.

Download it from here: https://github.com/nvarscar/schedulematrix-report

It might be a little slow when you tell it to show all of your jobs from a multitude of servers, but otherwise it works relatively quickly. There is also a chart-based version that looks slightly uglier but is even faster overall.

Installation instructions are in the readme, and let me know if you have any problems with it!

PS: if you are using other server types (besides SQL Server), make sure to publish instance names even if there is no such concept for that type, as this is what you will eventually see in the report body.

Deploying SQL Server code: the good, the bad, the ugly

I’ve been asked to run anywhere from 50 to 200 deployment scripts more times than I can count throughout my career, and it has always been a pain to make sure you are running them in the correct order against the proper database… I don’t know about you, but I was never a fan of mundane labor, even when it comes to deployments to prod.

Obviously, there should already be solutions on the market that let you do this without any manual intervention. But it turns out there are not too many of them, and each has its own framework or even its own scripting requirements. Broadly, there are two methodologies to consider: model-based and script-based deployments.

Model-based approach

In this deployment method, change code is generated by comparing the current database with the source code of the database stored in a repository, producing transformation scripts for each individual deployment.
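To make the idea concrete, here is a deliberately naive sketch in Python of what such a comparison engine does under the hood. The schema representation and function are hypothetical simplifications of mine – real tools diff full object definitions, not just column lists:

```python
def diff_schema(current, desired):
    """Generate transformation statements that turn the current schema into
    the desired one; schemas are {table: {column: type}} mappings."""
    statements = []
    for table, columns in desired.items():
        if table not in current:
            cols = ', '.join(f'{c} {t}' for c, t in columns.items())
            statements.append(f'CREATE TABLE {table} ({cols})')
            continue
        for column, col_type in columns.items():
            if column not in current[table]:
                statements.append(f'ALTER TABLE {table} ADD {column} {col_type}')
            elif current[table][column] != col_type:
                statements.append(f'ALTER TABLE {table} ALTER COLUMN {column} {col_type}')
    for table in current:
        if table not in desired:  # objects missing from the model get dropped
            statements.append(f'DROP TABLE {table}')
    return statements
```

Note the last loop: anything that exists in the database but not in the model is removed – which is exactly why mixing this methodology with out-of-band changes is risky, as discussed below.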

The good:

  • Ensures that the database schema is 100% the same across all of your environments. Every deployment might contain different transformation scripts, but the resulting schema will always be in its intended state: all schema changes are enforced regardless of the database’s starting point.
  • Best possible integration with the source control management tools – every object has its own script, easy to branch out, easy to deploy – as long as you have necessary deployment tools.
  • No need to write transformation scripts – they are generated automatically. That means no more sequential database upgrades from version A to version B: all changes are applied organically within one single deployment.
  • Team-oriented: easy to integrate multiple changes at once, as each object is treated independently.
  • Can easily be automated using any of the automation providers: Jenkins, Bamboo, VSTS, you name it. No need to track versions; the deployment scenario is simple and straightforward. Third-party tools usually come with an impressive list of CI plugins and CLI modules.

The bad:

  • Unpredictable transformations. Automatically generated code can be efficient – or it can be anything but. Things like re-creating entire tables and indexes because of a modified object definition can easily happen.
  • No automatic rollback procedure possible. Once the code is successfully deployed, there is no way back – the rollback will always be manual, be it a custom script or restore from a backup.
  • Once you start down this path, you should be wary of mixing in any other methodology, because objects that exist outside of the database definition in your source control might be removed during the next model-based deployment. Every team should be on the same page and using the same set of tools.
  • The development and deployment process is somewhat different from the classical script-based approach and it will take time (and effort) for developers and administrators to adapt to the changes.

The ugly:

  • Black box deployment – you no longer control which code you run against your database server.
  • Not all frameworks can handle data manipulation statements during deployments. Some let you source-control entire table contents (and enforce them on every deployment), while others only allow importing rows from a plain-text file. You can always deploy DML scripts outside of the model-based framework, but then you have two independent frameworks to support.
  • While a strict deployment scenario brings an excellent level of uniformity across your environments, it also brings plenty of limitations – the biggest being that you become dependent on third-party tools and can only operate within the limits those tools define.

TL;DR

As a whole, this deployment method looks really promising and reliable whenever it comes to managing multiple versions of a schema, especially when multiple teams are working on different features at the same time. It really helps integrate all the changes together and allows for Agile deployments, as there is no need to maintain the order of deployments and little to no dependency on the destination database’s code. However, it introduces some limitations to the deployment process, and while that might not be a big deal for some shops, it can easily become a show-stopper for others.

Deployment tools

There is a number of tools built around this deployment method. Here are the most popular ones:

Script-based approach

This more traditional deployment model relies on a series of transformations to reach a certain database state. The transformations are described in code and deployed in an order defined by the developer(s).

Deployments in this methodology are usually done in bundles, where one or more SQL scripts are grouped together and represent one or more changes to the database structure. The main issue with automating this kind of deployment is that the scripts must always run in the same exact order; the bundles themselves must also be deployed in a specific order, which requires the deployment framework to track the state of the database after each deployment.

The most common way I see around is to save script names that have been deployed to the database into a table within the same database:

[Screenshot: a schema version tracking table]

All scripts are executed in alphabetical order. Once a script has been executed, it is recorded in the tracking table and will never be executed again, to avoid schema corruption. Another option is to write re-runnable code, but a defined order is still a necessity.
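The whole mechanism fits in a few lines. Here is a toy sketch of such a script runner in Python, using SQLite as a stand-in database; the schema_version table name and function shape are mine, not from any particular framework:

```python
import sqlite3
from pathlib import Path

def deploy(db, script_dir):
    """Run pending .sql scripts in alphabetical order, recording each one
    in a tracking table so it is never executed twice."""
    db.execute("CREATE TABLE IF NOT EXISTS schema_version (script_name TEXT PRIMARY KEY)")
    already_run = {row[0] for row in db.execute("SELECT script_name FROM schema_version")}
    executed = []
    for script in sorted(Path(script_dir).glob("*.sql")):  # alphabetical order
        if script.name in already_run:
            continue  # already deployed, skip
        db.executescript(script.read_text())
        db.execute("INSERT INTO schema_version (script_name) VALUES (?)", (script.name,))
        executed.append(script.name)
    db.commit()
    return executed
```

Running it a second time is a no-op: every script is already in the tracking table, so nothing is executed.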

The good:

  • Can be adopted pretty much at any moment, as it does not introduce any alterations to the existing manual deployment process.
  • All transformations are visible in the code and will be identical in every single environment.
  • Custom DML scenarios can be easily added to the deployment scenario.
  • Developers can use any means of developing the code.
  • Simple structure allows for deep customization (transaction handling, pre/post-scripts, variable replacement, etc.).
  • Rollback scripts can be a part of the deployment bundle.

The bad:

  • Requires manual packaging most of the time – grouping scripts together into bundles and defining execution order.
  • Any change outside of the deployment framework has a chance to fail the deployment, as the scripts are expecting the database schema to be in a certain state.
  • If the database has more than one developer, there is a chance that two different scripts would have conflicting statements.
  • More opportunities for malicious code to get into the database due to using deployment scripts. Pre-deployment code analysis might be a good idea.

The ugly:

  • You can never be 100% sure that the database schema is valid. Manual changes will only surface once a deployment has failed due to an inconsistency in the database schema.
  • All transformations have to be applied in order. Even if pending changes are nullified by later commands in a subsequent bundle, they still have to be applied first.
  • Schema version table will be created in each and every database. Both developers and administrators should be aware of it, as it controls the deployment behavior.

TL;DR

The most attractive part of this particular methodology is that it doesn’t require any special development tools and, at the same time, doesn’t introduce any limitations to the deployment scenario. Any deployment that involves running scripts can be integrated into a deployment bundle and deployed automatically, as long as you have the corresponding deployment tools. It is very straightforward and flexible, but comes with the caveat of more manual configuration and deployment steps compared to the model-based approach.

Deployment tools

It is surprising, but there are not that many deployment tools you can use out of the box for this kind of deployment – probably because the concept is so simple that it is often easier to write a home-grown framework in your favorite scripting language than to adopt an existing solution. Here are some of them:

  • Flyway (free executable)
  • DbUp (open-source .Net library, requires wrapper code to initiate the deployment)
  • dbdeploy (free java library, initialized by an ant script)
  • MigrateDB (free executable, uses custom script format)
  • AutoPatch (open-source java application)

Post Scriptum

There are a few other resources that are definitely worth checking if you want to learn more about automated deployments:

 

Managing SQL Server login passwords with #dbatools

Just had a fantastic use case for one of the dbatools commands, Get-DbaLogin.

Imagine that you need to change the SQL password for a single login on all of the production machines in your environment. You could write a query and run it against multiple servers, but for that you would have to have all of your production servers registered under the same Registered Servers folder in SSMS. If that’s not the case, you’ll have to run it multiple times. Let’s see how easy this is in a PowerShell console:

#using CMS - getting all the servers with 'prod' in the urn path
$servers = (Get-DbaRegisteredServer -SqlInstance MyCMSInstanceName | Where { $_.Urn -match 'prod' } ).ServerName
#or manually: $servers = 'server1,server2,server3'.Split(',')
#Get login objects from each server
$logins = Get-DbaLogin -SqlInstance $servers -Login accountname
#Set passwords
foreach ($l in $logins) { $l.ChangePassword('MySecureP@ssw0rd'); $l.Alter(); Write-host "Changed password on" $l.SqlInstance }
#Check status
$logins.Refresh()
$logins | Select SqlInstance, DateLastModified, IsLocked

What if some of the accounts got locked out because apps keep hitting the servers in the background? In this case SSMS would force you to change the password when unlocking the account (or to disable the password policy). If we check the ChangePassword method, we’ll see that it has several overloads, and some of them let us change the password and unlock the account at the same time:

$logins[0].ChangePassword

OverloadDefinitions
-------------------
void ChangePassword(string newPassword)
void ChangePassword(securestring newPassword)
void ChangePassword(string oldPassword, string newPassword)
void ChangePassword(securestring oldPassword, securestring newPassword)
void ChangePassword(string newPassword, bool unlock, bool mustChange)
void ChangePassword(securestring newPassword, bool unlock, bool mustChange)

Let’s ensure that all the accounts are unlocked and the password is not required to be changed:

foreach ($l in $logins) { $l.ChangePassword('MySecureP@ssw0rd', $true, $false); $l.Alter(); Write-host "Changed password on" $l.SqlInstance }

Enjoy!

Dealing with hashed SQL passwords in Powershell

Let’s talk real quick about creating logins with hashed passwords in PowerShell. First things first: it is absolutely unnecessary to calculate a hash if you just want to use a plain-text password. Just use the following code:

#Create SMO connection to our SQL Server :
$server = New-Object Microsoft.SqlServer.Management.Smo.Server('myserver\myinstance')
#Create new Login object:
$newLogin = New-Object Microsoft.SqlServer.Management.Smo.Login($server, 'myLoginName')
$newLogin.LoginType = "SqlLogin"
$newLogin.Create('Pa$$w0rd!')

If we check the .Create method’s overload definitions, we see the following:

> $newLogin.Create

OverloadDefinitions
-------------------
void Create()
void Create(string password)
void Create(securestring password)
void Create(string password, Microsoft.SqlServer.Management.Smo.LoginCreateOptions options)
void Create(securestring password, Microsoft.SqlServer.Management.Smo.LoginCreateOptions options)
void ICreatable.Create()

Let’s go through each of them:

  • Create() – handles passwordless logins: Windows logins, and logins based on certificates and asymmetric keys
  • Create(string password) – takes care of the scenario above: plain text password
  • Create(securestring password) – will accept [securestring] object instead of [string]. One can always convert a string into a securestring using ConvertTo-SecureString $string -AsPlainText -Force

Let’s take a closer look at the last two overloads, which, along with the previously mentioned [string]/[securestring] password, use the Microsoft.SqlServer.Management.Smo.LoginCreateOptions class. Let’s check what kind of class it is:

> [Microsoft.SqlServer.Management.Smo.LoginCreateOptions]

IsPublic IsSerial Name                                     BaseType
-------- -------- ----                                     --------
True     True     LoginCreateOptions                       System.Enum

# Since it's enum, we can check its members
> [Microsoft.SqlServer.Management.Smo.LoginCreateOptions].GetMembers()|where { $_.isStatic -eq $true } | select Name

Name
----
None
IsHashed
MustChange

Turns out, we can pass a hashed value instead of a plain-text password. But how do we generate the hash? I did some research, and the algorithm doesn’t seem too complicated. This article describes a way to create a password hash using T-SQL, so why not replicate it in PowerShell? The only thing we need is a class that performs SHA-* hashing for us, and [Security.Cryptography.HashAlgorithm] is our choice here.
Now, according to the same article, the hashing algorithm has changed twice over the history of SQL Server. It’s totally up to you whether to support previous versions of SQL Server in your code, but for maximum compatibility I will recreate all three variants in the code below.
First, we need to know which algorithm to use, based on the $SqlMajorVersion variable:

if ($SqlMajorVersion -lt 11) { 
	$algorithm = 'SHA1' 
	$hashVersion = '0100'
}
else { 
	$algorithm = 'SHA512'
	$hashVersion = '0200'
}

As you can see, the hash algorithm was switched to SHA512 starting with SQL 2012 (v11). The first two bytes of the hash string indicate the version of the hashing algorithm (0200 for SHA512) – this is how SQL Server knows which algorithm was used to create the hash.
Next, we need 4 bytes of salt to randomize the hash output. We will also convert it to a hex string, which will be used later in the output.

[byte[]]$byteSalt = $null
0 .. 3 | ForEach-Object { $byteSalt += Get-Random -Minimum 0 -Maximum 255 }
[string]$stringSalt = ""
$byteSalt | ForEach-Object { $stringSalt += ("{0:X}" -f $_).PadLeft(2, "0") }

It’s time to create our hash! The hash function accepts bytes, so our string must first be converted to bytes using Unicode (UTF-16LE) encoding. The salt bytes are appended right after the password bytes, so that even the hash of an empty string differs depending on the salt.

$plainPassword = 'MyV3rySecur3P@ssw0rd!'
#Get byte representation of the password string
$enc = [system.Text.Encoding]::Unicode
$data = $enc.GetBytes($plainPassword)
#Run hash algorithm
$hash = [Security.Cryptography.HashAlgorithm]::Create($algorithm)
$bytes = $hash.ComputeHash($data+$byteSalt)

All right! Now that we have our hashed bytes, let’s convert them to a hex string and assemble the password hash: first the algorithm marker, then the salt, then the hash itself and, finally (if we need it), an uppercase-password hash for SQL 2000 and older instances (ever wondered how SQL 2000 passwords could be case-insensitive?):

#Construct hex string
$hashString = "0x$hashVersion$stringSalt"
$bytes | ForEach-Object { $hashString += ("{0:X2}" -f $_).PadLeft(2, "0") }
#Add UPPERCASE hash for SQL 2000 and lower
if ($SqlMajorVersion -lt 9) {
	$data = $enc.GetBytes($plainPassword.ToUpper())
	$bytes = $hash.ComputeHash($data+$byteSalt)
	$bytes | ForEach-Object { $hashString += ("{0:X2}" -f $_).PadLeft(2, "0") }
}
$hashString

Now we can create our login using the hash string above!

$newLogin.Create($hashString, [Microsoft.SqlServer.Management.Smo.LoginCreateOptions]::IsHashed)
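As a cross-check, the same hash layout can be reproduced in other languages. Here is a sketch in Python of the algorithm described above – the function name and defaults are mine, and this is not part of dbatools:

```python
import hashlib
import os

def sql_password_hash(password, sql_major_version=13, salt=None):
    """Build a SQL Server-style password hash: version marker + salt + SHA hash."""
    if salt is None:
        salt = os.urandom(4)                    # 4 random salt bytes
    data = password.encode('utf-16-le')         # SQL Server hashes the UTF-16LE bytes
    if sql_major_version < 11:
        version, algo = '0100', hashlib.sha1    # SQL 2008 R2 and older
    else:
        version, algo = '0200', hashlib.sha512  # SQL 2012 and newer
    result = '0x' + version + salt.hex().upper() + algo(data + salt).hexdigest().upper()
    if sql_major_version < 9:                   # SQL 2000-: append uppercase-password hash
        upper = password.upper().encode('utf-16-le')
        result += algo(upper + salt).hexdigest().upper()
    return result
```

For SQL 2012+ this produces a 142-character string: ‘0x’, the 0200 marker, 8 hex characters of salt, and 128 hex characters of SHA512 digest.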

And that’s it! At the end of this post you can find the whole script aggregated into a function that accepts three parameters: password, SQL major version, and (optional) salt.
This whole research effort started after I decided that passing plain-text passwords across the network was too pedestrian and went looking for other options. The function now supports the login creation process in the New-DbaLogin cmdlet, which has become one of the many functions included in the dbatools PowerShell module.
Stay tuned!

function Get-PasswordHash {
<#
	.SYNOPSIS
	Generates a password hash for SQL Server login
	
	.DESCRIPTION
	Generates a hash string based on the plaintext or securestring password and a SQL Server version. Salt is optional
		
	.PARAMETER Password
	Either plain text or Securestring password
	
	.PARAMETER SqlMajorVersion
	Major version of the SQL Server. Defines the hash algorithm.
	
	.PARAMETER byteSalt
	Optional. Inserts custom salt into the hash instead of randomly generating new salt
		
	.NOTES
	Tags: Login, Internal
	Author: Kirill Kravtsov (@nvarscar)
	dbatools PowerShell module (https://dbatools.io, clemaire@gmail.com)
	Copyright (C) 2016 Chrissy LeMaire
	
	This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.
	
	This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
	
	You should have received a copy of the GNU General Public License along with this program.  If not, see .

	
	.EXAMPLE
	Get-PasswordHash $securePassword 11
	
	Generates password hash for SQL 2012
	
	.EXAMPLE
	Get-PasswordHash $securePassword 9 $byte
	
	Generates password hash for SQL 2005 using custom salt from the $byte variable
	
#>
	Param (
		[object]$Password,
		$SqlMajorVersion,
		[byte[]]$byteSalt
	)
	#Choose hash algorithm
	if ($SqlMajorVersion -lt 11) { 
		$algorithm = 'SHA1' 
		$hashVersion = '0100'
	}
	else { 
		$algorithm = 'SHA512'
		$hashVersion = '0200'
	}
	
	#Generate salt	
	if (!$byteSalt) {
		0 .. 3 | ForEach-Object { $byteSalt += Get-Random -Minimum 0 -Maximum 255 }
	}
	
	#Convert salt to a hex string
	[string]$stringSalt = ""
	$byteSalt | ForEach-Object { $stringSalt += ("{0:X}" -f $_).PadLeft(2, "0") }
	
	#Extract password
	if ($Password.GetType().Name -eq 'SecureString') {
		$cred = New-Object System.Management.Automation.PSCredential -ArgumentList 'foo', $Password
		$plainPassword = $cred.GetNetworkCredential().Password
	}
	else {
		$plainPassword = $Password
	}
	#Get byte representation of the password string
	$enc = [system.Text.Encoding]::Unicode
	$data = $enc.GetBytes($plainPassword)
	#Run hash algorithm
	$hash = [Security.Cryptography.HashAlgorithm]::Create($algorithm)
	$bytes = $hash.ComputeHash($data+$byteSalt)
	#Construct hex string
	$hashString = "0x$hashVersion$stringSalt"
	$bytes | ForEach-Object { $hashString += ("{0:X2}" -f $_).PadLeft(2, "0") }
	#Add UPPERCASE hash for SQL 2000 and lower
	if ($SqlMajorVersion -lt 9) {
		$data = $enc.GetBytes($plainPassword.ToUpper())
		$bytes = $hash.ComputeHash($data+$byteSalt)
		$bytes | ForEach-Object { $hashString += ("{0:X2}" -f $_).PadLeft(2, "0") }
	}
	Return $hashString
}