As ALWAYS, TEST FIRST IN A TEST ENVIRONMENT (I DID 😉 )!!!
HAVE FUN!
PS: Got any feedback or requests? Please use GitHub to report bugs or requests! Thanks!
Cheers,
Jorge
————————————————————————————————————————————————————- This posting is provided “AS IS” with no warranties and confers no rights! Always evaluate/test everything yourself first before using/implementing this in production! This is today’s opinion/technology, it might be different tomorrow and will definitely be different in 10 years! DISCLAIMER: https://jorgequestforknowledge.wordpress.com/disclaimer/ ————————————————————————————————————————————————————- ########################### IAMTEC | Jorge’s Quest For Knowledge ########################## #################### https://jorgequestforknowledge.wordpress.com/ ###################
Some days ago I was reading Reddit and stumbled across this post, where someone mistakenly deleted a DNS application partition. I responded to that post by providing steps on how to recover. In addition, I also wanted to refresh my memory and see whether what I had written down was correct. Everything I wrote in that Reddit post was correct, apart from 1 important step I had forgotten!
WARNING: The steps listed here apply to restoring a deleted Application Partition in Active Directory. These steps DO NOT apply to restoring a deleted AD Domain!
To test this all out I used the following TEST environment:
AD Forest: IAMTEC.NET
Forest Root AD Domain: IAMTEC.NET –> 3x live RWDCs (W2K19 – R1FSRWDC1 (All Forest And Domain FSMO Roles), R1FSRWDC2, R1FSRWDC3), 1x live RODC (W2K19 – R1FSRODC1) and 1x bogus RODC (RIVERBED)
Figure 1: Creating The Application Partition And Enlisting Additional DCs To Host It
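The creation and enlisting shown in figure 1 can be sketched with NTDSUTIL roughly as follows (a sketch based on this test environment; NULL in "CREATE NC" means the locally connected DC becomes the first replica):

```text
NTDSUTIL
ACTIVATE INSTANCE NTDS
PARTITION MANAGEMENT
CONNECTIONS
CONNECT TO SERVER R1FSRWDC1.IAMTEC.NET
Q
CREATE NC DC=APP_NC_TEST NULL
ADD NC REPLICA DC=APP_NC_TEST R1FSRWDC2.IAMTEC.NET
ADD NC REPLICA DC=APP_NC_TEST R1FSRWDC3.IAMTEC.NET
Q
Q
```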
Gave the DCs some minutes to replicate the partition data around.
Figure 2: The Definition (I.e., crossRef Object) Of The Application Partition
After this, I created a full backup of the AD forest using Semperis ADFR. After having confirmed the backup had completed successfully, it was time to start with the “mistaken” deletion. I used the following code to delete the application partition:
NTDSUTIL
ACTIVATE INSTANCE NTDS
PARTITION MANAGEMENT
CONNECTIONS
CONNECT TO SERVER R1FSRWDC1.IAMTEC.NET
Q
LIST NC REPLICAS DC=APP_NC_TEST
DELETE NC DC=APP_NC_TEST
Q
Q
Figure 3: Listing The DCs That Host The Application Partition And Deleting The Application Partition
Gave the DCs some minutes to replicate the deletion of the partition data around.
To make sure all was deleted I used the following code:
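A minimal sketch of such a check (assuming the ActiveDirectory RSAT module; the partition DN is the one from this test environment):

```powershell
# Ask every RWDC in the forest whether the application partition still shows up
# in the namingContexts attribute of its rootDSE
$partitionDN = "DC=APP_NC_TEST"
(Get-ADForest).Domains | ForEach-Object {
    (Get-ADDomain -Identity $_).ReplicaDirectoryServers
} | ForEach-Object {
    $present = (Get-ADRootDSE -Server $_).namingContexts -contains $partitionDN
    "{0} - partition present: {1}" -f $_, $present
}
```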
With the following event from every DC hosting the partition, I knew for sure the partition did not exist anywhere anymore!
Figure 4: Confirming The Partition Was Deleted/Removed On All DCs In The AD Forest (In This Case!)
After the deletion of the data, I used Semperis ADFR to non-authoritatively restore the DC R1FSRWDC2.IAMTEC.NET using the most recent backup created.
Figure 5a: Non-Authoritative Restore Using Semperis ADFR
Figure 5b: Non-Authoritative Restore Using Semperis ADFR
Figure 5c: Non-Authoritative Restore Using Semperis ADFR
After the non-authoritative restore of the DC R1FSRWDC2.IAMTEC.NET, I confirmed that INbound AD replication was disabled. This is needed to be able to authoritatively restore some objects to bring back the deleted partition. If this were not done, more recent data in the AD Domain/Forest would update the restored DC as if nothing had happened.
Figure 6: Confirming Inbound AD Replication Was Disabled To Prevent Overwriting The Data Of The Non-Authoritative Restore
In this state I wanted to check the state of the objects of interest on both the restored DC R1FSRWDC2.IAMTEC.NET and another DC, R1FSRWDC1.IAMTEC.NET.
LEFT WINDOW (RESTORED CROSS-REF OBJECT OF THE PARTITION):
Figure 8a: Comparing NTDS Settings Object Of The Restored DC On Both The Restored DC And Another DC – Differences Exist
Figure 8b: Comparing NTDS Settings Object Of The Restored DC On Both The Restored DC And Another DC – Differences Exist
Now let's analyze what needs to be done! Before the non-authoritative restore, the application partition “DC=APP_NC_TEST” did not exist on any DC, and neither did its definition (i.e., cross-ref object). The question is: “how to bring back (i.e., restore) the deleted application partition?”
One way to restore the deleted application partition “DC=APP_NC_TEST” is to perform a full AD forest restore using the most recent backup that still contains the application partition. Although that is a solution, it would be some serious overkill, let alone have a huge impact on the environment!
So, IMHO, the best way to bring back the deleted application partition “DC=APP_NC_TEST” is to use the most recent backup available of any DC that actually hosted the partition, following up with some additional actions.
In my case, I used the DC R1FSRWDC2.IAMTEC.NET to perform a NON-authoritative restore. Now for the follow-up actions, my logic is as follows, using the following code that was also executed on the restored DC R1FSRWDC2.IAMTEC.NET (!):
At first we need to bring back the definition of the application partition. That is done by authoritatively restoring the cross-ref object of the deleted application partition.
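The authoritative restore of the cross-ref object can be sketched with NTDSUTIL as follows (a sketch; the cross-ref DN below is an assumption based on the partition name — check the actual DN under CN=Partitions in your Configuration partition first):

```text
NTDSUTIL
ACTIVATE INSTANCE NTDS
AUTHORITATIVE RESTORE
RESTORE OBJECT "CN=APP_NC_TEST,CN=Partitions,CN=Configuration,DC=IAMTEC,DC=NET"
Q
Q
```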
Figure 9a: Authoritative Restore Of The Cross-Ref Object On The Restored DC
Figure 9b: Authoritative Restore Of The Cross-Ref Object On The Restored DC
Now when you look at the data:
The cross-ref object lists which DCs (a.k.a. replica locations) host the application partition
The NTDS Settings object of each DC lists, across multiple attributes, which partitions (a.k.a. NCs) are hosted by the corresponding DC
The first bullet has already been authoritatively restored, so that is done! However, the data of the NTDS Settings object of the restored DC on the restored DC itself is older than the data of that same object on any other DC. Check the differences in figures 8a and 8b: on the left, the NTDS Settings object shows that R1FSRWDC2.IAMTEC.NET hosts the application partition, while on the right it shows that R1FSRWDC2.IAMTEC.NET DOES NOT host it. Obviously we want the version on the left to take priority. In other words, if the NTDS Settings object of the restored DC is not authoritatively restored on the restored DC itself, the restored DC will have the actual application partition, but other DCs WILL NOT be able to replicate it from it, as its NTDS Settings object does not specify that it hosts the application partition. Although specified as Replica Locations, the other DCs will not be able to inbound replicate and host the application data, because they cannot find another DC that has instantiated the actual partition. The restored DC, while being a Replica Location and having the data, has not advertised that it has the partition instantiated, and will therefore not be able to service replication requests from other DCs. Something similar to the following would be seen.
Figure 10: DCs Defined As Replica Locations But Not Being Able To Instantiate The Application Partition
The solution? Also perform an authoritative restore of the NTDS Settings object of the restored DC on the restored DC itself. After that, restart the NTDS Service.
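That step can be sketched with NTDSUTIL as follows (a sketch; the site name is a placeholder you must replace with the actual site of the restored DC):

```text
NTDSUTIL
ACTIVATE INSTANCE NTDS
AUTHORITATIVE RESTORE
RESTORE OBJECT "CN=NTDS Settings,CN=R1FSRWDC2,CN=Servers,CN=<YOUR-SITE>,CN=Sites,CN=Configuration,DC=IAMTEC,DC=NET"
Q
Q
```

Followed by a restart of the NTDS Service, e.g. `Restart-Service NTDS` in PowerShell.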
Figure 11a: Authoritative Restore Of The NTDS Settings Object Of The Restored DC On The Restored DC
Figure 11b: Authoritative Restore Of The NTDS Settings Object Of The Restored DC On The Restored DC
Now you might think that it is also needed to authoritatively restore the actual partition itself. That is not needed, as nothing from that partition exists on the other DCs, so there is nothing to overwrite. The partition and its data are not in a deleted state, but rather have been fully deleted. In short, an authoritative restore of the actual partition is not needed.
After the restore, OUTbound AD replication is forced on the restored DC so that the changes propagate to other DCs.
Figure 12: Forcing Outbound AD Replication On The Restored DC For The Changes To Propagate To Other DCs
LEFT WINDOW (NTDS SETTINGS OBJECT OF THE RESTORED DC ON THE RESTORED DC):
Figure 13a: Comparing NTDS Settings Object Of The Restored DC On Both The Restored DC And Another DC – NO Differences Exist
Figure 13b: Comparing NTDS Settings Object Of The Restored DC On Both The Restored DC And Another DC – NO Differences Exist
It is now time to enable INbound AD Replication on the restored DC.
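Re-enabling INbound AD replication on the restored DC can be done with REPADMIN by clearing the disable option (a sketch; DC name from this test environment):

```text
REPADMIN.EXE /OPTIONS R1FSRWDC2 -DISABLE_INBOUND_REPL
```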
Forcing OUTbound AD Replication on the restored DC
REPADMIN.EXE /SYNCALL $ENV:COMPUTERNAME /edjqAP
Figure 14: Forcing Outbound AD Replication On The Restored DC For The Changes To Propagate To Other DCs
Forcing INbound AD Replication on the restored DC
REPADMIN.EXE /SYNCALL $ENV:COMPUTERNAME /edjqA
The error below can be expected if a certain DC has not (yet) instantiated the application partition.
Figure 15: Forcing Inbound AD Replication On The Restored DC To Receive Any Updates From Other DCs
From this point in the process, all the other Replica Locations defined on the cross-ref object that have nothing instantiated will instantiate the application partition and inbound replicate the data from any DC that has already instantiated it, which in this case is the DC R1FSRWDC2.IAMTEC.NET.
Checking which Replica Locations exist and which have instantiated the application partition.
NTDSUTIL
ACTIVATE INSTANCE NTDS
PARTITION MANAGEMENT
CONNECTIONS
CONNECT TO SERVER R1FSRWDC1.IAMTEC.NET
Q
LIST NC REPLICAS DC=APP_NC_TEST
Q
Q
Figure 16: Listing The Replica Locations Of The Application Partition And Their Instantiation State – Not (Yet) Complete
As you can see above, not all DCs (i.e., Replica Locations) have completed the operation. Just give it time to complete, especially when many DCs host the application partition and lots of data is involved.
Figure 17: Listing The Replica Locations Of The Application Partition And Their Instantiation State – Completed!
Because of the restore of the application partition and its (re-)instantiation, every DC hosting the application partition will report, through event ID 1587 in the Directory Services Event Log, that it is processing the new Invocation ID of each DC that instantiated the partition. To get assurance the process is working, I only need to see 1 event on every DC, as it could otherwise be an overload of data.
(Get-ADForest).Domains | ForEach-Object {
    (Get-ADDomain -Identity $_).ReplicaDirectoryServers + (Get-ADDomain -Identity $_).ReadOnlyReplicaDirectoryServers | ForEach-Object {
        $dcFQDN = $_
        Try {
            $events = Get-WinEvent -ComputerName $dcFQDN -FilterHashtable @{LogName = 'Directory Service'; ID = 1587} -MaxEvents 1 -ErrorAction STOP
            $events | Format-Table @{Label="DC FQDN"; Expression={$dcFQDN}},TimeCreated,LevelDisplayName,Message -Wrap -AutoSize
        } CATCH {
            Write-Host "$dcFQDN - NO EVENTS YET... PATIENCE AND RETRY (UNLESS IT IS AN RODC, A DECOMMISSIONED DC, ETC)" -ForegroundColor Red
        }
    }
}
Figure 18a: Every Replica Location (I.e., Every DC Hosting) Of The Application Partition Is Processing The New Invocation ID Of Each DC That Instantiated The Application Partition
Figure 18b: Every Replica Location (I.e., Every DC Hosting) Of The Application Partition Is Processing The New Invocation ID Of Each DC That Instantiated The Application Partition
Figure 18c: Every Replica Location (I.e., Every DC Hosting) Of The Application Partition Is Processing The New Invocation ID Of Each DC That Instantiated The Application Partition
Figure 18d: Every Replica Location (I.e., Every DC Hosting) Of The Application Partition Is Processing The New Invocation ID Of Each DC That Instantiated The Application Partition
And as a final step, I want to make sure all Replica Locations are fully functioning for the application partition. You can easily do that through the PowerShell script SCRIPT: Check-AD-Replication-Latency-Convergence
Figure 19a: Checking The Restored Application Partition On Every DC Hosting It And The AD Replication Between Each
Figure 19b: Checking The Restored Application Partition On Every DC Hosting It And The AD Replication Between Each
Figure 19c: Checking The Restored Application Partition On Every DC Hosting It And The AD Replication Between Each
Figure 19d: Checking The Restored Application Partition On Every DC Hosting It And The AD Replication Between Each
Figure 19e: Checking The Restored Application Partition On Every DC Hosting It And The AD Replication Between Each
Figure 19f: Checking The Restored Application Partition On Every DC Hosting It And The AD Replication Between Each
Everything is now back to normal again! Hope this helps someone if this mistake ever occurs. Remember: although there are similarities, THESE STEPS CANNOT BE USED FOR RESTORING A DELETED DOMAIN!
Cheers,
Jorge
Some time ago, almost 7 years ago, I wrote a PowerShell script to reset the KrbTgt Account Password for both RWDCs and RODCs. This is UPDATE 8 – FINALLY!!!!!!!! There was lots of coding and functional and stress testing of the new and updated components, and both took some serious time. In addition, I also wanted to move the documentation from individual blog posts to 1 centralized link containing everything about the script. It contains lots of information and screenshots to show what happens. I also had some moments where I wanted to release it, and then found a bug. Back to the drawing board to recode and… retest.
More information can be found through the following links regarding the previous updates:
This release (v3.6) (v3.5 was never released) is a MAJOR update with bug fixes, requests from the community and some new stuff.
The main focus was full set-and-forget automation of the password reset of the KrbTgt accounts in any given AD domain. There are 2 options for automation, so use whatever option works best for you:
[OPTION 1]: Schedule the script to run every given period defined by you, e.g. every day or week at time HH:mm
[OPTION 2]: Schedule the script to run every day at HH:mm and configure and enable the Password Reset Routine. Based on the configuration it will reset the KRBTGT password at specific moments.
Below is the link for the main GitHub page of the script containing everything.
As ALWAYS, TEST FIRST IN A TEST ENVIRONMENT (I DID 😉 )!!!
HAVE FUN!
PS: Got any feedback or requests? Please use GitHub to report bugs or requests! Thanks!
Cheers,
Jorge
When using either Group Managed Service Accounts (gMSAs) or Delegated Managed Service Accounts (dMSAs) (the latter currently only possible with Windows Server 2025 DCs), in addition to some account metadata, the KDS Root Key(s) available in the AD forest play a very important part in the password (re)calculation by the writable DCs (RWDCs) when servicing a managed password request from any server that uses the corresponding gMSA/dMSA. The set of “secret” data is stored in the msKds* attributes of the KDS Root Key object. While the KDS Root Key object exists on ALL DCs in the AD forest, the secret data is only available on RWDCs (figure 1) and not on RODCs (figure 2). On any KDS Root Key object, the msKds* attributes are all part of the Filtered Attribute Set (FAS). All the attributes that are part of the FAS can be found by querying the Schema Partition using the filter “(&(objectClass=attributeSchema)(searchFlags:1.2.840.113556.1.4.803:=512))“.
Figure 1: The Set Of (Secret) Data Of A KDS Root Key In AD On A RWDC
Figure 2: The Set Of Data Of A KDS Root Key In AD On A RODC
If you want to know which KDS Root Key is being used by any given gMSA or dMSA, when that KDS Root Key was created, etc., then I suggest you have a look at the following PowerShell code:
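A sketch of such PowerShell code (assuming the ActiveDirectory and Kds modules on a RWDC; matching a specific account to a specific key requires parsing the binary msDS-ManagedPasswordId blob, which is not shown here):

```powershell
# List all KDS Root Keys in the forest with their timestamps
Get-KdsRootKey |
    Sort-Object EffectiveTime |
    Select-Object KeyId, CreationTime, EffectiveTime

# List all managed service accounts (gMSAs/dMSAs) in the AD domain;
# msDS-ManagedPasswordId references the KDS Root Key in use
Get-ADServiceAccount -Filter * -Properties whenCreated, 'msDS-ManagedPasswordId' |
    Select-Object Name, whenCreated
```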
Figure 4: Relevant Data Of All The dMSAs In The AD Domain
As the KDS Root Key objects contain the secret data to calculate previous, current or even future passwords using the metadata of ANY gMSA/dMSA account (see: Get-ADDBServiceAccount From DSInternals), you can imagine those KDS Root Key objects should be protected.
By default, nobody needs any access to the KDS Root Keys or their secret data, besides RWDCs for valid purposes, and obviously attackers for bad purposes like Golden gMSA Attack or Golden dMSA Attack.
To gain OFFline access to the KDS Root Key objects, the following actions should be audited/monitored to understand if those are valid or not:
Stopping the NTDS Service on RWDCs
Restarting the RWDC
Creation of IFM Sets on RWDCs through NTDSUTIL or any other way
Creation of snapshots on RWDC through NTDSUTIL or any other way using some VSS Provider
Officially, for ONline access, ONLY the RWDCs need access to the KDS Root Key objects, to (re)calculate the password of the corresponding gMSA/dMSA when servicing a managed password request from any server that uses it. Nobody else should have or need access to any KDS Root Key object! But… are you also aware that any online access is NOT audited in any way by default? There is a solution: enable auditing for any access to any KDS Root Key object. This is done by configuring/enabling advanced auditing in ALL AD domains of the AD forest and configuring a SACL on “CN=Master Root Keys”!
STEP1: Configure And Enable Advanced Auditing In EVERY AD Domain In The AD Forest
Make sure to use a GPO that targets all RWDCs of the AD domain
Within the GPO configure: Computer Configuration > Policies > Windows Settings > Security Settings > Local Policies > Security Options > Audit: Force Audit Policy Subcategory Settings (Windows Vista) To Override Audit Policy Category Settings = ENABLED (figure 5)
Figure 5: Enabling Advanced Auditing
Within the GPO configure: Computer Configuration > Policies > Windows Settings > Security Settings > Advanced Audit Policy Configuration > Audit Policies > DS Access > Audit Directory Service Access = ENABLED for SUCCESS events (figure 6)
Figure 6: Enabling “Audit Directory Service Access” For SUCCESS Events
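Once the GPO has applied, the effective audit policy on a RWDC can be verified with AUDITPOL (a sketch; run in an elevated prompt on the RWDC):

```text
AUDITPOL.EXE /GET /SUBCATEGORY:"Directory Service Access"
```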
STEP2: Configure The SACL On The Object “CN=Master Root Keys” In The Configuration NC/Partition
“CN=Group Key Distribution Service,CN=Services,CN=Configuration,DC=<YOUR FOREST ROOT DOMAIN>,DC=<YOUR TLD>”
“CN=Master Root Keys,CN=Group Key Distribution Service,CN=Services,CN=Configuration,DC=<YOUR FOREST ROOT DOMAIN>,DC=<YOUR TLD>”
Right-click on “CN=Master Root Keys,CN=Group Key Distribution Service,CN=Services,CN=Configuration,DC=<YOUR FOREST ROOT DOMAIN>,DC=<YOUR TLD>” > Advanced > Security Descriptor, then check SACL and click OK
You should then see something similar to what you see in figure 8.
Figure 8: Viewing The Configuration Of The SACL On “CN=Master Root Keys” Using LDP
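If you prefer scripting over LDP, the same kind of SACL entry can be sketched in PowerShell (assumptions: the ActiveDirectory module with its AD: drive, the forest root DN DC=IAMTEC,DC=NET from this test, and auditing successful property reads by Everyone):

```powershell
Import-Module ActiveDirectory

# DN of the container holding the KDS Root Key objects (forest root DN assumed)
$dn = "AD:CN=Master Root Keys,CN=Group Key Distribution Service,CN=Services,CN=Configuration,DC=IAMTEC,DC=NET"

# Audit successful property reads by Everyone, on the container and all child objects
$everyone = New-Object System.Security.Principal.SecurityIdentifier(
    [System.Security.Principal.WellKnownSidType]::WorldSid, $null)
$auditRule = New-Object System.DirectoryServices.ActiveDirectoryAuditRule(
    $everyone,
    [System.DirectoryServices.ActiveDirectoryRights]::ReadProperty,
    [System.Security.AccessControl.AuditFlags]::Success,
    [System.DirectoryServices.ActiveDirectorySecurityInheritance]::All)

$acl = Get-Acl -Path $dn -Audit
$acl.AddAuditRule($auditRule)
Set-Acl -Path $dn -AclObject $acl
```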
When someone accesses the attribute “msKds-RootKeyData” of any KDS Root Key object, you should see something similar to figure 9 and figure 10.
Figure 9: A User Account Accessing The Attribute “msKds-RootKeyData” Of A KDS Root Key Object
Figure 10: A User Account Accessing The Attribute “msKds-RootKeyData” Of A KDS Root Key Object
But… as mentioned earlier, you should also expect RWDCs (figure 11) to access the attribute “msKds-RootKeyData” of a KDS Root Key object to be able to calculate the password of the corresponding gMSA/dMSA as requested.
Figure 11: An RWDC Accessing The Attribute “msKds-RootKeyData” Of A KDS Root Key Object
Cheers,
Jorge
I have not done this for quite some time, but because of a silly mistake I made in my test lab, I needed to rename a domain controller. Because of how it went, I wanted to share the experience of how it went wrong and also how I solved it.
Many years ago, if you wanted to rename a domain controller you had to use NETDOM. Then after some time it also became possible to rename it through the GUI, which made it easier. Change name, reboot, done! Obviously PowerShell also supports renaming a domain controller.
Because of that, renaming through the GUI is what I thought would just work, and that is what I did.
Figure 1: Renaming The Domain Controller
Confirmation it is renamed after the reboot of the Domain Controller.
Figure 2: Confirmation The Rename Of The Domain Controller Is Effective After The Reboot
However, after trying to log in I saw the following error, which was not expected. For sure it had to do with the rename of the Domain Controller. Checking with ADUC on other Domain Controllers, I noticed the computer account was not renamed, as it still had the old name.
Figure 3: Error During Console log On After Renaming The Domain Controller Through The GUI
I needed to fix this to be able to get in (i.e., log on) and try the command-line option. So, from another Domain Controller I connected remotely to the registry of the affected Domain Controller and changed some values back to the original name (R1FSRWDC2), as you can see below in figure 4a, figure 4b and figure 4c.
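For reference, these are the kinds of registry values involved (a sketch; key paths as far as I can reconstruct them from the figures — verify against your own system before touching anything):

```text
HKLM\SYSTEM\CurrentControlSet\Control\ComputerName\ComputerName        -> ComputerName = R1FSRWDC2
HKLM\SYSTEM\CurrentControlSet\Control\ComputerName\ActiveComputerName  -> ComputerName = R1FSRWDC2
HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters                -> Hostname / NV Hostname = R1FSRWDC2
```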
Figure 4c: Changing The Name Back In The ComputerName Section
After these changes, I rebooted the DC, and was then able to log in again! Yeah!
Then I wanted to rename the domain controller through PowerShell and reboot the Domain Controller:
Rename-Computer -NewName R1FSRWDC3 -Restart
Figure 5: Renaming The Domain Controller Through PowerShell
The Domain Controller was renamed, rebooted and after that I could log in as expected. Opening ADUC, I could see the computer account name was also renamed (R1FSRWDC3). DONE!
Figure 6: The Renamed Computer Account Of The Domain Controller In ADUC
Lesson learned? – Use PowerShell on the Domain Controller you need to rename!
Cheers,
Jorge
Just wanted to check this out myself, see what happens and add some more info. All credit goes to Christoffer Andersson for discovering this issue and reporting it to MSFT.
An upgraded DC continues to use its current database format and 8k pages. Moving to a 32k-page database is done on a forestwide basis and requires that all DCs in the forest have a 32k-page capable database.
Figure 1: Information From Microsoft About The 32K Database Page Size Optional Feature
Although it is mentioned what happens during an in-place upgrade, it unfortunately does not explicitly mention what the real impact is. You have to read between the lines. This one reminded me again why, for many years now, I have hated upgrades. Crap in, crap out. Always clean installs!
When having at least 1 DC in the AD Forest with an 8K Database Page Size, you CANNOT enable the feature! All DCs with either an empty value (non-W2K25 DC) or the value 8192 (upgraded W2K25 DC) for the attribute “msDS-JETDBPageSize” on the NTDS Settings object must either be removed from the AD forest or be replaced by a W2K25 DC that has been added to the AD forest through promotion. However, when using an IFM set to source the promotion, the DC where that IFM set was created MUST already have the 32K Database Page Size enabled for its own database!
To try this out, I used one of my AD Forests with freshly promoted W2K25 RWDCs/RODCs and 1 freshly promoted W2K22 RWDC. I cloned that W2K22 RWDC and isolated it in a separate VLAN. I seized all FSMO roles and cleaned the AD metadata of the other RWDCs/RODCs. The end result was 1 W2K22 RWDC in the AD forest. Good enough for this exercise. Just to be sure: in production always have AT LEAST 2 RWDCs per AD domain!
After all that, I in-place upgraded the W2K22 RWDC to a W2K25 RWDC. Before the upgrade the value for the attribute “msDS-JETDBPageSize” on the NTDS Settings object was empty. After the upgrade, the attribute “msDS-JETDBPageSize” on the NTDS Settings object had the value 8192 (i.e., 8K), see Figure 2.
Figure 2: The Value Of The Attribute “msDS-JETDBPageSize” on the NTDS Settings object
Because the AD forest was upgraded some time ago from W2K22 DCs with DFL/FFL W2K16, I first needed to increase the DFL/FFL to W2K25. By upgrading the FFL, the DFL also came along to W2K25. This went without issues as there was only 1 RWDC and it was running W2K25, see figure 3.
Figure 3: Upgrading The Forest Functional Level From W2K16 To W2K25
As a starting point, let's have a look at the available optional features in the AD forest (figure 4), and then try to enable the optional feature “Database 32k Pages Feature”.
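The listing and the enable attempt can be sketched in PowerShell as follows (a sketch; feature name as displayed by Microsoft, forest name from this test environment):

```powershell
# List all optional features known in this AD forest
Get-ADOptionalFeature -Filter * | Select-Object Name, EnabledScopes

# Try to enable the 32k database page size feature forest-wide
Enable-ADOptionalFeature -Identity "Database 32k Pages Feature" `
    -Scope ForestOrConfigurationSet -Target "IAMTEC.NET"
```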
As expected, that failed with an error, which means that at least 1 DC DOES NOT HAVE the 32K Database Page Size. In the Directory Services Event Log you will see the following error (figure 6).
Figure 6: Error In The Directory Services Event Log When Trying To Enable Optional Feature “Database 32k Pages Feature”
Now, let’s have a look at the database info from the NTDS.DIT after the OS upgrade and before the offline defrag of the database. Note the version and the page size (figure 7).
ESENTUTL /m D:\AD\DB\NTDS.DIT
Figure 7: Detailed Info From The NTDS.DIT Of The RWDC BEFORE The Offline Defrag
Now, let’s have a look at the database info from the NTDS.DIT that is used initially during every promotion over the wire. Again, note the version and the page size (figure 8).
ESENTUTL /m C:\WINDOWS\SYSTEM32\NTDS.DIT
Figure 8: Detailed Info From The NTDS.DIT That Is Used For Promotions Over The Wire
Now let’s try to offline defrag the NTDS.DIT after the OS upgrade (figure 9).
Stop-Service NTDS -Force
NTDSUTIL "AC IN NTDS" FILES "COMPACT TO D:\TEMP"
Figure 9: Performing An Offline Defrag On The NTDS.DIT After The UPGRADE To W2K25
Now, let’s have a look at the database info from the NTDS.DIT after the OS upgrade and after the offline defrag of the database. Note the version and the page size (figure 10) and compare with what you see in figure 7.
ESENTUTL /m D:\TEMP\NTDS.DIT
Figure 10: Detailed Info From The NTDS.DIT Of The RWDC AFTER The Offline Defrag
Now the old log files need to be deleted and the defragged NTDS.DIT needs to replace the original NTDS.DIT. After that, the NTDS Service can be started again, or at least we can try to start it.
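The swap itself can be sketched as follows (paths from this test environment; the log path is an assumption, per the guidance NTDSUTIL prints after a compact):

```powershell
# Delete the old transaction logs and replace the database with the compacted copy
Remove-Item "D:\AD\LOGS\*.log"                      # log path assumed
Copy-Item "D:\TEMP\NTDS.DIT" "D:\AD\DB\NTDS.DIT" -Force
Start-Service NTDS
```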
Figure 11: The NTDS Service Fails To Start
Looking for more info in the Directory Services Event Log you will see something similar to what you see in figure 12a, figure 12b, figure 12c and figure 12d.
Figure 12a: More Info Related To The NTDS Service Failing To StartFigure 12b: More Info Related To The NTDS Service Failing To StartFigure 12c: More Info Related To The NTDS Service Failing To StartFigure 12d: More Info Related To The NTDS Service Failing To Start
Now let’s try to reboot the upgraded W2K25 DC after having defragged its NTDS.DIT, and see what happens.
Figure 13: After Rebooting The DC – AD Fails To Start
HOUSTON? We HAVE A SERIOUS PROBLEM!
The DC is now dead in the water!
Lesson learned? – I still hate in-place upgrades!
For others, the recommendation is:
DO NOT in-place upgrade your DCs to W2K25!
DO NOT promote DCs through IFM where the IFM set was created on an upgraded W2K25 DC!
DO promote over the wire!
DO promote DCs through IFM where the IFM set was created on a non-upgraded (i.e., freshly promoted) W2K25 DC!
Cheers,
Jorge
Previously in Entra ID, only Applications, Users and M365 Groups supported soft delete. Recently Microsoft extended the list to also include Cloud Security Groups, as you can see below in figure 1.
Figure 1: List Of Objects In Entra ID Supporting Soft Delete
When looking at “Users”, it means both native users in Entra ID and users synched from on-premises AD to Entra ID. So far so good. When looking at “Cloud Security Groups”, if you read too quickly without thinking about it, you might interpret it as meaning both native groups in Entra ID and groups synched from on-premises AD to Entra ID. Au contraire! The term Cloud Security Group literally gives it away. The soft deletion is really only for native security groups in Entra ID! Soft deletion for groups synched from on-premises AD to Entra ID is unfortunately still not possible. ;-( Bummer! I truly would have hoped it was also included.
Have a look at figure 2 below. You will see 3 groups:
“__CLOUD_NATIVE_GROUP” – being the so-called Cloud Security Group, created and managed in Entra ID
“__HYBRID_CLOUD_SYNC_GROUP” – being the group synched from on-premises AD through Cloud Sync
“__HYBRID_CONNECT_SYNC_GROUP” – being the group synched from on-premises AD through Connect Sync
Figure 2: List Of Groups In Entra ID
To test soft deletion support for all 3 groups, I did the following:
“__CLOUD_NATIVE_GROUP” – deleted manually in Entra ID
“__HYBRID_CLOUD_SYNC_GROUP” – removed from the scope of sync configured for Cloud Sync
“__HYBRID_CONNECT_SYNC_GROUP” – removed from the scope of sync configured for Connect Sync
In all 3 cases the groups were deleted from Entra ID. The question is: which one(s) end up in the Recycle Bin for groups in Entra ID (i.e., support soft deletion)? You can see the answer in figure 3, and unfortunately it is only the Cloud Security Group “__CLOUD_NATIVE_GROUP”. That one is soft deleted. The other 2, “__HYBRID_CLOUD_SYNC_GROUP” and “__HYBRID_CONNECT_SYNC_GROUP”, are hard deleted, as those are not visible in the Recycle Bin for groups in Entra ID.
Figure 3: Recycle Bin For Groups
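Besides the portal, you can also check the Recycle Bin for groups programmatically. Below is a minimal sketch (not an official sample, so verify against the Microsoft Graph documentation first) that lists soft-deleted groups via the Graph `directory/deletedItems` endpoint. Acquiring the access token is out of scope here; the token value is a placeholder.

```python
# Sketch: list soft-deleted groups (the Entra ID Recycle Bin for groups)
# via Microsoft Graph. Requires an app/user token with Group.Read.All or similar.
import json
import urllib.request

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def deleted_groups_url(base=GRAPH_BASE):
    # Soft-deleted objects live under directory/deletedItems, cast to group
    return f"{base}/directory/deletedItems/microsoft.graph.group"

def list_soft_deleted_groups(token):
    """Return the display names of groups currently in the Recycle Bin."""
    req = urllib.request.Request(
        deleted_groups_url(),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return [g.get("displayName") for g in payload.get("value", [])]
```

With the test scenario above, this list would contain “__CLOUD_NATIVE_GROUP” but not the two synched groups.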
To further prove this, the group “__CLOUD_NATIVE_GROUP” was manually undeleted, i.e., restored, and the 2 groups “__HYBRID_CLOUD_SYNC_GROUP” and “__HYBRID_CONNECT_SYNC_GROUP” were brought back into the scope of sync of their respective sync engines. The end result is displayed in figure 4.
Figure 4: List Of Groups In Entra ID
When you look closely at figure 4, you can see that the object id of the group “__CLOUD_NATIVE_GROUP” is the same as the object id of that group in figure 2. For the groups “__HYBRID_CLOUD_SYNC_GROUP” and “__HYBRID_CONNECT_SYNC_GROUP” in figure 4, you can see that both have a new object id, which means they have been recreated by their respective sync engine.
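The object-id comparison just described can be sketched in a few lines of Python. The group names match the test above, but the ids below are placeholder values, not the real GUIDs from the figures: a group whose object id survives delete-plus-restore was soft deleted, while a group that comes back with a new object id was hard deleted and recreated by its sync engine.

```python
# Hypothetical sketch of the object-id comparison: same id after restore/resync
# means the group was restored from the Recycle Bin (soft delete); a new id
# means it was hard deleted and recreated by the sync engine.

def classify(before_ids, after_ids):
    """Map group name -> 'restored' (same object id) or 'recreated' (new id)."""
    return {
        name: "restored" if after_ids.get(name) == oid else "recreated"
        for name, oid in before_ids.items()
        if name in after_ids
    }

# Placeholder object ids mirroring figures 2 and 4
before = {
    "__CLOUD_NATIVE_GROUP": "aaaa-1111",
    "__HYBRID_CLOUD_SYNC_GROUP": "bbbb-2222",
    "__HYBRID_CONNECT_SYNC_GROUP": "cccc-3333",
}
after = {
    "__CLOUD_NATIVE_GROUP": "aaaa-1111",       # same id -> soft deleted, restored
    "__HYBRID_CLOUD_SYNC_GROUP": "dddd-4444",  # new id -> hard deleted, recreated
    "__HYBRID_CONNECT_SYNC_GROUP": "eeee-5555",
}
result = classify(before, after)
```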
The conclusion therefore remains: be careful with groups synched from on-premises AD. A small mistake, or even a Forest Recovery of AD without performing a gap analysis, will result in hard-deleted objects in Entra ID that cannot be recovered without impact.
Cheers,
Jorge
Today, exactly 20 years ago, I started blogging and created my “Jorge’s Quest For Knowledge” blog, which has been and still is being read by many people around the world. My biggest drive and passion in doing this is to research and acquire knowledge, and then share it with others around the globe so that those people can put the information to the best use in their day-to-day jobs. The very first post was published on Tuesday, November 8th, 2005.
It has been so much fun to do this, and I hope it remains fun to continue to do this for the next ten years!
A very big THANK YOU to all my readers/visitors!
Cheers,
Jorge