
Starting Azure automation runbooks programmatically

Azure Automation runbooks are one of the common ways to provision Microsoft Teams or SharePoint sites inside Microsoft 365. The content of the runbooks is normally written with PnP PowerShell, and that's well documented. But apart from a scheduled start that pulls requests from a list, how else can runbooks or (better!) corresponding jobs be started? (Of course, this way of starting is valid for any other runbook scenario.)

In the past, there were a meanwhile deprecated Azure Automation SDK and a webhook scenario which could be used to start a runbook on demand. Now there are the new Azure Resource Manager Automation SDK and the parallel REST API, and this post describes how to use them.


The client

First a client needs to be established, especially to handle the authentication options. With that client, further operations can be fulfilled. In the following sample code an InteractiveBrowserCredential is used inside the DefaultAzureCredential, but as an alternative the EnvironmentCredential is prepared.

// For Dev and local env
Environment.SetEnvironmentVariable("AZURE_TENANT_ID", config["AZURE_TENANT_ID"]);
Environment.SetEnvironmentVariable("AZURE_CLIENT_ID", config["AZURE_CLIENT_ID"]);
Environment.SetEnvironmentVariable("AZURE_CLIENT_SECRET", config["AZURE_CLIENT_SECRET"]);
ArmClient client = new ArmClient(new DefaultAzureCredential(true)); // Enable interactive as well
            

Under Security considerations it is mentioned which roles or permissions are needed. Of course, this also differs depending on the desired operations.

Create

An automation job will be created under the jobs of an existing automation account, so access to this account has to be established first.

var automationAcc = client.GetAutomationAccountResource(new ResourceIdentifier(config["automationAccount"]));
var automationJobs = automationAcc.GetAutomationJobs();

After that, the job needs some parameters: first the name of the runbook, then potentially some custom parameters needed by the runbook, and finally the RunOn parameter.

var jobParameters = new Azure.ResourceManager.Automation.Models.AutomationJobCreateOrUpdateContent()
{
    RunbookName = config["runbookName"],
    RunOn = ""
};
// Using hard-coded parameters here
string alias = "TeamAlias";
jobParameters.Parameters.Add("displayName", "Team Name");
jobParameters.Parameters.Add("alias", alias);
jobParameters.Parameters.Add("teamDescription", "Team Description");
jobParameters.Parameters.Add("teamOwner", config["teamOwner"]);
var automationJob = automationJobs.CreateOrUpdate(Azure.WaitUntil.Started, $"Creation of {alias}", jobParameters);

With that, the job can be started. Be aware that this is only the start, with no guarantee on the result, so another loop that detects whether the job completed successfully makes sense.

Check for completion

int count = 0;
while (count < 10)
{
    var newAutomationJob = automationAcc.GetAutomationJob($"Creation of {alias}");
    if (newAutomationJob.Value.Data.Status == AutomationJobStatus.Completed)
    {
        Console.WriteLine($"Job Ended {automationJob.Value.Id}");
        break;                       
    }
    if (newAutomationJob.Value.Data.Status == AutomationJobStatus.Failed || newAutomationJob.Value.Data.Status == AutomationJobStatus.Stopped)
    {
        Console.WriteLine($"Job Ended unsuccessfully {automationJob.Value.Id}");
        break;
    }
    count++;
    Thread.Sleep(30000);
}

The counter and sleep timer are assumed values here, but the pattern is clear: in a loop the job is checked for a completed or failed state, and the corresponding output is written.
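The same polling pattern can be sketched generically. The following Python sketch is illustrative only, not the Azure SDK API: the status values and the `check_status` callable are assumptions standing in for the job lookup shown above.

```python
import time

# Assumed terminal job states, mirroring the C# sample above
TERMINAL_STATES = {"Completed", "Failed", "Stopped"}

def wait_for_job(check_status, max_attempts=10, delay_seconds=30):
    """Poll check_status() until a terminal state is seen or attempts run out.

    check_status is any callable returning the current job status string.
    Returns the last observed status, which may still be non-terminal if
    max_attempts was exhausted.
    """
    status = None
    for _ in range(max_attempts):
        status = check_status()
        if status in TERMINAL_STATES:
            return status
        time.sleep(delay_seconds)
    return status
```

With a real implementation, `check_status` would fetch the automation job and read its status; the caller then decides how to report success or failure.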

Security considerations

How does the whole operation work from a security perspective? The caller can be a user (interactively) or an app (unattended); the DefaultAzureCredential in the code above covers both cases.

What is mainly necessary is the role assignment (or a subset of its permissions) Automation Job Operator for any kind of user or app registration trying to fulfill this kind of service operation.

Role Assignments given to an app or a user to being able to start a runbook job

In case you want to continue inside the runbook job on behalf of the starting user, there is the problem that the caller is not identifiable inside the runbook (no token is available there) unless the identity is explicitly transported through parameters.
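One pragmatic workaround is to carry the caller's identity along as an ordinary runbook parameter. A minimal sketch; the parameter name `callerUpn` is purely hypothetical and not part of any Azure API:

```python
def build_job_parameters(display_name, alias, caller_upn):
    """Assemble runbook parameters, explicitly carrying the caller's identity.

    Because the runbook cannot derive the starting user from a token,
    the hypothetical callerUpn parameter transports it explicitly.
    """
    return {
        "displayName": display_name,
        "alias": alias,
        "callerUpn": caller_upn,  # assumption: the runbook declares this parameter
    }
```

The runbook would then have to trust this value, so treat it as informational rather than as a security boundary.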

When using the REST API you must understand that it cannot be called directly from a client context, because the Azure Resource Management REST API does not support CORS. So coming from a client context such as the SharePoint Framework (SPFx), you first need to call a back-end process, and there you have the choice to use .NET or REST.

REST Client

As an alternative to the .NET SDK there is also the possibility to perform the operations with the REST API. First an HTTP client needs to be established with a bearer token based on a simple Entra ID app registration. For Azure there is no granular permission model in Entra ID (see the scope user_impersonation below); this is handled inside the resource model, so here only a simple permission is set and later the app is given permissions role-based (RBAC) as seen under Security considerations.

Simple Entra ID app registration permission for Azure Resource Management

var tokenCredential = new DefaultAzureCredential(true);
var accessToken = tokenCredential.GetToken(new Azure.Core.TokenRequestContext(["https://management.azure.com/user_impersonation"])).Token;
HttpClient client = new HttpClient();
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

Having a client and a token, the request can start directly, assuming a subscription, resource group and account name are given:

Create (REST)

string groupAlias = "TeamAlias";
JobStartRequest request = new JobStartRequest()
{
    properties = new JobProperties()
    {
        runbook = new RunbookProperties()
        {
            name = config["runbookName"]
        },
        parameters = new JobParameters()
        {
            groupAlias = groupAlias,
            displayName = "Team Name",
            teamDescription = "Team Description",
            teamOwner = config["teamOwner"]
        },
        runOn = ""
    }                
};
var jsonRequest = JsonSerializer.Serialize(request);
string jobStartUrl = $"https://management.azure.com/subscriptions/{config["subscriptionID"]}/resourceGroups/{config["resourceGroupID"]}/providers/Microsoft.Automation/automationAccounts/{config["automationAccount"]}/jobs/Creation{groupAlias}?api-version=2023-11-01";
var jobStartReq = new HttpRequestMessage(HttpMethod.Put, jobStartUrl);
jobStartReq.Content = new StringContent(jsonRequest, Encoding.UTF8);
jobStartReq.Content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
var jobStartResult = await client.SendAsync(jobStartReq);
jobStartResult.EnsureSuccessStatusCode();
var jobStartContent = await jobStartResult.Content.ReadAsStringAsync();
var createdAutomationJob = JsonSerializer.Deserialize<AutomationJob>(jobStartContent);
string jobName = createdAutomationJob.name;

Although the client above was easier to initialize, this looks like a bit more code. But take into account that with REST you are responsible for deserializing the results yourself. For deserializing I used reduced classes based on the REST documentation.
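To illustrate the idea of "reduced classes", here is a Python sketch that models only the fields this post actually reads (name, status, jobId) and ignores everything else in the response. The field selection is an assumption based on the snippets above; the real job payload contains many more properties:

```python
import json
from dataclasses import dataclass

@dataclass
class JobProperties:
    status: str
    jobId: str

@dataclass
class AutomationJob:
    name: str
    properties: JobProperties

def parse_job(payload: str) -> AutomationJob:
    """Deserialize only the fields we care about; unknown fields are ignored."""
    raw = json.loads(payload)
    props = raw.get("properties", {})
    return AutomationJob(
        name=raw.get("name", ""),
        properties=JobProperties(
            status=props.get("status", ""),
            jobId=props.get("jobId", ""),
        ),
    )
```

The same principle applies to the C# classes: declare only the properties you read, and the JSON serializer will skip the rest.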

Check for completion (REST)

Also with REST, a check for completion of the job can of course be done. The pattern is the same as above; only the calls are slightly different.

string jobName = createdAutomationJob.name;
int count = 0;
while (count < 10)
{
    string jobCheckUrl = $"https://management.azure.com/subscriptions/{config["subscriptionID"]}/resourceGroups/{config["resourceGroupID"]}/providers/Microsoft.Automation/automationAccounts/{config["automationAccount"]}/jobs/{jobName}?api-version=2023-11-01";
    var checkReq = new HttpRequestMessage(HttpMethod.Get, jobCheckUrl);
    var jobCheckResult = await client.SendAsync(checkReq);
    jobCheckResult.EnsureSuccessStatusCode();
    var checkContent = await jobCheckResult.Content.ReadAsStringAsync();
    var newAutomationJob = JsonSerializer.Deserialize<AutomationJob>(checkContent);
    if (newAutomationJob.properties.status == "Completed")
    {
        Console.WriteLine($"Job Ended {newAutomationJob.properties.jobId}");
        break;
    }
    if (newAutomationJob.properties.status == "Failed" ||
      newAutomationJob.properties.status == "Stopped")
    {
        Console.WriteLine($"Job Ended unsuccessfully {newAutomationJob.properties.jobId}");
        break;
    }
    count++;
    Thread.Sleep(30000);
}
Console.ReadLine();

Using the job name from the original start, the job is requested again. The job object is then checked for its status, and if it is not yet done either way, the loop continues.

As usual there is also a full sample code repository on GitHub, illustrating the shown approaches with the .NET SDK as well as with the REST API.

Markus is a SharePoint architect and technical consultant with focus on latest technology stack in Microsoft 365 Development. He loves SharePoint Framework but also has a passion for Microsoft Graph and Teams Development.
He works for Avanade as an expert for Microsoft 365 Dev and is based in Munich.
In 2021 he received his first Microsoft MVP award in M365 Development for his continuous community contributions.
Although partially inspired by his daily work, opinions are always personal.
Demystifying Teams creation in Microsoft 365 (2023)

Back in 2019, in a public community call, I first talked publicly about my years-long experience with provisioning Microsoft 365 Groups, in those days also transformed into Microsoft Teams. Now, while preparing for another significant change in Microsoft SharePoint security, I want to look back at what has changed since then.

My very first public community demo – Provision Teams with Azure Automation


Create Teams

There are two approaches to create a Team in Microsoft 365: first, create it based on an existing unified Group; second, create a Team directly. I first wrote about the difference in 2019, right before my very first community demo (so excuse the nervousness, or take it as encouragement to try and start yourself).

Create Group first

To create a Group first, a minimum of the following body needs to be posted against the /groups API.

POST https://graph.microsoft.com/v1.0/groups
{
    "displayName": "Blog Group 1",
    "description": "Description for Blog Group 1",
    "groupTypes": [
        "Unified"
    ],
    "mailEnabled": true,
    "mailNickname": "Blog1",
    "securityEnabled": false,
    "visibility": "Private",
    "owners@odata.bind": [
        "https://graph.microsoft.com/v1.0/users/<UserID>"
    ],
    "members@odata.bind": [
        "https://graph.microsoft.com/v1.0/users/<UserID>"
    ]
}

Some things are necessary here. For a later Team creation, groupTypes needs to contain Unified, mailEnabled must be true and securityEnabled must be false. A displayName and mailNickname are required anyway. It is optional, but makes sense, to already add users as owners and members as shown here.

The user value (or other valid ones) can alternatively be put into an array [] to add more than one.
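For instance, binding two owners at creation time would look like the following fragment. This is a sketch with placeholder IDs, consistent with the placeholder style of the original body:

```json
"owners@odata.bind": [
    "https://graph.microsoft.com/v1.0/users/<UserID1>",
    "https://graph.microsoft.com/v1.0/users/<UserID2>"
]
```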

Once a group is created with that endpoint, it exists with a unique ID. This ID is later also used by the corresponding Team, but it is not yet available on the /teams endpoint. In PnP PowerShell the same operation could look like this:

New-PnPMicrosoft365Group `
    -DisplayName "Blog Group 1" `
    -Description "Description for Blog Group 1" `
    -MailNickname "Blog1" `
    -IsPrivate:$true `
    -CreateTeam:$true # Includes Teamify, see below

Teamify

To extend a created group as a Team another small call needs to be executed against the /teams endpoint.

POST https://graph.microsoft.com/v1.0/teams
{
    "template@odata.bind": "https://graph.microsoft.com/v1.0/teamsTemplates('standard')",
    "group@odata.bind": "https://graph.microsoft.com/v1.0/groups('<YourGroupID>')"
}

That body as a POST request is all that is needed here. Microsoft recommends waiting up to 15 minutes after the group is created, but in practice I never experienced it taking that long; 1-5 minutes seemed sufficient. To be on the safe side, the /groups endpoint can be checked in a loop for full existence of the group.
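Such a safety loop can be sketched as follows. The `fetch_group` callable is a placeholder assumption standing in for an authenticated GET on /groups/{id} that returns None while the group is not yet resolvable:

```python
import time

def wait_for_group(fetch_group, max_attempts=10, delay_seconds=30):
    """Call fetch_group() until it returns a group object.

    fetch_group is any callable returning the group payload, or None if the
    lookup still fails while provisioning completes in the background.
    """
    for _ in range(max_attempts):
        group = fetch_group()
        if group is not None:
            return group
        time.sleep(delay_seconds)
    raise TimeoutError("Group not available after polling")
```

Once the group resolves reliably, the teamify request above can be sent with much less risk of a transient failure.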

The body speaks for itself. Of course a group ID is required, but also a Teams template. Templates are not covered in detail in this post, so 'standard' is used here.

In two steps, a Team was now created starting from a Group. Next it is shown how this can happen in one step, what the difference is, and whether that is really the better idea.

Create Team directly

To directly create a Team, a minimum of the following body needs to be posted against the /teams API.

{
    "template@odata.bind": "https://graph.microsoft.com/v1.0/teamsTemplates('standard')",
    "displayName": "Blog Team 1",
    "description": "Blog Team 1 Description",
    "members": [
        {
            "@odata.type": "#microsoft.graph.aadUserConversationMember",
            "roles": [
                "owner"
            ],
            "user@odata.bind": "https://graph.microsoft.com/v1.0/users('<UserID>')"
        }
    ]
}

This looks pretty handy compared with the Group request above. But what are the consequences? We have a single member who is owner as well (necessary, of course) and no further control over the underlying Group's settings. So let's add just two settings from above to try a bit:

{
    "template@odata.bind": "https://graph.microsoft.com/v1.0/teamsTemplates('standard')",
    "displayName": "Blog Team 1",
    "description": "Blog Team 1 Description", 
    "members": [ 
        { 
            "@odata.type": "#microsoft.graph.aadUserConversationMember", 
            "roles": [ 
                "owner" 
            ], 
            "user@odata.bind": "https://graph.microsoft.com/v1.0/users('<UserID>')" 
        } ,
        {
            "@odata.type": "#microsoft.graph.aadUserConversationMember",
            "roles": [
                "member"
            ],
            "user@odata.bind": "https://graph.microsoft.com/v1.0/users('4186d8d8-d13e-48d4-987d-62ba19d7fc61')"
        }
    ],
    "mailNickname": "Blog1"
}

And Bummer!:

We are not really able to add more than one owner here? Microsoft, you cannot be serious… Also, the mailNickname is set differently than requested, in case this "Group relevant" setting is important for the result. And there is also no simple control over the email of the default channel.

If you try to "hijack" the 'General' channel like this, it is a step forward. But the intention here is not to overengineer, rather to show the complexity of a Team object.

"channels": [
    {
        "displayName": "General",
        "isFavoriteByDefault": true,
        "description": "This is a descr",
        "email": "Blog1@yourtenant.onmicrosoft.com"
    }
]

Works …🆗

And for a better understanding: in most cases I like the top-down strategy, so why not start with a Group and then adjust whatever needs to be adjusted on the Team?

In PnP PowerShell, the same operation could look like this:

New-PnPTeamsTeam -DisplayName "Blog Group 1" `
    -Description "Description for Blog Group 1" `
    -MailNickname "Blog1" `
    -Visibility Private `
    -Owners "mail1@yourdomain.onmicrosoft.com" `
    -Members "mail1@yourdomain.onmicrosoft.com","mail2@yourdomain.onmicrosoft.com"

Although it states to "directly" create a Teams Team, the documentation clearly says: "If the Microsoft 365 Group does not exist yet, it will create it first and then add a Microsoft Teams team to the group." (As it is open source, you can also dig into the source code, which is by the way a great way to learn from the real coding rockstars.) So this is the same approach as above and nothing to worry about as per Microsoft's best practices.

It also "magnificently" handles adding one or more owners/members here (as well as with the group-first approach) 😉

As a conclusion, especially for Microsoft Graph (which should cover most plain or SDK development approaches), Microsoft also clearly says:

All teams are backed by Microsoft 365 groups. The quickest way to get your team up and running when you create new teams via Microsoft Graph is to set up a new Microsoft 365 group, all owners and members, and convert that into a team.

I must admit that this document is slightly older than the latest version of the Teams API documentation. Nevertheless: "Hey Microsoft, at least keep your documentation consistent" 😉

Members and Owners

As shown above, it is good practice to already add owners and members to the group during the creation process. For this, it is essential that access to those resources is available: if users shall be added, at least read access to users is required; for groups or other service principals, the groups or the directory need to be accessible. Requests are made against the members or owners endpoint of the specific group.

POST https://graph.microsoft.com/v1.0/groups/<GroupID>/members
POST https://graph.microsoft.com/v1.0/teams/<GroupID>/members

Part of the body then looks familiar from above:

{
"@odata.id": "https://graph.microsoft.com/v1.0/users/<UserID>"
}

The user value (or other valid ones) can alternatively be put into an array [] to add more than one.

Permissions

But what about permissions? Provisioning scenarios in particular often run in unattended mode, so application permissions, which are much more sensitive than delegated ones, need to be applied.

| | Create | Add initial Members / Owners | Teamify | Maintain |
| --- | --- | --- | --- | --- |
| Group | Group.Create, User.Read.All, Directory.Read.All | Group.ReadWrite.All, Directory.ReadWrite.All | Team.Create, User.Read.All, Directory.Read.All | Group.ReadWrite.All, Directory.ReadWrite.All |
| Team | + Team.Create | Group.ReadWrite.All, Directory.ReadWrite.All | N / A | TeamSettings.ReadWrite.All, TeamSettings.ReadWrite.Group |

Necessary permissions for Groups/Teams creation and maintenance

For the creation column this does not look very critical; "Teamify" also looks like an interesting combination and in fact works least-privileged exactly that way. But in the other columns, especially the maintain part, it gets more sensitive. Group.ReadWrite.All should be marked bold on the All part: think about the consequences. It is not only Microsoft 365 unified Groups, but also security groups or even directory objects that you get access to, and then all of them.
The TeamSettings permissions only apply to the specific Teams settings (fun settings, for example). The more important, also group-relevant, settings need either the older Group.ReadWrite.All or the new TeamSettings.ReadWrite.Group permission. This is covered in another upcoming post concerning resource-specific consent, because only in that combination can it be used.

Summary

So, to answer the original question of what has changed since 2019: it is still best practice to create a group first and then make a team out of it ("teamify"). In times of so-called Zero Trust scenarios, a closer look at the permissions should be taken, so that section of this post should get the most attention. The "Create.*" permissions are a step forward but sadly not enough; something more on this will be covered in my next post. For now I need to leave you with the requirement for Group.ReadWrite.All or Directory.ReadWrite.All, which might be significantly more than you want in a Teams creation and maintenance process (TeamSettings.ReadWrite.Group is better than Group.ReadWrite.All but does not help in general cases). 🤷🏼‍♂️

In comparison, the new Teams API leads to a poor experience when starting from scratch, respectively complex handling of detailed settings; the latter has not been a big win so far. I will personally stick to Microsoft's recommendation of creating a Group first, but let's see in part 2 whether resource-specific consent (RSC) brings another benefit in one direction or the other, as I mostly care about security in such scenarios.

Modern SharePoint content type publishing – Manually or automated (PnP)

For my upcoming Microsoft 365 development sample project I need a SharePoint content type. This post intends to show how to quickly create one, but also make it deployable and reproducible. Therefore, I demonstrate a combination of clicking in the UI and PnP provisioning. A way from a rapid prototype towards an enterprise-ready and stageable solution.


Columns

A SharePoint column should be created as a site column first. This can be done from the site settings.

Site columns listed in Site settings
Create new site column

There are various column types, which I will not explain in detail here. What they all have in common is the need for a proper naming convention. What I always do, even when creating columns in the UI, is avoid blanks in names on creation. This is because on creation not only the display name is set but also the internal and static names. And internal names containing blanks are horrible, especially for developers, and a potential case for Amnesty International. 🤷🏼‍♂️

So creating a column with the initial name Offering Submitter would result in an internal name of Offering_x0020_Submitter, for instance, which is much harder for developers to address.
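The escaping rule behind this can be sketched as follows. This is a simplified illustration, not SharePoint's actual implementation: the real escaping covers more special characters than just the blank.

```python
def to_internal_name(display_name: str) -> str:
    """Simplified sketch of how SharePoint derives internal names:
    a blank is escaped as _x0020_, the character's code point in hex.
    (SharePoint escapes further special characters the same way.)
    """
    return display_name.replace(" ", "_x0020_")
```

This is why "Offering Submitter" ends up as the unwieldy Offering_x0020_Submitter, and why blanks are best avoided at creation time.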

If the column name is changed in a second step, only the display name changes, the internal name stays as is.

Edit site column (Display Name)

This was the UI approach; in a PnP provisioning template a similar column would look like the following:

<pnp:SiteFields>
        <Field ID="{0331efb1-e3d0-4180-ac8d-d9ded4935e63}" 
                   StaticName="OfferingReviewer" 
                   Name="OfferingReviewer"
                   DisplayName="Offering Reviewer"
                   Type="User" 
                   List="UserInfo" 
                   ShowField="ImnName" 
                   UserSelectionMode="PeopleOnly" 
                   UserSelectionScope="0" 
                   Group="MM" />
		    ....
</pnp:SiteFields>

Content Type

The SharePoint content type gallery was recently modernized. Nevertheless, it’s still reachable via site settings and the creation of a content type is still the same.

New Site Content Type gallery

A content type is simply created by deriving from a parent content type, giving it a name and adding site columns.

Add site column to content type
Add site column to content type – Pick

In a PnP provisioning template the same would look like the following:

 <pnp:ContentTypes>
        <pnp:ContentType ID="0x0101003656A003937692408E62ADAA56A5AEEF" Name="Offering" Description="" Group="MM" NewFormUrl="" EditFormUrl="" DisplayFormUrl="">
          <pnp:FieldRefs>
            ....
            <pnp:FieldRef ID="0331efb1-e3d0-4180-ac8d-d9ded4935e63" Name="OfferingReviewer" UpdateChildren="true" />
			....
          </pnp:FieldRefs>          
      </pnp:ContentType>
</pnp:ContentTypes>

Using a content type hub would not change anything in the creation or modification of the content type itself. Therefore, this is not discussed here any further.

List association

Last but not least, the content type needs to be assigned to a list or a library. For that, the list or library needs to be enabled for custom content types first.

Library settings (Advanced) – Enable content types

Next our custom content type can be added to the list or library, and potentially existing standard content types can be removed as well.

Add existing site content type to list/library

Once again in a PnP template the same would look like the following:

<pnp:Lists>
        <pnp:ListInstance Title="{parameter:DocumentsTitle}" Description="" OnQuickLaunch="true" TemplateType="101" Url="{parameter:DocumentsUrl}" ContentTypesEnabled="true" EnableVersioning="true" MinorVersionLimit="0" MaxVersionLimit="500" DraftVersionVisibility="0" TemplateFeatureID="00bfea71-e717-4e80-aa17-d0c71b360101" EnableAttachments="false" ListExperience="NewExperience" ImageUrl="/_layouts/15/images/itdl.png?rev=45" IrmExpire="false" IrmReject="false" IsApplicationList="false" ValidationFormula="" ValidationMessage="">
          <pnp:ContentTypeBindings>
            <pnp:ContentTypeBinding ContentTypeID="0x0101003656A003937692408E62ADAA56A5AEEF" />
            <pnp:ContentTypeBinding ContentTypeID="0x0101" Remove="true" />
          </pnp:ContentTypeBindings>
  </pnp:ListInstance>
</pnp:Lists>  

Document template

A special thing for libraries but not for lists are document templates. Those can be set on content type level and each time a new file is created based on that content type the given template is used.

Advanced Content Type settings – Document Template

An empty template is set automatically and by default it is located in the hidden _cts site folder:

Content Type's Document Template – Location

As the template's subfolder and template file name are by default the same as the content type name, PnP provisioning can simply upload or overwrite that existing file:

<pnp:ProvisioningTemplate>
....
      <pnp:Files>
        <pnp:File Src="Offering.dotx" Folder="_cts/Offering/" Overwrite="true" />
      </pnp:Files>
</pnp:ProvisioningTemplate>

Pay attention: this is only possible for the given location (the hidden _cts folder) when custom script is allowed on the target site. Therefore, during deployment I regularly allow this temporarily with the PnP PowerShell cmdlet Set-PnPSite -NoScriptSite:$false, while finally revoking the setting again with Set-PnPSite -NoScriptSite:$true.

From prototype to production

If there is no reference already, the fastest way is prototyping columns and content types in the UI as described above. Also, the assignment to the library and the adjustment of the document template can be done that way. But for a reproducible solution that can pass several stages from dev to production for instance, a deployable PnP template is the better approach.

So regularly, after having the first satisfying prototype in the UI, all the customizations can be persisted in a PnP template. Then they can be deployed reproducibly, and also slightly adjusted.

At first, with the following PnP PowerShell cmdlets you can retrieve an initial PnP template from your prototype site having all the customization:

$siteUrl = "https://Your-Tenant.sharepoint.com/teams/Offerings/"
Connect-PnPOnline -Url $siteUrl -Interactive
# -Handlers is optional to concentrate on relevant customizations and ignore others
Get-PnPSiteTemplate -Out "C:\temp\pnp_templates\Offerings_In.xml" -Handlers Fields,ContentTypes,Lists

The next step would be polishing. Although only custom elements should be in the template, you still might find customizations not applicable to your solution; simply remove them. I also clean up the site columns in particular by removing defaults or site-related attributes that will be rewritten anyway, such as:

Required="FALSE" # default value 
EnforceUniqueValues="FALSE" # default value
Indexed="FALSE" # default value
SourceID="{11efd306-e887-49be-89b6-765abdd82df0}" # site related
LCID="1033" # should come from site settings

Next, I also tend to keep the same order of attributes in the site columns. I always start with the ID and the Name, because this way I can easily copy and paste them to the referenced fields <pnp:FieldRef /> inside the content type, where only ID and Name are needed.

<Field ID="{03f60e71-3844-448a-8629-3c6c8c7f603d}" 
       Name="OfferingDescription" 
       StaticName="OfferingDescription" 
       DisplayName="Offering Date" 
       Type="DateTime"
     ....

For your reference, you can find the full PnP template in my GitHub repository of my upcoming next Microsoft 365 dev sample.

Increase performance in Azure Automation with Microsoft Graph delta approach

In my last post I showed a pattern for running jobs on a large amount of resources with Azure Automation. Another option is to reduce this large amount of resources as much as you can. For resources retrieved via Microsoft Graph there is an option for that: the so-called delta approach.

Meanwhile a significant number of resources support this. Here is a small example on Groups, which also applies to Microsoft Teams, as the Groups object represents the "base" of each Team. The general pattern is always quite the same:

  • You call the /delta function on the given list endpoint such as https://graph.microsoft.com/v1.0/groups
  • If there is a nextLink (Paging) you iterate till the end
  • Beside the result of all items you now have a deltaLink
  • Next time you make a request on that deltaLink and you will only receive items that changed since the last request
  • Then you have a next deltaLink and so on
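The steps above can be sketched generically. In the following Python sketch, `get_json` is a placeholder assumption standing in for an authenticated Graph GET that returns the parsed JSON body:

```python
def fetch_delta(get_json, start_url):
    """Follow @odata.nextLink pages and return (all_items, delta_link).

    get_json is any callable performing an authenticated GET and returning
    the parsed JSON body. start_url is either the /delta function URL for a
    full sync, or a previously stored @odata.deltaLink for an incremental one.
    """
    items = []
    url = start_url
    while url:
        page = get_json(url)
        items.extend(page.get("value", []))
        if "@odata.nextLink" in page:
            url = page["@odata.nextLink"]  # more pages to aggregate
        else:
            # the last page carries the deltaLink to persist for the next run
            return items, page.get("@odata.deltaLink")
    return items, None
```

Persisting the returned delta link (here, in an automation account variable) is what turns the next run into an incremental one.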

I wrote this as a sample PowerShell script dedicated to an Azure Automation runbook, but it should be easily transferable to any other kind of application, as the Graph calls are all REST-based.

param (
    [Parameter(Mandatory=$false)]
    [bool]$Restart=$false
)
$creds = Get-AutomationPSCredential -Name '<YourGraphCredentials_AppID_AppSecret>'

$GraphAppId = $creds.UserName 
$GraphAppSecret = $creds.GetNetworkCredential().Password
$TenantID = Get-AutomationVariable -Name '<YourTenantIDVariable>'

$resource = "https://graph.microsoft.com/"
$ReqTokenBody = @{
    Grant_Type    = "client_credentials"
    Scope         = "https://graph.microsoft.com/.default"
    client_Id     = $GraphAppId
    Client_Secret = $GraphAppSecret
}

$loginUrl="https://login.microsoftonline.com/$TenantID/oauth2/v2.0/token"
$TokenResponse = Invoke-RestMethod -Uri $loginUrl -Method POST -Body $ReqTokenBody
$accessToken = $TokenResponse.access_token

$header = @{
    "Content-Type" = "application/json"
    Authorization = "Bearer $accessToken"
}

$deltaLink = Get-AutomationVariable -Name 'GroupsDeltaLink'
if ($Restart -or [String]::IsNullOrEmpty($deltaLink))
{
    $requestUrl = "https://graph.microsoft.com/v1.0/groups/delta"
}
else
{
    $requestUrl = $deltaLink
}    
$response = Invoke-RestMethod -Uri $requestUrl -Method Get -Headers $header

[System.Collections.ArrayList]$allGroups = $response.value
while ($response.'@odata.nextLink') {
    $response = Invoke-RestMethod -Uri $response.'@odata.nextLink' -Method Get -Headers $header
    $allGroups.AddRange($response.value)
}
$newDeltaLink = $response.'@odata.deltaLink'
Write-Output "$($allGroups.Count) Groups retrieved"
Write-Output "Next delta link would be $newDeltaLink"

Set-AutomationVariable -Name 'GroupsDeltaLink' -Value $newDeltaLink

Write-Output "Groups Results: "
foreach($group in $allGroups)
{
    if ($group.groupTypes -and $group.groupTypes.Contains("Unified")) # filter for "Unified" (or Microsoft 365) Groups
    {
        Write-Output "Title: $($group.displayName)"
    }
}
# Change some groups / teams by
    # Adding / Removing members
    # Edit Title
    # Edit Description
    # Change Visibility from Public to Private or vice versa
# Re-run the runbook with Restart=$false and you will only receive the delta, that is the changed groups

The first part of the script is all about authentication and grabbing the access token. (“Group.Read.All” would be the required application permission for the app registration in this case.)
Then we try to retrieve a stored deltaLink from another automation account variable. If that is null or empty, or if the runbook was forced to do a “restart”, an initial delta call is made against the groups endpoint via https://graph.microsoft.com/v1.0/groups/delta. If a working deltaLink exists, the request is executed against that URL instead. In both cases the request is then repeated and the results are aggregated as long as additional “pages” are available on the server side, indicated by an existing “@odata.nextLink”. Once that is no longer present, the last (or only) “page” of results has been retrieved and an “@odata.deltaLink” is returned. That gets persisted to the automation account variable again for the next run.

The whole result can then be iterated in the foreach loop, and you can do with the groups whatever you need to do.
As you might know, the /groups endpoint in general also returns Security Groups, and there is a $filter for that (?$filter=groupTypes/any(c:c+eq+'Unified')). Unfortunately that $filter is not yet supported together with the delta function, so “Unified” Groups need to be evaluated client-side at this point. But as long as the delta request returns a significant reduction of the overall result, this is still much faster.
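For comparison, on the plain /groups endpoint (without /delta) the server-side filter works. A minimal sketch, reusing the $header from the script above:

```powershell
# Server-side filter for Microsoft 365 ("Unified") Groups - works on /groups,
# but NOT together with /delta (there you have to filter client-side as shown above)
$filterUrl = "https://graph.microsoft.com/v1.0/groups?`$filter=groupTypes/any(c:c+eq+'Unified')"
$filtered = Invoke-RestMethod -Uri $filterUrl -Method Get -Headers $header
Write-Output "$($filtered.value.Count) Unified Groups on first page"
```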

Also pay attention that those deltaLinks expire at some point. For instance, on jobs that had not run for a while due to undetected error conditions, I faced that issue after 30 days already. Nevertheless I hope this little sample helps to improve your runtime and/or eventually reduce your server workload on some of your operations.
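When an expired deltaLink is used, Graph responds with HTTP 410 Gone, so a fallback to a full enumeration is needed. A hedged sketch of how that could look, assuming the $header and variable names from the script above:

```powershell
try {
    $response = Invoke-RestMethod -Uri $deltaLink -Method Get -Headers $header
}
catch {
    if ($_.Exception.Response.StatusCode.value__ -eq 410) {
        # Delta token expired: restart with a full sync against the base endpoint
        Write-Output "DeltaLink expired, falling back to full enumeration"
        $response = Invoke-RestMethod -Uri "https://graph.microsoft.com/v1.0/groups/delta" -Method Get -Headers $header
    }
    else { throw }
}
```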

Markus is a SharePoint architect and technical consultant with focus on latest technology stack in Microsoft 365 Development. He loves SharePoint Framework but also has a passion for Microsoft Graph and Teams Development.
He works for Avanade as an expert for Microsoft 365 Dev and is based in Munich.
In 2021 he received his first Microsoft MVP award in M365 Development for his continuous community contributions.
Although partially inspired by his daily work, opinions are always personal.
Long running jobs on SharePoint and M365 resources with Azure Automation

Not only in large tenants can there be the need to run jobs on lots of your SharePoint or other Microsoft 365 resources (such as Teams or Groups, OneDrive accounts, and so on). In this blog post I will show you how this can be achieved with Azure Automation and how to overcome the limit of 3 hours runtime per Azure Automation job.

Architecture

Many large organizations own a 5- or 6-digit number of site collections, not to talk about even more resources of other kinds, such as document libraries or OneDrive accounts. If there is the need to operate on all of them, either for reporting purposes or to update/manipulate some of the existing settings, this can end in a very long runtime. Azure Automation is regularly a good candidate for those kinds of operations, especially when (PnP) PowerShell is your choice. As the maximum runtime for one single job / runbook is about 3 hours, the whole operation (on a large number of resources) needs to be split up into several jobs. Assume the whole job on one resource needs 90s (1.5 min). Then you should not handle more than 100 resources with one job (officially 120 would fit, but let’s keep a buffer). Evaluating the whole number of resources to run on and kicking off those individual jobs would be the responsibility of one parent job. Attention needs to be paid to some limits here: for instance, no more than 200 jobs can run in parallel, although later jobs are queued until capacity is free. Also, no more than 100 jobs can be submitted per 30s; more would fail. So submission loops should take their time; if in doubt, use Start-Sleep -s with a low seconds value.
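The batch-size math from above can be sketched like this (the 90s per item and the safety buffer are the assumptions just discussed):

```powershell
$maxJobRuntimeSec = 3 * 60 * 60      # ~3h hard limit per Automation job
$perItemSec       = 90               # assumed runtime per resource
$buffer           = 0.8              # keep ~20% safety margin

$batchSize = [math]::Floor($maxJobRuntimeSec / $perItemSec * $buffer)
Write-Output "Max items per child job: $batchSize"   # 96 in this example

# Also respect the submission throttle (no more than 100 new jobs per 30s),
# e.g. with Start-Sleep -Seconds 1 between Start-AzAutomationRunbook calls
```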
It needs to be ensured that each job has its own runtime, so the limits affect each one separately. The runbook architecture might look like this:

Parent / child architecture for runbooks

Additionally shown is a central logging / reporting capability. This will be explained at a later point in this post.

Parent runbook

The parent runbook has 2 tasks.

  1. Evaluate the resources to be worked on
  2. Kick off the child runbook several times with a subset of resources

Evaluate resources

This can be done in several ways, depending on your specific scenario. But I guess you already know about it, so here you find only two examples: Get-PnPTenantSite for retrieving (all, or all specific) site collections, and search with Submit-PnPSearchQuery for retrieving sites, lists, or other elements.

$azureSPOcreds = Get-AutomationPSCredential -Name '<YourCredentialResourceName>'
$clientID = $azureSPOcreds.UserName
$secPassword = $azureSPOcreds.Password

$cert = Get-AutomationCertificate -Name '<YourCertificateResourceName>'
$Password = $azureSPOcreds.GetNetworkCredential().Password
$pfxCert = $cert.Export(3 ,$Password) # 3=Pfx
$global:CertPath = Join-Path "C:\Users" "SPSiteModification.pfx"
Set-Content -Value $pfxCert -Path $global:CertPath -Force -Encoding Byte | Write-Verbose

[xml]$global:tenantConfig = Get-AutomationVariable -Name 'TenantConfig'

Connect-PnPOnline   -CertificatePath $global:CertPath `
                    -CertificatePassword $secPassword `
                    -Tenant $global:tenantConfig.Settings.Azure.AADDomain `
                    -ClientId $clientID `
                    -Url $global:tenantConfig.Settings.Tenant.TenantURL

# Here the options retrieving resources
# Option 1: Get all sites
$Sites = Get-PnPTenantSite -Detailed

foreach($Site in $Sites)
{
    ....
}

# Option 2: Use search to evaluate all sites
$query = "contentclass:STS_Site"
# $query = "contentclass:STS_Site AND WebTemplate:SITEPAGEPUBLISHING" # Only search modern Communication Sites
$result = Submit-PnPSearchQuery -Query $query `
                                -All `
                                -TrimDuplicates:$false `
                                -SelectProperties @("Title","Path","SiteId")

foreach ($row in $result.ResultRows)
{
    $row.Path # ... the site url
}

# Option 3: Use search to evaluate all document libraries
$query = "contentclass:STS_List_DocumentLibrary"
$result = Submit-PnPSearchQuery -Query $query `
                                -All `
                                -TrimDuplicates:$false `
                                -SelectProperties @("Title","ParentLink","SiteId","ListId")
foreach ($row in $result.ResultRows)
{
    $row.ParentLink # parent web url for establishing later site connection
    $row.ListId # library id for  Get-PnPList -Identity ...
}

Similar to my post on modern authentication in Azure Automation with PnP PowerShell, the first step is the connection to the SharePoint tenant. Next, the resources are retrieved via one of several options, and finally the resources are iterated and put into bunches for the child runbook. The batch size and its calculation depend on your expected number of resources and the expected runtime per resource item.

Option 1 is quite simple and needs no further illustration.
Option 2 uses search to retrieve sites, and here two parameters are very important. Unlike a user-based search, where only the most relevant results matter, here the main aspect is completeness. Therefore we need to retrieve -All results (automatically overcoming paging), but also set -TrimDuplicates to $false, so that potentially similar sites, which in fact are individual resources, are not ignored.
Option 3 is quite similar to option 2 but uses another content class to retrieve all document libraries.

Kick off child runbook

Inside the foreach loop above it’s time to kick off the child runbook. To have this as an individual process with an independent runtime limit, it cannot simply be called as a “sub-script” as I did in my Teams provisioning series. Instead, it needs to be started as a separate runbook, with authentication to the automation account beforehand:

# Connect to Azure with Run As account data
$servicePrincipalConnection = Get-AutomationConnection -Name 'AzureRunAsConnection'

Connect-AzAccount `
        -ServicePrincipal `
        -Tenant $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -Subscription $servicePrincipalConnection.SubscriptionId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint

$countSites = 0
$batchSize = 25;
$SiteUrls = @();

foreach($Site in $Sites)
{
    $countSites++;
    $SiteUrls += $Site.Url;

    if($countSites -eq $batchSize)
    {
        $countSites = 0;

        # Start child runbook job with batched site URLs
        $params = @{"siteUrls"=$SiteUrls}

        Start-AzAutomationRunbook `
            -Parameters $params `
            -AutomationAccountName "<YourAutomationAccountName>" `
            -ResourceGroupName "<YourResourceGroupName>" `
            -Name "<YourChildRunbookName>"

        # Empty SiteUrls array for the next bunch
        $SiteUrls = @();
    }
}

# Do not forget the last (partial) batch
if($SiteUrls.Count -gt 0)
{
    $params = @{"siteUrls"=$SiteUrls}
    Start-AzAutomationRunbook `
        -Parameters $params `
        -AutomationAccountName "<YourAutomationAccountName>" `
        -ResourceGroupName "<YourResourceGroupName>" `
        -Name "<YourChildRunbookName>"
}

I am using the “Az” module here. Although Azure Automation accounts still have the AzureRM modules installed by default, you should install the Az modules instead. They meanwhile offer feature parity, while AzureRM is deprecated with retirement announced for 2024.

Child runbook

The child runbook now takes the parameters and performs the operations on the given site collections. This is nothing special and would have been implemented the same way in a single job. Here is a simple example evaluating the Email addresses of the site collection administrators:

param 
(
    [Parameter(Mandatory=$true)]
    [string[]]$siteUrls=@()
)

$azureSPOcreds = Get-AutomationPSCredential -Name '<YourCredentialResourceName>'
$clientID = $azureSPOcreds.UserName
$secPassword = $azureSPOcreds.Password

$cert = Get-AutomationCertificate -Name '<YourCertificateResourceName>'
$Password = $azureSPOcreds.GetNetworkCredential().Password
$pfxCert = $cert.Export(3 ,$Password) # 3=Pfx
$global:CertPath = Join-Path "C:\Users" "SPSiteModification.pfx"
Set-Content -Value $pfxCert -Path $global:CertPath -Force -Encoding Byte | Write-Verbose

[xml]$global:tenantConfig = Get-AutomationVariable -Name 'TenantConfig'

foreach($siteUrl in $siteUrls)
{
    Connect-PnPOnline   -CertificatePath $global:CertPath `
                        -CertificatePassword $secPassword `
                        -Tenant $global:tenantConfig.Settings.Azure.AADDomain `
                        -ClientId $clientID `
                        -Url $siteUrl

    $web = Get-PnPWeb
    Write-Output "Site has title $($web.Title)"
    $scaColl=Get-PnPSiteCollectionAdmin
    $strEmails="$($web.Title) # $siteUrl = "
    foreach($sca in $scaColl) 
    {
        $strEmails += "$($sca.Email) "
    }
    Write-Output $strEmails
}

In the beginning, the parameters for modern SharePoint authentication are grabbed. Inside the loop they are used each time for connecting to the corresponding site URL. Afterwards, the PnP PowerShell cmdlets to retrieve (or manipulate) resources of the given site can be executed.

Logging

When it comes to logging (and later reviewing) the results, errors, or events, or if part of the job is to report something, there is a new challenge. As the whole result is produced by lots of separate jobs running in parallel, a new technique is needed, because each job has an individual log. And do you want to check 500 logs for any results/events/inconsistencies?

So why not write to one central result? The simplest target would be a blob file in Azure Storage. But how to overcome concurrency issues with all these jobs running in parallel?

The append blob is the solution for this. And although there is no direct PowerShell cmdlet available yet, the implementation using the available REST endpoint is quite simple. You need to implement two steps:

  1. Create the blob file
  2. Write to it in “append” mode from each child runbook

Create the blob file

In the parent runbook the blob file needs to be created once (assuming you want to have one file per parent job execution):

# Connect to Azure with Run As account data
$servicePrincipalConnection = Get-AutomationConnection -Name 'AzureRunAsConnection'

Connect-AzAccount `
        -ServicePrincipal `
        -Tenant $servicePrincipalConnection.TenantId `
        -ApplicationId $servicePrincipalConnection.ApplicationId `
        -Subscription $servicePrincipalConnection.SubscriptionId `
        -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint

#Create output file
$context = New-AzStorageContext -StorageAccountName "<YourStorageAccountName>" -UseConnectedAccount
$date = Get-Date 
$blobName = "SiteAdmins$($date.Ticks).txt"
$filePath = "C:\Users\$blobName"
Set-Content -Value "" -Path $filePath -Force
Set-AzStorageBlobContent -File $filePath -Container "<YourStorageContainer>" -BlobType Append -Context $context

The Az connection you already know from above, as it’s essential to be able to start the child runbooks. But it’s also needed for the blob creation in the storage account, as the context is created with “-UseConnectedAccount”.
Then a filename based on the current timestamp is created. A file with that name is created empty and locally, and finally uploaded as a blob, where the BlobType “Append” is important for the further handling.

Finally, the $blobName needs to be handed over to the child runbook as well. Therefore the runbook parameters are extended:

    # Start child runbook Job with batched Site URL's
    $params = @{"siteUrls"=$SiteUrls;"logFile"=$blobName}

    Start-AzAutomationRunbook `
        -Parameters $params `
        -AutomationAccountName "<YourAutomationAccountName>" `
        -ResourceGroupName "<YourResourceGroupName>" `
        -Name "<YourChildRunbookName>"

Write to the blob by “append block blob”

In the child runbook there is the new parameter for the logfile. After the iteration over the site collections and the creation of the result string $strEmails it can be written to the blob.

This happens via the REST API, as there is no PowerShell cmdlet yet for the append block capability. For the REST API a bearer token is necessary, which can be retrieved from the Azure connection already in use.

param 
(
    [Parameter(Mandatory=$true)]
    [string[]]$siteUrls=@(),
    [Parameter(Mandatory=$true)]
    [string]$logFile
)
...

$servicePrincipalConnection = Get-AutomationConnection -Name 'AzureRunAsConnection'

Connect-AzAccount `
            -ServicePrincipal `
            -Tenant $servicePrincipalConnection.TenantId `
            -ApplicationId $servicePrincipalConnection.ApplicationId `
            -Subscription $servicePrincipalConnection.SubscriptionId `
            -CertificateThumbprint $servicePrincipalConnection.CertificateThumbprint
$resp = Get-AzAccessToken -Resource "https://storage.azure.com/"

foreach($siteUrl in $siteUrls)
{
    ....
    $strEmails="`n$($web.Title) # $siteUrl = "
    ....
    $date = [System.DateTime]::UtcNow.ToString("R")
    $header = @{
        "x-ms-date" = $date
        "x-ms-version" = "2019-12-12"
        "Authorization" = "Bearer $($resp.Token)"
    }
    # Content-Length (required by the Append Block operation) is set
    # automatically by Invoke-RestMethod based on the given body

    $requestUrl = "https://mmsharepoint.blob.core.windows.net/output/$logFile" + "?comp=appendblock"
    Invoke-RestMethod -Uri $requestUrl -Body $strEmails -Method Put -Headers $header -UseBasicParsing
}

Based on the current Azure connection, an access token for the resource "https://storage.azure.com/" is retrieved. Inside the loop, the result of our SharePoint operations (see above, here omitted) can be appended. The $header for this operation is important. Beside the bearer token (used quite the same way as typically known from Microsoft Graph REST calls, for instance), the exact content length is required by the operation; Invoke-RestMethod provides the Content-Length header automatically based on the given body. Furthermore, the current date needs to be provided, and it may not be older than 15 minutes once handled on the server side. This is the reason why the header is recreated on each loop iteration, just to be on the safe side.

Having the header, the body is the simple text constructed above. Only one small change is made here: pay attention to the newline “`n” at the very beginning of the string creation, to have one result line after the other in the blob file.

Managed Identity

Since spring 2021, managed identity is also available for Azure Automation accounts. At the time of writing this is still in preview (and has some limitations). Nevertheless, here I already show the procedure for authenticating against Azure resources, such as your storage account, on behalf of the automation account’s managed identity.

First of all, the managed identity needs to be enabled for the automation account. That is as simple as always and is described in several posts of this blog for similar resources such as Azure Functions or App Services.

Azure Automation Account – Enable Managed Identity

Once this is done, you need to assign role-based access (RBAC) on the resource to be consumed. Normally this can be achieved easily via the Azure Portal, but as Azure Automation managed identity is still in preview, you can currently only achieve this via code. Here is a little PowerShell script for that:

$managedIdentity = "182212a9-a487-42fb-9f21-31f7512c2053" # Object ID of the ManagedIdentity
$subscriptionID = "b2963255-e565-4a1b-ae83-81d48de20d73"
$resourceGroupName = "Default"
$storageAccountName = "myStorage"
New-AzRoleAssignment `
    -ObjectId $managedIdentity `
    -Scope "/subscriptions/$subscriptionID/resourceGroups/$resourceGroupName/providers/Microsoft.Storage/storageAccounts/$storageAccountName/" `
    -RoleDefinitionName "Storage Blob Data Contributor"

Once that role assignment is in place, the authentication inside the runbook is quite simple, and the rest stays exactly the same:

Connect-AzAccount -Identity # Connect is quite simple now

# .... the rest stays the same, run cmdlets or get an access token
$resp = Get-AzAccessToken -Resource "https://storage.azure.com/"
Write-Output $resp.Token

Scheduling

Another option to spread out the execution of the child runbooks is job scheduling. Maybe you fear throttling in SharePoint or Microsoft Graph in case you execute too many operations in parallel at the same time, and maybe there is no need to get the whole result of your operation in a very short amount of time?

In that case you do not have to kick off each runbook immediately; you could also create a schedule for each, so it will be started in the near future (maybe 10-120 minutes ahead?). For this you can create one-time schedules that automatically expire after being used once. But you need to take care of a later cleanup (or reuse them; New-AzAutomationSchedule will override an existing one, while Register-AzAutomationScheduledRunbook would simply fail as the runbook was already registered). In PowerShell such a setup would look like this:

...
# Taken from above's start of the child runbook and slightly modified
for($counter=0; $counter -lt $Sites.Length; $counter++)
{
    $countSites++;
    $SiteUrls += $Sites[$counter].Url;

    if($countSites -eq $batchSize)
    {
        $countSites = 0;

        # Start child runbook Job with batched Site URL's
        $params = @{"siteUrls"=$SiteUrls}
        $resourceGroupName = "<YourResourceGroupName>"
        $automationAccount = "<YourAutomationAccountName>"

        $currentDate = Get-Date
        $currentDate = $currentDate.AddMinutes(6 + $counter)
        New-AzAutomationSchedule `
            -AutomationAccountName $automationAccount `
            -Name "tmpSchedule$counter" `
            -StartTime $currentDate `
            -OneTime `
            -ResourceGroupName $resourceGroupName
        Register-AzAutomationScheduledRunbook `
            -ScheduleName "tmpSchedule$counter" `
            -Parameters $params `
            -RunbookName "<YourChildRunbookName>" `
            -AutomationAccountName $automationAccount `
            -ResourceGroupName $resourceGroupName
        # Empty SiteURLs array for next bunch 
        $SiteUrls = @();
    }
}

Above we used a foreach loop to start the child runbook; here it was turned into a for loop to have a counter as well. With that counter we create an individual start time and schedule for each batch of sites. Once the one-time schedule is created, the child runbook can be registered to it and will then be started based on that schedule.
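As mentioned above, the expired one-time schedules need a cleanup at some point. A hedged sketch of how that could look (the "tmpSchedule" name prefix matches the scheduling snippet above; account and resource group names are placeholders):

```powershell
# Remove expired one-time schedules created by the parent runbook
$resourceGroupName = "<YourResourceGroupName>"
$automationAccount = "<YourAutomationAccountName>"

Get-AzAutomationSchedule `
        -AutomationAccountName $automationAccount `
        -ResourceGroupName $resourceGroupName |
    Where-Object { $_.Name -like "tmpSchedule*" -and -not $_.IsEnabled } |
    ForEach-Object {
        Remove-AzAutomationSchedule `
            -Name $_.Name `
            -AutomationAccountName $automationAccount `
            -ResourceGroupName $resourceGroupName `
            -Force
    }
```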

I hope this explanation of all the various PS snippets helps you to build your own sample/solution with Azure Automation. If needed, I might also share the two whole runbook scripts as samples in my GitHub repo. Just leave a comment here in case this is desired and I’ll do my very best to deliver soon.
