Starting Azure automation runbooks programmatically

Azure Automation runbooks are a popular way to provision Microsoft Teams or SharePoint sites inside Microsoft 365. The content of the runbooks is typically written with PnP PowerShell, and that's well documented. But apart from a scheduled start that pulls a request from a list, how else can runbooks, or (better!) the corresponding jobs, be started? (Of course this way of starting is valid for any other runbook scenario.)

In the past, there was a (meanwhile deprecated) Azure Automation SDK and a webhook scenario which could be used to start a runbook on demand. Now there are the new Azure Resource Manager Automation SDK and the parallel REST API, and this post describes how to use them.

The client

First a client needs to be established, especially to handle the security options. Having that client, further operations can be fulfilled. In the following sample code an InteractiveBrowserCredential is used inside the DefaultAzureCredential, but as an alternative the EnvironmentCredential is prepared.

// For Dev and local env
Environment.SetEnvironmentVariable("AZURE_TENANT_ID", config["AZURE_TENANT_ID"]);
Environment.SetEnvironmentVariable("AZURE_CLIENT_ID", config["AZURE_CLIENT_ID"]);
Environment.SetEnvironmentVariable("AZURE_CLIENT_SECRET", config["AZURE_CLIENT_SECRET"]);
ArmClient client = new ArmClient(new DefaultAzureCredential(true)); // Enable interactive as well
            

The security considerations below mention which roles or permissions are needed; of course this also differs depending on the desired operations.

Create

An automation job will be created under the jobs of an existing automation account, so access to this account has to be established first.

var automationAcc = client.GetAutomationAccountResource(new ResourceIdentifier(config["automationAccount"]));
var automationJobs = automationAcc.GetAutomationJobs();

After that, the job needs some parameters: first the name of the runbook, potentially some custom parameters needed by the runbook, and finally the RunOn parameter.

var jobParameters = new Azure.ResourceManager.Automation.Models.AutomationJobCreateOrUpdateContent()
{
    RunbookName = config["runbookName"],
    RunOn = ""
};
// Using hard-coded parameters here
string alias = "TeamAlias";
jobParameters.Parameters.Add("displayName", "Team Name");
jobParameters.Parameters.Add("alias", alias);
jobParameters.Parameters.Add("teamDescription", "Team Description");
jobParameters.Parameters.Add("teamOwner", config["teamOwner"]);
var automationJob = automationJobs.CreateOrUpdate(Azure.WaitUntil.Started, $"Creation of {alias}", jobParameters);

Having that, the job can be started. Be aware that this is only the start, with no guarantee on the result. So another loop to detect whether the job completed successfully makes sense.

Check for completion

int count = 0;
while (count < 10)
{
    var newAutomationJob = automationAcc.GetAutomationJob($"Creation of {alias}");
    if (newAutomationJob.Value.Data.Status == AutomationJobStatus.Completed)
    {
        Console.WriteLine($"Job Ended {automationJob.Value.Id}");
        break;                       
    }
    if (newAutomationJob.Value.Data.Status == AutomationJobStatus.Failed || newAutomationJob.Value.Data.Status == AutomationJobStatus.Stopped)
    {
        Console.WriteLine($"Job Ended unsuccessfully {automationJob.Value.Id}");
        break;
    }
    count++;
    Thread.Sleep(30000);
}

The counter and sleep interval are example values here, but the pattern is clear: in a loop the job is checked for a completed or failed state, and the result is written out.

Security considerations

How does the whole operation work from a security perspective? The caller can be a user (interactively) or an app (unattended); DefaultAzureCredential in the code above works with both.

What is mainly necessary is a role assignment (or a subset of its permissions): Automation Job Operator for any kind of user or app registration trying to fulfill this kind of service operation.

Role assignments given to an app or a user to be able to start a runbook job

In case you want to continue inside the runbook job on behalf of the starting user, there is the problem that the caller is not identifiable inside the runbook (no token is available) unless the identity is explicitly transported through parameters.

When using the REST API you must understand that it cannot be called directly from a client context, because the Azure Resource Manager REST API does not support CORS. So coming from a client context such as SharePoint Framework (SPFx), for example, you first need to call a back-end process, and there you have the choice of using .NET or REST.

REST Client

Alternatively to the .NET SDK, the operations can also be performed with the REST API. First an HttpClient needs to be established with a bearer token based on a simple Entra ID app registration. For Azure there is no granular permission model in Entra ID (see the user_impersonation scope below); this is handled inside the resource model, so here only a simple permission is set and the app is later given role-based (RBAC) permissions, as seen under security considerations above.

Simple Entra ID app registration permission for Azure Resource Management
var tokenCredential = new DefaultAzureCredential(true);
var accessToken = tokenCredential.GetToken(new Azure.Core.TokenRequestContext(["https://management.azure.com/user_impersonation"])).Token;
HttpClient client = new HttpClient();
client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", accessToken);

Having a client and a token, the request can start directly, assuming a subscription, resource group and automation account name are given:

Create (REST)

string groupAlias = "TeamAlias";
JobStartRequest request = new JobStartRequest()
{
    properties = new JobProperties()
    {
        runbook = new RunbookProperties()
        {
            name = config["runbookName"]
        },
        parameters = new JobParameters()
        {
            groupAlias = groupAlias,
            displayName = "Team Name",
            teamDescription = "Team Description",
            teamOwner = config["teamOwner"]
        },
        runOn = ""
    }                
};
var jsonRequest = JsonSerializer.Serialize(request);
string jobStartUrl = $"https://management.azure.com/subscriptions/{config["subscriptionID"]}/resourceGroups/{config["resourceGroupID"]}/providers/Microsoft.Automation/automationAccounts/{config["automationAccount"]}/jobs/Creation{groupAlias}?api-version=2023-11-01";
var jobStartReq = new HttpRequestMessage(HttpMethod.Put, jobStartUrl);
jobStartReq.Content = new StringContent(jsonRequest, Encoding.UTF8);
jobStartReq.Content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
var jobStartResult = await client.SendAsync(jobStartReq);
jobStartResult.EnsureSuccessStatusCode();
var jobStartContent = await jobStartResult.Content.ReadAsStringAsync();
var createdAutomationJob = JsonSerializer.Deserialize<AutomationJob>(jobStartContent);
string jobName = createdAutomationJob.name;

Although the client above was easier to initialize, this looks like a bit more code. But take into account that with REST you are responsible for deserializing the results yourself. For the deserialization I used reduced classes based on the REST documentation.
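The reduced classes themselves are not shown above. A minimal sketch, covering only the properties used in the requests and responses here (an assumption based on the REST documentation, not the full contract), could look like this:

```csharp
// Reduced request models for starting a job via the Automation REST API.
// Lowercase property names match the JSON contract directly, so no naming
// policy has to be configured on System.Text.Json.
public class JobStartRequest
{
    public JobProperties properties { get; set; }
}

public class JobProperties
{
    public RunbookProperties runbook { get; set; }
    public JobParameters parameters { get; set; }
    public string runOn { get; set; }
}

public class RunbookProperties
{
    public string name { get; set; }
}

public class JobParameters
{
    public string groupAlias { get; set; }
    public string displayName { get; set; }
    public string teamDescription { get; set; }
    public string teamOwner { get; set; }
}

// Reduced response model: only the fields evaluated later are mapped.
public class AutomationJob
{
    public string id { get; set; }
    public string name { get; set; }
    public AutomationJobProperties properties { get; set; }
}

public class AutomationJobProperties
{
    public string jobId { get; set; }
    public string status { get; set; }
}
```

Any additional fields the service returns are simply ignored by System.Text.Json, which keeps these classes small.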

Check for completion (REST)

With REST, a check for completion of the job can of course be done as well. The pattern is the same as above; only the calls are slightly different.

string jobName = createdAutomationJob.name;
int count = 0;
while (count < 10)
{
    string jobCheckUrl = $"https://management.azure.com/subscriptions/{config["subscriptionID"]}/resourceGroups/{config["resourceGroupID"]}/providers/Microsoft.Automation/automationAccounts/{config["automationAccount"]}/jobs/{jobName}?api-version=2023-11-01";
    var checkReq = new HttpRequestMessage(HttpMethod.Get, jobCheckUrl);
    var jobCheckResult = await client.SendAsync(checkReq);
    jobCheckResult.EnsureSuccessStatusCode();
    var checkContent = await jobCheckResult.Content.ReadAsStringAsync();
    var newAutomationJob = JsonSerializer.Deserialize<AutomationJob>(checkContent);
    if (newAutomationJob.properties.status == "Completed")
    {
        Console.WriteLine($"Job Ended {newAutomationJob.properties.jobId}");
        break;
    }
    if (newAutomationJob.properties.status == "Failed" ||
      newAutomationJob.properties.status == "Stopped")
    {
        Console.WriteLine($"Job Ended unsuccessfully {newAutomationJob.properties.jobId}");
        break;
    }
    count++;
    Thread.Sleep(30000);
}
Console.ReadLine();

Using the job name from the original start, the job is requested again. The job object is then checked for its status, and if it's not yet done either way, the loop continues.

As usual, there is also a full sample code repository on GitHub illustrating the shown approaches with the .NET SDK as well as the REST API.

Markus is a SharePoint architect and technical consultant with focus on latest technology stack in Microsoft 365 Development. He loves SharePoint Framework but also has a passion for Microsoft Graph and Teams Development.
He works for Avanade as an expert for Microsoft 365 Dev and is based in Munich.
In 2021 he received his first Microsoft MVP award in M365 Development for his continuous community contributions.
Although partially inspired by his daily work, opinions are always personal.
New granular permission model in SharePoint

Recently Microsoft unveiled the new granular permission model in SharePoint, targeting Microsoft Graph access to resources such as lists, libraries, folders and items.

This post shows how to quickly set it up and walks through how it works.

Setup

For the setup, two application registrations are needed first. One is the 'administrative' one that will later assign permissions. The second one is used to access the resources once permissions are granted.

Assuming you are familiar with setting up application registrations in general, only the API permissions will be considered here:

  1. On application registration 1, delegated Sites.FullControl.All or Sites.Selected + Owner is needed
  2. On application registration 2, start with Lists.SelectedOperations.Selected
    This will handle access to the whole list
  3. As an application registration permission only works in combination with the resource it is applied to, the next permission, ListItems.SelectedOperations.Selected, can already be applied, too

Two things to note here: First, everything that follows will be in delegated mode. Second, as known from my previous resource-specific consent (RSC) usages, no access is given so far. That will be granted now, in the following two steps, for lists and list items.

The new granular *.Selected permissions

Demo List permissions

Assuming the right *.Selected permissions are given to an application registration, it simply has to be assigned to the permissions endpoint of the given resource, here a list:

POST https://graph.microsoft.com/beta/sites/bca9b232-752e-4710-ba18-533e63a00d25,cfb643a4-8f68-4b8e-925e-c8f2c3544d4d/lists/48f446fb-0d60-48a7-b9d2-f4ddeafa588d/permissions/

The request body simply looks like this:

{
    "roles": [
        "write"
    ],
    "grantedTo": {
        "application": {
            "id": "16b9ff16-2dfd-4953-81e7-c5e5a63376e2"
        }
    }
}

After the permission is granted, a test shows that the list and also all its items can be accessed:

GET https://graph.microsoft.com/beta/sites/bca9b232-752e-4710-ba18-533e63a00d25,cfb643a4-8f68-4b8e-925e-c8f2c3544d4d/lists/48f446fb-0d60-48a7-b9d2-f4ddeafa588d/
GET https://graph.microsoft.com/beta/sites/bca9b232-752e-4710-ba18-533e63a00d25,cfb643a4-8f68-4b8e-925e-c8f2c3544d4d/lists/48f446fb-0d60-48a7-b9d2-f4ddeafa588d/items/

For the negative test, and to move to the next section, the granted permission needs to be revoked. This is done with a simple DELETE request on the permission ID, which looks like this:

DELETE https://graph.microsoft.com/beta/sites/bca9b232-752e-4710-ba18-533e63a00d25,cfb643a4-8f68-4b8e-925e-c8f2c3544d4d/lists/48f446fb-0d60-48a7-b9d2-f4ddeafa588d/permissions/aTowaS50fG1zLnNwLmV4dHw2ZjA5MGM0YS1lNjg3LTRhYjktOWJjZi1hZTQwYTk0OWEwMzlANWJmM2VlNWItOTVkZC00ZDJjLTkxYWMtMWVmOWE2MDY2ODA1

After the permission is revoked, any access to the list or its items is answered with a 404.
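In case the permission ID is not at hand, it can be looked up first: a GET on the same permissions endpoint should return all granted permissions on the list, including their IDs (this is the same beta endpoint as the POST above; the exact response shape is best checked against the Graph documentation):

```http
GET https://graph.microsoft.com/beta/sites/bca9b232-752e-4710-ba18-533e63a00d25,cfb643a4-8f68-4b8e-925e-c8f2c3544d4d/lists/48f446fb-0d60-48a7-b9d2-f4ddeafa588d/permissions/
```

Each entry in the returned value array carries the id that the DELETE request expects.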

Demo List Item permissions

The next thing to do is to grant permissions on specific list items, for example those with IDs 1 and 3:

POST https://graph.microsoft.com/beta/sites/bca9b232-752e-4710-ba18-533e63a00d25,cfb643a4-8f68-4b8e-925e-c8f2c3544d4d/lists/48f446fb-0d60-48a7-b9d2-f4ddeafa588d/items/1/permissions
POST https://graph.microsoft.com/beta/sites/bca9b232-752e-4710-ba18-533e63a00d25,cfb643a4-8f68-4b8e-925e-c8f2c3544d4d/lists/48f446fb-0d60-48a7-b9d2-f4ddeafa588d/items/3/permissions

While the endpoint is naturally slightly longer than above, the request body stays exactly the same.

After the permissions are granted, a test shows that only the granted/selected items of the list can be accessed:

GET https://graph.microsoft.com/beta/sites/bca9b232-752e-4710-ba18-533e63a00d25,cfb643a4-8f68-4b8e-925e-c8f2c3544d4d/lists/48f446fb-0d60-48a7-b9d2-f4ddeafa588d/items/

Several things to add after this walk-through. As seen in the endpoint URLs, this technique is still in beta. It also only works via the Graph API (not SP REST or CSOM, for example), of course. Furthermore, there is a third permission not demonstrated here: Files.SelectedOperations.Selected. While the shown ListItems.SelectedOperations.Selected operates on sole list items but also on their corresponding documents if applicable, Files.SelectedOperations.Selected only works on documents. Finally, as is well known, folders can be treated as list items, too.

This was the theoretical walk-through of the new granular permission model in SharePoint. For your own reference, I created a small GitHub repository where you can test your own scenario based on .NET or PowerShell.

Calling Microsoft Graph in SPFx the secure way

As mentioned in many, many posts, Microsoft Graph, but also other 3rd-party APIs, become more and more essential to call from SharePoint Framework. In this post I want to highlight the potentially most secure way to do this, and after a step-by-step description of how to set it up client- and server-side I will also argue why.

As most of the parts, and especially the magic, happen server-side, the post will start with that part. The consuming client part comes at the end of this post.

Setup Consuming App registration

A basic app registration needs to be created and the typical values like clientId, tenantId and clientSecret are stored. Needless to say, in production scenarios, or even in shared developer scenarios, they should not be stored in the application settings as done in this simple sample.

To access Microsoft Graph from inside the Azure Function, two more essential things are needed: granted delegated Graph permissions, and a valid "Application ID URI" used as the audience to be verified.

At first, authentication needs to be done. Therefore a bearer token should be present in the HTTP request's header, and this token must be validated. How to do that is nearly taken from this Microsoft Graph sample.

The token itself is generated client-side from SPFx (see below).

Basically it just needs to be a bearer token, but in the sample it is validated against a number of attributes such as the client ID, the tenant's signing keys, a valid issuer and a valid audience. A check against extended token attributes such as Entra ID group membership might also make sense.

public async Task<string> ValidateAuthorizationHeaderAsync(Microsoft.Azure.Functions.Worker.Http.HttpRequestData request)
{
  if (request.Headers.TryGetValues("authorization", out IEnumerable<string>? authValues))
  {
    var authHeader = AuthenticationHeaderValue.Parse(authValues.ToArray().First());
    if (authHeader != null && string.Compare(authHeader.Scheme, "bearer", true, CultureInfo.InvariantCulture) == 0 && !string.IsNullOrEmpty(authHeader.Parameter))
    {
      var validationParameters = await GetTokenValidationParametersAsync();
      ...
private async Task<TokenValidationParameters?> GetTokenValidationParametersAsync()
    {
      if (_validationParameters == null)
      {
        var tenantId = _config["tenantId"];
        var clientId = _config["clientId"];
        var domain = _config["domain"];
        var configManager = new ConfigurationManager<OpenIdConnectConfiguration>($"https://login.microsoftonline.com/{tenantId}/.well-known/openid-configuration", new OpenIdConnectConfigurationRetriever());
        var config = await configManager.GetConfigurationAsync();
        _validationParameters = new TokenValidationParameters
        {
          IssuerSigningKeys = config.SigningKeys,
          ValidateAudience = true,
          ValidAudience = $"api://{domain}/{clientId}",
          ValidateIssuer = true,
          ValidIssuer = config.Issuer,
          ValidateLifetime = true
        };
      }
      return _validationParameters;
    }

The next step is the authentication against Microsoft Graph. Theoretically this can be done in two ways.

We need some values such as a clientID, a clientSecret and a tenantID. Those should be kept in secure storage: at least the secrets should be stored in an Azure Key Vault or similar, while other IDs could be stored in an Azure App Configuration service.

On-behalf-of flow

The on-behalf-of flow takes the bearer token from the last step and creates a new access token, based on a client ID and secret, to access Microsoft Graph on behalf of the current user.

public GraphServiceClient? GetUserGraphClient(string userAssertion)
{
  var tenantId = _config["tenantId"];
  var clientId = _config["clientId"];
  var clientSecret = _config["clientSecret"];
  var scopes = new[] { "https://graph.microsoft.com/.default" };
  var onBehalfOfCredential = new OnBehalfOfCredential(tenantId, clientId, clientSecret, userAssertion);
  return new GraphServiceClient(onBehalfOfCredential, scopes);
}

The bearer token from the last step is here also called the userAssertion. It is the user representation, as opposed to the app and tenant representation.

Client-credentials-flow

This is the elevated app-permissions variant. Theoretically there's no need to authenticate at the Function level first, of course. But as this is the riskier variant, extra attention should be paid to it.

Nevertheless, as this blog post is called "the secure way", it will concentrate on the user scope, which should always be preferred.

Graph Operation

A simple user operation will overwrite the description of a given site. At first the right GraphServiceClient needs to be established (see above). The rest is pretty straightforward.

public async Task<bool> UpdateSiteDescreption(string userAssertion, string siteUrl, string newSiteDescreption)
{
  _appGraphClient = GetUserGraphClient(userAssertion);
  Uri uri = new Uri(siteUrl);
  string domain = uri.Host;
  var path = uri.LocalPath;
  var site = await _appGraphClient.Sites[$"{domain}:{path}"].GetAsync();
  var newSite = new Site
  {
    Description = newSiteDescreption
  };
  try
  {
    await _appGraphClient.Sites[site.Id].PatchAsync(newSite);
  }
  catch (Microsoft.Graph.Models.ODataErrors.ODataError ex)
  {
    _logger.LogError(ex.Message);
    return false;
  }
  return true;
}

First the given site is requested. Then a new Site object is created with the fields to be updated, and finally it is patched against the given site.

Although this seems a less sensitive operation, it's a good example for "security", as even for such a harmless operation the Sites.FullControl.All permission is required. And it's much better to limit it to one app/function instead of granting it to all (including future) SPFx web parts.

User permissions vs delegated Sites.Selected scope

In my last post, I explained the dedicated Sites.Selected scope. But the simpler way might be to use user permissions combined with site access. So what's the difference here?

It’s that simple: One additional step!

In the simpler scenario, the Function can do what both the user and the app are allowed to do. With the delegated scope, the user can do what he is allowed to do, combined with the permissions explicitly given to the app on any dedicated site.

Azure Function considerations

Set authentication

Authentication for Azure Functions should be set up on the resource itself, as shown in the following picture:

Authentication for Azure Function, choose provider

As shown we can use the already established application registration for this.

CORS

To support CORS (cross-origin resource sharing), two levels need to be considered: first the local debug scenario, and second the official client-server connection. Locally this needs to be configured in the app settings, while for the client-server connection it is configured in the resource settings.

{
  "Values": {
     ...
  },
  "Host": {
    "CORS": "*"
  }
}

This is how it looks locally, while on the Azure resource it looks like this:

CORS setting in the Azure resource

Consuming web part

The consuming web part is a simple one, providing a given URL and the new description. It transports the values server-side, authenticating with user_impersonation only, together with the AadHttpClient; inside the Azure Function it's decided who is able to do what (and more). This is what I call "the secure way".

So only a quick look at the web part. Of course there is a need for a UI providing inputs for the URL and the new description. Additionally there must be a button to execute the function, and that's it. Behind the scenes the service call looks like this.

The web part to update the site description

The most interesting part takes place after the button is pressed, because the authentication against the backend Azure Function needs to happen and the content needs to be transported.

export default class FunctionService {
  private aadHttpClientFactory: AadHttpClientFactory;
  private client: AadHttpClient;
  public static readonly serviceKey: ServiceKey<FunctionService> =
    ServiceKey.create<FunctionService>('react-site-secure-function-call-smpl', FunctionService);
  constructor(serviceScope: ServiceScope) {  
    serviceScope.whenFinished(async () => {
      this.aadHttpClientFactory = serviceScope.consume(AadHttpClientFactory.serviceKey);      
    });
  }
  public async setNewSiteDescreption(siteUrl: string, siteDescreption: string): Promise<any[]> {
    this.client = await this.aadHttpClientFactory.getClient('api://xxx.azurewebsites.net/0a8dfbc9-0423-495b-a1e6-1055f0ca69c2');
    const requestUrl = `http://localhost:7241/api/SiteFunction?URL=${siteUrl}&Descreption=${siteDescreption}`;
    return this.client
      .get(requestUrl, AadHttpClient.configurations.v1)   
      .then((response: HttpClientResponse) => {
        return response.json();
      });
  }
}

Based on the aadHttpClientFactory and the ServiceKey pattern, in the update method a client is established using the audience (see above), and finally a GET request (in this case against a localhost debug endpoint) is executed.

Summary

So what are the big advantages of this approach? The first thing, of course, is that this way does not allow any uncontrolled actions from any client/web part towards the server side. No Microsoft Graph access is directly given to a public enterprise application; only access to an Azure Function is provided. And this Azure Function (including access to it!) is controlled by the developer. An exclusive application registration for the Azure Function controls and restricts this independently from other solutions.

Diagram: Usage of MsGraphClient vs AadHttpClient

For further reference on the whole solution, please refer to the corresponding GitHub repository.

Using SharePoint Framework (SPFx) to assign delegated scope permissions to a site

A fellow MVP colleague recently published a blog post explaining the new resource-specific consent with delegated scope. One benefit here is that no tenant-admin-like application permission (app scope Sites.FullControl.All) is needed to apply resource-specific permissions; only site collection administrator permissions are needed anymore. Martin describes how to do that with script code. But what about enterprise scenarios where no scripting is available, not even for administrators? I have seen this a lot in the past.

This post describes a code-based solution using SharePoint Framework (SPFx) to do the same thing. It also describes the pros and cons of each approach.

Configure Application

The application registration should be configured in the web part properties. As there might be many application registrations, it might make sense to filter them by prefix, which is done in the sample with "dlg".

Web part configuration incl App registration and use admin mode

Select site or use current one

Another option the web part can be configured with: either use the current site, or, in admin mode, search for another site using the Graph search API endpoint.

If the web part is configured in admin mode, an additional search field is shown where the user can enter text and search for matching sites. From the result the user can pick one, which is then used as the actual site to deal with.

Detect Site Collection Administrator access

Select a site from a search result to apply permissions to

As said above, to apply permissions the acting user needs site collection administrator access. To evaluate this, the hidden User Information List can be used: inside it, a field IsSiteAdmin exists.

public async isSiteAdmin(userEMail: string, currentSiteId: string): Promise<boolean> {
    this.client = await this.msGraphClientFactory.getClient('3');
    const response = await this.client
            .api(`sites/${currentSiteId}/lists/User Information List/items`)
            .version('v1.0')
            .header('Prefer','HonorNonIndexedQueriesWarningMayFailRandomly')
            .expand('fields($select=EMail,IsSiteAdmin)')
            .filter(`fields/EMail eq '${userEMail}'`)
            .get();
    return response.value[0].fields.IsSiteAdmin;
 }

Every time the site (that is, its ID) changes, this evaluation method needs to be called again. The result is also reflected in the "Apply Permissions" button, which will only be enabled when the result is true (and when an application registration is picked in the web part properties, by the way).

The function to detect the site admin capability is included in the GraphService and evaluates the site's hidden "User Information List". Note the custom fields, which need to be expanded. A filter can normally only be set on an indexed column, which is not given (nor recommended) for a system list like the "User Information List"; therefore the header Prefer: HonorNonIndexedQueriesWarningMayFailRandomly is included.

Apply permissions

Apply permissions can be called once a site is selected (or "use current one" is set) and the user has permission to do so. The function to be called is inside the GraphService.

public async grantPermissions(role: string, appId: string, displayName: string, siteId: string): Promise<any[]> {
    this.client = await this.msGraphClientFactory.getClient('3');
    const requestBody = {
      roles: [
        role
      ],
      grantedToIdentities: [
        {
          application: {
            id: appId,
            displayName: displayName
          }
        }
      ]
    };
    const response = await this.client
            .api(`/sites/${siteId}/permissions`)
            .version('v1.0')    
            .post(requestBody);
    return response;
}

The function is pretty straightforward. From the arguments it constructs a body, and that body is posted to the site's permissions endpoint.

“Consume” the access

As Martin already mentioned in his blog, there are not many simple script samples showing the access functionality. This is because only Microsoft Graph access is enabled; CSOM, for instance, does not work. A simple sample can be seen in Martin's blog, and another one, based on an Azure Function accessing a site on behalf of Microsoft Graph, will soon be added to this repository.

Pros & Cons

While this web part seems to be a comfortable thing, this simplified version has a major disadvantage: the request for Sites.FullControl.All. This relatively highly privileged permission is then valid for all web parts in the same tenant using MSGraphClient. I clearly recommend treating this only as a sample, especially when using such a solution in enterprise scenarios. A better solution would be to use it in an Azure Function in the backend. For sure a bit more effort, but worth it.

So why do I come up with this half-valid sample, you may ask? Well, it's here to explain the most relevant things without making it too complicated. I will come up with another version combining an SPFx web part with an Azure Function soon. This might also include an Azure Function sample consuming the "access".

Stay tuned; meanwhile, for the whole code reference, here is my GitHub repository.

Meanwhile there is also a GitHub repository with an alternative using a backend Azure Function.

An action-based Teams Messaging Extension with Teams Toolkit for VSCode

In my last post, an action-based Teams messaging extension created with Teams Toolkit for Visual Studio, based on C# and Blazor, was described. This time its pendant, using TypeScript and created with Teams Toolkit for VSCode, shall be described.

Once more the challenge is to combine the backend Bot Framework capability with UI components responsible for the action-based task modules. Teams Toolkit for VSCode merely offers a "how to" for that instead of providing a setup to build it from scratch, but even that is quite error-prone, and a linked sample is the better source. Also a third component will enter the stage, but let's get to that a bit later and describe things step by step.

Content

Setup

Combining two capabilities of course requires a decision about which one to start with. As the messaging extension is the much bigger one, it's a good decision to start with that; the UI components can then be taken from another solution and put into it. Nevertheless, it turned out that both ways provided many challenges, whether starting with the tab or starting with the messaging extension and adding the other one on top.

What is not really mentioned in the how-tos but essential for the separate compilation of backend and frontend is the split of package.json and tsconfig.json from one source into two targets (/bot and /ui). One package.json is placed in the root, while each target (/bot and /ui) has another one combined with a tsconfig.json. (A third capability, the /tab/api backend, is not even mentioned.)

package.json
bot\package.json
bot\tsconfig.json
ui\package.json
ui\tsconfig.json
ui\api\…

Also totally neglected is the debug capability, especially when it comes to two backend components (bot and Azure Function; refer to later in this post or to the linked repository).

Initial Task Module

Based on the fetch=true setting in the Teams manifest, the bot framework is reached inside the method handleTeamsMessagingExtensionFetchTask directly on executing the messaging extension.

This method is simply responsible for returning a task module including a URL taken from the UI part of this solution.
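Reduced to its essence, the returned value can be sketched as follows. This is a hedged sketch using simplified stand-in types for botbuilder's MessagingExtensionActionResponse; the title, dimensions, and the /#/productSelection route are assumptions for illustration, not taken from the repository:

```typescript
// Simplified stand-ins for botbuilder's task module response types
interface TaskModuleInfo {
  title: string;
  url: string;
  width: string | number;
  height: string | number;
}

interface TaskModuleResponse {
  task: { type: "continue"; value: TaskModuleInfo };
}

// Builds the response that handleTeamsMessagingExtensionFetchTask could return,
// pointing the task module at the UI part of the solution
export function buildFetchTaskResponse(uiBaseUrl: string): TaskModuleResponse {
  return {
    task: {
      type: "continue",
      value: {
        title: "Select a product",
        url: `${uiBaseUrl}/#/productSelection`, // assumed route name
        width: "large",
        height: "large"
      }
    }
  };
}

console.log(buildFetchTaskResponse("https://contoso.example").task.value.url);
// → "https://contoso.example/#/productSelection"
```

In the real handler, uiBaseUrl would come from configuration (e.g. an environment variable holding the tab endpoint), so the same bot code works for local debug and the deployed solution.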

Add UI Capability

After the UI components are set up, the messaging extension needs to know where to load the initial one from (this will be done from the bot framework, see above). Therefore, a change in the compose extension needs to be made so it points to the right endpoint. And although in this sample only one task module is needed, a routing functionality is left in place so it can be reactivated at any point in time in case a second task module is needed, too.

But the UI itself is not very complicated. It consists of several Fluent UI 9 components (List (preview!), RadioGroup, Button) and establishes loading and filtering of product data.

Task Module to select a product

The magic happens once an item is invoked, either by double click or by selecting it and pushing the button. In that case dialog.url.submit is called, which transfers the selected product to the bot framework where it “arrives” in handleTeamsMessagingExtensionSubmitAction. But also note what happens in the list's change handler: if there is no real change, a double click is assumed and the same action as on a button click is executed.

const btnClicked = React.useCallback(() => {
  dialog.url.submit({ product: selectedProduct });
}, [selectedProduct]);

onSelectionChange={(_: any, data: any) => {
  setSelectedItems(data.selectedItems);
  if (data.selectedItems[0] === selectedItems[0]) {
    // "Double click!": selection did not change, so execute the button click
    btnClicked();
  }
}}
Order Card result with weekday order option

Retrieve Data

Already for the UI selection form, products need to be retrieved from the backend database. This is done by a dedicated client.

import { AzureNamedKeyCredential, TableClient, odata } from "@azure/data-tables";

const getTableClient = (): TableClient => {
  const accountName: string = process.env.AZURE_TABLE_ACCOUNTNAME!;
  const storageAccountKey: string = process.env.AZURE_TABLE_KEY!;
  const storageUrl = `https://${accountName}.table.core.windows.net/`;
  return new TableClient(storageUrl, "Products2", new AzureNamedKeyCredential(accountName, storageAccountKey));
};

export async function getOrders(category: string) {
  const tableClient = getTableClient();
  const products: IProduct[] = [];
  let productEntities;
  if (category === '' || category === 'All') {
    productEntities = tableClient.listEntities<IProduct>();
  }
  else {
    productEntities = tableClient.listEntities<IProduct>({
      queryOptions: { filter: odata`Category eq ${category}` }
    });
  }
  for await (const p of productEntities) {
    products.push({
      Id: p.partitionKey,
      Name: p.rowKey,
      Orders: p.Orders as number,
      Category: p.Category as string
    });
  }
  return products;
}

But where to place this?

Bot

API Backend

Tab Frontend

In fact, now, there are three capabilities, which also offer three different entry points, especially when called from local debug.

The fact that a basic Teams tab application already holds the capability of a backend API has been totally ignored so far. As it's already placed inside the tab capability, it is better to leave it there than let it collide with the bot. (Especially in the local debug environment, mixing the bot tunnel and localhost URLs causes struggles, so I decided to leave duplicate code here.) That means when copying \src from a fresh tab solution, the \api folder shall be included and a duplicate \azService is established.

For local debug or running in Azure, it is necessary to put the following variables, pointing to the table, inside the configuration (teamsapp.local.yml for local configuration):

AZURE_TABLE_ACCOUNTNAME + AZURE_TABLE_KEY

On the other hand, it later becomes clear that the update process clearly belongs to the bot, which will produce some duplicate code. AZURE_TABLE_ACCOUNTNAME + AZURE_TABLE_KEY need to be placed there as well.

Update Data

Similar to retrieving the data, a single product can be updated by ordering. The only difference is that the update is initiated from the bot framework (retrieving actions from adaptive cards), which makes the injection slightly different, as mentioned above. The rest stays the same: a service is taken, which first establishes a client and then executes the needed function.

export async function updateOrders(data: Record<string, unknown>) {
  const tableClient = getTableClient();
  const prodId = String(data.Id ?? "");
  const prodName = String(data.Name ?? "");
  const prodOrders = Number(data.Orders ?? 0);
  // Sum up the existing orders and the five weekday order values from the card
  const newProductOrders = prodOrders +
                           Number(data.orderId ?? 0) +
                           Number(data.orderId2 ?? 0) +
                           Number(data.orderId3 ?? 0) +
                           Number(data.orderId4 ?? 0) +
                           Number(data.orderId5 ?? 0);
  const tableEntity =
  {
    partitionKey: prodId,
    rowKey: prodName,
    Orders: newProductOrders
  };
  await tableClient.upsertEntity(tableEntity);
  const returnProduct: IProduct = { Id: prodId, Name: prodName, Orders: newProductOrders, Category: '' };
  return returnProduct;
}

What might be confusing here is the more type-safe Record<string, unknown> construction. This can be handled quickly by converting the values to primitive string and number types first.

The rest is no rocket science anymore. After summing up the order values, one tableEntity is created with the new order value, the Id as partitionKey, and the Name as rowKey for identification. The tableEntity gets updated, and the product is returned to the bot framework, where it finally is rendered as a display adaptive card.
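As a hedged illustration of that last step, the following sketch wraps the updated product into an adaptive card attachment such as the bot framework could return from handleTeamsMessagingExtensionSubmitAction. The card layout and the simplified IProduct shape are assumptions for illustration, not the repository's exact card:

```typescript
// Simplified product shape as used throughout this post
interface IProduct {
  Id: string;
  Name: string;
  Orders: number;
  Category: string;
}

// Builds an adaptive card attachment displaying the updated product;
// the concrete card body is an assumed, minimal layout
export function buildResultCard(product: IProduct) {
  return {
    contentType: "application/vnd.microsoft.card.adaptive",
    content: {
      type: "AdaptiveCard",
      version: "1.4",
      body: [
        { type: "TextBlock", text: product.Name, weight: "Bolder" },
        { type: "TextBlock", text: `Total orders: ${product.Orders}` }
      ]
    }
  };
}

const card = buildResultCard({ Id: "1", Name: "Apple", Orders: 42, Category: "" });
console.log(card.content.body[1].text); // → "Total orders: 42"
```

In the real bot, this attachment would be placed inside the composeExtension result of the submit-action response so Teams renders it in the compose box.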

Display Order Card result

This was my first attempt to build a solution with Teams Toolkit for Visual Studio Code. Although it is considered the more mature one (compared to Teams Toolkit for Visual Studio and C#) and my solution was already in place “on the other side”, it produced a lot of headaches for me. So it seems to be a long way to go before slightly more complex solutions consisting of several capabilities can easily be set up and developed. On the other hand, the capabilities taken on their own are pretty straightforward, so forgive me that I did not explain too much of the tab or the simple bot framework methods here.

But if in doubt about the described functionality, refer to my GitHub repository holding the whole solution's code.
