Key terms for systems performance

The following are key terms for systems performance.

  1. IOPS: Input/output operations per second, a measure of the rate of data transfer operations.
  2. Throughput: The rate of work performed.
  3. Response time: The time required for an operation to complete, including any time the request spends waiting for resources.
  4. Latency: The time a request spends waiting to be serviced. The term is sometimes used as a synonym for response time.
  5. Utilisation: A measure of how busy a resource was servicing requests over a given interval.
  6. Saturation: The amount of queued work that a resource cannot yet service.
  7. Bottleneck: A resource that limits the overall performance of the system.
  8. Workload: The input to, or load placed on, the system.
  9. Cache: A component that buffers a limited amount of data and is usually faster than the underlying primary storage.
  10. Bandwidth: The maximum transfer rate of a channel.
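
Several of these definitions can be made concrete with a little arithmetic. The sketch below (all numbers are hypothetical) derives throughput, utilisation, and average response time from counters collected over a measurement interval:

```python
# Illustrative sketch: deriving performance metrics from interval counters.
# All values are hypothetical examples, not real measurements.

interval_s = 10.0        # measurement interval (seconds)
completed_ops = 5000     # operations completed during the interval
busy_time_s = 7.5        # time the resource spent servicing requests
total_wait_s = 20.0      # total time requests spent queued
total_service_s = 7.5    # total time requests spent being serviced

throughput = completed_ops / interval_s    # ops/second (IOPS if they are I/O ops)
utilisation = busy_time_s / interval_s     # fraction of the interval the resource was busy
avg_response_s = (total_wait_s + total_service_s) / completed_ops  # wait + service per op

print(throughput)      # 500.0
print(utilisation)     # 0.75
print(avg_response_s)  # 0.0055
```

Note how response time decomposes into waiting (latency) plus service, matching definitions 3 and 4 above.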

Get the latest file from directory with pattern

I have a task that generates HTML reports, for example:

Api-Test-Automation-2019-06-23-12-35-54-450-0.html
Api-Test-Automation-2019-06-23-12-38-44-701-0.html

I want to get the latest report and send it in the email as attachment.

This will actually attach all files:

$(Build.SourcesDirectory)\newman\htmlreport\*.html

But I want to attach only the most recently created file.

Solution:

So you have two HTML reports and you want to send only the latest one. You can achieve this with a PowerShell task that sets a variable to the path of the latest file (add the PowerShell task after the HTML generation step):

# Move to the report folder (the question's path uses \htmlreport)
cd $(Build.SourcesDirectory)\newman\htmlreport
# Pick the most recently created HTML report
$files = Get-ChildItem -Filter *.html
$latest = $files | Sort-Object CreationTime -Descending | Select-Object -First 1
$lastFile = $latest.FullName
# Expose the path to later tasks as $(latestHtml)
Write-Host "##vso[task.setvariable variable=latestHtml]$lastFile"

Now, in the send-email task, reference the variable $(latestHtml) as the attachment path.
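
If you want to verify the selection logic outside the pipeline, the same newest-file-by-creation-time idea can be sketched in Python (the directory path is whatever your report folder happens to be):

```python
# Sketch: pick the most recently created .html report in a folder.
from pathlib import Path

def latest_html(report_dir):
    """Return the Path of the newest .html file by creation time, or None."""
    files = list(Path(report_dir).glob("*.html"))
    if not files:
        return None
    # st_ctime is the creation time on Windows
    return max(files, key=lambda p: p.stat().st_ctime)
```

Note that st_ctime is the creation time on Windows but the inode-change time on Linux, where st_mtime (last write) is often the safer sort key.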

Does a restart of an Azure Container Instance update the image?

I’m running an Azure Container Instance of a rather large image (~13GB). When I create a new instance it takes around 20 minutes to pull the image from the Azure Registry. When I update the image and then restart the container it also says it’s pulling, but it only takes a few seconds. I tested it by changing the console output and it actually seems to update the image, but why is it taking so much less time?

Solution:

ACI creates containers without you having to manage the underlying infrastructure; under the hood, however, these containers still run on hosts. The first time you start your container, the underlying host is unlikely to have your image cached (unless you are very lucky), so it has to download the whole image, which takes a while when the image is large.

When you restart a running container, it will usually restart on the same host, which already has the old image cached. To update to the new image, the host only needs to download the layers that changed, which is quick.
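
The effect of layer caching can be pictured as set arithmetic over image layers; the layer names below are made up for illustration:

```python
# Sketch: which layers must be downloaded when a host already caches some.
# Layer names are made-up examples.
cached_on_host = {"base-os", "runtime", "app-v1"}
new_image = {"base-os", "runtime", "app-v2"}

# Only the layers not already cached need to be pulled.
to_download = new_image - cached_on_host
print(to_download)  # {'app-v2'}
```

Only the changed application layer is pulled on restart, which is why the second pull takes seconds rather than 20 minutes.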

Cortana is still calling a bot I've deleted

I was asked to make Cortana invoke a local EXE (WinForms).
So I created a bot using the SDK to execute the assembly, but for technical reasons this didn’t work.

I’ve since met the requirement by building a UWP app instead that calls the EXE, and everything works fine.

But now Cortana sometimes calls the bot and sometimes calls the UWP app.
Even after deleting the bot (two days ago) and terminating the subscription at the same time (for a different reason: cost), Cortana is still able to call it.

I’ve removed my account from Windows and Cortana, and yet it can still reach the bot, or at least it attempts to.

Is there any way to sever the connection between Cortana and the bot?
The bot is no longer available, so I can’t remove the Cortana channel from it.

Solution:

Was the bot developed as a third-party Cortana skill? If so, the way to disconnect Cortana from the skill is to remove the Cortana channel (there is a button on the channel registration page). If you simply delete the bot, there may be a delay before the channel is torn down. However, you say it still invokes your bot: did you delete all of the bot’s resources? When you published the bot, I gather you published it only to yourself; can anyone else see it? If you feel something got stuck, send the bot ID and invocation name to skillsup at microsoft dot com and ask them to manually remove the Cortana channel. This would also free up the invocation name if it is locked.

Can't Find Data Lake Store Gen2

I’m trying to locate Azure Data Lake Store Gen2 in the Azure portal and for some reason cannot find it.


I’ve been searching the docs and the portal and cannot seem to find it; has anyone else run into this problem? It has been in global GA since February, so I don’t think that’s the issue. I’ve reviewed the docs on how to create a Storage account: is that all that’s needed to create a Gen2 instance?

Solution:

ADLS Gen2 is a feature of Azure Storage. When you are creating a Storage account, go to the Advanced tab:

choose Advanced tab

Then enable Hierarchical namespace (this is what gives you ADLS Gen2):

ADLS hierarchical namespace

Kubernetes network: my frontend cannot reach the backend

I have the following docker-compose file that works fine:

version: '3'
services:
  myfrontend:
    image: myregistry.azurecr.io/im1:latest
    container_name: myfrontend
    ports:
      - 80:80
      - 443:443

  mybackend:
    image: myregistry.azurecr.io/im2:latest
    container_name: mybackend
    expose:
      - 8080

The backend exposes port 8080 only on the internal network; the frontend runs a modified nginx image with the following configuration (which works, since Docker resolves the container name to an IP):

server {
    listen 80 default_server;
    location / {
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;

        resolver 127.0.0.11 ipv6=off;

        set $springboot "http://mybackend:8080";
        proxy_pass $springboot;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

I migrated the above configuration to Kubernetes and I get a 502 Bad Gateway error from nginx, I think because it cannot resolve the backend address.

Here’s the Kubernetes config; can you take a look and tell me what I’m doing wrong? 😦

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mybackend
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: mybackend
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: mybackend
        image: myregistry.azurecr.io/sgr-mybackend:latest
        ports:
        - containerPort: 8080
          name: mybackend
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
---
apiVersion: v1
kind: Service
metadata:
  name: mybackend
spec:
  ports:
  - port: 8080
  selector:
    app: mybackend
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myfrontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myfrontend
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: myfrontend
        image: myregistry.azurecr.io/myfrontend:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: myfrontend
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: myfrontend

Solution:

You need to point the resolver at the cluster DNS service instead:

resolver kube-dns.kube-system.svc.cluster.local;

That is the kube-dns name/address in your cluster; nothing at 127.0.0.11 (Docker’s embedded DNS) exists in Kubernetes to resolve mybackend to its IP address. You may not need the resolver directive at all, though, because the container can resolve the backend Service name through the cluster DNS anyway; I’d probably drop that setting.

Generate RSA Key in Azure KeyVault using ARM Template

I want to create an RSA key in Azure Key Vault using an ARM template.

All I found is a REST API that can do it (https://docs.microsoft.com/en-us/rest/api/keyvault/createkey/createkey). Any idea whether this is achievable through an ARM template?

Solution:

No, unfortunately it is not. The only Key Vault “things” exposed to ARM are the vault resources themselves, secrets, and accessPolicies:

https://docs.microsoft.com/en-us/azure/templates/microsoft.keyvault/allversions

Setting "Allow all pipelines" when creating a service endpoint through DevOps API Create Endpoint

I am attempting to create a service endpoint through the Azure DevOps REST API but cannot set the “Allow all pipelines to use this service connection” option. I cannot find documentation on the JSON structure to accomplish this.

https://docs.microsoft.com/en-us/rest/api/azure/devops/serviceendpoint/endpoints/create?view=azure-devops-rest-5.0#endpointauthorization

Current snippet for creating the connection:


$baseUri = "https://dev.azure.com/org/proj/";
$createEndpointUri = "$($baseUri)_apis/serviceendpoint/endpoints?api-version=5.0-preview.2";


$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("token:{0}" -f $devOpsPAT)))
$DevOpsHeaders = @{Authorization = ("Basic {0}" -f $base64AuthInfo)};

$AzureSubscriptionData = New-Object PSObject -Property @{            
                            authorizationType = "AzureSubscription"
                            azureSubscriptionId = $SubscriptionId
                            azureSubscriptionName = $subscriptionName
                            clusterId = $clusterId
                            };
$Authorization = New-Object PSObject -Property @{
                            parameters = New-Object PSObject -Property @{            
                                azureEnvironment = "AzureCloud"
                                azureTenantId = "$tenantID"
                                };
                            scheme = "Kubernetes"
                            };

$ServiceEndpointBody = New-Object PSObject -Property @{            
                            authorization =$Authorization
                            data = $AzureSubscriptionData
                            name = $serviceConnectionName
                            type = "kubernetes"
                            url = $k8sUrl
                            isReady = "true"
                            };

$jsonbody = $ServiceEndpointBody | ConvertTo-Json -Depth 100


Invoke-RestMethod -UseBasicParsing -Uri $createEndpointUri -Method Post -ContentType "application/json" -Headers $DevOpsHeaders -Body $jsonbody;

Solution:

You can usually figure this stuff out by doing the operation in the Azure DevOps UI and inspecting the HTTP requests it makes using (for example) Chrome debugging tools.

In this case, I think you first need to create the service connection and then make a PATCH request to the pipelinePermissions endpoint, setting the allPipelines.authorized flag to true.

URI

PATCH https://dev.azure.com/{organisation}/{project}/_apis/pipelines/pipelinePermissions/endpoint/{endpointId}?api-version=5.1-preview.1

Patch Request Body

{
    "allPipelines": {
        "authorized": true,
        "authorizedBy": null,
        "authorizedOn": null
    },
    "pipelines": null,
    "resource": {
        "id": "{endpointid}",
        "type": "endpoint"
    }
}

Powershell

Invoke-RestMethod -Method PATCH -Uri "{uriasabove}" -Headers $headers -Body "{patchbodyasabove}" -ContentType "application/json"
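
For completeness, building the same PATCH request can be sketched in Python; the helper below only constructs the URL and body (the organisation, project, and endpoint id arguments are placeholders), leaving the actual send to requests.patch or similar:

```python
import json

def build_patch(organisation, project, endpoint_id):
    """Build the pipelinePermissions PATCH URL and JSON body.

    All three arguments are placeholders for your own values.
    """
    url = (f"https://dev.azure.com/{organisation}/{project}/_apis/pipelines/"
           f"pipelinePermissions/endpoint/{endpoint_id}?api-version=5.1-preview.1")
    body = {
        "allPipelines": {"authorized": True, "authorizedBy": None, "authorizedOn": None},
        "pipelines": None,
        "resource": {"id": endpoint_id, "type": "endpoint"},
    }
    return url, json.dumps(body)
```

Send it with the same Basic auth header used for the creation call, e.g. requests.patch(url, data=body, headers=headers).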

Need help iterating through Python dict keys/values and INSERTing into SQL DB

I made a call to request data from Weight Gurus, which returns data as a Python dictionary with keys and values. I need to take the data retrieved from this call and INSERT the values from each entry as an individual row.

So far I have managed to get the data from Weight Gurus and also establish a connection to my DB within python, but no luck with iterating through the dict to INSERT each value pair into an individual row.


# Login and get the auth token
data = {"email": "", "password": ""}
login_response = requests.post("https://api.weightgurus.com/v3/account/login", data=data)
login_json = login_response.json()

# grab all your data
data_response = requests.get(
    "https://api.weightgurus.com/v3/operation/",
    headers={
        "Authorization": f'Bearer {login_json["accessToken"]}',
        "Accept": "application/json, text/plain, */*",
    },
)

scale_data_json = data_response.json()
for entry in scale_data_json["operations"]:
    print(entry)


import pyodbc    
server = ''
database = ''
username = ''
password = ''
driver='{ODBC Driver 13 for SQL Server}'

cnxn = pyodbc.connect('DRIVER='+driver+';SERVER='+server+';PORT=1433;DATABASE='+database+';UID='+username+';PWD='+ password)
cursor = cnxn.cursor()

The dictionary in question has nine keys. Each key is a column in my table, BodyComposition, and each set of key/value pairs should become an individual row. My table also has an auto-increment ID field for the primary key, if that makes a difference.

Solution:

Consider unpacking your collection of dictionaries into key/value tuples and then parameterizing the values tuple in the loop. Assuming the data structure below (a list of dictionaries):

scale_data_json["operations"] = [{'BMI': 0, 'BodyFat': 10, 
                                  'Entrytimestamp': '2018-01-21T19:37:47.821Z', 
                                  'MuscleMass': 50, 'OperationType': 'create',
                                  'ServerTimestamp':'2018-01-21T19:37:47.821Z', 
                                  'Source':'bluetooth scale', 
                                  'Water':37, 'Weight':21},
                                 {'BMI': 0, 'BodyFat': 10, 
                                  'Entrytimestamp': '2018-01-21T19:37:47.821Z', 
                                  'MuscleMass': 50, 'OperationType': 'create',
                                  'ServerTimestamp':'2018-01-21T19:37:47.821Z', 
                                  'Source':'bluetooth scale', 
                                  'Water':37, 'Weight':21},
                                ...]

Loop through each dictionary, unpack the values with zip and then bind them in cursor.execute:

# PREPARED STATEMENT
sql = """INSERT INTO BodyComposition (BMI, BodyFat, Entrytimestamp, 
                                      MuscleMass, OperationType, ServerTimestamp, 
                                      Source, Water, Weight) 
         VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?)
      """

# LOOP, UNPACK, BIND PARAMS
for entry in scale_data_json["operations"]:
    keys, values = zip(*entry.items())
    cursor.execute(sql, values)
    cnxn.commit()
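
One caveat with zip(*entry.items()): the bound values arrive in the dictionary's insertion order, which must happen to match the column order of the prepared statement. A defensive variant (using the same hypothetical column names as above) pulls the values by an explicit column list instead:

```python
# Defensive variant: bind values in an explicit column order rather than
# relying on dictionary insertion order. Column names as in the example above.
COLUMNS = ["BMI", "BodyFat", "Entrytimestamp", "MuscleMass", "OperationType",
           "ServerTimestamp", "Source", "Water", "Weight"]

def row_values(entry):
    """Return the entry's values ordered to match the INSERT column list."""
    return tuple(entry[col] for col in COLUMNS)
```

Then call cursor.execute(sql, row_values(entry)) inside the loop, or pass all rows at once with cursor.executemany and a single commit at the end.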

Why is Azure Active Directory used?

I am extremely new to Azure. I got an Azure account for free through the GitHub Student Developer Pack. I have gone through some tutorials on YouTube.

Most of them do not explain why Azure Active Directory is used. So I would like to know why is Azure Active Directory used?

Solution:


What is Azure Active Directory?

Azure Active Directory is an identity and access management system. It is used to grant your employees access to specific products and services in your network, for example Salesforce.com, Twitter, and so on. Azure AD has built-in support for many applications in its gallery, which can be added directly.


Why is it used?

It is used because it integrates easily with ADFS and Azure AD accounts, and it can also provide single sign-on (SSO) functionality.