<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Debug This | Automating DevOps</title>
    <link>https://debugthis.dev/</link>
    <description>Recent content on Debug This | Automating DevOps</description>
    <generator>Hugo -- gohugo.io</generator>
    <language>en-us</language>
    <lastBuildDate>Thu, 19 Oct 2017 15:26:15 +0000</lastBuildDate><atom:link href="https://debugthis.dev/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Spotify desktop notifications - Part 1</title>
      <link>https://debugthis.dev/rust/spotify-desktop-notification/spotify-notifications-dbus-pt1/</link>
      <pubDate>Mon, 05 Sep 2022 05:20:10 +0000</pubDate>
      
      <guid>https://debugthis.dev/rust/spotify-desktop-notification/spotify-notifications-dbus-pt1/</guid>
      <description>Some background
The Spotify desktop client on Linux, which I had installed a long time ago, used to emit a desktop notification whenever the next track played.
This notification would work for a few tracks and then stop displaying for some reason (the debug logs were not helpful in identifying the cause).
Another issue is that the desktop client is closed source, so there was no way to analyse any code.</description>
    </item>
    
    <item>
      <title>1 - Introduction</title>
      <link>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-the-hard-way/</link>
      <pubDate>Tue, 17 Aug 2021 15:26:15 +0000</pubDate>
      
      <guid>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-the-hard-way/</guid>
      <description>Overview
To better understand and expand my knowledge of Kubernetes, I decided to follow the excellent walkthrough put together by Kelsey Hightower over on GitHub.
This repository has over 28.3k stars as of writing, so it is a very popular guide to follow.
I had already learned how to use Kubernetes, but Kelsey&amp;rsquo;s guide demonstrated how to set up a production-like Kubernetes cluster.
Understanding how Kubernetes works by building a cluster gave me better insight into how each component works.</description>
    </item>
    
    <item>
      <title>Setting up Ansible AWX using a docker environment - Part 1 (the Ansible approach)</title>
      <link>https://debugthis.dev/awx/2019-10-02-running-ansible-awx-in-a-docker-environment/</link>
      <pubDate>Wed, 02 Oct 2019 12:50:05 +0000</pubDate>
      
      <guid>https://debugthis.dev/awx/2019-10-02-running-ansible-awx-in-a-docker-environment/</guid>
      <description>A guide on how to get Ansible AWX (the open-source version of Ansible Tower) running in an isolated Docker environment.
Pre-setup
System requirements
Supported Operating Systems:
Red Hat Enterprise Linux 6 64-bit
Red Hat Enterprise Linux 7 64-bit
CentOS 6 64-bit
CentOS 7 64-bit
Ubuntu 12.04 LTS 64-bit
Ubuntu 14.04 LTS 64-bit
Hardware requirements:
2 CPUs minimum for Tower installations
20 GB hard disk
4 GB RAM minimum for Tower installations
For Amazon EC2:</description>
    </item>
    
    <item>
      <title>D-Bus - Part 2</title>
      <link>https://debugthis.dev/rust/spotify-desktop-notification/spotify-notifications-dbus-pt2/</link>
      <pubDate>Mon, 05 Sep 2022 05:20:10 +0000</pubDate>
      
      <guid>https://debugthis.dev/rust/spotify-desktop-notification/spotify-notifications-dbus-pt2/</guid>
      <description>D-Bus
Before extending the code I wanted to understand what D-Bus was.
D-Bus is a message bus system, a simple way for applications to talk to one another.
D-Bus runs as two instances: session-specific and system-wide.
So we do have a message bus system which will allow this Rust program to consume information like:
Did the track change?
Was the media paused?
Did the media stop?</description>
    </item>
    
    <item>
      <title>2 - Client tools</title>
      <link>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-client-tools/</link>
      <pubDate>Tue, 17 Aug 2021 15:26:15 +0000</pubDate>
      
      <guid>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-client-tools/</guid>
      <description>Install cfssl, cfssl-json, kubectl
Bash script:
cfssl_version=&amp;#34;1.6.1&amp;#34;
cfssl_json_version=&amp;#34;1.6.1&amp;#34;
install_dest=&amp;#34;/usr/local/bin&amp;#34;
if [ -d &amp;#34;.tools&amp;#34; ]; then
  rm -rf &amp;#34;.tools&amp;#34;
fi
# make a temporary directory to hold binaries
mkdir .tools
echo &amp;#34;::Downloading cfssl &amp;amp; cfssl-json&amp;#34;
curl -L -s https://github.com/cloudflare/cfssl/releases/download/v${cfssl_version}/cfssl_${cfssl_version}_linux_amd64 &amp;gt; &amp;#34;.tools/cfssl&amp;#34; &amp;amp;&amp;amp; chmod +x &amp;#34;.tools/cfssl&amp;#34;
curl -L -s https://github.com/cloudflare/cfssl/releases/download/v${cfssl_json_version}/cfssljson_${cfssl_json_version}_linux_amd64 &amp;gt; &amp;#34;.tools/cfssl-json&amp;#34; &amp;amp;&amp;amp; chmod +x &amp;#34;.tools/cfssl-json&amp;#34;
if [ $? -ne 0 ]; then
  echo &amp;#34;Download failed.&amp;#34;
  exit 1
fi
echo &amp;#34;Done.&amp;#34;
# 2
echo &amp;#34;::Downloading kubectl&amp;#34;
curl -L -s &amp;#34;https://dl.</description>
    </item>
    
    <item>
      <title>Album Art - Part 3</title>
      <link>https://debugthis.dev/rust/spotify-desktop-notification/spotify-notifications-dbus-pt3/</link>
      <pubDate>Mon, 05 Sep 2022 05:20:10 +0000</pubDate>
      
      <guid>https://debugthis.dev/rust/spotify-desktop-notification/spotify-notifications-dbus-pt3/</guid>
      <description>Downloading images
The next challenge encountered was downloading the images used later in the notification for the Album Art.
Again, an existing Rust Crate allowed me to make HTTP GET requests and process the necessary response.
isahc is an HTTP client library which serves the purpose of making HTTP requests in Rust seamlessly.
Example.
First, create a new HTTP client instance.
Here you can customize a wide range of properties.</description>
    </item>
    
    <item>
      <title>3 - Network resources</title>
      <link>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-tf-network/</link>
      <pubDate>Tue, 17 Aug 2021 15:26:15 +0000</pubDate>
      
      <guid>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-tf-network/</guid>
      <description>Create VPC network
resource &amp;#34;google_compute_network&amp;#34; &amp;#34;vpc_network&amp;#34; {
  name                    = var.vpc_name
  auto_create_subnetworks = false
  description             = &amp;#34;k8s cluster terraform&amp;#34;
}
Create Private Subnet
resource &amp;#34;google_compute_subnetwork&amp;#34; &amp;#34;private_network_1&amp;#34; {
  name          = var.private_subnet_name
  ip_cidr_range = var.private_ip_cidr_range
  network       = google_compute_network.vpc_network.id
}
Create firewall to allow only internal TCP, UDP, ICMP traffic
resource &amp;#34;google_compute_firewall&amp;#34; &amp;#34;firewall_allow_internal&amp;#34; {
  name    = var.firewall_allow_internal_name
  network = google_compute_network.vpc_network.name
  allow {
    protocol = &amp;#34;tcp&amp;#34;
  }
  allow {
    protocol = &amp;#34;udp&amp;#34;
  }
  allow {
    protocol = &amp;#34;icmp&amp;#34;
  }
  source_ranges = var.</description>
    </item>
    
    <item>
      <title>SQLite Database</title>
      <link>https://debugthis.dev/rust/sqlite/</link>
      <pubDate>Wed, 07 Sep 2022 00:20:10 +0000</pubDate>
      
      <guid>https://debugthis.dev/rust/sqlite/</guid>
      <description>A few steps to show how to use Rust to open a SQLite database and run a query.
The examples which follow make use of a database called &amp;lsquo;shows&amp;rsquo;.
This database contains a table called &amp;lsquo;schedule&amp;rsquo; - a list of TV Shows made up of the following columns:
Column | Type
id | NUMERIC NOT NULL UNIQUE
name | TEXT NOT NULL
airing | INTEGER
PRIMARY KEY = &amp;ldquo;id&amp;rdquo;
SQLite Rust Crate
An existing Crate called sqlite was used for managing the database.</description>
    </item>
    
    <item>
      <title>Notification - Part 4</title>
      <link>https://debugthis.dev/rust/spotify-desktop-notification/spotify-notifications-dbus-pt4/</link>
      <pubDate>Mon, 05 Sep 2022 05:20:10 +0000</pubDate>
      
      <guid>https://debugthis.dev/rust/spotify-desktop-notification/spotify-notifications-dbus-pt4/</guid>
      <description>Notification
An existing Rust Crate called notify_rust helped with building and displaying a desktop notification.
In the example below, Track is a struct to store track information.
pub struct Track {
    pub artist: String,
    pub album: String,
    pub album_art: String,
    pub title: String,
}
A new notification is created and then later shown on screen.
Note the use of handle.update(), since you don&amp;rsquo;t want to display dozens of notification pop-ups on screen.</description>
    </item>
    
    <item>
      <title>4.1 - Compute instances</title>
      <link>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-tf-instances/</link>
      <pubDate>Tue, 17 Aug 2021 15:26:15 +0000</pubDate>
      
      <guid>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-tf-instances/</guid>
      <description>The number of instances is controlled via the vars:
# Number of Kubernetes Worker node instances to create
variable &amp;#34;nodes&amp;#34; {
  type = number
}
# Number of Kubernetes Controller instances to create
variable &amp;#34;controllers&amp;#34; {
  type = number
}
Create Kubernetes Controllers
# Creates the Kubernetes Controller and Worker nodes
# Controller instances
resource &amp;#34;google_compute_instance&amp;#34; &amp;#34;k8s_controller&amp;#34; {
  count        = var.controllers
  name         = &amp;#34;controller-${count.index + 1}&amp;#34;
  machine_type = var.gce_machine_type
  boot_disk {
    initialize_params {
      image = var.</description>
    </item>
    
    <item>
      <title>4.2 - Terraform outputs</title>
      <link>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-tf-output/</link>
      <pubDate>Tue, 17 Aug 2021 15:26:15 +0000</pubDate>
      
      <guid>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-tf-output/</guid>
      <description>output &amp;#34;controller-ext-ip&amp;#34; {
  description = &amp;#34;Output Controller: External IP Addresses&amp;#34;
  value       = google_compute_instance.k8s_controller[*].network_interface[0].access_config[0].nat_ip
}
output &amp;#34;controller-int-ip&amp;#34; {
  description = &amp;#34;Output Controller: Internal IP Addresses&amp;#34;
  value       = google_compute_instance.k8s_controller[*].network_interface[0].network_ip
}
output &amp;#34;worker-node-ext-ip&amp;#34; {
  description = &amp;#34;Output Worker Node: External IP Addresses&amp;#34;
  value       = google_compute_instance.k8s_node[*].network_interface[0].access_config[0].nat_ip
}
output &amp;#34;worker-node-int-ip&amp;#34; {
  description = &amp;#34;Output Worker Node: Internal IP Addresses&amp;#34;
  value       = google_compute_instance.k8s_node[*].network_interface[0].network_ip
}
output &amp;#34;kubernetes-external-ip&amp;#34; {
  description = &amp;#34;Output Kubernetes public IP-Address&amp;#34;
  value       = google_compute_address.</description>
    </item>
    
    <item>
      <title>KDE: Bing image as a wallpaper</title>
      <link>https://debugthis.dev/rust/wallpaper/</link>
      <pubDate>Thu, 15 Sep 2022 00:20:10 +0000</pubDate>
      
      <guid>https://debugthis.dev/rust/wallpaper/</guid>
      <description>A post describing how to update the desktop wallpaper in KDE using images from https://bing.com, written in Rust.
Bing image of the day
The image changes daily and can be reached via: https://www.bing.com/HPImageArchive.aspx?format=js&amp;amp;idx=0&amp;amp;n=1
Appending n=1 to the end of the URL restricts the response to only the latest image.
This returns a JSON response.
{
  &amp;#34;images&amp;#34;: [
    {
      &amp;#34;startdate&amp;#34;: &amp;#34;20220914&amp;#34;,
      &amp;#34;fullstartdate&amp;#34;: &amp;#34;202209142300&amp;#34;,
      &amp;#34;enddate&amp;#34;: &amp;#34;20220915&amp;#34;,
      &amp;#34;url&amp;#34;: &amp;#34;/th?id=OHR.PyreneesPark_EN-GB9616848199_1920x1080.jpg&amp;amp;rf=LaDigue_1920x1080.jpg&amp;amp;pid=hp&amp;#34;,
      &amp;#34;urlbase&amp;#34;: &amp;#34;/th?id=OHR.PyreneesPark_EN-GB9616848199&amp;#34;,
      &amp;#34;copyright&amp;#34;: &amp;#34;Roland&amp;#39;s Breach in the Pyrenees, France (© SPANI Arnaud/Alamy)&amp;#34;,
      &amp;#34;copyrightlink&amp;#34;: &amp;#34;https://www.</description>
    </item>
    
    <item>
      <title>4.3 - Terraform variables</title>
      <link>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-tf-vars/</link>
      <pubDate>Tue, 17 Aug 2021 15:26:15 +0000</pubDate>
      
      <guid>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-tf-vars/</guid>
      <description># Google project id
variable &amp;#34;gcp_project_id&amp;#34; {}
# Region
variable &amp;#34;gcp_region&amp;#34; {}
# Zone
variable &amp;#34;gcp_zone&amp;#34; {}
# VPC Name
variable &amp;#34;vpc_name&amp;#34; {}
# Name of private subnet
variable &amp;#34;private_subnet_name&amp;#34; {}
# Private IP CIDR range
variable &amp;#34;private_ip_cidr_range&amp;#34; {}
# Private IP CIDR range for the Kubernetes Pods
variable &amp;#34;private_ip_pod_cidr_range&amp;#34; {}
# Name of the firewall to allow internal traffic
variable &amp;#34;firewall_allow_internal_name&amp;#34; {}
# Allows range of internal IP Addresses e.</description>
    </item>
    
    <item>
      <title>5 - Certificate Authority</title>
      <link>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-certificate-auth/</link>
      <pubDate>Tue, 17 Aug 2021 15:26:15 +0000</pubDate>
      
      <guid>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-certificate-auth/</guid>
      <description>Generate SSL certificates via a Python script, which makes system calls to invoke the Cloudflare cfssl and cfssl-json command-line tools to provision a PKI infrastructure.
The certificates are provisioned for the following components:
etcd
kube-apiserver
kube-controller-manager
kube-scheduler
kubelet
The script can be found in the GitLab repository: 01-generate-certs.py</description>
    </item>
    
    <item>
      <title>6.1 - Configuration</title>
      <link>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-k8s-config/</link>
      <pubDate>Tue, 17 Aug 2021 15:26:15 +0000</pubDate>
      
      <guid>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-k8s-config/</guid>
      <description>A Python script generates kubeconfig configuration files for the following Kubernetes components:
Worker Nodes
Kube Proxy
Controller Manager
Kube Scheduler
Admin User
The script makes system calls to kubectl with appropriate flags to generate each configuration file.
The generated configuration files are then saved to a directory called k8s-conf.
The generated files are then transferred to the Compute instances using Ansible.
The script used to generate the kubeconfig files can be found in the GitLab repository: 02-generate-kubeconfig.</description>
    </item>
    
    <item>
      <title>6.2 - Transfer certificates</title>
      <link>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-k8s-config-ansible/</link>
      <pubDate>Tue, 17 Aug 2021 15:26:15 +0000</pubDate>
      
      <guid>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-k8s-config-ansible/</guid>
      <description>Ansible tasks to transfer TLS Certificates and Kubernetes configuration across to the Worker nodes and Controllers.
This script can be found in the GitLab repository: main.yml</description>
    </item>
    
    <item>
      <title>7 - Data Encryption</title>
      <link>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-k8s-data-enc/</link>
      <pubDate>Tue, 17 Aug 2021 15:26:15 +0000</pubDate>
      
      <guid>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-k8s-data-enc/</guid>
      <description>This Python script generates an encryption key and an encryption configuration file suitable for encrypting Kubernetes Secrets.
The generated files are then transferred to the Compute instances using Ansible.
#!/usr/bin/env python import random import string import base64 import os template_file = &amp;#34;../templates/encryption-config.yaml&amp;#34; conf = &amp;#34;../k8s-conf&amp;#34; outfile = conf + &amp;#34;/&amp;#34; + &amp;#34;encryption-config.yaml&amp;#34; print(&amp;#34;:: Generating Data Encryption Config and Key.&amp;#34;) # clean existing file if os.path.exists(outfile): os.remove(outfile) randstr = &amp;#34;&amp;#34;.join( random.SystemRandom().choice(string.ascii_letters + string.</description>
    </item>
    
    <item>
      <title>8 - Bootstrap etcd</title>
      <link>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-k8s-ansible-bootstrap-etcd/</link>
      <pubDate>Tue, 17 Aug 2021 15:26:15 +0000</pubDate>
      
      <guid>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-k8s-ansible-bootstrap-etcd/</guid>
      <description>A set of Ansible tasks required to bootstrap an etcd server, a vital component of the Kubernetes cluster.
Without an etcd server, Kubernetes would have no way to remember the state of the cluster (e.g. the number of worker nodes and their health).
The script can be found in the GitLab repository: main.yml</description>
    </item>
    
    <item>
      <title>9 - Bootstrap Controllers</title>
      <link>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-k8s-ansible-bootstrap-controllers/</link>
      <pubDate>Tue, 17 Aug 2021 15:26:15 +0000</pubDate>
      
      <guid>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-k8s-ansible-bootstrap-controllers/</guid>
      <description>A set of Ansible tasks required to bootstrap Controllers, a vital component of the Kubernetes cluster.
The following components will be installed on each node: Kubernetes API Server, Scheduler, and Controller Manager.
For high availability it is recommended to bootstrap a minimum of 3 Controllers.
As part of the Ansible tasks, Nginx is also installed to support health checks for an external load balancer.
The load balancer exposes the Kubernetes API server, Scheduler and Controller Manager to remote clients.</description>
    </item>
    
    <item>
      <title>10 - Bootstrap Worker Nodes</title>
      <link>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-k8s-ansible-bootstrap-workers/</link>
      <pubDate>Tue, 17 Aug 2021 15:26:15 +0000</pubDate>
      
      <guid>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-k8s-ansible-bootstrap-workers/</guid>
      <description>A set of Ansible tasks required to bootstrap Worker nodes.
Worker nodes are where Kubernetes pods are hosted.
The script can be found in the GitLab repository: main.yml</description>
    </item>
    
    <item>
      <title>11 - Pod network routing</title>
      <link>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-k8s-pod-network/</link>
      <pubDate>Tue, 17 Aug 2021 15:26:15 +0000</pubDate>
      
      <guid>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-k8s-pod-network/</guid>
      <description>Terraform was used to create a route for each Worker node so that the pods running on each node can communicate with each other.
A count variable iterates over the number of Worker nodes, creating one route per node.
# route packets
resource &amp;#34;google_compute_route&amp;#34; &amp;#34;route&amp;#34; {
  count       = var.nodes
  name        = &amp;#34;kubernetes-route-10-200-${count.index + 1}-0-24&amp;#34;
  network     = google_compute_network.vpc_network.name
  next_hop_ip = &amp;#34;10.240.0.2${count.index + 1}&amp;#34;
  dest_range  = &amp;#34;10.</description>
    </item>
    
    <item>
      <title>12 - Kubectl remote config</title>
      <link>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-k8s-kubectl-remote/</link>
      <pubDate>Tue, 17 Aug 2021 15:26:15 +0000</pubDate>
      
      <guid>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-k8s-kubectl-remote/</guid>
      <description>A simple bash script sufficed to configure kubectl to connect remotely to the Kubernetes cluster.
The script can be found in the GitLab repository: 05-kubectl-remote.sh
#!/usr/bin/env bash
set -euo pipefail
CONF=&amp;#34;$1&amp;#34;
echo &amp;#34;::Setting kubectl remote access.&amp;#34;
if [ -f &amp;#34;$CONF&amp;#34; ]; then
  KUBERNETES_PUBLIC_ADDRESS=$(cat $CONF | python -c &amp;#39;import json,sys;obj=json.load(sys.stdin);print(obj[0][&amp;#34;staticExternalIP&amp;#34;])&amp;#39;)
  KUBERNETES_CLUSTER_NAME=$(cat $CONF | python -c &amp;#39;import json,sys;obj=json.load(sys.stdin);print(obj[0][&amp;#34;clusterName&amp;#34;])&amp;#39;)
  KUBERNETES_CERTS=$(cat $CONF | python -c &amp;#39;import json,sys;obj=json.load(sys.stdin);print(obj[0][&amp;#34;certificatesPath&amp;#34;])&amp;#39;)
else
  echo &amp;#34;ERROR &amp;gt; Failed to locate Kubernetes cluster json file.</description>
    </item>
    
    <item>
      <title>13 - DNS</title>
      <link>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-k8s-dns-addon/</link>
      <pubDate>Tue, 17 Aug 2021 15:26:15 +0000</pubDate>
      
      <guid>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-k8s-dns-addon/</guid>
      <description>A simple bash script sufficed to configure the deployment of CoreDNS into the Kubernetes cluster.
The script can be found in the GitLab repository: 06-dns-addon.sh
NOTE: The following commands were taken directly from the repository Kubernetes The Hard Way.
#!/usr/bin/env bash
set -euo pipefail
# DNS Addon
# Deploying the DNS cluster addon
# DNS add-on which provides DNS-based service discovery, backed by CoreDNS, to applications running inside the Kubernetes cluster.</description>
    </item>
    
    <item>
      <title>14 - Smoketest</title>
      <link>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-k8s-smoketest/</link>
      <pubDate>Tue, 17 Aug 2021 15:26:15 +0000</pubDate>
      
      <guid>https://debugthis.dev/k8s/thehardway/2021-05-12-k8s-thw-gcp-k8s-smoketest/</guid>
      <description>Smoke testing the Kubernetes cluster.
NOTE: The following commands were taken directly from the repository Kubernetes The Hard Way.
Create a generic secret:
kubectl create secret generic kubernetes-the-hard-way \
  --from-literal=&amp;#34;mykey=mydata&amp;#34;
Print a hexdump of the kubernetes-the-hard-way secret stored in etcd: ssh onto controller-1, then run:
sudo ETCDCTL_API=3 etcdctl get \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/etcd/ca.pem \
  --cert=/etc/etcd/kubernetes.pem \
  --key=/etc/etcd/kubernetes-key.pem \
  /registry/secrets/default/kubernetes-the-hard-way | hexdump -C
Check deployments
Create a deployment for the nginx web server:
kubectl create deployment nginx --image=nginx
List the pod created by the nginx deployment:
kubectl get pods -l app=nginx
Port Forwarding
In this section you will verify the ability to access applications remotely using port forwarding.</description>
    </item>
    
    <item>
      <title>Sofirem</title>
      <link>https://debugthis.dev/gtk/software-app/</link>
      <pubDate>Tue, 04 Jul 2023 15:26:15 +0000</pubDate>
      
      <guid>https://debugthis.dev/gtk/software-app/</guid>
      <description>Some brief history
I have been contributing my time to an Open Source project called ArcoLinux.
ArcoLinux provides a customized Linux distribution built on top of Arch Linux.
You can find more information over on the Arch Linux Wiki.
The journey started by answering technical support questions over on the Forum. These issues ranged from users experiencing unique hardware-related problems to software problems. Not all were specific to Arch Linux, but I was happy to use my skills to help out.</description>
    </item>
    
    <item>
      <title>Minikube Insecure Docker Registry</title>
      <link>https://debugthis.dev/k8s/2021-11-11-k8s-insecure-registry/</link>
      <pubDate>Thu, 11 Nov 2021 10:53:11 +0000</pubDate>
      
      <guid>https://debugthis.dev/k8s/2021-11-11-k8s-insecure-registry/</guid>
      <description>Setting up a private registry inside a minikube environment
Create deployment
kubectl create deployment registry --image=registry
Expose deployment
kubectl expose deploy/registry --port=5000 --type=NodePort
Capture the NodePort plus the minikube IP
vi ~/.minikube/machines/minikube/config.json
Add the line with your minikube IP-Address along with NodePort
&amp;#34;InsecureRegistry&amp;#34;: [
  &amp;#34;10.96.0.0/12&amp;#34;,
  &amp;#34;10.0.0.0/24&amp;#34;,
  &amp;#34;192.168.49.2:32671&amp;#34;
Stop minikube &amp;amp; docker
stop minikube
stop docker
Edit /lib/systemd/system/docker.service
Add the insecure-registry line
vi /lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --insecure-registry 192.168.49.2:32671
Start docker &amp;amp; minikube
start docker
start minikube</description>
    </item>
    
    <item>
      <title>Fetching data asynchronously</title>
      <link>https://debugthis.dev/cloudflare-workers/2020-10-12-cloudflare-workers-fetch/</link>
      <pubDate>Tue, 13 Oct 2020 09:37:04 +0100</pubDate>
      
      <guid>https://debugthis.dev/cloudflare-workers/2020-10-12-cloudflare-workers-fetch/</guid>
      <description>Fetching data
The Runtime API supports fetch, which provides an interface for asynchronously fetching resources via HTTP requests inside a Worker.
Python example
The examples which follow show how a Worker interacts with the Hacker News API, then returns HTML content.
handleRequest
URL = &amp;#34;https://hacker-news.firebaseio.com/v0/topstories.json?print=pretty&amp;#34;
STORY = &amp;#34;https://hacker-news.firebaseio.com/v0/item/{}.json?print=pretty&amp;#34;
async def handleRequest(request):
    response = await fetch(URL)
    storyItems = await gatherResponse(response)
    storyBody = await getStoryBody(storyItems)
    htmlBody = await generateHTML(storyBody)
    return __new__(Response(htmlBody, {
        &amp;#39;headers&amp;#39;: { &amp;#39;Content-Type&amp;#39;: &amp;#39;text/html;charset=UTF-8&amp;#39; }
    }))
getStoryBody
This function makes a request for the story data (score, title)</description>
    </item>
    
    <item>
      <title>Python Cloudflare Workers</title>
      <link>https://debugthis.dev/cloudflare-workers/2020-10-08-cloudflare-workers/</link>
      <pubDate>Thu, 08 Oct 2020 09:37:04 +0100</pubDate>
      
      <guid>https://debugthis.dev/cloudflare-workers/2020-10-08-cloudflare-workers/</guid>
      <description>Cloudflare Workers
Cloudflare Workers provides a serverless environment without configuring or maintaining infrastructure.
You can create new applications or reuse existing ones.
Initially, Cloudflare Workers only allowed you to develop applications in JavaScript and languages that compile to WebAssembly (Rust, C, C++).
However, they now support a broad range of languages.
One of those languages is Python.
In this tutorial I shall demonstrate how to set up a Python-oriented Cloudflare Workers development environment.</description>
    </item>
    
    <item>
      <title>Return HTML</title>
      <link>https://debugthis.dev/cloudflare-workers/2020-10-08-cloudflare-workers-html/</link>
      <pubDate>Thu, 08 Oct 2020 09:37:04 +0100</pubDate>
      
      <guid>https://debugthis.dev/cloudflare-workers/2020-10-08-cloudflare-workers-html/</guid>
      <description>Python example
Returning HTML via a Cloudflare Worker.
def handleRequest(request):
    imgSrc = &amp;#34;https://images.pexels.com/photos/1033729/pexels-photo-1033729.jpeg&amp;#34;
    # defines an HTML string for the page content
    html = &amp;#34;&amp;#34;&amp;#34;
    &amp;lt;!DOCTYPE html&amp;gt;
    &amp;lt;html&amp;gt;
    &amp;lt;head&amp;gt;
    &amp;lt;script&amp;gt;
    function msg() {
        document.getElementById(&amp;#34;content&amp;#34;).innerHTML = &amp;#34;Hello World.&amp;#34;;
    }
    &amp;lt;/script&amp;gt;
    &amp;lt;/head&amp;gt;
    &amp;lt;body&amp;gt;
    &amp;lt;h1&amp;gt;Hello World.&amp;lt;/h1&amp;gt;
    &amp;lt;p&amp;gt;This markup was generated by a Cloudflare Worker.&amp;lt;/p&amp;gt;
    &amp;lt;img src=&amp;#34;{}&amp;#34; height=&amp;#34;20px&amp;#34; width=&amp;#34;20px&amp;#34;/&amp;gt;
    &amp;lt;p&amp;gt;
    &amp;lt;form&amp;gt;
    &amp;lt;input type=&amp;#34;button&amp;#34; value=&amp;#34;Click me&amp;#34; onclick=&amp;#34;msg()&amp;#34;&amp;gt;
    &amp;lt;/form&amp;gt;
    &amp;lt;/p&amp;gt;
    &amp;lt;div id=&amp;#34;content&amp;#34;&amp;gt;
    &amp;lt;/div&amp;gt;
    &amp;lt;/body&amp;gt;
    &amp;lt;/html&amp;gt;
    &amp;#34;&amp;#34;&amp;#34;.format(imgSrc)
    # a new Response
    return __new__(Response(html, { &amp;#39;headers&amp;#39;: { &amp;#39;content-type&amp;#39;: &amp;#39;text/html;charset=UTF-8&amp;#39; } }))
# call the Worker runtime
addEventListener(&amp;#39;fetch&amp;#39;, (lambda event: event.</description>
    </item>
    
    <item>
      <title>Return JSON</title>
      <link>https://debugthis.dev/cloudflare-workers/2020-10-08-cloudflare-workers-json/</link>
      <pubDate>Thu, 08 Oct 2020 09:37:04 +0100</pubDate>
      
      <guid>https://debugthis.dev/cloudflare-workers/2020-10-08-cloudflare-workers-json/</guid>
      <description>Python example
def handleRequest(request):
    # defines a dictionary object for the JSON content
    json_str = { &amp;#34;message&amp;#34;: &amp;#34;Python Worker hello world!&amp;#34; }
    # a new Response
    return __new__(Response(json_str, { &amp;#39;headers&amp;#39;: { &amp;#39;content-type&amp;#39;: &amp;#39;application/json;charset=UTF-8&amp;#39; } }))
# call the Worker runtime
addEventListener(&amp;#39;fetch&amp;#39;, (lambda event: event.respondWith(handleRequest(event.request))))
Note
When running this Worker I ran into the following error:
SyntaxError: JSON.parse: unexpected character at line 1 column 2 of the JSON data</description>
    </item>
    
    <item>
      <title>Lambda with DynamoDB and API Gateway</title>
      <link>https://debugthis.dev/cdk/2020-08-21-aws-cdk-lambda-dynamodb/</link>
      <pubDate>Fri, 21 Aug 2020 10:44:51 +0100</pubDate>
      
      <guid>https://debugthis.dev/cdk/2020-08-21-aws-cdk-lambda-dynamodb/</guid>
      <description>Introduction
Using the CDK to define the following AWS resources:
DynamoDB table
API Gateway
Lambda Function
Lambda Role
Steps taken from Lambda API Gateway.
Assets
apigw-dynamodb.js
The following example code receives an API Gateway event as input and processes the messages that it contains.
console.log(&amp;#39;Loading function&amp;#39;);
var AWS = require(&amp;#39;aws-sdk&amp;#39;);
var dynamo = new AWS.DynamoDB.DocumentClient();
/**
 * Provide an event that contains the following keys:
 *
 * - operation: one of the operations in the switch statement below
 * - tableName: required for operations that interact with DynamoDB
 * - payload: a parameter to pass to the operation being performed
 */
exports.</description>
    </item>
    
    <item>
      <title>Lambda with CodePipeline</title>
      <link>https://debugthis.dev/cdk/2020-08-21-aws-cdk-lambda-code-pipeline/</link>
      <pubDate>Fri, 21 Aug 2020 10:04:46 +0100</pubDate>
      
      <guid>https://debugthis.dev/cdk/2020-08-21-aws-cdk-lambda-code-pipeline/</guid>
      <description>Introduction Building a continuous delivery pipeline for a Lambda application with AWS CodePipeline.
Using the steps available from the AWS Lambda developer guide.
The CDK helped define the following AWS resources.
CodeCommit repository
CodePipeline
S3 Bucket
Assets
The following assets are required for this example.
index.js
A Lambda function that returns the current time.
var time = require(&amp;#39;time&amp;#39;);
exports.handler = (event, context, callback) =&amp;gt; {
  var currentTime = new time.Date();
  currentTime.</description>
    </item>
    
    <item>
      <title>Setting up a Jenkins build server</title>
      <link>https://debugthis.dev/cdk/2020-08-21-aws-cdk-jenkins/</link>
      <pubDate>Fri, 21 Aug 2020 09:32:03 +0100</pubDate>
      
      <guid>https://debugthis.dev/cdk/2020-08-21-aws-cdk-jenkins/</guid>
      <description>Introduction
Using Python and the CDK to set up a Jenkins build server.
2 stacks were created for this example.
These stacks help define the following AWS components:
1. Network Stack: VPC, Public Subnets, Routing table, Internet gateway
2. EC2 Stack: Auto Scaling Group, Security Groups
Ansible was used to install Jenkins on the EC2 instance.
app.py
#!/usr/bin/env python3
from aws_cdk import core
from EC2Stack import EC2Stack
from NetworkStack import NetworkStack
props = {
    &amp;#39;namespace&amp;#39;: &amp;#39;JenkinsBuildServer&amp;#39;,
    &amp;#39;vpc_name&amp;#39;: &amp;#39;devops&amp;#39;,
    &amp;#39;ec2_instance_name&amp;#39;: &amp;#39;jenkins-build-server&amp;#39;,
    &amp;#39;wan_ip&amp;#39;: &amp;#39;your_ip_address&amp;#39;,
    &amp;#39;ec2_instance_type&amp;#39;: &amp;#39;t2.</description>
    </item>
    
    <item>
      <title>Creating an Elastic Load Balancer</title>
      <link>https://debugthis.dev/cdk/2020-08-21-aws-cdk-elb/</link>
      <pubDate>Fri, 21 Aug 2020 09:00:14 +0100</pubDate>
      
      <guid>https://debugthis.dev/cdk/2020-08-21-aws-cdk-elb/</guid>
      <description>Introduction Using Python with the CDK to create an Application Load Balancer.
The Load Balancer will have an Auto Scaling Group as the target.
The EC2 instance (webserver) inside the Auto Scaling Group will have httpd installed, and serve a static index page.
This allows the Load Balancer to use a health check to see whether the page returns a 200 (OK) status.
The Load Balancer will expose a public DNS name, which allows us to access the webserver.</description>
    </item>
    
    <item>
      <title>Creating a RDS instance</title>
      <link>https://debugthis.dev/cdk/2020-07-16-aws-cdk-rds/</link>
      <pubDate>Thu, 16 Jul 2020 09:21:34 +0100</pubDate>
      
      <guid>https://debugthis.dev/cdk/2020-07-16-aws-cdk-rds/</guid>
      <description>Introduction The manual setup steps from the AWS docs on Getting started with Amazon RDS have been automated using the CDK.
Examples follow in Python.
3 stacks were created for this example.
NetworkStack [ Creates VPC, subnets, security groups, NAT gateway ]
EC2Stack [ Creates a new EC2 instance (webserver) ]
RDSDBStack [ Creates a new RDS instance ]
Extra resources.
The getting started guide uses a webserver to connect to the DB instance.</description>
    </item>
    
    <item>
      <title>Elastic Beanstalk</title>
      <link>https://debugthis.dev/cdk/2020-07-08-aws-cdk-elastic-beanstalk/</link>
      <pubDate>Wed, 08 Jul 2020 10:45:25 +0100</pubDate>
      
      <guid>https://debugthis.dev/cdk/2020-07-08-aws-cdk-elastic-beanstalk/</guid>
      <description>Introduction The manual setup steps from the AWS docs on Getting started using Elastic Beanstalk have been automated using the CDK.
Examples follow in Python.
3 Stacks were created.
BeanstalkS3Stack [ uploads assets to S3 ]
BeanstalkAppStack [ creates and deploys a new app ]
BeanstalkEnvStack [ creates new environment ]
By default the entry point app.py is created during the init stage of creating the CDK project.
This is where you define which stacks to use.</description>
    </item>
    
    <item>
      <title>Bootstrap Errors</title>
      <link>https://debugthis.dev/cdk/2020-07-08-aws-cdk-errors/</link>
      <pubDate>Wed, 08 Jul 2020 10:23:24 +0100</pubDate>
      
      <guid>https://debugthis.dev/cdk/2020-07-08-aws-cdk-errors/</guid>
      <description>This page contains a list of CDK error messages and how to fix them.
As more errors are encountered and fixed, this page will be updated accordingly.
Errors [100%] fail: No bucket named &amp;lsquo;cdktoolkit-stagingbucket-########&amp;rsquo;. Is account ######### bootstrapped? Cause: CDK requires a staging bucket in S3 containing CDK assets, which may have been removed.
Investigation:
Running aws s3 ls does not yield the bucket. Fix:
Navigate to the AWS management console</description>
    </item>
    
    <item>
      <title>Creating a Code deploy pipeline</title>
      <link>https://debugthis.dev/cdk/2020-30-06-aws-cdk-code-pipeline/</link>
      <pubDate>Tue, 30 Jun 2020 16:45:53 +0100</pubDate>
      
      <guid>https://debugthis.dev/cdk/2020-30-06-aws-cdk-code-pipeline/</guid>
      <description>Introduction The manual setup steps from the AWS docs on creating a simple pipeline (CodeCommit repository) have been automated using the CDK.
Examples follow in Python.
Create a CodeCommit repository # import packages from aws_cdk import ( core, aws_codecommit as code_commit ) # creates a CodeCommit repository code_commit.Repository(self, &amp;#34;CodeCommitRepository&amp;#34;,repository_name=&amp;#34;MyDemoRepo&amp;#34;) Create an EC2 Linux instance and install the CodeDeploy agent from aws_cdk import ( core, aws_ec2 as ec2, aws_iam as iam ) vpcId = &amp;#34;&amp;lt;replace with vpc id&amp;gt;&amp;#34; instanceName = &amp;#34;MyCodePipelineDemo&amp;#34; instanceType = &amp;#34;t2.</description>
    </item>
    
    <item>
      <title>NASA Astronomy Picture of the Day</title>
      <link>https://debugthis.dev/go/2020-06-25-nasa-apod/</link>
      <pubDate>Thu, 25 Jun 2020 14:03:59 +0100</pubDate>
      
      <guid>https://debugthis.dev/go/2020-06-25-nasa-apod/</guid>
      <description>Here is my first attempt at building a single HTML page, using the RSS feed taken from NASA Astronomy Picture of the Day (APOD).
The application written in Go carries out the following.
makes an HTTP request for the RSS feed
process the XML feed data
display feed data on a single HTML page
RSS Feed Here&amp;rsquo;s what the feed data looks like.
&amp;lt;?xml version=&amp;#34;1.0&amp;#34; encoding=&amp;#34;UTF-8&amp;#34;?&amp;gt; &amp;lt;rss version=&amp;#34;2.0&amp;#34;&amp;gt; &amp;lt;channel&amp;gt; &amp;lt;title&amp;gt;APOD&amp;lt;/title&amp;gt; &amp;lt;link&amp;gt;https://apod.</description>
    </item>
    
    <item>
      <title>Creating an EC2 instance</title>
      <link>https://debugthis.dev/cdk/2020-25-06-aws-cdk-ec2/</link>
      <pubDate>Thu, 25 Jun 2020 10:48:06 +0100</pubDate>
      
      <guid>https://debugthis.dev/cdk/2020-25-06-aws-cdk-ec2/</guid>
      <description>Introduction In this post I shall demonstrate how easy it is to create an EC2 instance and a security group within an existing AWS VPC.
Python examples follow.
Initialize project mkdir ec2-instance-demo cd ec2-instance-demo cdk init --language python Activate virtual env source .env/bin/activate Install dependencies pip install -r requirements.txt List stacks cdk ls ec2-instance Stack file ec2_instance_stack.py (located inside the ec2_instance directory)
Import the required packages aws_ec2
Define an existing VPC using id vpcID</description>
    </item>
    
    <item>
      <title>Apps, Stacks &amp; Constructs</title>
      <link>https://debugthis.dev/cdk/2020-24-06-cdk-apps-stacks-constructs/</link>
      <pubDate>Tue, 23 Jun 2020 16:57:46 +0100</pubDate>
      
      <guid>https://debugthis.dev/cdk/2020-24-06-cdk-apps-stacks-constructs/</guid>
      <description>Overview The diagram above illustrates a high level overview of the AWS CDK.
At its core is the App.
App An application written in TypeScript, JavaScript, Python, Java, or C# that uses the AWS CDK to define AWS infrastructure
It defines one or more Stacks
Stacks Unit of deployment within the AWS CDK
Any number of stacks can be defined for an App
Stacks contain Constructs
Constructs Represents a reusable AWS cloud resource.</description>
    </item>
    
    <item>
      <title>Creating a project in Python</title>
      <link>https://debugthis.dev/cdk/2020-24-06-aws-cdk-python-project/</link>
      <pubDate>Tue, 23 Jun 2020 16:57:46 +0100</pubDate>
      
      <guid>https://debugthis.dev/cdk/2020-24-06-aws-cdk-python-project/</guid>
      <description>Overview The AWS CDK supports the Python programming language to define cloud infrastructure on AWS.
These steps assume all the prerequisites are already set up.
Python version You will need Python version 3.6 or later installed.
Virtual environment AWS CDK encourages you to make use of Python virtual environments, so ensure this and pip are installed.
python -m ensurepip --upgrade python -m pip install --upgrade pip python -m pip install --upgrade virtualenv Initialize a project mkdir new-cdk-py-project &amp;amp;&amp;amp; cd new-cdk-py-project cdk init app --language python Source the virtual environment source .</description>
    </item>
    
    <item>
      <title>Installation</title>
      <link>https://debugthis.dev/cdk/2020-23-06-aws-cdk/</link>
      <pubDate>Tue, 23 Jun 2020 16:57:46 +0100</pubDate>
      
      <guid>https://debugthis.dev/cdk/2020-23-06-aws-cdk/</guid>
      <description>In this post I shall demonstrate how to install the AWS CDK.
AWS CDK: Cloud infrastructure defined as code, using languages such as TypeScript, JavaScript, Python, Java, and C#.
Pre install Node.js version 10.3.0 or later AWS account setup, but do not use your root account; instead make a new one using IAM AWS CLI, configured to set your preferred region, access key id, and secret access key Installation Install the CDK using the following command.</description>
    </item>
    
    <item>
      <title>Pexels Photos Downloader</title>
      <link>https://debugthis.dev/go/2020-06-15-pexel-photos-downloader/</link>
      <pubDate>Thu, 18 Jun 2020 10:50:31 +0100</pubDate>
      
      <guid>https://debugthis.dev/go/2020-06-15-pexel-photos-downloader/</guid>
      <description>Introduction In this post I shall demonstrate how I wrote an HTTP client application in Go to download photos from Pexels.
Pexels provides a platform which allows you to download stock photos and videos for free.
I chose Go as the language to write in since I wanted to learn how to use it.
API In order to use the API you will need:
Account API Key Documentation: RESTful API</description>
    </item>
    
    <item>
      <title>Deploying a Hugo site hosted on Firebase using GitLab CI</title>
      <link>https://debugthis.dev/gitlab-automation/gitlab-ci-hugo-deployment/</link>
      <pubDate>Thu, 21 May 2020 11:22:58 +0100</pubDate>
      
      <guid>https://debugthis.dev/gitlab-automation/gitlab-ci-hugo-deployment/</guid>
      <description>Introduction In this post I shall demonstrate how to set up a pipeline to automate deployments of a static Hugo site.
Here is an overview of the overall CI process.
Step 1 make changes to the site code and test locally
test changes locally by using the command
hugo serve Step 2 Once all changes are reviewed and are approved
run hugo to update files to use the correct base url defined in your config.</description>
    </item>
    
    <item>
      <title>Running Ansible via a Bastion host (GCP)</title>
      <link>https://debugthis.dev/ansible/2020-05-05-bastion-host-configuration/</link>
      <pubDate>Tue, 05 May 2020 12:55:55 +0000</pubDate>
      
      <guid>https://debugthis.dev/ansible/2020-05-05-bastion-host-configuration/</guid>
      <description>Overview I had set up several Compute instances within the Google Cloud platform.
These instances were only accessible over their private IP addresses to minimize malicious access.
I wanted to install and configure these Compute instances using Ansible, and I was faced with an immediate issue.
How would I or Ansible access these hosts?
Bastion host Fortunately, the Google Cloud Platform documentation provides information on how to &amp;lsquo;Securely connect to VM instances&amp;rsquo;.</description>
    </item>
    
    <item>
      <title>Using Pulumi with Jenkins</title>
      <link>https://debugthis.dev/pulumi/2020-04-29-using-pulumi-with-jenkins/</link>
      <pubDate>Wed, 29 Apr 2020 17:35:17 +0000</pubDate>
      
      <guid>https://debugthis.dev/pulumi/2020-04-29-using-pulumi-with-jenkins/</guid>
      <description>This post will demonstrate how to setup Jenkins to install the necessary Pulumi Python dependencies, and show an example deployment pipeline to setup a couple of EC2 instances on AWS.
Here is an overview of the deployment process.
Note. As this is for demonstration purposes, the pipeline will also destroy the EC2 instances on AWS it initially created. This is executed via the stage: Pulumi Cleanup.
All the resources required can be found in the following Gitlab repository:</description>
    </item>
    
    <item>
      <title>Using Pulumi on AWS</title>
      <link>https://debugthis.dev/pulumi/2020-04-24-getting-started-with-pulumi-aws/</link>
      <pubDate>Fri, 24 Apr 2020 10:14:11 +0000</pubDate>
      
      <guid>https://debugthis.dev/pulumi/2020-04-24-getting-started-with-pulumi-aws/</guid>
      <description>In this post I shall show how to use the examples provided by Pulumi to set up a static website using AWS S3.
Prerequisites 1. These steps assume you have already cloned https://github.com/pulumi/examples.git 2. Python version &amp;gt;=3 3. AWS account Setup user in IAM Create a user in AWS which has access to create S3 resources Export the credentials.csv You will need the AWS_ACCESS_KEY_ID You will need the AWS_SECRET_ACCESS_KEY Note down the AWS region you want to deploy into Deployment configuration Export the following environment variables export AWS_ACCESS_KEY_ID=&amp;lt;your access key id&amp;gt; export AWS_SECRET_ACCESS_KEY=&amp;lt;your secret access key&amp;gt; Initialize the stack Make sure you are in the top-level directory where you cloned the examples git repository</description>
    </item>
    
    <item>
      <title>Using Pulumi on GCP</title>
      <link>https://debugthis.dev/pulumi/2020-04-23-getting-started-with-pulumi-gcp/</link>
      <pubDate>Thu, 23 Apr 2020 13:27:50 +0000</pubDate>
      
      <guid>https://debugthis.dev/pulumi/2020-04-23-getting-started-with-pulumi-gcp/</guid>
      <description>If you haven&amp;rsquo;t already, take a look at the previous post to get Pulumi up and running.
These steps are to get a better understanding of how to use Pulumi with Python and GCP.
I recommend creating a new project in GCP and using that project for demo purposes before going ahead with the rest of these steps.
That way you can simply delete the project later on.
I shall use a project called &amp;lsquo;throwaway-01&amp;rsquo; in GCP as an example.</description>
    </item>
    
    <item>
      <title>Getting started with Pulumi</title>
      <link>https://debugthis.dev/pulumi/2020-04-23-getting-started-with-pulumi/</link>
      <pubDate>Thu, 23 Apr 2020 10:53:11 +0000</pubDate>
      
      <guid>https://debugthis.dev/pulumi/2020-04-23-getting-started-with-pulumi/</guid>
      <description>Managing Infrastructure as Code (IaC) is dominated by the well-known Terraform.
However, you are confined to using the more Ops-driven HCL.
Where Pulumi differs is that it lets you use languages most people are already comfortable using - Python, Go, JavaScript, TypeScript, and C#.
Since Pulumi supports these languages, it makes loops, functions, and classes easier to integrate, and best practices easier to enforce.
In essence, fusing the roles which Devs and Ops play.</description>
    </item>
    
    <item>
      <title>Setting up Ansible AWX using a docker environment - Part 2 (the docker-compose approach)</title>
      <link>https://debugthis.dev/awx/2020-04-15-setting-up-ansible-awx-using-a-docker-environment-part-2-the-docker-compose-approach/</link>
      <pubDate>Wed, 15 Apr 2020 12:11:00 +0000</pubDate>
      
      <guid>https://debugthis.dev/awx/2020-04-15-setting-up-ansible-awx-using-a-docker-environment-part-2-the-docker-compose-approach/</guid>
      <description>In a previous post I wrote about setting up Ansible AWX using a docker environment.
This used Ansible to read in an inventory file, dynamically generate scripts which stored environment variables as well as an overall docker-compose file.
In this post I shall include further details on the files generated using the Ansible approach of setting up AWX.
Then use docker-compose to create the AWX environment.
Files generated using Ansible When running Ansible to setup AWX.</description>
    </item>
    
    <item>
      <title>Visualizing Hacker News stories using Elasticsearch &amp; Kibana</title>
      <link>https://debugthis.dev/hacker-news/2020-04-10-visualizing-hacker-news-stories-using-elasticsearch-kibana/</link>
      <pubDate>Fri, 10 Apr 2020 10:43:26 +0000</pubDate>
      
      <guid>https://debugthis.dev/hacker-news/2020-04-10-visualizing-hacker-news-stories-using-elasticsearch-kibana/</guid>
      <description>Introduction Over at Hacker News items such as stories (but also comments, jobs, polls) are added every second.
A story is represented by a title and a URL. A user&amp;rsquo;s karma (number of points) is roughly the number of upvotes on their stories and comments minus the number of downvotes.
I wanted a better way of displaying the popularity of a Hacker News story.
I chose Elasticsearch to index the data as it&amp;rsquo;s dead easy to set up and use.</description>
    </item>
    
    <item>
      <title>Ansible AWX - Importing Google Cloud Compute instances</title>
      <link>https://debugthis.dev/awx/2020-04-03-ansible-awx-importing-google-cloud-compute-instances/</link>
      <pubDate>Fri, 03 Apr 2020 17:05:34 +0000</pubDate>
      
      <guid>https://debugthis.dev/awx/2020-04-03-ansible-awx-importing-google-cloud-compute-instances/</guid>
      <description>In a previous post I wrote about working with AWS EC2 instances in AWX. This post will cover how to work with Google Cloud Compute instances.
Overview Create or use an existing user within the Google Cloud platform with access to create/modify Compute Engine instances Configure AWX to set up auto-discovery of Compute instances in a Google Cloud project Setup Google cloud service account I chose to create a Service account as this type of account is used for an application (AWX) to make authorized API calls.</description>
    </item>
    
    <item>
      <title>Ansible AWX – Using Python to launch a Job template</title>
      <link>https://debugthis.dev/awx/2020-03-31-ansible-awx-using-python-to-launch-a-job-template/</link>
      <pubDate>Tue, 31 Mar 2020 14:33:31 +0000</pubDate>
      
      <guid>https://debugthis.dev/awx/2020-03-31-ansible-awx-using-python-to-launch-a-job-template/</guid>
      <description>Using Python and the AWX RESTful API to launch a Job template.
Python code examples Authenticate using the OAuth2 token The AWX_OAUTH2_TOKEN is set inside the HTTP request header.
For more information regarding AWX OAUTH2 tokens see this page.
headers = {&amp;#34;User-agent&amp;#34;: &amp;#34;python-awx-client&amp;#34;, &amp;#34;Content-Type&amp;#34;: &amp;#34;application/json&amp;#34;,&amp;#34;Authorization&amp;#34;: &amp;#34;Bearer {}&amp;#34;.format(AWX_OAUTH2_TOKEN)} Then the request can be made with the headers set above.
Query Job templates AWX_JOB_TEMPLATES_API is the /api/v2/job_templates endpoint.
Requesting list of Job templates.</description>
    </item>
    
    <item>
      <title>Ansible AWX - Using Python to create bulk users</title>
      <link>https://debugthis.dev/awx/2020-03-31-ansible-awx-using-python-to-create-bulk-users/</link>
      <pubDate>Tue, 31 Mar 2020 13:00:34 +0000</pubDate>
      
      <guid>https://debugthis.dev/awx/2020-03-31-ansible-awx-using-python-to-create-bulk-users/</guid>
      <description>Introduction AWX provides an interface to create users via the front-end, via the command line through the administration tools, as well as through the RESTful API.
What if you need to create multiple users? Rather than doing this manually, I&amp;rsquo;ve written a Python script which automates the process (making use of the API).
The script carries out the following:
reads in a configuration file which stores authentication information, plus path to a users csv file reads in a users.</description>
    </item>
    
    <item>
      <title>Ansible AWX - Using extra variables</title>
      <link>https://debugthis.dev/awx/2020-03-25-ansible-awx-using-extra-variables/</link>
      <pubDate>Wed, 25 Mar 2020 12:17:59 +0000</pubDate>
      
      <guid>https://debugthis.dev/awx/2020-03-25-ansible-awx-using-extra-variables/</guid>
      <description>A demonstration of how to use variables when launching a playbook in AWX.
Overview In some cases your playbook will require additional variables to run.
This post will detail the usage of extra variables in AWX.
Playbook The playbook below will create a new AWS EC2 instance.
The playbook will only run if the conditional variable create_ec2 is set to &amp;ldquo;true&amp;rdquo;.
- name: EC2 admin hosts: all tasks: - name: Create EC2 instance # start an instance with a public IP address ec2_instance: name: &amp;#34;{{ instance_name }}&amp;#34; key_name: &amp;#34;development-k1&amp;#34; vpc_subnet_id: &amp;#34;{{ vpc_subnet_id }}&amp;#34; instance_type: &amp;#34;{{ instance_type }}&amp;#34; security_group: &amp;#34;{{ security_group }}&amp;#34; region: &amp;#34;{{ region }}&amp;#34; network: assign_public_ip: true image_id: &amp;#34;{{ image_id }}&amp;#34; tags: Environment: Testing when: create_ec2|bool Template - extra variables The template configuration in AWX supports extra variables.</description>
    </item>
    
    <item>
      <title>Ansible AWX - Importing AWS EC2 instances</title>
      <link>https://debugthis.dev/awx/2020-03-25-ansible-awx-aws-ec2-auto-discovery/</link>
      <pubDate>Wed, 25 Mar 2020 10:52:30 +0000</pubDate>
      
      <guid>https://debugthis.dev/awx/2020-03-25-ansible-awx-aws-ec2-auto-discovery/</guid>
      <description>In this post I shall demonstrate how to configure AWX to auto discover AWS EC2 instances.
Overview Create or use an existing user with access to the AWS service APIs Configure AWX to set up auto-discovery of EC2 instances in a VPC In this demonstration I have Ansible AWX set up locally.
Prerequisite VPC setup with Public DNS enabled Existing running EC2 instances present in VPC Setup AWS API user Login to the AWS console Create a new user via IAM Access type = Programmatic access Policy = EC2 Full access (this user will require permissions to manage all EC2 instances) Save the credentials - access key id and secret access key Configure AWX Create AWS cloud credential This credential will authorize AWX to make the necessary API calls.</description>
    </item>
    
    <item>
      <title>Ansible AWX - GitLab integration</title>
      <link>https://debugthis.dev/awx/2020-03-10-ansible-awx-gitlab-integration/</link>
      <pubDate>Tue, 10 Mar 2020 10:41:54 +0000</pubDate>
      
      <guid>https://debugthis.dev/awx/2020-03-10-ansible-awx-gitlab-integration/</guid>
      <description>AWX supports SCM integration.
This post will provide a guide on how to set up a connection to a Gitlab repository.
At the end a test will be carried out to ensure AWX is able to checkout from the repository.
Gitlab deploy token The deploy token approach was used here as it only provides read access to a Gitlab repository.
Create a deploy token Navigate to the Gitlab repository Head over to Settings from the left menu Click on CI/CD Then expand Deploy Tokens Fill in the name.</description>
    </item>
    
    <item>
      <title>Ansible AWX - RESTful API</title>
      <link>https://debugthis.dev/awx/2020-03-05-using-the-ansible-awx-restful-api/</link>
      <pubDate>Thu, 05 Mar 2020 20:01:17 +0000</pubDate>
      
      <guid>https://debugthis.dev/awx/2020-03-05-using-the-ansible-awx-restful-api/</guid>
      <description>The Ansible AWX API comes with a variety of endpoints to work with AWX programmatically.
Endpoints All API endpoints can be found under the root:
&amp;lt;AWX_HOST&amp;gt;/api/&amp;lt;version&amp;gt;
An HTTP GET returns the list of endpoints (at the time of writing, API v2 is the current version).
{ &amp;#34;ping&amp;#34;:&amp;#34;/api/v2/ping/&amp;#34;, &amp;#34;instances&amp;#34;:&amp;#34;/api/v2/instances/&amp;#34;, &amp;#34;instance_groups&amp;#34;:&amp;#34;/api/v2/instance_groups/&amp;#34;, &amp;#34;config&amp;#34;:&amp;#34;/api/v2/config/&amp;#34;, &amp;#34;settings&amp;#34;:&amp;#34;/api/v2/settings/&amp;#34;, &amp;#34;me&amp;#34;:&amp;#34;/api/v2/me/&amp;#34;, &amp;#34;dashboard&amp;#34;:&amp;#34;/api/v2/dashboard/&amp;#34;, &amp;#34;organizations&amp;#34;:&amp;#34;/api/v2/organizations/&amp;#34;, &amp;#34;users&amp;#34;:&amp;#34;/api/v2/users/&amp;#34;, &amp;#34;projects&amp;#34;:&amp;#34;/api/v2/projects/&amp;#34;, &amp;#34;project_updates&amp;#34;:&amp;#34;/api/v2/project_updates/&amp;#34;, &amp;#34;teams&amp;#34;:&amp;#34;/api/v2/teams/&amp;#34;, &amp;#34;credentials&amp;#34;:&amp;#34;/api/v2/credentials/&amp;#34;, &amp;#34;credential_types&amp;#34;:&amp;#34;/api/v2/credential_types/&amp;#34;, &amp;#34;credential_input_sources&amp;#34;:&amp;#34;/api/v2/credential_input_sources/&amp;#34;, &amp;#34;applications&amp;#34;:&amp;#34;/api/v2/applications/&amp;#34;, &amp;#34;tokens&amp;#34;:&amp;#34;/api/v2/tokens/&amp;#34;, &amp;#34;metrics&amp;#34;:&amp;#34;/api/v2/metrics/&amp;#34;, &amp;#34;inventory&amp;#34;:&amp;#34;/api/v2/inventories/&amp;#34;, &amp;#34;inventory_scripts&amp;#34;:&amp;#34;/api/v2/inventory_scripts/&amp;#34;, &amp;#34;inventory_sources&amp;#34;:&amp;#34;/api/v2/inventory_sources/&amp;#34;, &amp;#34;inventory_updates&amp;#34;:&amp;#34;/api/v2/inventory_updates/&amp;#34;, &amp;#34;groups&amp;#34;:&amp;#34;/api/v2/groups/&amp;#34;, &amp;#34;hosts&amp;#34;:&amp;#34;/api/v2/hosts/&amp;#34;, &amp;#34;job_templates&amp;#34;:&amp;#34;/api/v2/job_templates/&amp;#34;, &amp;#34;jobs&amp;#34;:&amp;#34;/api/v2/jobs/&amp;#34;, &amp;#34;job_events&amp;#34;:&amp;#34;/api/v2/job_events/&amp;#34;, &amp;#34;ad_hoc_commands&amp;#34;:&amp;#34;/api/v2/ad_hoc_commands/&amp;#34;, 
&amp;#34;system_job_templates&amp;#34;:&amp;#34;/api/v2/system_job_templates/&amp;#34;, &amp;#34;system_jobs&amp;#34;:&amp;#34;/api/v2/system_jobs/&amp;#34;, &amp;#34;schedules&amp;#34;:&amp;#34;/api/v2/schedules/&amp;#34;, &amp;#34;roles&amp;#34;:&amp;#34;/api/v2/roles/&amp;#34;, &amp;#34;notification_templates&amp;#34;:&amp;#34;/api/v2/notification_templates/&amp;#34;, &amp;#34;notifications&amp;#34;:&amp;#34;/api/v2/notifications/&amp;#34;, &amp;#34;labels&amp;#34;:&amp;#34;/api/v2/labels/&amp;#34;, &amp;#34;unified_job_templates&amp;#34;:&amp;#34;/api/v2/unified_job_templates/&amp;#34;, &amp;#34;unified_jobs&amp;#34;:&amp;#34;/api/v2/unified_jobs/&amp;#34;, &amp;#34;activity_stream&amp;#34;:&amp;#34;/api/v2/activity_stream/&amp;#34;, &amp;#34;workflow_job_templates&amp;#34;:&amp;#34;/api/v2/workflow_job_templates/&amp;#34;, &amp;#34;workflow_jobs&amp;#34;:&amp;#34;/api/v2/workflow_jobs/&amp;#34;, &amp;#34;workflow_approvals&amp;#34;:&amp;#34;/api/v2/workflow_approvals/&amp;#34;, &amp;#34;workflow_job_template_nodes&amp;#34;:&amp;#34;/api/v2/workflow_job_template_nodes/&amp;#34;, &amp;#34;workflow_job_nodes&amp;#34;:&amp;#34;/api/v2/workflow_job_nodes/&amp;#34; } For more information see: https://docs.</description>
    </item>
    
    <item>
      <title>Ansible AWX - OAuth2 Tokens</title>
      <link>https://debugthis.dev/awx/2020-03-02-ansible-awx-oauth2-tokens/</link>
      <pubDate>Mon, 02 Mar 2020 10:35:38 +0000</pubDate>
      
      <guid>https://debugthis.dev/awx/2020-03-02-ansible-awx-oauth2-tokens/</guid>
      <description>Note. The following steps require admin level permissions.
Creating a new OAuth2 access token Login to the AWX container.
Then run the following command.
awx-manage create_oauth2_token --user ${userid} This command will generate a new token for a specified user.
Via the API The API accessible via http://&amp;lt;awx_server&amp;gt;/api/ can also be used to create an OAuth2 access token.
curl -H &amp;#34;Authorization: Bearer &amp;lt;existing oauth2 access token&amp;gt;&amp;#34; \ -H &amp;#34;Content-Type: application/json&amp;#34; \ http://&amp;lt;awx_server&amp;gt;/api/&amp;lt;version&amp;gt;/tokens/ -d @payload.</description>
    </item>
    
    <item>
      <title>Ansible AWX - Launching a Job Template</title>
      <link>https://debugthis.dev/awx/2020-02-28-ansible-awx-launching-a-job-template/</link>
      <pubDate>Fri, 28 Feb 2020 10:33:17 +0000</pubDate>
      
      <guid>https://debugthis.dev/awx/2020-02-28-ansible-awx-launching-a-job-template/</guid>
      <description>There are 2 options to trigger a Job Template.
via the UI via the RESTful API Triggering a Job Template from the UI Navigate to the Job Template to run, then click on Launch Choose the template here Click on the launch icon Next, the console output will be displayed showing you the progress of the playbook. Triggering a Job Template from the RESTful API Ansible AWX provides a RESTful API to programmatically interact with the platform.</description>
    </item>
    
    <item>
      <title>Ansible AWX - Creating Job Templates</title>
      <link>https://debugthis.dev/awx/2020-02-27-ansible-awx-creating-job-templates/</link>
      <pubDate>Thu, 27 Feb 2020 18:40:06 +0000</pubDate>
      
      <guid>https://debugthis.dev/awx/2020-02-27-ansible-awx-creating-job-templates/</guid>
      <description>Ansible defines Job Templates as a &amp;ldquo;combination of an Ansible playbook and the set of parameters required to launch it.&amp;rdquo;
They are reusable and shareable, and can be helpful in running a job multiple times.
Prerequisites Setup new Projects
Creating a new Job Template Once you have created a Project, click on Templates.
Next, click on the green plus icon to begin adding a new Job template.
Fill in the required fields as shown in the example below.</description>
    </item>
    
    <item>
      <title>Ansible AWX - Creating Projects</title>
      <link>https://debugthis.dev/awx/2020-02-27-ansible-awx-creating-projects/</link>
      <pubDate>Thu, 27 Feb 2020 18:33:05 +0000</pubDate>
      
      <guid>https://debugthis.dev/awx/2020-02-27-ansible-awx-creating-projects/</guid>
      <description>A collection of Ansible playbooks.
Creating a new Project Navigate to the Projects from the left menu Click on the green plus icon to add a new Project In the example below I have created a new Project for simply gathering machine statistics; disk usage, memory etc .
SCM - set to Manual so the playbook to run will be sourced locally from disk rather than pulled from an external source like Git.</description>
    </item>
    
    <item>
      <title>Ansible AWX Adding new target hosts</title>
      <link>https://debugthis.dev/awx/2020-02-26-ansible-awx-new-target-hosts/</link>
      <pubDate>Wed, 26 Feb 2020 20:44:22 +0000</pubDate>
      
      <guid>https://debugthis.dev/awx/2020-02-26-ansible-awx-new-target-hosts/</guid>
      <description>If you have followed the setup of Ansible AWX in this post, you should have Ansible AWX up and running.
This post will explain how to add new target host machines to run Ansible on.
Method 1 : Using the AWX webpage Login to the AWX frontend using your admin credentials Click on inventories from the left menu As part of the initial setup, there will be 1 inventory built in called: Demo Inventory</description>
    </item>
    
    <item>
      <title>A Jenkins pipeline for creating &amp; merging pull requests in Bitbucket</title>
      <link>https://debugthis.dev/bitbucket/2020-01-23-a-jenkins-pipeline-for-creating-merging-pull-requests-in-bitbucket/</link>
      <pubDate>Thu, 23 Jan 2020 15:07:54 +0000</pubDate>
      
      <guid>https://debugthis.dev/bitbucket/2020-01-23-a-jenkins-pipeline-for-creating-merging-pull-requests-in-bitbucket/</guid>
      <description>Following on from using Python to automate the creation of pull requests and merges.
Introduction This post will cover integrating the Python script into a Jenkins pipeline.
The different parts of the Jenkins pipeline in Groovy are shown below.
Credentials The Bitbucket username/password are stored within the Jenkins credentials. We shall pass these into the Groovy script via UsernamePasswordMultiBinding
Parameters Giving the user options to either create/merge a pull request.</description>
    </item>
    
    <item>
      <title>Using Python to automate the creation &amp; merging of pull requests in Bitbucket</title>
      <link>https://debugthis.dev/bitbucket/2020-01-23-automation-creating-merging-pull-requests-in-bitbucket/</link>
      <pubDate>Thu, 23 Jan 2020 10:30:10 +0000</pubDate>
      
      <guid>https://debugthis.dev/bitbucket/2020-01-23-automation-creating-merging-pull-requests-in-bitbucket/</guid>
      <description>This automation will use Python along with the Bitbucket (formerly known as Stash) API.
A summary of the Bitbucket services used follows. For a full list see https://developer.atlassian.com/bitbucket/api/2/reference/resource/
Endpoint / Method / Purpose:
/rest/api/2.0/projects/repos/pull-requests/merge?version=0 (POST): Merge PR
/rest/api/2.0/projects/repos/pull-requests (POST): Create PR
/rest/api/2.0/projects/repos//pull-requests (GET): Fetch PRs
Additional resources Since we are sending a JSON payload to create a PR, a sample JSON file is shown below.
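As a rough illustration only, assembling the create-PR request can be sketched in Python; the host, project key and repo slug here are hypothetical placeholders, and the payload fields mirror the sample payload shown below:

```python
import json

BASE_URL = "https://bitbucket.example.com"  # hypothetical Bitbucket host

def build_create_pr_request(project, repo, title, description):
    """Assemble the endpoint URL and JSON body for the create-PR call."""
    url = (BASE_URL + "/rest/api/2.0/projects/" + project
           + "/repos/" + repo + "/pull-requests")
    payload = {"title": title, "description": description}
    return url, json.dumps(payload)

url, body = build_create_pr_request(
    "PROJ", "my-repo", "Automated PR", "PR generated from Jenkins.")
print(url)
print(body)
```

In the real script the URL segments and credentials would come from Jenkins parameters rather than constants.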
Payload: Create PR
{
  &amp;#34;title&amp;#34;: &amp;#34;Automated PR&amp;#34;,
  &amp;#34;description&amp;#34;: &amp;#34;PR generated from Jenkins.</description>
    </item>
    
    <item>
      <title>Deleting temporary workspace files within a Jenkins pipeline</title>
      <link>https://debugthis.dev/jenkins/2020-01-22-jenkins-pipeline-deleting-temporary-workspace-files/</link>
      <pubDate>Wed, 22 Jan 2020 09:38:01 +0000</pubDate>
      
      <guid>https://debugthis.dev/jenkins/2020-01-22-jenkins-pipeline-deleting-temporary-workspace-files/</guid>
      <description>Using deleteDir() from within a pipeline stage does not delete the @tmp, @script directories which get generated at run-time.
Over time, this leads to Jenkins pipelines taking up disk space across Jenkins slave nodes or, worse, directly on the master; whether this is checked-out code, built binaries, etc.
To free up disk space, add the following to the post stage of a pipeline.
Here we explicitly specify dir() with the workspace path to remove it recursively.</description>
    </item>
    
    <item>
      <title>Using Python to scrape the Billboard Hot-100 playlist to generate a Spotify playlist</title>
      <link>https://debugthis.dev/spotify/2019-10-26-using-python-to-create-a-spotify-billboard-hot-100-playlist/</link>
      <pubDate>Sat, 26 Oct 2019 14:18:13 +0000</pubDate>
      
      <guid>https://debugthis.dev/spotify/2019-10-26-using-python-to-create-a-spotify-billboard-hot-100-playlist/</guid>
      <description>Introduction In this post I shall go over how I used Python to create a Spotify playlist using the tracks taken from the billboard.com Hot-100 chart and the Spotify Web API.
Overview A quick, high-level overview follows.
Here is a list of tools/technologies used:
Python, the Spotify Web API, and the Beautiful Soup HTML parsing library.
Beautiful Soup This Python module allowed me to easily extract data out of the billboard.com Hot-100 webpage.</description>
    </item>
    
    <item>
      <title>Building a RESTful API using Python &amp; Flask-RESTPlus</title>
      <link>https://debugthis.dev/flask/2019-10-09-building-a-restful-api-using-flask-restplus/</link>
      <pubDate>Wed, 09 Oct 2019 11:43:55 +0000</pubDate>
      
      <guid>https://debugthis.dev/flask/2019-10-09-building-a-restful-api-using-flask-restplus/</guid>
      <description>Introduction Flask-RESTPlus is a great Python library to help quickly build well structured RESTful APIs. This post is a guide on how to build REST APIs as well as expose documentation using Swagger.
As an example I shall build a TV Show Schedule web application in Python, using the CRUD functions:
Create: Add new Show; Read: Show All, Show Lookup, Schedule Latest; Update: Show Update; Delete: Show Delete.
Overview Installation Install flask-restplus using pip.</description>
    </item>
    
    <item>
      <title>Ansible AWX with docker-compose - Error: Can&#39;t find a suitable configuration file in this directory</title>
      <link>https://debugthis.dev/ansible/2019-10-02-ansible-with-docker-compose-cant-find-a-suitable-configuration-file-in-this-directory/</link>
      <pubDate>Wed, 02 Oct 2019 17:58:07 +0000</pubDate>
      
      <guid>https://debugthis.dev/ansible/2019-10-02-ansible-with-docker-compose-cant-find-a-suitable-configuration-file-in-this-directory/</guid>
      <description>I came across the following error when using docker-compose with Ansible for the first time.
&amp;#34;msg&amp;#34;: &amp;#34;Configuration error Can&amp;#39;t find a suitable configuration file in this directory or any parent. Are you in the right directory? Supported filenames: docker-compose.yml, docker-compose.yaml&amp;#34; The fix.
Note: Make sure you copy your docker-compose.yml across to the remote host before running docker-compose.
- name: Copy compose source templates
  template:
    src: docker-compose.yml
    dest: &amp;#34;{{ docker_compose_dir }}&amp;#34;
    mode: 0600

# do rest of actions

- name: Start the containers
  docker_compose:
    project_src: &amp;#34;{{ docker_compose_dir }}&amp;#34;
  register: docker_compose_start</description>
    </item>
    
    <item>
      <title>Verify SonarQube Code quality gate status, via a Jenkins declarative pipeline</title>
      <link>https://debugthis.dev/jenkins/2019-10-01-jenkins-pipeline-verify-sonarqube-code-quality-gate-check/</link>
      <pubDate>Tue, 01 Oct 2019 08:29:59 +0000</pubDate>
      
      <guid>https://debugthis.dev/jenkins/2019-10-01-jenkins-pipeline-verify-sonarqube-code-quality-gate-check/</guid>
      <description>This post will show how to return the status of a SonarQube code quality gate from a project.
Depending on the status returned you may want to fail the pipeline or continue.
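As an aside, extracting the status field from the API response can be sketched in plain Python; this mirrors what the inline python -c one-liner below does, with the HTTP call and token handling omitted:

```python
import json

# Trimmed shape of the /api/qualitygates/project_status response.
sample = json.loads('{"projectStatus": {"status": "ERROR"}}')

def quality_gate_status(payload):
    """Return the quality gate status string, e.g. OK or ERROR."""
    return payload["projectStatus"]["status"]

print("SonarQube status = " + quality_gate_status(sample))
```

A status of ERROR would be the cue to fail the pipeline.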
The Groovy stage uses curl along with a user token to call the SonarQube API, which returns the quality gate status of a specified SonarQube project.
sonar_status=`curl -s -u ${sonar_api_token}: &amp;lt;sonar_url&amp;gt;/api/qualitygates/project_status?projectKey=${sonar_project} | grep &amp;#39;{&amp;#39; | python -c &amp;#39;import json,sys;obj=json.load(sys.stdin);print obj[&amp;#34;&amp;#39;projectStatus&amp;#39;&amp;#34;][&amp;#34;&amp;#39;status&amp;#39;&amp;#34;];&amp;#39;`
echo &amp;#34;SonarQube status = $sonar_status&amp;#34;
Authentication You will need to provide a form of authentication.</description>
    </item>
    
    <item>
      <title>Running Ansible on remote hosts without Python</title>
      <link>https://debugthis.dev/ansible/2019-09-29-running-ansible-on-remote-hosts-without-python/</link>
      <pubDate>Sun, 29 Sep 2019 12:55:55 +0000</pubDate>
      
      <guid>https://debugthis.dev/ansible/2019-09-29-running-ansible-on-remote-hosts-without-python/</guid>
      <description>Ansible is easy to use on hosts with Python. However, what happens when the target host does not have Python installed?
raw module The raw module allows Ansible to execute low level SSH commands.
This module does not require Python on the remote system, much like the script module.
For example, here is a shell command to run df over SSH:
ssh $ssh_uid@$target_host &amp;#34;df -kh&amp;#34;
Using the raw module of Ansible</description>
    </item>
    
    <item>
      <title>Ansible for mounting nfs shares</title>
      <link>https://debugthis.dev/ansible/2019-09-29-ansible-for-mounting-nfs-shares/</link>
      <pubDate>Sun, 29 Sep 2019 12:44:12 +0000</pubDate>
      
      <guid>https://debugthis.dev/ansible/2019-09-29-ansible-for-mounting-nfs-shares/</guid>
      <description>Using Ansible to locally mount NFS shares served remotely from a NAS.
A role called admin is set up for the following example.
Vars
---
# vars file for admin
vol_mount_ext1: /mnt/ext1
nas_host: diskstation
df_tmp: /tmp/.df
volume: volume1
mount_points:
  - /mnt/ext1
user: username
group: groupname

Tasks
---
# tasks file for admin
# setup mount directory if not exist
- name: Get hostname
  command: hostname
  register: hostcheck

- debug: msg=&amp;#34;Hostname = {{ hostcheck.</description>
    </item>
    
    <item>
      <title>Using Python &amp; Bing to change a desktop wallpaper on Linux KDE</title>
      <link>https://debugthis.dev/archived/wallpaper/2019-09-29-a-python-wallpaper-changer-for-linux-kde/</link>
      <pubDate>Sun, 29 Sep 2019 11:17:18 +0000</pubDate>
      
      <guid>https://debugthis.dev/archived/wallpaper/2019-09-29-a-python-wallpaper-changer-for-linux-kde/</guid>
      <description>A small script which downloads an image of the day from Bing and sets the wallpaper on a Linux KDE.
The script carries out the following.
Makes a request to the Bing.com API (https://www.bing.com/HPImageArchive.aspx?format=js&amp;amp;idx=0&amp;amp;n=1) for the latest image of the day
Saves the image to disk
Connects to dbus and uses the plasmashell dbus interface to set the wallpaper
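The extract step can be sketched as follows; the trimmed sample response stands in for a live API call, and only the url field is used:

```python
import json

# Trimmed shape of the Bing image-of-the-day response; the "url"
# field is relative, so it is joined onto bing.com.
sample = json.loads('{"images": [{"url": "/th?id=OHR.Sample_EN.jpg"}]}')

def image_url(payload):
    """Return the absolute URL of the latest image of the day."""
    return "https://www.bing.com" + payload["images"][0]["url"]

print(image_url(sample))
```

The absolute URL is then downloaded to disk and handed to the dbus step.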
Bing returns a JSON object in the following format.
{
  &amp;#34;images&amp;#34;: [
    {
      &amp;#34;startdate&amp;#34;: &amp;#34;20190928&amp;#34;,
      &amp;#34;fullstartdate&amp;#34;: &amp;#34;201909282300&amp;#34;,
      &amp;#34;enddate&amp;#34;: &amp;#34;20190929&amp;#34;,
      &amp;#34;url&amp;#34;: &amp;#34;/th?</description>
    </item>
    
    <item>
      <title>A movie news aggregator using Python, Flask &amp; Bootstrap</title>
      <link>https://debugthis.dev/flask/2019-09-29-using-python-flask-bootstrap-to-develop-a-movie-news-aggregator/</link>
      <pubDate>Sun, 29 Sep 2019 10:04:28 +0000</pubDate>
      
      <guid>https://debugthis.dev/flask/2019-09-29-using-python-flask-bootstrap-to-develop-a-movie-news-aggregator/</guid>
      <description>Introduction I like using Python a lot, and I like reading about movie news too.
I developed my own RSS aggregator so that, instead of visiting multiple sites daily to get my movie news, I visit just one.
Sure, I could have used an existing RSS aggregator service, but I wanted to understand how I would develop one myself.
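For a flavour of the core step, collecting item titles from a parsed feed can be sketched with just the standard library; the tiny in-memory feed here is a stand-in for a fetched movie-news feed, not the app's actual code:

```python
import xml.etree.ElementTree as ET

# Build a tiny sample feed in memory, standing in for a fetched RSS document.
rss = ET.Element("rss")
channel = ET.SubElement(rss, "channel")
for title in ["Trailer drops", "Casting news"]:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = title

def item_titles(root):
    """Collect every item title from a parsed RSS tree."""
    return [t.text for t in root.findall("channel/item/title")]

print(item_titles(rss))
```

An aggregator repeats this over each source feed and merges the results by date.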
In this post I&amp;rsquo;ll go over the steps on how I made my own application.</description>
    </item>
    
    <item>
      <title>Salesforce deployment status from Jenkins</title>
      <link>https://debugthis.dev/salesforce/2019-09-09-check-salesforce-deployment-from-jenkins/</link>
      <pubDate>Mon, 09 Sep 2019 19:17:08 +0000</pubDate>
      
      <guid>https://debugthis.dev/salesforce/2019-09-09-check-salesforce-deployment-from-jenkins/</guid>
      <description>Introduction There was a scenario on a project I worked on, which used multiple Jenkins Pipelines to carry out build validation and deployments in Salesforce.
There were times when a deployment or validation became stuck on the Salesforce environment. Salesforce cannot run parallel validations/deployments; it maintains a queue and holds new jobs until the current job has completed. Thus, there was a requirement to check the deployment status in Salesforce from a Jenkins Pipeline; if a job was in progress or queued, the overall Jenkins Pipeline would fail.</description>
    </item>
    
    <item>
      <title>WPF Google Docs Application</title>
      <link>https://debugthis.dev/archived/2009-10-08-wpf-google-docs-application/</link>
      <pubDate>Thu, 08 Oct 2009 08:38:55 +0000</pubDate>
      
      <guid>https://debugthis.dev/archived/2009-10-08-wpf-google-docs-application/</guid>
      <description>I have been developing a WPF application which allows a user to view, export and upload documents into their Google Docs account. Using the Google Docs .NET SDK, I was able to develop a WPF application from scratch. After the jump: some code snippets and screenshots of the application.
What is the Google Documents List API? The API allows a client to programmatically access data stored with the Google Documents account. The following functionality is provided by the API:</description>
    </item>
    
    <item>
      <title>A Silverlight Bing API Web Application: Server side (Part 2)</title>
      <link>https://debugthis.dev/archived/2009-09-11-a-silverlight-bing-api-web-application-server-side-part-2/</link>
      <pubDate>Fri, 11 Sep 2009 17:28:11 +0000</pubDate>
      
      <guid>https://debugthis.dev/archived/2009-09-11-a-silverlight-bing-api-web-application-server-side-part-2/</guid>
      <description>This is part 2 of the A Silverlight Bing API Web Application walk-through.
Silverlight development What I found instantly was that Silverlight development was much like developing a typical web application using ASP.net. One core difference was that whereas ASP.net uses XHTML to design the UI of the web content, Silverlight utilizes Extensible Application Markup Language (XAML). The layout and structure of both languages are identical in terms of XML formatting.</description>
    </item>
    
    <item>
      <title>A Silverlight Bing API Web Application: Working with JSON (Part 1)</title>
      <link>https://debugthis.dev/archived/2009-09-10-silverlight-bing-api-json/</link>
      <pubDate>Thu, 10 Sep 2009 18:05:41 +0000</pubDate>
      
      <guid>https://debugthis.dev/archived/2009-09-10-silverlight-bing-api-json/</guid>
      <description>icrosoft.com/en-us/library/dd251056.aspx). To be honest I had never used Silverlight to develop an application before (typically web applications with ASP.net and desktop client applications with Win-Forms). After viewing some of the Silverlight video tutorials I decided to dive straight into development. Microsoft Bing, if you haven&amp;rsquo;t heard of it, is Microsoft&amp;rsquo;s search engine site, in many ways similar to Google.
This is part 1 of a 2 part post about developing a Silverlight application using the Bing API.</description>
    </item>
    
    <item>
      <title>Cross-thread operation not valid</title>
      <link>https://debugthis.dev/archived/2009-08-21-cross-thread-operation-not-valid/</link>
      <pubDate>Fri, 21 Aug 2009 14:46:00 +0000</pubDate>
      
      <guid>https://debugthis.dev/archived/2009-08-21-cross-thread-operation-not-valid/</guid>
      <description>As I&amp;rsquo;ve been doing a lot of programming with win-forms, C# and C++, and making my applications multi-threaded, I came across the problem of allowing a newly created thread to change the UI. Below I will discuss how to do this in C#.
The error I came across was the following:
Exception
Cross-thread operation not valid: Control &amp;lsquo;control_name&amp;rsquo; accessed from a thread other than the thread it was created on.</description>
    </item>
    
    <item>
      <title>Multi-threaded architecture</title>
      <link>https://debugthis.dev/archived/2009-06-01-multi-threaded-architecture/</link>
      <pubDate>Mon, 01 Jun 2009 10:29:00 +0000</pubDate>
      
      <guid>https://debugthis.dev/archived/2009-06-01-multi-threaded-architecture/</guid>
      <description>As I am looking to add multi-threaded features to the RSS reader that I am developing, I have been researching multi-threaded architectures and what concurrency basically is. After the jump I will explain in detail what multi-threading is and how it can be incorporated into an application; I hope that if you&amp;rsquo;re a beginner like me when it comes to multi-threading, you will understand it better afterwards too.</description>
    </item>
    
    <item>
      <title>Connecting a C&#43;&#43; application to a SQL database</title>
      <link>https://debugthis.dev/archived/2009-02-02-connecting-a-c-application-to-a-sql-database/</link>
      <pubDate>Mon, 02 Feb 2009 10:43:00 +0000</pubDate>
      
      <guid>https://debugthis.dev/archived/2009-02-02-connecting-a-c-application-to-a-sql-database/</guid>
      <description>Connecting to a SQL Server database with either C# or VB is a straightforward process. However, the support for C++ is relatively poor. In this post I will provide a walk-through outlining the steps necessary to interact with a SQL database using C++.
When you create a new application using the C# language in Visual Studio there is an option to add a new data source - allowing you to specify a database object.</description>
    </item>
    
    <item>
      <title>Requesting a file using Firefox, XMLHttpRequest object and ASP.net</title>
      <link>https://debugthis.dev/archived/2009-01-17-requesting-a-file-using-firefox-xmlhttprequest-object-and-asp-net/</link>
      <pubDate>Sat, 17 Jan 2009 15:44:00 +0000</pubDate>
      
      <guid>https://debugthis.dev/archived/2009-01-17-requesting-a-file-using-firefox-xmlhttprequest-object-and-asp-net/</guid>
      <description>Here is a short sample of JavaScript code using the XMLHttpRequest object to request a file from an ASP.net server.
Please note that the JavaScript code relies on you using Firefox and running the ASP.net development server on localhost.
Create a new website in Visual Studio. Create a new folder (call it whatever you want) in the project using the solution explorer in Visual Studio. Navigate to the Default.aspx page. Once in the Default.</description>
    </item>
    
    <item>
      <title>Microsoft AJAX Control Toolkit</title>
      <link>https://debugthis.dev/archived/2009-01-15-microsoft-ajax-control-toolkit/</link>
      <pubDate>Thu, 15 Jan 2009 11:11:00 +0000</pubDate>
      
      <guid>https://debugthis.dev/archived/2009-01-15-microsoft-ajax-control-toolkit/</guid>
      <description>In an earlier post I outlined the XMLHttpRequest object and showed you JavaScript code to implement the API released by the W3C. However, there are other ways to incorporate Ajax into a web-page or a web-application.
This post will concentrate on the Microsoft Ajax Toolkit, a free toolkit released by Microsoft which makes Ajax a plug-and-play process.
After the jump I will describe the many components available in the toolkit.</description>
    </item>
    
    <item>
      <title>XmlHttpRequest object</title>
      <link>https://debugthis.dev/archived/2009-01-15-xmlhttprequest-object/</link>
      <pubDate>Thu, 15 Jan 2009 10:39:00 +0000</pubDate>
      
      <guid>https://debugthis.dev/archived/2009-01-15-xmlhttprequest-object/</guid>
      <description>Practically, how do we incorporate Ajax into a web-page or web application?
According to the World Wide Web Consortium (W3C) working draft specification, the XmlHttpRequest object is an Application Programming Interface (API) which “provides scripted client functionality for transferring data between a client and a server”. This means that it allows client-based scripting languages such as JavaScript to transport Extensible Markup Language (XML) as well as additional text data between a web server and a client’s web browser.</description>
    </item>
    
    <item>
      <title>Argotic Syndication Framework in use</title>
      <link>https://debugthis.dev/archived/2009-01-08-argotic-syndication-framework-in-use/</link>
      <pubDate>Thu, 08 Jan 2009 12:49:00 +0000</pubDate>
      
      <guid>https://debugthis.dev/archived/2009-01-08-argotic-syndication-framework-in-use/</guid>
      <description>Since the Argotic Syndication framework played a large part in my master&amp;rsquo;s dissertation and will play a part in the desktop-based RSS reader, I&amp;rsquo;ve devoted this post to it. As mentioned before, you can find out more about the framework here. Code snippets are shown in C#.
This section will cover the core set of features provided by the framework and incorporated into the RSS reader application. The C# code fragments shown concentrate only on RSS, to give the reader an insight into the code used in the application.</description>
    </item>
    
    <item>
      <title>From web to desktop: DAY 1</title>
      <link>https://debugthis.dev/archived/2009-01-08-from-web-to-desktop-day-1/</link>
      <pubDate>Thu, 08 Jan 2009 11:28:00 +0000</pubDate>
      
      <guid>https://debugthis.dev/archived/2009-01-08-from-web-to-desktop-day-1/</guid>
      <description>I am currently working on developing a desktop-based RSS reader application. Instead of a web application environment (see the post below) I am planning on using the win-form environment. Again, the primary language will be C#. If all goes to plan I&amp;rsquo;ll try to post some screenshots so you too can keep track of my progress. Screenshots from day 1 can be seen after the jump.</description>
    </item>
    
    <item>
      <title>Postgraduate university projects</title>
      <link>https://debugthis.dev/archived/2009-01-06-postgraduate-university-projects/</link>
      <pubDate>Tue, 06 Jan 2009 11:50:00 +0000</pubDate>
      
      <guid>https://debugthis.dev/archived/2009-01-06-postgraduate-university-projects/</guid>
      <description>Dissertation as part of postgraduate MSc in Advanced Computer Science.
Language used: C# and ASP.net with the Microsoft Ajax Toolkit, CSS, JavaScript and XHTML
Application:
Web-based Ajax RSS reader similar to the Google Reader application but with added functionality: podcast streaming and saving, Blogger editing integration, and creation of RSS feeds from any web page.
More info about project: My Masters dissertation, which spanned 3 months, again involved a software development project.</description>
    </item>
    
    <item>
      <title>Undergraduate university projects</title>
      <link>https://debugthis.dev/archived/2009-01-06-undergraduate-university-projects/</link>
      <pubDate>Tue, 06 Jan 2009 10:36:00 +0000</pubDate>
      
      <guid>https://debugthis.dev/archived/2009-01-06-undergraduate-university-projects/</guid>
      <description>Final year project as part of undergraduate BSc in Software Engineering.
Language used: C++
Application: Network News Reader NNTP
More info about project:
As part of my undergraduate degree in Software Engineering, my final year project involved the development of a win-forms style, networking-orientated client application, written in C++, which communicated with a remote server. The development process followed a waterfall model: requirements, design, implementation, testing and maintenance.</description>
    </item>
    
    <item>
      <title></title>
      <link>https://debugthis.dev/archived/about/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>https://debugthis.dev/archived/about/</guid>
      <description>$ Hello world!
My name is Rikesh.
I enjoy automation, scripting, deployments and DevOps.
By the way I hate doing things manually, so if I can I automate!
I&amp;rsquo;ve worked on a few side projects in my free time; here are a few of them.
TV Schedule web application
Tracks TV Shows I like to watch
Movie news aggregator
An RSS feed aggregator which tracks the latest movie news
Hacker News data visualization</description>
    </item>
    
    <item>
      <title>Search Results</title>
      <link>https://debugthis.dev/search/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      
      <guid>https://debugthis.dev/search/</guid>
      <description>This file exists solely to respond to /search URL with the related search layout template.
No content shown here is rendered, all content is based in the template layouts/page/search.html
Setting a very low sitemap priority will tell search engines this is not important content.
This implementation uses Fuse.js, jQuery and mark.js
Initial setup Search depends on an additional output content type of JSON in config.toml:
```
[outputs]
home = [&amp;#34;HTML&amp;#34;, &amp;#34;JSON&amp;#34;]
```</description>
    </item>
    
  </channel>
</rss>
