
gport

this is my IaC for my personal projects

Hetzner cluster

I use Hetzner as my cloud provider and create a Kubernetes cluster using k3s hosted on non-dedicated (shared vCPU) servers.
this part is managed via Terraform and the terraform-hcloud-kube-hetzner module.
it lives in the /hcloud-cluster folder.

usage

  1. set up terraform variables:

     cp hcloud_cluster/terraform.tfvars.template hcloud_cluster/terraform.tfvars

     then fill the file with your values, each variable has a comment explaining how to obtain it.

  2. follow the kube-hetzner module installation instructions
  3. run terraform apply:

     terraform apply

  4. it will take a while to create the cluster, once done you can get the kubeconfig with:

     terraform output -raw kubeconfig > ./kubeconfig
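with the kubeconfig exported, you can point kubectl at the new cluster to check that everything came up (a quick sanity check, nothing project-specific):

```shell
# use the freshly exported kubeconfig for this shell session only
export KUBECONFIG=./kubeconfig

# verify the control-plane and agent nodes joined the cluster and are Ready
kubectl get nodes -o wide
```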

what gets created

  • cloudflare records for kubernetes api and grafana dashboard
  • a control-plane node pool with 3 nodes (recommended server type is at least cpx21, since 4 GB of RAM is the minimum needed in most cases to handle the cluster well)
  • an agent node pool for lightweight applications and core kubernetes services (the nodes are called agent-sailboat)
  • an autoscaler agent node pool for general purpose applications (called agent-as-sailboat)
  • 2 Hetzner load balancers, one for the control plane and one for the agent nodes
  • all nodes run openSUSE MicroOS

kubernetes wise (installed directly via the kube-hetzner Terraform module):

  • calico as the CNI

  • nginx as the ingress controller

  • longhorn for efficient and scalable storage management
    it provides fast persistent storage for stuff like DBs.
    it pools the NVMe storage of all the nodes and manages it as one, giving you a simple StorageClass that you can use in your PVCs.

    it will only use the storage of nodes with the label node.longhorn.io/create-default-disk=true
    the default StorageClass name is longhorn

  • kured for automatically rebooting nodes after OS updates

  • cluster autoscaler (bless it)

  • smb support: in the future I want to use Hetzner Storage Boxes for hosting immich and other stuff
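as an example of how the longhorn StorageClass gets consumed, a minimal PVC could look like this (the claim name and size are illustrative, not from this repo):

```yaml
# example PVC backed by longhorn; name and size are illustrative
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-db-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn  # the default StorageClass installed by the module
  resources:
    requests:
      storage: 5Gi
```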

Kubernetes resources

Kubernetes resources are managed using ArgoCD in the /k8s-resources folder.

secrets management

Secrets cannot be committed to git, for this reason we use sealed secrets

  1. Install kubeseal on your local machine:

    brew install kubeseal
  2. Follow installation instructions for sealed-secrets

    kubectl apply -k ./sealed-secrets-installation
  3. Prepare secrets:
    In each folder under /k8s-resources there can be a /secrets folder.
    Each contains a *-secret.template.yaml file, duplicate it and remove the .template part from the new file name. Then fill out the values.

  4. Seal the secrets:

    chmod +x scripts/seal-secrets.sh && scripts/seal-secrets.sh

    (Recommended) You can also provide a specific folder to the script, instead of sealing all secrets: scripts/seal-secrets.sh cert-manager

  5. Add argocd sync wave if necessary:

     annotations:
       argocd.argoproj.io/sync-wave: '-1' # make sure it gets created before anything else
  6. Commit and push the changes
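under the hood, sealing boils down to piping the plain Secret manifest through kubeseal against the controller running in the cluster; a minimal sketch of what scripts/seal-secrets.sh presumably does (the file names and controller namespace are assumptions, check the script for the real values):

```shell
# encrypt a plain Secret manifest with the in-cluster sealed-secrets controller;
# the resulting SealedSecret is safe to commit to git
kubeseal \
  --controller-namespace kube-system \
  --format yaml \
  < my-app-secret.yaml \
  > my-app-sealed-secret.yaml
```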

dev note:
ArgoCD ignores subfolders when a folder is chosen as a source.
even so, I still decided to ignore secrets folders explicitly, 'cause you never know.

argocd

  1. install ArgoCD in the cluster:

    kubectl create namespace argocd
    kubectl apply -k ./argocd-installation --server-side

    and on your local machine:

    brew install argocd
  2. Configure two Nginx ingresses for HTTP/HTTPS and gRPC:

    kubectl apply -f ./argocd-installation/argocd-nginx-ingresses.yaml
  3. Log in via the CLI

     argocd admin initial-password -n argocd

     use username admin and the password from the previous command to log in

     argocd login grpc.argocd.giuliopime.dev

     then change the password and delete the initial admin secret

     argocd account update-password
     kubectl delete secret argocd-initial-admin-secret -n argocd
  4. Access the web UI at argocd.giuliopime.dev using the credentials created at the previous step

  5. Apply the resources in the /k8s-resources folder:

    kubectl apply -f ./argocd/gport.yaml
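for reference, the Application manifest in ./argocd/gport.yaml presumably looks something like the sketch below; the repoURL, path, and sync options are assumptions pieced together from the rest of this README, not a copy of the real file:

```yaml
# sketch of an ArgoCD Application pointing at the k8s-resources folder;
# check ./argocd/gport.yaml for the actual values
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: gport
  namespace: argocd
spec:
  project: default
  source:
    repoURL: git@github.com:Giuliopime/gport.git
    targetRevision: main
    path: k8s-resources
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```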

If you run into issues using Helm together with Kustomize, you may need to patch the ArgoCD config map to allow Helm usage from Kustomize:

kubectl patch configmap argocd-cm -n argocd --type merge \
  -p '{"data":{"kustomize.buildOptions":"--load-restrictor LoadRestrictionsNone --enable-helm"}}'

then restart argocd:

kubectl rollout restart deployment argocd-repo-server -n argocd

verify the config:

kubectl get configmap argocd-cm -n argocd \
  -o jsonpath='{.data.kustomize\.buildOptions}'
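the --enable-helm build option only matters when a kustomization.yaml inflates a Helm chart; a sketch of such a file (the chart, version, and values are illustrative, not from this repo):

```yaml
# kustomization.yaml that inflates a Helm chart;
# this only builds when kustomize runs with --enable-helm
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
helmCharts:
  - name: cert-manager            # illustrative chart
    repo: https://charts.jetstack.io
    version: v1.14.4
    releaseName: cert-manager
    namespace: cert-manager
    valuesInline:
      installCRDs: true
```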

adding a repository to ArgoCD

full guide

  1. create an SSH key:

     ssh-keygen -t ed25519 -C "argocd-myrepo" -f myrepo-deploy-key -N ""

  2. go to your GitHub repository -> Settings -> Deploy keys -> Add deploy key and paste the public key (myrepo-deploy-key.pub)
  3. open the ArgoCD repository settings and add the repository URL (example git@github.com:Giuliopime/gport.git) and the content of the private key file
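alternatively, the same repository can be registered from the CLI instead of the UI (the key path matches the ssh-keygen step above):

```shell
# register the repo in ArgoCD using the deploy key's private half
argocd repo add git@github.com:Giuliopime/gport.git \
  --ssh-private-key-path ./myrepo-deploy-key
```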

suggested tools / resources
