This repository was archived by the owner on May 12, 2021. It is now read-only.

tooling for a customized kernel #77

@egernst

Description

Background

Over the last week I've worked through a couple of different e2e use cases that involve device passthrough for 'interesting' container workloads. Examples include:

  • remote writes to NVMe over Fabrics via RDMA (mlx NIC)
  • OpenCL application: run hashcat on an NVIDIA GPU

In both cases, the setup involves loading a software stack on the system that ultimately relies on out-of-tree kernel drivers, or at least uses DKMS to apply the latest drivers required. This works well if you "own" the host or virtual machine, but it isn't very straightforward with our statically configured Kata kernel/image. I think Kata Containers provides a great benefit here: the guest kernel can have these features enabled, taking the burden away from the host system and/or IaaS/VM provisioning. We need to support this well, though.

Feature request

I think we should streamline, or at least document, the workflow of pulling particular DKMS / out-of-tree drivers, building them against the Kata kernel, and including and automatically loading them from our kata-containers image.
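A rough sketch of what that build step could look like, using the standard out-of-tree kbuild idiom. The paths here (`KATA_KERNEL_SRC`, `DRIVER_SRC`, `ROOTFS`) are assumptions for illustration, not existing Kata tooling, and the `make` invocations are echoed as a dry run:

```shell
# Hypothetical paths -- adjust to wherever the Kata kernel tree and the
# out-of-tree driver source actually live.
KATA_KERNEL_SRC=${KATA_KERNEL_SRC:-/usr/src/kata-linux-headers}
DRIVER_SRC=${DRIVER_SRC:-./nvidia-driver-src}
ROOTFS=${ROOTFS:-./kata-rootfs}   # unpacked kata-containers image

# Standard kbuild invocation for an out-of-tree module: point the kernel
# build system at the Kata kernel tree, and at the module source via M=.
echo make -C "$KATA_KERNEL_SRC" M="$DRIVER_SRC" modules

# Install the resulting .ko into the guest rootfs rather than the host,
# so it ships inside the kata-containers image:
echo make -C "$KATA_KERNEL_SRC" M="$DRIVER_SRC" INSTALL_MOD_PATH="$ROOTFS" modules_install
```

For a DKMS-packaged driver the same idea applies: the build just needs to be told to compile against the Kata kernel version instead of the running host kernel.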

Getting graphics card device passthrough working with an nvidia.ko should be an easy first example, I think?
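For the automatic-loading half, one minimal sketch (assuming the guest image runs systemd, and using the same hypothetical `ROOTFS` path) is to register the module in the unpacked image via modules-load.d:

```shell
ROOTFS=${ROOTFS:-./kata-rootfs}   # hypothetical path to the unpacked image

# systemd-modules-load reads /etc/modules-load.d/*.conf at boot, one module
# name per line, so the driver comes up without any runtime modprobe step.
mkdir -p "$ROOTFS/etc/modules-load.d"
echo nvidia > "$ROOTFS/etc/modules-load.d/nvidia.conf"
```

An image that doesn't use systemd would need an equivalent hook (e.g. an init script calling modprobe), but the staging step is the same.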

Metadata

Assignees

No one assigned

    Labels

    enhancement (Improvement to an existing feature), feature (New functionality)
