
docs: proxmoxve: add promoxve install doc #337

Closed
bitdriftr wants to merge 1 commit into flatcar:main from enix:proxmoxve

Conversation

@bitdriftr (Contributor) commented Jun 4, 2024

This PR adds a Proxmox VE install doc for the upcoming proxmoxve OEM release.

It explains how to create a VM using the provided image, and how to configure it using either cloud init or Ignition.

@bitdriftr bitdriftr marked this pull request as draft June 4, 2024 15:37
```bash
# THIS IS A DEVELOPMENT BUILD
# TODO: update link with an alpha build
wget http://bincache.flatcar-linux.net/images/amd64/9999.0.102+kai-proxmox-support/flatcar_production_proxmoxve_image.img.bz2 | bzip2 -d > flatcar_production_proxmoxve_image.img
```
Collaborator commented:

I will check whether we can directly provide an uncompressed image. The benefit of compression is not that important:

```bash
$ du --si flatcar_production_proxmoxve_image.img
548M	flatcar_production_proxmoxve_image.img
$ du --si flatcar_production_proxmoxve_image.img.bz2
546M	flatcar_production_proxmoxve_image.img.bz2
```

Collaborator commented:

Now done.

Suggested change:

```diff
-wget http://bincache.flatcar-linux.net/images/amd64/9999.0.102+kai-proxmox-support/flatcar_production_proxmoxve_image.img.bz2 | bzip2 -d > flatcar_production_proxmoxve_image.img
+wget http://bincache.flatcar-linux.net/images/amd64/9999.0.102+kai-proxmox-support/flatcar_production_proxmoxve_image.img
```

@fhemberger (Contributor) left a comment:

Complete process for me:

```bash
curl -sS http://bincache.flatcar-linux.net/images/amd64/9999.9.100+kai-proxmox-support/flatcar_production_proxmoxve_image.img.bz2 | bzip2 -d > flatcar_production_proxmoxve_image.img

export VM_ID=123
qm create $VM_ID --cores 2 --memory 4096 --net0 "virtio,bridge=vmbr0" --ipconfig0 "ip=dhcp"

# On my Proxmox "out-of-the-box" installation, images are stored on `local-lvm` instead of `local`
qm disk import $VM_ID flatcar_production_proxmoxve_image.img local-lvm

qm set $VM_ID --scsi0 local-lvm:vm-$VM_ID-disk-0
qm set $VM_ID --boot order=scsi0

# Create the cloud-init CD-ROM drive which activates the cloud-init options for the VM.
qm set $VM_ID --ide2 local-lvm:cloudinit

# SKIPPED MANUAL STEP: paste the Ignition config into /var/lib/vz/snippets/user-data

qm set $VM_ID --cicustom "user=local:snippets/user-data"
```

```bash
# THIS IS A DEVELOPMENT BUILD
# TODO: update link with an alpha build
wget http://bincache.flatcar-linux.net/images/amd64/9999.0.102+kai-proxmox-support/flatcar_production_proxmoxve_image.img.bz2 | bzip2 -d > flatcar_production_proxmoxve_image.img
```
Contributor commented:

`wget <URL> | bzip2 -d > …` pipes the output of wget itself to bzip2, not the downloaded file (that would be `wget -O - <URL>`); replacing wget with curl would be better.
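The difference is easy to demonstrate without the real image: for the pipe to work, the compressed bytes have to arrive on stdout. A minimal local sketch (a stand-in file replaces the download, so no network is needed):

```bash
# Create a stand-in for the downloaded .bz2 archive
printf 'flatcar' > image.img
bzip2 image.img                    # replaces image.img with image.img.bz2

# Correct pattern: compressed bytes on stdout, decompressed into a file.
# `cat image.img.bz2` stands in for `curl -sS <URL>` (or `wget -O - <URL>`).
cat image.img.bz2 | bzip2 -d > image.img
cat image.img
```

By default `wget` writes the file to disk and prints only progress messages on its streams, which is why piping it without `-O -` decompresses nothing useful.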

In order to create a VM with our image, we'll need to use the command line. Open a shell on the hypervisor, download the image and convert it to RAW:

```bash
qemu-img convert -f qcow2 -O raw flatcar_production_proxmoxve_image.img flatcar_production_proxmoxve_image.bin
```
Contributor commented:

Converting the image is not required when you use the terminal: `qm disk import` can import the qcow2 file directly.

```bash
qm disk import $VM_ID flatcar_production_proxmoxve_image.bin local

# tell the vm to boot from the imported image
qm set $VM_ID --scsi0 local:$VM_ID/vm-$VM_ID-disk-0.raw
```
Contributor commented:

Proxmox 8.2 uses a different notation:

```bash
qm set $VM_ID --scsi0 local-lvm:vm-$VM_ID-disk-0
```

```bash
export VM_ID=123

# create the vm and import the image to its disk
qm create $VM_ID
```
Contributor commented:

Should include a network interface and probably some CPU/RAM settings:

```bash
qm create $VM_ID --cores 2 --memory 4096 --net0 "virtio,bridge=vmbr0" --ipconfig0 "ip=dhcp"
```


```bash
# create the vm and import the image to its disk
qm create $VM_ID
qm disk import $VM_ID flatcar_production_proxmoxve_image.bin local
```
Contributor commented:

`local-lvm` supports disks; `local` seems to support only ISOs.


## Configuring the VM with an OpenStack-style cloud-init config

Our VM can be booted as-is, however we might want to add a cloud-init configuration.
Contributor commented:

The VM I booted without a config drive does not boot: it stops at the initrd stage and drops into an emergency shell because the coreos-metadata service fails.

@fhemberger (Contributor) commented Oct 25, 2024:

Fixed by:

```bash
# Create the cloud-init CD-ROM drive which activates the cloud-init options for the VM.
qm set $VM_ID --ide2 local-lvm:cloudinit
```

- Setting hostname (hostname is always $VM_ID)
- Writing SSH keys
- Writing network configuration

Contributor commented:

A small note should be added: if a username and password are set, it will not work.

Contributor commented:

Rather a big note. I've been messing around with the settings a bit:

  • Setting username, password and SSH key via the Proxmox GUI, providing an ignition.json with only the "ignition" stanza: doesn't work, Flatcar doesn't initialize.
  • Setting username, password and SSH key via the Proxmox GUI, setting the same username (w/o pw/key) in Ignition: user settings are not merged, no login possible.
  • Setting only the SSH key (leaving username/password empty), user core w/o SSH key in ignition.json: user settings are not merged, no login possible.
  • Hostname is not the VM name/ID; the console still shows localhost when no /etc/hostname is provided in ignition.json.

Not saying it should be supported at this point, but this will definitely be an issue people run into. Best would be to point out to use the Ignition config only and not the cloud-init settings in the GUI.
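As an illustration (a sketch, not part of the PR), an Ignition config that sets the SSH key and the hostname itself, instead of relying on the GUI fields, could look like the following; the key and hostname values are placeholders:

```json
{
  "ignition": { "version": "3.3.0" },
  "passwd": {
    "users": [
      {
        "name": "core",
        "sshAuthorizedKeys": ["ssh-ed25519 AAAA... user@example"]
      }
    ]
  },
  "storage": {
    "files": [
      {
        "path": "/etc/hostname",
        "mode": 420,
        "contents": { "source": "data:,flatcar-proxmox" }
      }
    ]
  }
}
```

Writing `/etc/hostname` directly sidesteps the "console still shows localhost" symptom described above, and defining the `core` user in Ignition avoids depending on the GUI user settings being merged.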

Contributor commented:

It's also worth adding a kernelArguments entry with autologin here, just to have a friendlier environment for testing this out.
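For example (a minimal sketch, assuming an Ignition spec that supports kernel arguments, i.e. 3.3.0 or later), the autologin kernel argument can be requested from the Ignition config itself:

```json
{
  "ignition": { "version": "3.3.0" },
  "kernelArguments": { "shouldExist": ["flatcar.autologin"] }
}
```

`flatcar.autologin` makes the console log in automatically, which is convenient while debugging provisioning but should not be left enabled in production.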

Contributor commented:

Also, the cloud-init config drive needs to be added in order for the snippet to work.

Contributor commented:

This worked for me with Ignition:

```bash
export VM_ID=124

# create the vm and import the image to its disk
qm create $VM_ID --cores 2 --memory 4096 --net0 "virtio,bridge=vmbr0" --ipconfig0 "ip=dhcp"
qm disk import $VM_ID flatcar_production_proxmoxve_image.bin local-lvm

# tell the vm to boot from the imported image
qm set $VM_ID --scsi0 local-lvm:vm-$VM_ID-disk-0
qm set $VM_ID --boot order=scsi0

qm set $VM_ID --ide2 local-lvm:cloudinit

qm set $VM_ID --cicustom "user=local:snippets/user-data"

qm start $VM_ID
```

@tormath1 (Collaborator) commented:

Hello @arcln the image is now available - are you still interested in continuing this PR? That'd be awesome for users to know how to deploy Flatcar on Proxmox.

```bash
# THIS IS A DEVELOPMENT BUILD
# TODO: update link with an alpha build
wget http://bincache.flatcar-linux.net/images/amd64/9999.0.102+kai-proxmox-support/flatcar_production_proxmoxve_image.img.bz2 | bzip2 -d > flatcar_production_proxmoxve_image.img
```
Collaborator commented:

Suggested change:

```diff
-wget http://bincache.flatcar-linux.net/images/amd64/9999.0.102+kai-proxmox-support/flatcar_production_proxmoxve_image.img.bz2 | bzip2 -d > flatcar_production_proxmoxve_image.img
+wget https://alpha.release.flatcar-linux.net/amd64-usr/current/flatcar_production_proxmoxve_image.img
```

@tormath1 (Collaborator) commented:

Hello @arcln and best wishes for the new year. I am circling back to this PR. If you don't have spare cycles to work on it, would you agree to let the community or the maintainers take it over, of course crediting you as the initial author of the commit? Let us know :)

@tormath1 (Collaborator) commented:

This PR can be closed in favor of: #445 - thanks @arcln for the initial push and @fhemberger for taking over 🚀
