Alien Pastures

My thoughts on Slackware, life and everything

Slackware Cloud Server Series, Episode 11: Jukebox Audio Streaming

I am an avid music lover. My tastes are eclectic; I enjoy electronic, industrial, punk, new wave, reggae and dub, but also baroque and classical music. I used to tape my own music cassettes when I was young, sharing my mixes with friends. I have hundreds of vinyl albums and many more compact discs. But technology kept evolving and I switched to MP3 files that I could store on my computer and play using VideoLAN VLC for instance. But I also want to be able to just listen to my music in the living room without operating a laptop, and for that I set up a streaming server that acts as a jukebox, continuously picking random songs from my collection and playing them from a queue that never empties. In the living room I have a Denon AVR-X2300W which can pick up the network stream.
I have been running this audio streaming server for decades. First using OTTO, then Calliope, and then coming back to OTTO after it had re-invented itself. But Calliope and OTTO are no longer maintained and were quite tricky to set up in the first place. I am not looking forward to migrating this unsupported setup to Slackware 15.1 when that gets released and I move my server to it.

I went on a search for a modern, maintained and open source alternative for my OTTO server.
I actually set up Mopidy with the Pibox extension to get the jukebox functionality. Recompiling Slackware’s gst-plugins-good package against libshout enables the libgstshout2.so library, which provides ‘shout2send‘, which in turn streams audio from Mopidy to my Icecast server. Setting it all up was not trivial and I did not like how the Pibox extension handled the queue autofill. I continued my search for a good OTTO alternative and I hope I found it.

In this episode of the Slackware Cloud Server series I will show you how to stream your personal MP3 collection via Icecast using the open source platform AzuraCast. A worthy addition to your Slackware Cloud Server, as a service to yourself, friends, family or even your local community.


Check out the list below which shows past, present and future episodes in my Slackware Cloud Server series. If the article has already been written you’ll be able to click on the subject.
The first episode also contains an introduction with some more detail about what you can expect.


Introduction


What is AzuraCast?

AzuraCast is a self-hosted, all-in-one web radio management suite consisting of multiple independent but co-operating components:

  • Liquidsoap
    This is the automation engine that fills your play queue, handles scheduling and song rotation, and feeds the stream source, re-encoding if needed.
  • Icecast-KH
    A maintained fork of Icecast that handles the actual audio streaming to listeners.
  • A PHP/Vue web application
    The management interface where you control everything: upload music, browse your library, configure playlists, handle listener requests, check analytics.
  • MariaDB server
    Stores the song metadata, play history and playlists.
  • Redis
    Runtime memory store for the session cache and queue state.

We will be running the whole stack in Docker, making it self-contained regardless of your Slackware version. The local directory tree containing your music library will be bind-mounted inside the Docker container.
We will set up the Apache HTTP server as a reverse proxy so that we can access the Management UI securely via HTTPS. We will also proxy the Icecast stream on a standard port, so that listeners do not have to connect to your Icecast mount point via e.g. ‘http://yourserver:8000/radio.mp3’ but rather via a regular URL like ‘https://yourserver/yourchannel’.
Any player that speaks the Icecast protocol (VLC, mpv, foobar2000, mpc, every browser, and surely many more) will be able to play back your music.

What makes AzuraCast the right solution?

When searching for a replacement I had several requirements in mind that a new program should meet. AzuraCast ticks more boxes than any of the other solutions I encountered and/or tested:

  • Should be able to handle tens of thousands of songs effortlessly
    AzuraCast indexes my library in a MariaDB database. 50,000+ tracks is a documented use-case.
    I can tell you from experience, however, that it takes a day or two to get 50K songs indexed.
  • Continuously fills the play queue (it must never be empty)
    The Liquidsoap AutoDJ has configurable rotation and fills the queue automatically.
  • Manual song requests should be added to the head of queue
    AzuraCast has a ‘Listener Requests’ feature. It’s available via its web UI but also as a REST API.
    It should be possible to configure the AutoDJ in such a way that user requests are immediately placed at the head of the queue; unfortunately I have not yet found out how. Because the AutoDJ pushes the tracks in its queue to the Icecast server immediately, any user request will be scheduled after that queue, not before it. I need to keep the queue length (which is configurable) at a minimum value of 2 to make the experience acceptable.
  • Web-based management interface
    AzuraCast comes with a full-featured, mobile-friendly web UI with lots of analytics, logging and debugging tools.
  • Auto-detect or manually re-scan for new music files
    It can do both: it runs an internal background task (with configurable interval) to scan for new music regularly, but there’s also a command-line option to re-scan your entire music collection.
  • Stream output via Icecast protocol
    Icecast-KH is the native output; every mount point is a standard Icecast stream.

Architecture overview


Preamble

This section describes the technical details of our setup, as well as the things which you should have prepared before trying to implement the instructions in this article.

Web Hosts

For the sake of this instruction, I will use the hostname “https://radio.darkstar.lan” as your landing page for AzuraCast.
The URL for the Icecast stream will be “https://radio.darkstar.lan/lowlands“.

Setting up your domain (which will hopefully be something else than “darkstar.lan”…) with new hostnames and then setting up web servers for the hostnames in that domain is an exercise left to the reader. Before continuing, please ensure that your equivalent for the following host has a web server running. It doesn’t have to serve any content yet but we will add some blocks of configuration to the VirtualHost definition during the steps outlined in the remainder of this article:

  • radio.darkstar.lan

Using a Let’s Encrypt SSL certificate to provide encrypted connections (HTTPS) to your webserver is documented in an earlier blog article.

Note that I am talking about webserver “hosts” but in fact, all of these are just virtual webservers running on the same machine, at the same IP address, served by the same Apache httpd program, but with different DNS entries. There is no need at all for multiple computers when setting up your Slackware Cloud server.
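If you need a starting point for that VirtualHost, a minimal skeleton could look like the following. This is only a sketch: the DocumentRoot, log paths and certificate locations are placeholders (the certificate paths assume a default Let’s Encrypt layout), so substitute your own.

```apache
<VirtualHost *:443>
    ServerName radio.darkstar.lan
    # Placeholder paths -- adapt to your own server layout:
    DocumentRoot /srv/httpd/radio.darkstar.lan/htdocs

    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/radio.darkstar.lan/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/radio.darkstar.lan/privkey.pem

    ErrorLog /var/log/httpd/radio.darkstar.lan-error_log
    CustomLog /var/log/httpd/radio.darkstar.lan-access_log combined

    # The reverse proxy configuration blocks from this article are added here.
</VirtualHost>
```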

Docker network

  • We assign a Docker network segment to our AzuraCast container: 172.24.0.0/24, and call it “azuracast.lan”

File Locations

  • The Docker configuration goes into: /usr/local/docker-azuracast/
  • The data generated by the AzuraCast server goes into: /opt/dockerfiles/azuracast/

Port numbers

Since everything runs in a Docker container, all services listen at the localhost address 127.0.0.1.

  • AzuraCast Web UI in Docker: listens at TCP port 81
  • Icecast mount points: we will open two (for two audio streams) that listen at TCP ports 8001 and 8002

Station name

  • Our Icecast Radio Station will be called: Alien Pastures Radio
  • Inside AzuraCast (primarily used to create the directory to mount the media) this name is trivially translated to: alien_pastures_radio
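As a side note, the “lowercase with underscores” translation can be mimicked with a tiny shell function. This is my own approximation of the scheme shown above, not AzuraCast’s exact algorithm:

```shell
# Approximate AzuraCast's "short name" derivation: lowercase everything,
# collapse runs of non-alphanumerics into single underscores, and strip
# leading/trailing underscores. An approximation, not the exact algorithm.
station_short_name() {
  echo "$1" | tr '[:upper:]' '[:lower:]' \
            | sed -e 's/[^a-z0-9]\+/_/g' -e 's/^_//' -e 's/_$//'
}

station_short_name "Alien Pastures Radio"   # prints: alien_pastures_radio
```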

Installation

AzuraCast’s recommended installation method uses a helper script that downloads the Docker Compose configuration, fires up the container, and allows for post-installation maintenance and management. We are not going to use that.
Still, in order for you to be able to switch effortlessly to AzuraCast’s officially suggested Docker setup, I will follow their recommendations and add all of the customization into a separate ‘override‘ file for Docker Compose.

Docker network

Create the network using the following command:

docker network create \
  --driver=bridge \
  --subnet=172.24.0.0/24 --gateway=172.24.0.1 \
  azuracast.lan

Docker’s gateway address in any network segment always ends in “.1”.
Select an as-yet unused network range for this subnet. You can find out which subnets are already defined for Docker by running this command:

# ip route |grep -E '(docker|br-)'

The ‘azuracast.lan‘ network you created will be represented in the AzuraCast ‘docker-compose.yml‘ file with the following code block:

networks:
  azuracast.lan:
    external: true

Directories

Create a directory structure for AzuraCast as a Docker container. We’ll change ownership of two of those directories: backups and stations. The UID/GID numbers I use (1000:1000 in the command below) must correspond to the values you define for AZURACAST_PUID and AZURACAST_PGID in the ‘.env‘ file you will create in the next step. If you omit that ‘chown‘ step, AzuraCast will not be able to save Station information nor will it be able to create backups of its SQL database.

# mkdir -p /opt/dockerfiles/azuracast/{backups,db_data,stations,storage}
# chown 1000:1000 /opt/dockerfiles/azuracast/{backups,stations}
# mkdir -p /usr/local/docker-azuracast
# cd /usr/local/docker-azuracast

Configuration files

Download the example Docker environment file from https://raw.githubusercontent.com/AzuraCast/AzuraCast/stable/sample.env and store it under the name ‘.env‘.

Note that I download all files from the ‘stable’ branch of the AzuraCast git repository. You could also try the ‘main’ branch if you like to live on the edge.

The container ships with its own internal Nginx proxy which also takes care of the SSL certificates, but I want my own host server’s Apache HTTP daemon to take care of the reverse-proxying. All you need to do to disable Nginx is change the “AZURACAST_HTTPS_PORT” value from the default “443” to something else… which is necessary anyway, because port “443” is already in use on our server.
Likewise, we need to change “AZURACAST_HTTP_PORT” from its default value of “80”, because that is where our own Apache server is listening next to port 443.
After making the necessary changes we end up with an ‘.env‘ file containing (substitute your own values for the example values if you want):

# Make it easier to manage the project in Docker Compose:
COMPOSE_PROJECT_NAME=azuracast
# Define network ports:
AZURACAST_HTTP_PORT=81
AZURACAST_HTTPS_PORT=8412
AZURACAST_SFTP_PORT=2022
# We stick to the 'stable' channel instead of 'latest'
AZURACAST_VERSION="stable"
# If you start docker as your own user instead of root, change these to your own UID/GID
AZURACAST_PUID=1000
AZURACAST_PGID=1000

The ‘.env‘ file above is a configuration file which is read and used by Docker Compose to set up the container. We are now going to create a second configuration file called ‘azuracast.env‘, containing the data that AzuraCast itself needs in order to function. Get the example file from https://raw.githubusercontent.com/AzuraCast/AzuraCast/stable/azuracast.sample.env and tailor it to your needs.
In the end ‘azuracast.env‘ should look like this (lots of comments and default values removed):

APPLICATION_ENV=production
COMPOSER_PLUGIN_MODE=false
AUTO_ASSIGN_PORT_MIN=8000
AUTO_ASSIGN_PORT_MAX=8001
SHOW_DETAILED_ERRORS=false
MYSQL_PASSWORD=azur4c457
MYSQL_RANDOM_ROOT_PASSWORD=yes

The two AUTO_ASSIGN_PORT values for the lower and upper range of the Icecast ports correspond to the values I defined earlier in the ‘Preamble‘ section. A range of two ports means that this setup supports two independent Icecast streams.

The docker-compose.yml file

Get the ‘docker-compose.yml‘ file from https://raw.githubusercontent.com/AzuraCast/AzuraCast/stable/docker-compose.sample.yml
After removing the sections I don’t need (they enable the official “docker.sh” script to perform maintenance and upgrades), the file will look like this:

# If you need to customize this file, you can create a new file named:
# docker-compose.override.yml
# with any changes you need to make.
#
name: azuracast

services:
  web:
    container_name: azuracast
    image: "ghcr.io/azuracast/azuracast:${AZURACAST_VERSION:-latest}"
    # Want to customize the HTTP/S ports? Follow the instructions here:
    # https://www.azuracast.com/docs/administration/docker/#using-non-standard-ports
    ports:
      - '${AZURACAST_HTTP_PORT:-80}:${AZURACAST_HTTP_PORT:-80}'
      - '${AZURACAST_HTTPS_PORT:-443}:${AZURACAST_HTTPS_PORT:-443}'
      - '${AZURACAST_SFTP_PORT:-2022}:${AZURACAST_SFTP_PORT:-2022}'
      - '8000-8001:8000-8001'
    env_file:
      - azuracast.env
      - .env
    volumes:
      - station_data:/var/azuracast/stations
      - backups:/var/azuracast/backups
      - db_data:/var/lib/mysql
      - www_uploads:/var/azuracast/storage/uploads
      - shoutcast2_install:/var/azuracast/storage/shoutcast2
      - stereo_tool_install:/var/azuracast/storage/stereo_tool
      - rsas_install:/var/azuracast/storage/rsas
      - geolite_install:/var/azuracast/storage/geoip
      - sftpgo_data:/var/azuracast/storage/sftpgo
      - acme:/var/azuracast/storage/acme
    networks:
      - azuracast.lan
    restart: unless-stopped
    ulimits:
      nofile:
        soft: 65536
        hard: 65536
    logging:
      options:
        max-size: "1m"
        max-file: "5"

volumes:
  db_data: { }
  acme: { }
  shoutcast2_install: { }
  stereo_tool_install: { }
  rsas_install: { }
  geolite_install: { }
  sftpgo_data: { }
  station_data: { }
  www_uploads: { }
  backups: { }

networks:
  azuracast.lan:
    external: true

This docker-compose.yml file defines a number of Docker Volumes. Don’t mind those; AzuraCast needs them but we don’t. We will mount a local directory containing our music library into the AzuraCast container later, and make sure that the Station data is all written to a host directory as well.

Note:
If you want, you can use the downloaded original of ‘docker-compose.yml’ instead of my truncated version above. There’s almost no difference in execution or functionality (except for the custom network and the TCP port range it opens for the hundreds of potential streaming channels), but the original file is much too large to copy into this article.

You need to be aware that the downloaded official Docker Compose configuration file opens a bunch of TCP ports in the range of 8000 to 8500. That’s one TCP port per music stream you want to create, so the default configuration allows for a total of 500 different streams or ‘stations‘. I only run a few streams myself, and I used that in the above modification, which opens only 2 ports. You may want to increase the number of possible streams on one AzuraCast instance. You can do that by editing the ‘docker-compose.yml‘ file, but since we’re going to use it anyway further down, I want to draw your attention to the option of creating a new file named ‘docker-compose.override.yml‘ in the same directory, next to your docker-compose.yml and .env files.

I will show you how you can increase the Icecast listen ports from 2 to a total of 100 ports, aka audio streams. You can modify the port range in this file to meet your needs, for instance expanding the range to port 8500 instead of 8099.
Let’s also add an override for the Station data storage. We want to use a local directory instead of a number of Docker volumes. To the ‘docker-compose.override.yml‘ file, add a ‘web‘ service just like in the actual Docker Compose file above, and then add a ‘ports‘ and a ‘volumes‘ section so that the file looks like this:

services:
  web:
    ## OPTIONALLY: Add more ports, each port supports one radio station:
    #ports:
    #  - "8002-8099:8002-8099"
    # Store all Station data on the host:
    volumes:
      - /opt/dockerfiles/azuracast/stations:/var/azuracast/stations
      - /opt/dockerfiles/azuracast/backups:/var/azuracast/backups
      - /opt/dockerfiles/azuracast/db_data:/var/lib/mysql
      - /opt/dockerfiles/azuracast/storage:/var/azuracast/storage

You will probably have noticed that the host directories (the paths to the left of the ‘:’ colon) are the same directories that we manually created in an earlier step.

Ready for lift-off!

We have not yet added any audio library to the Docker configuration. That is because we don’t have all the required information yet, and the missing piece needs to be arranged from within AzuraCast. So let’s start it!
When you start the container for the first time, it will take a few minutes because Docker will be downloading several hundred megabytes of container layers. Subsequent updates will be much faster:

# cd /usr/local/docker-azuracast
# docker compose up -d
# docker compose logs -f

AzuraCast is implemented as a single Docker image which contains all the functionality (streaming server, web-UI, database etc). Historical releases used separate Docker containers for the various components of the streaming platform. This single container implementation means for instance that the internal MariaDB database is not exposed to the host at all, therefore the password for the “azuracast” database user does not need to change from the default in the configuration file.
Still, AzuraCast automatically generates a random password for the MariaDB ‘root’ user upon the first database spin-up. This password is only visible in the container’s logs during that first startup, so be sure to look there and write it down (look for the string “GENERATED ROOT PASSWORD“). Alternatively you can set “MYSQL_RANDOM_ROOT_PASSWORD=no” in the file ‘azuracast.env‘ and add an extra line defining “MYSQL_ROOT_PASSWORD=your_secret_dbroot_password“.

Note:
Possibly the MYSQL_ROOT_PASSWORD variable needs to be called MARIADB_ROOT_PASSWORD
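To avoid scrolling through the full log output looking for that password, a small helper can filter it out. This is a sketch: the exact log phrasing may differ between AzuraCast versions, so adjust the pattern if your logs look different.

```shell
# Print only the generated MariaDB root password from piped-in log text.
# The "GENERATED ROOT PASSWORD" marker is taken from the container logs;
# adapt the pattern if your version phrases it differently.
extract_db_root_password() {
  grep -i 'GENERATED ROOT PASSWORD' | sed 's/.*GENERATED ROOT PASSWORD:[[:space:]]*//'
}

# Usage:
# docker compose logs web | extract_db_root_password
```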

Initial web user-interface setup

We need to take a few more steps before we can complete our Docker configuration by adding a media library.

Upon the very first startup, AzuraCast will present you its Management Interface, where we will set a Station Name for the streaming server. The Station Name is what AzuraCast uses as its internal directory. For example, if the station is “Alien Pastures Radio“, the directory used inside the container will typically be “alien_pastures_radio“.

Open a browser on your host computer (the Cloud Server) and navigate to http://localhost:81/. Since we have not yet configured a reverse proxy, there’s no other way than to perform these initialization steps on the host.
You’ll be presented with the AzuraCast setup wizard which allows you to:

  1. Enter your email address to create your administrator account
    Choose a strong password; this is the master key to your station.
  2. Create your first Station
    Give it a memorable name (for instance “Alien Pastures Radio“), choose your streaming format (MP3 at 192 kbit/s is a reasonable starting point), and note the station’s “short name” that AzuraCast generates (which would be “alien_pastures_radio” in this example). This is the string you need for the ‘docker-compose.override.yml‘ later on.
  3. Set your time zone
    This is relevant for scheduled playlists and analytics.

After completing the steps in the wizard you end up on the main station management page (the screenshot was taken after I mounted my local music library, added that to a new playlist and connected the playlist to the Station).

Mount your existing music library

This is the most important configuration step. AzuraCast lives inside Docker, but your big MP3 collection lives on the host. You need to tell Docker to make that directory visible inside the container as a “bind mount”.

At this point we will use the station’s internal directory name which you wrote down when creating your station through the web UI in the previous section. Using the station name, create an override file named ‘docker-compose.override.yml‘ (or rather, merge it into the file which you created in an earlier step):

services:
  web:
    volumes:
      - /your/actual/path/to/mp3s:/var/azuracast/stations/alien_pastures_radio/media/remote/mp3:ro

Replace `/your/actual/path/to/mp3s` with the real path on your host (e.g. `/data/music`) and `alien_pastures_radio` with your actual station directory name. You’ll notice the “:ro” at the end of the internal directory path. This means that your media library will be mounted read-only.

Apply the change:

# cd /usr/local/docker-azuracast ; docker compose down ; docker compose up -d

Note #1:
If your music is spread across multiple directories, you can add multiple volume entries, each mounted under a different subdirectory of the station’s media path. AzuraCast will index all of them.
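For example, two separate host directories could be mounted side by side under the station’s media path like this (the host paths are illustrative; use your own):

```yaml
services:
  web:
    volumes:
      - /data/music/mp3:/var/azuracast/stations/alien_pastures_radio/media/remote/mp3:ro
      - /data/music/flac:/var/azuracast/stations/alien_pastures_radio/media/remote/flac:ro
```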

Note #2:
If you have symlinks inside of your media directory, AzuraCast will choke on them, because AzuraCast’s filesystem abstraction library (Flysystem) does not support them. What you can do instead is remove the symlink and create a bind mount in its place.
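One way to do that (an illustrative sketch with made-up paths): delete the symlink, create an empty directory where it used to be, and add a bind mount to /etc/fstab so it survives a reboot:

```
# /etc/fstab excerpt: bind-mount the real location over the former symlink
# (illustrative paths -- substitute your own)
/mnt/nas/collection2  /data/music/collection2  none  bind  0  0
```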

Note #3:
If you do not want AzuraCast to edit your media files, the recommended way is to mount your files one directory deeper than the media directory and bind-mount that instead.
As in the example I show above, you could mount the container’s internal directory ‘/var/azuracast/stations/alien_pastures_radio/media/remote/mp3′ as a read-only volume. That way AzuraCast can still use the media storage location to store cached metadata about the files, but your host filesystem can remain a read-only mount.
As for whether AzuraCast will actually write to media files: only when a user edits the metadata via the web UI. Those changes are written back to the file to ensure they persist, since most users expect that to be the case when editing tracks in the media editor. If you mount the filesystem read-only though, AzuraCast will quietly fail that write but still save the metadata changes to its database.

Configure the AutoDJ (auto-queue)

The AutoDJ is Liquidsoap, and its most important configuration is the playlist.
A playlist in AzuraCast is not a fixed list of songs — it is a source from which Liquidsoap draws tracks to fill the queue. The simplest configuration is a single playlist pointed at your entire library, set to shuffle ad infinitum.

Step 1: create a playlist for your Station

Navigate to ‘System Administration > Stations > [Your Station] > Manage > Playlists > Add Playlist‘:

  • Name: “My Library” (or whatever name you prefer)
  • Type: ‘Standard Playlist
  • Source: ‘Song-based
  • Playlist type: ‘General rotation
  • Song Playback Order: ‘Random
  • Weight: ‘1
  • Click ‘Save Changes

The “Song-based” source is key. It automatically includes every song in your media directory tree, with no manual maintenance required. You can keep adding music to your directory on the host, and once AzuraCast’s internal media scanner task executes, the new tracks become available to the AutoDJ.
We do of course still need to add media to this empty playlist. That’s what the next step will take care of.

Step 2: Connect your media to the Playlist you just created

Go to ‘System Administration > Stations > [Your Station] > Manage > Media > Music Files

  • Select the media directory/directories which you bind-mounted into your Docker container.
  • Click on ‘Playlists‘ and select ‘My Library‘ (or whatever name you gave it)
  • Click ‘Save‘.

Step 3: Enable the AutoDJ

Go to  ‘Stations > [Your Station] > Edit > AutoDJ‘:

  • AutoDJ Service: ‘Use LiquidSoap on this server’ is checked
  • Crossfade Method: ‘Smart Mode
  • Click ‘Save Changes’

Within a few seconds Liquidsoap will start filling the queue and the stream will begin playing.
You can come back here later and experiment with the Audio Processing section to improve the listener’s experience.

Enable listener requests

The listener request feature is how you can manually perform queue management when you want to override the AutoDJ.
When a request is submitted, AzuraCast inserts that track as the next-to-play item, ideally at the head of the randomized play queue which is maintained by the AutoDJ (but how to place the request at the actual head is the final puzzle piece I have not yet figured out).

To enable it, go to ‘Stations > [Your Station] > Edit

  • Under the ‘Profile‘ section,  select ‘Enable Public Pages
  • Under the ‘Song Requests‘ section, check ‘Allow Song Requests
  • Optionally set ‘Minimum Time Between Requests‘ to prevent the queue from being flooded
  • Click ‘Save Changes

Requests can be submitted in two ways:

  1. Via the public web page:
    AzuraCast generates a public-facing player page at ‘http://radio.darkstar.lan/public/alien_pastures_radio‘. This includes a search box that lets anyone (or just you) find a song and click “Request“.
  2. Via the ‘REST API’ (useful for automation or in case you want to write your own front-end):
    Use this curl command line to search for a song (replace STATION_ID with your Station’s numerical ID; your first Station has an ID of “1“). You can also find your station’s numeric ID in the URL when you’re on the station’s dashboard page.
    Note that this will return a long list of all your audio files in JSON format!
$ curl -s "http://radio.darkstar.lan/api/station/STATION_ID/requests" | python3 -m json.tool

Submit a specific request (replace SONG_ID with the numeric ID from the search results):

$ curl -X POST "http://radio.darkstar.lan/api/station/1/request/SONG_ID"

The full API documentation is available at ‘http://radio.darkstar.lan/api‘.
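To tame that long JSON list, you can pipe it through a small filter like this one. A sketch: the field names ‘request_id‘ and ‘song.title‘ are assumptions about the response shape, so verify them against your own API output first.

```shell
# Reduce the 'requests' JSON to "request_id - title" lines.
# The field names ('request_id', 'song.title') are assumptions;
# check your own API response before relying on them.
filter_requests() {
  python3 -c '
import json, sys
for r in json.load(sys.stdin):
    print(r["request_id"], "-", r["song"]["title"])
'
}

# Usage:
# curl -s "http://radio.darkstar.lan/api/station/1/requests" | filter_requests
```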

The Icecast output URL

AzuraCast automatically creates an Icecast mount point when you create a station. By default it will be accessible at: http://127.0.0.1:8001/radio.mp3

To change the default mount point name “radio.mp3” into something else, go to ‘System Administration > Stations > [Your Station] > Manage > Broadcasting (in the left sidebar) > Mount Points‘ and change the name there. As an example, we change it to “lowlands“. The “.mp3” extension is not needed at all.
To verify, go to ‘System Administration > Stations > [Your Station] > Manage > Overview (in the left sidebar)‘. In the “Streams‘ section of the overview you’ll see the mount points that Liquidsoap is publishing to Icecast, along with the listener count and current playing track.

Put it to the test

To listen to your new streaming server from the command line, use any program that supports the Icecast protocol: mpv, mplayer, vlc, mpc (if you want to feed the Icecast stream back into an MPD instance) etc:

$ mpv http://127.0.0.1:8001/lowlands

It works! Time to make this stream available outside of your host server and let family and friends enjoy your shiny new music station.

Apache reverse proxy (https)

Especially if your server is headless, you definitely want to manage AzuraCast over HTTPS using a normal URL instead of the localhost address. You may also want to expose the audio stream on port 443 instead of 8001 so that it passes any company firewall with ease. To achieve this we turn again to our trusty Apache HTTP server and set up a reverse proxy.

The flow is as follows: the user connects to the reverse proxy using HTTPS (encrypted connection) and the reverse proxy connects to the AzuraCast Docker container on the client’s behalf. Traffic between the reverse proxy (Apache httpd in our case) and the AzuraCast Docker container is un-encrypted and happens on the loopback address.
A reverse proxy is capable of handling many simultaneous connections and can be configured to offer SSL-encrypted connections to the remote users even when the backend can only communicate over clear-text un-encrypted connections.

Add the following reverse proxy lines to your VirtualHost definition of the “https://radio.darkstar.lan” web site configuration and restart httpd:

# ---
# Required modules:
# mod_proxy, mod_ssl, proxy_wstunnel, http2, headers, remoteip

# No caching
Header set Cache-Control "max-age=1, no-cache"

# Proxy configuration
<Proxy *>
    Allow from all
    Require all granted
</Proxy>

ProxyRequests Off
ProxyVia on
ProxyAddHeaders On
ProxyPreserveHost On
ProxyTimeout 900

# SSL configuration
<IfModule mod_ssl.c>
    SSLProxyEngine on
    RequestHeader set X-Forwarded-Proto "https"
    RequestHeader set X-Forwarded-Port "443"
</IfModule>

# Allow access to everyone
<Location />
    Allow from all
    Require all granted
</Location>

# Letsencrypt places a file in this folder when updating/verifying certs.
# This line will tell apache to not to use the proxy for this folder:
ProxyPass /.well-known !

# Reverse proxy for the Web UI at http(s)://radio.darkstar.lan/
ProxyPass / http://127.0.0.1:81/
ProxyPassReverse / http://127.0.0.1:81/

# And the reverse proxy for the Icecast stream playing at http(s)://radio.darkstar.lan/lowlands
<Location /lowlands>
    ProxyPass http://127.0.0.1:8001/lowlands
    ProxyPassReverse http://127.0.0.1:8001/lowlands
</Location>

# AzuraCast requires a WebSocket proxy
RewriteEngine on
RewriteCond %{HTTP:Connection} Upgrade [NC]
RewriteCond %{HTTP:Upgrade} websocket [NC]
RewriteRule ^/?(.*) "ws://127.0.0.1:81/$1" [P,L]

# ---

If you want to make your non-encrypted web address http://radio.darkstar.lan redirect automatically to the encrypted ‘https://‘ variant, be sure to add this block to its VirtualHost definition to ensure that Letsencrypt can still access your server’s challenge file via an un-encrypted connection:

<If "%{REQUEST_URI} !~ m#/\.well-known/acme-challenge/#">
    Redirect permanent / https://radio.darkstar.lan/
</If>

The hostname and TCP port numbers above are defined elsewhere in this article; keep them consistent if you decide to use a different hostname or port numbers.

Test and reload the Apache webserver configuration as follows:

# apachectl configtest
# apachectl graceful

MariaDB backups

The Web User-Interface of AzuraCast allows you to schedule regular backups of the SQL database which stores meta information about your Radio Station and media:
Go to ‘Administration > System Maintenance > Backups > Automatic Backups > Configure‘:

  • Check “Run Automatic Nightly Backups
  • Check “Exclude Media from Backup
  • Configure a time of the day, the archive format and the storage location (the default path in the dropdown is OK).
  • Click ‘Save Changes

Backup archives will be stored in the /opt/dockerfiles/azuracast/backups directory.
You can also manually backup the AzuraCast database and configuration with the following command:

# cd /usr/local/docker-azuracast
# docker compose exec web azuracast_cli backup --exclude-media /var/azuracast/backups/my-azuracast-backup.zip

This will create the backup ZIP file in your host’s local directory /opt/dockerfiles/azuracast/backups/.
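If you want an off-box copy of those nightly archives, a simple root crontab entry on the host could rsync them elsewhere. An illustrative sketch: ‘backuphost‘ and the destination path are placeholders for your own backup target.

```
# root crontab entry (illustrative): copy backup archives off-box at 06:30
30 6 * * * rsync -a /opt/dockerfiles/azuracast/backups/ backuphost:/srv/azuracast-backups/
```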

Troubleshooting

The stream is silent / Liquidsoap is not playing

Check that your playlist has the AutoDJ enabled and that the mounted directory actually contains indexed files.

  • Change to the docker directory:
    # cd /usr/local/docker-azuracast
  • Check Liquidsoap logs:
    # docker compose logs | grep -i liquidsoap | tail -50
  • Verify the media directory is visible inside the container:
    # docker compose exec web ls -la /var/azuracast/stations/alien_pastures_radio/media/remote/mp3/

AzuraCast cannot read my MP3 files

File ownership matters. The AzuraCast Docker container runs as UID 1000 by default. If your host music files are owned by a different UID, make sure that the user account inside the container can read them.
Fix this by either:

  • Make files world-readable
    # chmod -R o+r /your/actual/path/to/mp3s
  • Or: change the container’s UID to match your host user. In the ‘.env’ file that lives next to ‘docker-compose.yml’ you will find the two lines below. Change the UID/GID numbers from 1000 to those of your own user on the host:
    AZURACAST_PUID=1000
    AZURACAST_PGID=1000

Then restart the Docker stack.
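For completeness, this is roughly what that looks like on the host. The grep just confirms what is currently configured; the down/up cycle recreates the containers so the new UID/GID takes effect:

```shell
cd /usr/local/docker-azuracast

# Show which UID/GID the containers will run as (read from the .env file):
grep -E '^AZURACAST_P[UG]ID=' .env

# Recreate the stack so changed values take effect:
docker compose down
docker compose up -d
```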

The web UI shows “0 files” after mounting

If you mounted the directory after the initial station creation but before a rescan, trigger a manual rescan:

# cd /usr/local/docker-azuracast
# docker compose exec web azuracast_cli azuracast:media:reprocess

The above command triggers a rescan for all Stations. If you want to trigger the rescan only for your own Station, run this instead:

# docker compose exec web azuracast_cli azuracast:media:reprocess alien_pastures_radio

You can also start the rescan process from the User Interface.
Go to ‘System Administration > Stations > [Your Station] > Manage > Media > Music Files‘:

  • Select “remote” or whatever root directory your media library shows
  • Click ‘More > Reprocess’

Port 8001 is not reachable from outside

AzuraCast binds Icecast to ‘0.0.0.0:8001’ on the host. If you have a host firewall, open the port to the outside world.

Here is an iptables example:
# iptables -A INPUT -p tcp --dport 8001 -j ACCEPT

But if you implemented the Apache reverse proxy as I outlined above, you would not have to expose this port at all. Instead you can rely on Apache httpd to relay user connections to the Icecast listen port on the host. The iptables firewall rule is then not needed of course.

Final thoughts

I found that AzuraCast does most of what my (t)rusty old OTTO did, and it is capable of considerably more, which I am not even touching (but you might, if you are interested in running an actual live radio show with contributors).
The AutoDJ covers my primary need, which is a maintenance-free jukebox: it handles the continuous queue filling without any intervention. The listener request system gives me on-demand control over what plays next. My only gripe is that the AutoDJ pushes its own queue out to the player ahead of time, and any user request is only queued behind the songs that have already been pushed out. This means that I need to keep the AutoDJ queue length limited to 1 or 2 songs so that I don’t have to wait too long for my requested song to play.
The scheduled library scanning handles my ever-growing MP3 collection. And my music players just need to tune in to a different Icecast URL.

If anyone is interested, I can describe in a future article how I deployed YTuner locally to revive the network audio streaming capability of my Denon AVR-X2300W tuner after Denon killed its free online VTuner service by making it subscription-based. Because that Denon tuner is what’s playing my Icecast stream right now, while I am typing this.

I hope you enjoyed the article. Leave your thoughts in the comments section below.

Cheers, Eric

DAW question for Slackware 15.1: Pipewire or JACK Audio?

I bought a new laptop to do some more serious work on, and while I was setting up the audio, installing JACK Audio Server and friends, I wondered what would be the proper path forward for a Slackware DAW (Digital Audio Workstation) when Slackware 15.1 is released with all the improvements over the 15.0 stable release.
Should I stick with JACK or look into PipeWire to get rid of the JACK server? JACK can be a bit finicky at times on this new laptop. The audio hardware uses Sound Open Firmware, which is new to me, and often I need to restart JACK once or twice via QJackCtl before I actually get sound out of the laptop.

To make an informed decision, let’s first examine what PipeWire actually aims to improve. I confess that because of what I learnt during my research, the article is a bit biased, but hey! Feel free to contribute to the discussion at the bottom of the page!

State of affairs

Linux audio has historically been a layered affair. ALSA talks to the hardware, and one (or more) sound servers sit on top:

  • PulseAudio: handles desktop/consumer audio mixing, Bluetooth, networked audio. As we all know, its drawback is high latency. It’s definitely not “DAW-friendly”.
  • JACK: the professional audio server. It delivers ultra-low latency and a flexible routing between apps. But it comes with a steep learning curve and is not trivial to manage. It is rather hostile to consumer-grade applications that rely on Pulseaudio, unless you bridge them.
  • PipeWire: built as the solution to all of the above problems, aiming to replace both and also (importantly) acting as a drop-in for each.

This is how the Linux audio stack evolves from past and present into the future:


You see where I am going…

Pros of switching to PipeWire for a DAW

  1. One server to rule them all
    The benefit that I see is the elimination of complexity. We would get rid of having to run two parallel audio servers. In the legacy setup, PulseAudio mostly handles browser audio, Bluetooth, and desktop sounds while JACK is running the DAW. Getting them to cooperate requires that you install a Pulse-JACK bridge (via the pulseaudio-jack package) which is a temperamental setup. PipeWire replaces both of those audio servers with a single daemon that implements all three protocols simultaneously via pipewire-pulse, pipewire-jack, and pipewire-alsa compatibility modules.
  2. JACK applications just work, unmodified
    PipeWire aims to offer capture and playback for both audio (and video, let’s not forget!) with minimal latency and support for PulseAudio, JACK, ALSA and GStreamer-based applications. This means that programs like Ardour, Carla, Hydrogen, Guitarix, and every other piece of software I added to my Slackware DAW project will connect to PipeWire’s JACK compatibility layer without recompilation. The ABI is compatible; programs don’t know they’re talking to PipeWire.
  3. Latency has matured significantly
    Early PipeWire had a reputation for higher latency than native JACK. That criticism is much less valid today. Already long before the 1.x releases, latency had been reduced to levels on par with running JACK directly on top of ALSA. PipeWire supports the same realtime thread priorities and can be tuned with “quantum” (buffer size) settings comparable to JACK’s “period/frames” parameters.
  4. Video routing and screen capture come along for free
    The daemon can be configured to be both an audio server (with PulseAudio and JACK features) and a video capture server. This matters for screen-casting, OBS, webcams, and Wayland portal-based screen sharing. They all go through the same flow that handles your audio.
  5. Flatpak and containerised applications work properly
    Yeah, I use Flatpak. Particularly to have a Signal messaging app on my desktop. Multimedia is an issue there. But PipeWire uses a Polkit-like security model, asking Flatpak or Wayland for permission to record screen or audio, rather than relying on audio/video group membership.
    This is a solution for a genuine pain point with JACK plus PulseAudio in any sandboxed context.
  6. WirePlumber provides intelligent session management
    The PipeWire session manager WirePlumber (which replaced the older pipewire-media-session) automatically handles device routing decisions. This means, for instance, that you simply plug in a USB audio interface and It Just Works. With JACK you would have had to restart the server or run manual scripts.
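On the latency point (item 3 above): with PipeWire the buffer size can be pinned at runtime using pw-metadata, without restarting any server. A minimal sketch; the 128-frame quantum and 48 kHz rate are example values, pick what your hardware actually sustains:

```shell
# Pin the graph to a fixed 128-frame quantum at 48 kHz
# (PipeWire's equivalent of JACK's frames/period setting):
pw-metadata -n settings 0 clock.force-quantum 128
pw-metadata -n settings 0 clock.force-rate 48000

# One-way buffer latency in milliseconds = quantum / rate * 1000:
awk 'BEGIN { printf "%.2f ms\n", 128 / 48000 * 1000 }'

# Undo, letting PipeWire adapt the quantum dynamically again:
pw-metadata -n settings 0 clock.force-quantum 0
```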

Caveats (Especially for a DAW)

  1. Extreme latency requirements still favour native JACK
    For professional studio scenarios demanding the absolute floor of latency (sub-8ms round trips, preferably below 5ms even, dense plugin chains) the picture is more nuanced. Under JACK, you can isolate cores, prioritize IRQs, and force absolute determinism. Under PipeWire, your latency will jump if anything interrupts real-time threads. PipeWire is getting better, but native JACK under a PREEMPT_RT kernel remains the gold standard for this specific use case.
    For most home studio and ‘prosumer’ DAW setups, you will not notice the difference.
  2. Configuration requires a different mental model
    Well what’s new. I am still not fully comfortable using JACK and QJackCtl’s patchbay metaphor. PipeWire on the other hand uses WirePlumber for policy management. Quantum (buffer size) and sample rate tuning is done differently. There’s yet another learning curve.
  3. Slackware’s ‘pipewire-enable.sh’ script has a precondition you must not ignore
    An important operational point is the script’s check whether JACK is installed and the daemon is running. If that is the case, you should opt in consciously and disable the JACK server manually, rather than have it silently taken over by PipeWire.

Is the JACK Server Still Needed?

The short answer: no, not as a running daemon. But of course there are layers below that answer: what about the runtime dependency on JACK because all those audio applications were compiled against JACK?
When pipewire-jack is installed (my add-on to the pipewire package in Slackware), it provides a drop-in JACK compatibility library. Programs that link against libjack.so will, after PipeWire is enabled, actually be talking to PipeWire under the hood. The jackd process itself does not need to run.
What does that look like:

  • libjack.so is the library that applications link against. It is still needed because it provides the ABI that JACK-aware programs call into. PipeWire provides its own version of this library when pipewire-jack is installed.
  • jackd / jackdbus is the server binary. This is no longer needed as a running process once PipeWire takes over as the audio server.
  • qjackctl is the control GUI for JACK. Fortunately this can be replaced by qpwgraph (from the same author who created qjackctl). Qpwgraph is the PipeWire-native patchbay application, obviously inspired directly by qjackctl.

And therefore the practical implication: if you have JACK installed on your Slackware DAW system alongside PipeWire, you need to ensure the JACK server does not start at login while PipeWire does start. The “pipewire-enable.sh” script in Slackware handles the PipeWire side; you simply should not start the daemon, jackd.
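A quick sanity check for that last point. This assumes the usual process names (jackd, or jackdbus when JACK is started via D-Bus); after logging in with PipeWire enabled, no native JACK daemon should be present:

```shell
# Verify that no native JACK server is running alongside PipeWire:
if pgrep -x jackd >/dev/null 2>&1 || pgrep -x jackdbus >/dev/null 2>&1; then
    echo "a native JACK daemon is still running -- stop it; PipeWire should own the graph"
else
    echo "no native JACK daemon found -- JACK clients will use PipeWire's libjack"
fi
```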

Connecting it all – the patchbay

What are the considerations when moving from qjackctl to qpwgraph?
As I said earlier, the author of qjackctl also wrote qpwgraph and the similarities are obvious. Qpwgraph is a Qt-based graph and patchbay for PipeWire, and it saves your wire sets. It is the natural successor to qjackctl as part of a DAW project that uses PipeWire. The interface is visually very similar to qjackctl’s patchbay view, but it shows PipeWire’s full graph: the audio ports, MIDI ports and video streams, all together. The connections that you create are saved by WirePlumber and restored on the next login.
If you still prefer qjackctl, that is not an issue because it can also connect to PipeWire via the JACK emulation layer. Its JACK-specific transport controls and the JACK server start/stop buttons become meaningless, however.

What’s in the future for Slackware DAW

Given all of the above, this is what I think:

If you run an amateur home studio or you are a musician whose priority is “everything just works with minimal fuss“, switching from JACK to PipeWire looks like the future. Run ‘pipewire-enable.sh‘ once, install qpwgraph (it’s already in my repository) and use it as your new patchbay.
After you switch to PipeWire, the “Pro Audio” profile in pavucontrol (yes, it is a PulseAudio tool) used in combination with the PipeWire audio server provides raw, direct access to all available input/output channels on your USB audio interface, bypassing standard stereo mixer restrictions. It is ideal for professional interfaces with multiple inputs and outputs, supporting low latency and enabling advanced routing for your DAW.
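Selecting that profile does not require the GUI, by the way: since pipewire-pulse speaks the PulseAudio protocol, plain pactl works too. The card name below is a made-up example; substitute the Name: value that the first command prints for your own interface:

```shell
# List sound cards and check which ones offer a Pro Audio profile:
pactl list cards | grep -E 'Name:|pro-audio'

# Switch one card to the Pro Audio profile (example card name!):
pactl set-card-profile alsa_card.usb-Focusrite_Scarlett_2i2 pro-audio
```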

On the other side of the spectrum you have your ultra-low-latency professional use. It’s the use-case for a dedicated professional recording studio, where you would be looking at sub-5ms latency requirements, a PREEMPT_RT kernel, hardware with specific JACK-tuned drivers. In this scenario, the traditional JACK-only configuration remains the preferred setup of your DAW.

The good news is that PipeWire does not break the JACK ecosystem. It absorbs it. If you are comfortable having a JACK-based signal flow (Carla, Ardour, Guitarix chains), you can migrate to PipeWire transparently and your setups will continue working. The most obvious difference being that there is no more need to manually start a JACK server or maintain a bridge to PulseAudio.

Your ideas

My biased story tried to convince you that a switch from JACK to PipeWire in Slackware DAW may be the right thing to do, but I also write this to convince myself 🙂 I am very much a “don’t try to fix what is not broken” person, but I see the benefits for most of us Slackware users.

Still, there is a hard choice to be made. Should I recompile all my audio applications without the JACK package, to get rid of that dependency altogether? Or should I keep JACK and all the packages that depend on its libraries, and leave it up to you, the user, to run “pipewire-enable.sh” and replace qjackctl with qpwgraph? The former is cleaner but the latter gives more options.
Let me know your thoughts in the comments section below!

Eric

Multiple Chromium updates in rapid succession

You may have noticed that after my release of the Slackware package for Chromium (and its Un-Googled sibling) version 146.0.7680.71 there was a really short interval until I released an update in the form of the 146.0.7680.75 version.

Chromium 146.0.7680.71 is the latest major version upgrade and it addressed a crazy amount of CVEs. I am not even going to look up and link the URLs; here’s the list:

CVE-2026-3913 CVE-2026-3914 CVE-2026-3915 CVE-2026-3916 CVE-2026-3917 CVE-2026-3918 CVE-2026-3919 CVE-2026-3920 CVE-2026-3921 CVE-2026-3922 CVE-2026-3923 CVE-2026-3924 CVE-2026-3925 CVE-2026-3926 CVE-2026-3927 CVE-2026-3928 CVE-2026-3929 CVE-2026-3930 CVE-2026-3931 CVE-2026-3932 CVE-2026-3934 CVE-2026-3935 CVE-2026-3936 CVE-2026-3937 CVE-2026-3938 CVE-2026-3939 CVE-2026-3940 CVE-2026-3941 CVE-2026-3942

And then the next update to 146.0.7680.75, only two days later, included two zero-day security fixes that the developers somehow missed: for both CVE-2026-3909 and CVE-2026-3910 an exploit exists in the wild.

And then already the next day, a new update emerged which I am currently compiling: Chromium 146.0.7680.80 sources were released to address yet another zero-day exploit, this time CVE-2026-3909.

Those new packages (chromium and chromium-ungoogled 64bit binaries) will become available tomorrow (Sunday), provided the compilation does not fail. The 32bit package for chromium-ungoogled will follow a day later. The last two attempts to build a 32bit package had to be aborted when I discovered that there was a new release. I have only one computer that is capable of compiling Chromium, and building packages in parallel is not an option.

Enjoy the weekend and be careful accessing shady web sites 🙂

Eric

KDE 6_26.03 for Slackware-current

Yesterday evening, KDE released Frameworks 6.24.0. Taking advantage of the moment, even before morning coffee I uploaded the Slackware packages for these new Frameworks.

Alongside these Frameworks, I built a package for the third Beta of Krita 6.0.0 – the KDE drawing tool will finally be available as a Qt6-based application with proper Wayland support. Krita 5.2.15 is still Qt5-based and super stable, but I am replacing it with this 6.0.0.beta3 version because it’s time to start testing the new software. The stable version 6.0.0 of Krita will hopefully become available in April.

Also I took the opportunity to put a new stamp on my ‘ktown’ KDE Plasma6 package repository: KDE-6_26.03.

You will find all these packages at the origin location: https://slackware.nl/alien-kde/current/testing/ together with an extensive README which will help you remove KDE Plasma5 from your Slackware-current computer and install the ‘ktown‘ version of KDE Plasma6 instead.

The version of KDE Gear (Applications and PIM) which will be released in April will carry the version 26.04 number and will mark the moment that all official KDE Applications have been ported to Qt6 and KDE Frameworks 6. I will use that moment to switch my ‘ktown’ repository from ‘testing’ to ‘latest’ to highlight the fact that Plasma6 Desktop Environment is ready for production use.

If you want to peek at the source code management, I track everything in a git repository. You will find the 6_26.03 branch at: https://git.slackware.nl/ktown/

Please try these new Plasma6 packages. Not necessarily to find application bugs; rather, I want to fix all bugs related to my packaging. More rigorous testing means that Pat has to do less heavy lifting when he finally adopts ‘ktown’ into Slackware.
And as usual –  leave your feedback in the comments section below.

Cheers, Eric


© 2026 Alien Pastures
