LXC NFS mount, Proxmox VE Helper-Scripts, and Jellyfin

Well, it has been a slow process getting here: I have been trying for a while to get LXCs to share the NFS mount from the host NODE. I did manage to bypass the problem using normal VMs in Proxmox (see post). You can see an early failure below from before I understood the formatting of fstab and NFS volume naming schemes.

Getting the LXC to mount an NFS volume turned out to be quite simple (after countless hours of trial and error, lol). Let’s get to it!

Create the location for your share:
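A minimal sketch, assuming you want the share at /mnt/synology on the host NODE (adjust the path to suit your setup):

mkdir -p /mnt/synology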

Then add the volume the correct way to your host NODE's fstab:
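As a sketch only, assuming the NAS answers at 192.168.1.50 and exports /volume1/media (swap in your own IP, export path, and mount point), the entry would look something like:

192.168.1.50:/volume1/media /mnt/synology nfs defaults 0 0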

Next, you need to configure your NAS/server to allow access from the IP address of the Proxmox host NODE that will mount the exported directory. Since I am using a Synology NAS, it looks like this:

You can confirm access on the host NODE by going to the directory:
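For example, assuming the fstab entry and mount point above:

mount -a
cd /mnt/synology
ls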

Confirm permissions on the “synology” directory:
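Assuming the same mount point, something like:

ls -ld /mnt/synology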

Now we need to pause the NFS setup to create our Jellyfin LXC and run our Proxmox VE Helper-Script.


Now, I am not advocating against installing anything manually, but for experience's sake these scripts on GitHub are worth investigating. If you go to the “ct” directory and scroll down, you will find various scripts, including Jellyfin. On the Proxmox NODE (the LXC host) we will do a quick grab of the raw file:
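Something along these lines should work on the NODE (the raw URL is assumed from the repo's ct directory layout, so double-check it against GitHub before running anything):

wget https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/jellyfin.sh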

The jellyfin.sh script should contain the following:

#!/usr/bin/env bash
source <(curl -s https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/misc/build.func)
# Copyright (c) 2021-2025 tteck
# Author: tteck (tteckster)
# License: MIT | https://github.com/community-scripts/ProxmoxVE/raw/main/LICENSE
# Source: https://jellyfin.org/

APP="Jellyfin"
var_tags="media"
var_cpu="2"
var_ram="2048"
var_disk="8"
var_os="ubuntu"
var_version="22.04"
var_unprivileged="1"

header_info "$APP"
variables
color
catch_errors

function update_script() {
  header_info
  check_container_storage
  check_container_resources
  if [[ ! -d /usr/lib/jellyfin ]]; then
    msg_error "No ${APP} Installation Found!"
    exit
  fi
  msg_info "Updating ${APP} LXC"
  apt-get update &>/dev/null
  apt-get -y upgrade &>/dev/null
  apt-get -y --with-new-pkgs upgrade jellyfin jellyfin-server &>/dev/null
  msg_ok "Updated ${APP} LXC"
  exit
}

start
build_container
description

msg_ok "Completed Successfully!\n"
echo -e "${CREATING}${GN}${APP} setup has been successfully initialized!${CL}"
echo -e "${INFO}${YW} Access it using the following URL:${CL}"
echo -e "${TAB}${GATEWAY}${BGN}http://${IP}:8096${CL}"

These scripts are interactive and neat!
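To kick it off on the host NODE (assuming you saved the script as jellyfin.sh in your current directory):

bash jellyfin.sh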

Follow through the menus and pick your desired settings as required. Then you will land at the start screen for the script (the install takes a few minutes):

After you successfully run the script you will notice it created a new LXC in Proxmox for you:

At this point we will need to finish configuring the LXC with the NFS volume we mounted on the host NODE. We will edit a configuration file for the LXC on the host to share the volume. Select the .CONF file that corresponds to your Jellyfin LXC number – mine is 120.
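On Proxmox the container configs live under /etc/pve/lxc/, so for my container it is something like this (use whatever editor you prefer):

nano /etc/pve/lxc/120.conf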

We need to add a line to the .CONF file for the volume on the host NODE. This will allow it to mount on the Jellyfin LXC. The line:
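I will not claim this is the only syntax, but a bind-mount entry along these lines (host path /mnt/synology, same path inside the container, both assumed from my setup) should do the job:

lxc.mount.entry: /mnt/synology mnt/synology none bind,create=dir 0 0

Note that the container-side path is given relative to the container's root. Proxmox's own mount-point syntax (for example mp0: /mnt/synology,mp=/mnt/synology) is an alternative way to achieve the same bind mount.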

Reload the systemd manager configuration and restart the Jellyfin LXC:
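For example, with container 120:

systemctl daemon-reload
pct reboot 120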

You can now check on your LXC container to see if the NFS volume was shared from the host node:
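From the host NODE you can peek inside the container without opening a console (again assuming container 120 and the /mnt/synology path):

pct exec 120 -- ls -la /mnt/synology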

or go straight there and check for the file directory of your NAS/server:

Click the plus by Folders:

Then you can select the appropriate folder and begin with all the fun of customizing Jellyfin’s UI, library configuration, scanning files, etc.


A good source of inspiration for this configuration came from the Proxmox forums. But I found there was no real value in configuring additional user/group permissions in the LXC.

Additionally, this configuration method will work for other LXCs besides Jellyfin. Now to test and see how this configuration survives a migration across the Ceph cluster.

