Automating a Home Media Library
DISCLAIMER: This work is for research and education purposes only. I do not encourage content piracy in any way whatsoever. I remind you that content piracy is illegal and a punishable offence under the laws of Mauritius.
TL;DR: Either a user per daemon with a shared group and a umask of 002, or a single shared user with a umask of 022. Consistent paths between all containers, matching on the inside and outside, that appear as one file system to Sonarr, Radarr and Lidarr, so hard links are possible and moves are atomic. And most of all, ignore most of the path suggestions from the Docker image documentation!
Introduction
The following article will not prescribe one specific Docker setup; instead it describes an overview you can use to make your own setup the best it can be. The idea is that each Docker container runs as its own user, with a shared group and consistent volumes, so every container sees the same path layout. This is easy to say, but harder to understand and implement.
Single User and a Shared Group
Permissions
Ideally, each software runs as its own user and they're all part of a shared group, with folder permissions set to 775 (drwxrwxr-x) and files set to 664 (-rw-rw-r--), which is a umask of 002. A sane alternative is a single shared user, which would use 755 (drwxr-xr-x) and 644 (-rw-r--r--), which is a umask of 022. You can restrict permissions even more by denying read from "other", which would be a umask of 007 for a user per daemon or 077 for a single shared user. For a deeper explanation, try the Arch Linux wiki articles on File permissions and attributes and on Umask.
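To see what these umask values do in practice, here is a small sketch in a throwaway temp directory (not part of the real setup):

```shell
# Demonstrate that a umask of 002 yields 775 folders and 664 files.
demo=$(mktemp -d)
(
  umask 002                   # user+group read/write, others read-only
  mkdir "$demo/shared-folder" # created as 775 (drwxrwxr-x)
  touch "$demo/shared-file"   # created as 664 (-rw-rw-r--)
)
stat -c '%a %n' "$demo/shared-folder" "$demo/shared-file"
```

Swap in `umask 022` and the same commands produce 755 folders and 644 files.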
Umask
Many Docker images accept a -e UMASK=002 environment variable, and some software inside can be configured with a user, group and umask or folder/file permissions (Sonarr/Radarr). This ensures that files and folders created by one can be read and written by the others. If you're using existing folders and files, you'll need to fix their current ownership and permissions too, but going forward they'll be correct because you set each application up right.
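Repairing existing data to match could look like the following sketch, shown on a scratch directory (on a real library you would also chown the tree to your chosen user and shared group first):

```shell
# One-time repair: make folders 775 and plain files 664, matching umask 002.
# The capital X grants execute only to directories and already-executable files.
data=$(mktemp -d)
mkdir "$data/media" && touch "$data/media/episode.mkv"
chmod 700 "$data/media" && chmod 600 "$data/media/episode.mkv"  # simulate bad perms
chmod -R u=rwX,g=rwX,o=rX "$data"
stat -c '%a %n' "$data/media" "$data/media/episode.mkv"
```

Using symbolic modes with `X` beats `chmod -R 775`, which would wrongly mark every file executable.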
PUID and PGID
Many Docker images also take -e PUID=123 and -e PGID=321, which let you change the UID/GID used inside the container to that of an account on the outside. If you ever peek in, you'll find the username is something like abc or nobody, but because it uses the UID/GID you pass in, on the outside it looks like the expected user. If you're using storage from another system via NFS or CIFS, your life will be easier if that system also has matching users and groups. Perhaps let one system pick the UIDs/GIDs, then re-use those on the other system, assuming they don't conflict.
Example
You run Sonarr using linuxserver/sonarr. You've created a sonarr user with UID 123 and a shared group media with GID 321, of which the sonarr user is a member. You configure the Docker image to run with -e PUID=123 -e PGID=321 -e UMASK=002. Sonarr also lets you configure the user, group and folder/file permissions. The previous settings should make these unnecessary, but you could set them if you wanted: folders would be 775, files 664, and the user/group are a little tricky because inside the container they have a different name, maybe abc or nobody. I'd leave all these blank unless you find you need them for some reason.
Single User and Optional Shared Group
Another popular and arguably easier option is a single shared user, perhaps even your own user. It isn't as secure and doesn't follow best practices, but in the end it is easier to understand and implement. The umask for this is 022, which results in 755 (drwxr-xr-x) for folders and 644 (-rw-r--r--) for files. The group no longer really matters, so you'll probably just use the group named after the user. This does make it harder to share with other users, so you may still end up wanting a umask of 002 even with this setup.
Ownership and Permissions of /config
Don't forget that your /config volume will also need correct ownership and permissions, usually the daemon's user and that user's group, like sonarr:sonarr, with a umask of 022 or 077 so only that user has access. In a single-user setup, this would of course be the one user you've chosen.
Consistent and Well Planned Paths
If you're wondering why hard links aren't working or why a simple move is taking far longer than it should, this section explains it. The paths you use on the inside matter. Because of how Docker's volumes work, passing in two volumes such as the commonly suggested /tv and /downloads makes them look like two file systems, even if they aren't. This means hard links won't work, and instead of an instant move, a slower and more I/O-intensive copy + delete is used. And if you have multiple download clients, a single /downloads path means their files will get mixed up.
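You can verify the hard-link behaviour yourself; this sketch uses a temp directory standing in for a single /data volume:

```shell
# Hard links only work within one file system, which is exactly what a single
# /data volume gives every container.
root=$(mktemp -d)
mkdir -p "$root/torrents" "$root/media"
echo "episode" > "$root/torrents/show.mkv"
ln "$root/torrents/show.mkv" "$root/media/show.mkv"  # instant, no data copied
stat -c '%h' "$root/media/show.mkv"                  # link count is now 2
```

Try the same `ln` across two different mounts and it fails with "Invalid cross-device link", which is what Sonarr and friends silently fall back from when given split volumes.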
So pick one path layout and use it for all of them. I'm a fan of /data, but there are other great names like /shared, /media or /dvr. If this can be the same on the outside and the inside, your setup will be even simpler: one path to remember, which also helps if you're mixing Docker and native software. But if not, that's fine too. For example, Synology might use /volume1/data and unRAID might use /mnt/user/data on the outside, but /data on the inside is fine.
It is also important to remember that you’ll need to setup or re-configure paths in the software running inside these Docker containers. If you change the paths for your download client, you’ll need to edit its settings to match. If you change your library path, you’ll need to change those settings in Sonarr, Radarr, Lidarr and/or Plex.
Examples
What matters here is the general structure, not the names. You are free to pick folder names that make sense to you, and there are other ways of arranging things too. For example, you're unlikely to run into conflicts between identical releases of movies and TV shows, so you could put both in /data/downloads/{movies|music|tv} folders. Downloads don't even have to be sorted into sub-folders, since movies, music and TV will rarely conflict. I have left it to qBittorrent to create sub-folders based on the service that calls it: when Sonarr sends a download request to qBittorrent, it attaches a tv-sonarr category to it, and qBittorrent creates a sub-folder named after that category.
This example data folder has a torrents sub-folder, which in turn has sub-folders for TV, movie and music downloads to keep things neat. The media folder has nicely named TV, Movies and Music sub-folders; this is your library and what you'd pass to Plex.
data
├── torrents
│   ├── movies
│   ├── music
│   └── tv
└── media
    ├── Movies
    ├── Music
    └── TV
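The layout above can be created in one go; `$base` here is a temp directory standing in for wherever your data volume actually lives:

```shell
# Build the example tree: torrent download folders plus the media library.
base=$(mktemp -d)   # substitute your real location, e.g. /host
mkdir -p "$base/data/torrents/movies" "$base/data/torrents/music" \
         "$base/data/torrents/tv" \
         "$base/data/media/Movies" "$base/data/media/Music" \
         "$base/data/media/TV"
find "$base/data" -mindepth 1 -type d | sort
```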
The path for each Docker container can be as specific as needed while still maintaining the correct structure:
Torrents
data
└── torrents
    ├── tv-sonarr
    └── radarr
Your torrent client only needs access to torrent downloads, so pass it -v /host/data/torrents:/data/torrents. In the torrent software settings, you'll need to reconfigure paths, and you can sort into sub-folders like /data/torrents/{tv|movies|music}.
Media Server
data
└── media
    ├── Movies
    ├── Music
    └── TV
Plex only needs access to your media library, so pass -v /host/data/media:/data/media, which can have any number of sub-folders like Movies, Kids Movies, TV, Documentary TV and/or Music.
Sonarr, Radarr and Lidarr
data
├── torrents
│   ├── tv-sonarr
│   └── radarr
└── media
    ├── Movies
    ├── Music
    └── TV
Sonarr, Radarr and Lidarr get everything using -v /host/data:/data, because the download folder(s) and media folder will look like, and be, one file system. Hard links will work and moves will be atomic, instead of copy + delete.
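A quick way to convince yourself that a move within one file system is a rename rather than a copy (again sketched in a temp directory):

```shell
# On one file system, mv is a rename: the inode (the file's identity on disk)
# is unchanged and no data is rewritten, so the move is effectively instant.
root=$(mktemp -d)
mkdir -p "$root/torrents" "$root/media"
echo "movie" > "$root/torrents/film.mkv"
before=$(stat -c '%i' "$root/torrents/film.mkv")
mv "$root/torrents/film.mkv" "$root/media/film.mkv"
after=$(stat -c '%i' "$root/media/film.mkv")
[ "$before" = "$after" ] && echo "same inode: atomic rename"
```

Across two Docker volumes, `mv` degrades to copy + delete, which is the slow, non-atomic behaviour this layout avoids.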
Issues
There are a couple of minor issues with not following the Docker image's suggested paths.
The biggest is that volumes defined in the Dockerfile will get created if they're not specified, meaning they'll pile up as you delete and re-create the containers. If they end up with data in them, they can consume space unexpectedly, and likely in an unsuitable place. You can find a cleanup command in the helpful commands section below. This can also be mitigated by passing in an empty folder for all the volumes you don't want to use, like /data/empty:/movies and /data/empty:/downloads. Maybe even put a file named DO NOT USE THIS FOLDER inside, to remind yourself.
Another problem is that some images are pre-configured to use the documented volumes, so you'll need to change settings in the software inside the Docker container. Thankfully, since configuration persists outside the container, this is a one-time issue. You might also pick a path like /data or /media which some images already define for a specific use. It shouldn't be a problem, but it will be a little more confusing when combined with the previous issues. In the end, it is worth it for working hard links and fast moves. The consistency and simplicity are welcome side effects as well.
If you use the latest version of the abandoned RadarrSync to synchronize two Radarr instances, it depends on mapping the same inside path to a different path on the outside; for example, /movies for one instance would point at /data/media/Movies and for the other at /data/media/Movies 4k. This breaks everything you've read above. There is no good solution: you either use the old version, which isn't as good, do your mapping in a way that is ugly and breaks hard links, or just don't use it at all.
Running Containers Using Docker Compose
This is the best option for most users: it lets you control and configure many containers and their interdependence in one file. A good starting place is Docker's own Get started with Docker Compose. You can use composerize to convert docker run commands into a single docker-compose.yml file.
Below is a working example! The containers have PUID, PGID, UMASK and example paths defined to keep it simple.
---
version: "3"
services:
  # Jackett for fetching/syncing RSS from torrent sites
  jackett:
    image: linuxserver/jackett
    container_name: jackett
    hostname: jackett
    volumes:
      - ./data/config/jackett:/config
      - empty-space:/downloads
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Indian/Mauritius
    ports:
      - 9117:9117
    restart: unless-stopped

  # Plex Media Server (host networking is recommended for Plex,
  # so no port mappings here)
  plex:
    image: linuxserver/plex
    container_name: plex
    hostname: plex
    network_mode: host
    environment:
      - PUID=1000
      - PGID=1000
      - VERSION=docker
    volumes:
      - ./data/media:/data/media
      - ./data/config/plex:/config
      - empty-space:/tv
      - empty-space:/movies
    restart: unless-stopped

  # Monitoring movies
  radarr:
    image: linuxserver/radarr
    container_name: radarr
    hostname: radarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Indian/Mauritius
      - UMASK_SET=022 # optional
    volumes:
      - ./data:/data
      - ./data/config/radarr:/config
      - empty-space:/movies
      - empty-space:/downloads
    ports:
      - 7878:7878
    restart: unless-stopped

  # Dashboard to manage all services (optional)
  htpcmanager:
    image: linuxserver/htpcmanager
    container_name: htpcmanager
    hostname: htpcmanager
    volumes:
      - ./data/config/htpcmanager:/config
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Indian/Mauritius
    ports:
      - 8085:8085
    restart: unless-stopped

  # Torrent download client
  qbittorrent:
    image: linuxserver/qbittorrent
    container_name: qbittorrent
    hostname: qbittorrent
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Indian/Mauritius
      - UMASK_SET=022
      - WEBUI_PORT=8080
    volumes:
      - ./data/downloads:/data/downloads
      - ./data/config/qbittorrent:/config
    ports:
      - 6881:6881
      - 6881:6881/udp
      - 8080:8080
    restart: unless-stopped

  # Monitoring TV shows
  sonarr:
    image: linuxserver/sonarr
    container_name: sonarr
    hostname: sonarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Indian/Mauritius
      - UMASK_SET=022 # optional
    volumes:
      - ./data:/data
      - ./data/config/sonarr:/config
      - empty-space:/tv
      - empty-space:/downloads
    ports:
      - 8989:8989
    restart: unless-stopped

volumes:
  # Shared empty bind mount that neutralizes the images' unwanted default volumes
  empty-space:
    driver: local
    driver_opts:
      type: none
      device: "${PWD}/data/empty"
      o: bind
Update all images and containers:
docker-compose pull
docker-compose up -d
Update an individual image and container:
docker-compose pull NAME
docker-compose up -d NAME
Docker Run
Like the Docker Compose example above, the following docker run
commands are stripped down to only the PUID, PGID, UMASK and volumes in order to act as an obvious example.
# sonarr
docker run -v "$(pwd)"/data/config/sonarr:/config \
 -v "$(pwd)"/data:/data \
 -v "$(pwd)"/data/empty:/tv \
 -v "$(pwd)"/data/empty:/downloads \
 -e PUID=1000 -e PGID=1000 -e UMASK=022 \
 linuxserver/sonarr
# qbittorrent
docker run -v "$(pwd)"/data/config/qbittorrent:/config \
 -v "$(pwd)"/data/downloads:/data/downloads \
 -e PUID=1000 -e PGID=1000 -e UMASK=022 \
 linuxserver/qbittorrent
# plex
docker run -v "$(pwd)"/data/config/plex:/config \
 -v "$(pwd)"/data/media:/data/media \
 -v "$(pwd)"/data/empty:/tv \
 -v "$(pwd)"/data/empty:/movies \
 -e PUID=1000 -e PGID=1000 -e UMASK=022 \
 linuxserver/plex
Systemd
I don't run a full Docker setup, so I manage my few Docker containers with individual systemd service files. This standardizes control and simplifies dependencies between native and Docker services. The generic example below can be adapted to any container by adjusting or adding the various values and options.
# /etc/systemd/system/thing.service
[Unit]
Description=Thing
Requires=docker.service
After=network.target docker.service
[Service]
ExecStart=/usr/bin/docker run --rm \
--name=thing \
-v /path/to/config/thing:/config \
-v /host/data:/data \
-e PUID=1000 -e PGID=1000 -e UMASK=022 \
nobody/thing
ExecStop=/usr/bin/docker stop -t 30 thing
[Install]
WantedBy=default.target
Config
Each service has its own config to set up, and it is really tedious, especially setting up Jackett with Sonarr/Radarr, as it involves a lot of copy-pasting. However, once it has been set up with a few good torrent providers, everything works seamlessly. I will not dive deep into the setup of these services, as that would make for a lengthy article.
In brief, the main configs to set up are:
- Add indexers in Sonarr/Radarr by adding Torznab feeds from Jackett
- Connect Sonarr/Radarr to qBittorrent
- When adding content in Sonarr/Radarr, add it to the matching library folder, e.g. /data/media/TV for Sonarr and /data/media/Movies for Radarr
- Optionally, create and add Telegram bots to Sonarr/Radarr to get notifications
Conclusion
The docker-compose file above is the one I use, and it's pretty straightforward; I think it will work for anyone who adopts it. Setting up a separate user for each container is more tedious work, but the end result is more secure. I went with a single user and a shared group, since I run these containers on a dedicated local machine.