Yesterday evening I got an email that a disk had failed in my Proxmox server. In my ZFS RAIDZ1 I have 4 different drives from two manufacturers: 2x HGST and 2x Seagate. Over the last 7 years I have also used some Western Digitals. The only faulty hard drives I had in these years were from Seagate, and this was the third… So this morning I bought a new hard disk, this time a Western Digital Red, and replaced the failed one.
I SSHed into my server and checked the status of my pool (named data). Because I had already removed the failed disk, it was marked as unavailable.
failed disk: wwn-0x5000c5009c14365b
Now I had to find the Id of my new disk. With fdisk -l, I found my new disk as /dev/sde, but there was no id listed.
sudo fdisk -l
To be sure I checked again with:
sudo lsblk -f
Via /dev/disk/by-id I then got the ID.
ls /dev/disk/by-id/ -l | grep sde
new disk: ata-WDC_WD40EFRX-68N32N0_WD-WCC7K1CSDLRT and again the failed disk: wwn-0x5000c5009c14365b
Before replacing the disks, I did a short SMART test.
sudo smartctl -a /dev/sde
sudo smartctl -t short /dev/sde
sudo smartctl -a /dev/sde
The new disk had no errors. And because it is a brand-new disk, I didn’t have to wipe any file systems from it.
So first I took the failed disk offline. I’m not sure if that was necessary, but to be on the safe side…
sudo zpool offline data 2664887927330352988
Next run the replace command.
sudo zpool replace data /dev/disk/by-id/wwn-0x5000c5009c14365b-part2 \
    /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7K1CSDLRT
The resilver process for the 3TB disk took about 10 hours.
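The resilver can be watched while it runs; a minimal sketch (the pool is called data here, as in the commands above):

```shell
# Show pool health; the "scan:" line reports resilver progress and ETA
zpool status data | grep -E 'state:|scan:|resilvered'
```

Running plain `zpool status data` shows the full vdev tree as well, which is useful to confirm the new disk is actually resilvering.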
In February this year I built a tiny second Proxmox host with the ASRock DeskMini A300 and the following parts. I chose an AMD Ryzen 5 3400G (a CPU with integrated graphics). As HTPC I had always used a Raspberry Pi 3 running LibreELEC (Kodi) with the Jellyfin for Kodi plugin to access my media. But the Raspberry reached its limits when it comes to 4K content or 10-bit audio. So why not use the DeskMini A300 as Proxmox host and HTPC at the same time? It has enough power to play all types of media, and even some Steam games would run on it. So a few things had to be done:
Install a Desktop Environment & Login Manager on the Host
Add a user
Install some basic software (Firefox, VLC, JUK…)
Set up YouTube Leanback
Consuming Jellyfin media
Set up Plasma Activities for each service
Controlling media with KDE Connect
Of course it’s not recommended to install more than necessary on the host itself, so this shouldn’t be done on a production Proxmox system. The Proxmox wiki says: “Installing additional packages could lead to a hardly upgradeable system and is not supported from the Proxmox support team and therefore only for expert use.” Because I’m using my Proxmox host just for my homelab (Pi-hole, Nextcloud, reverse proxy etc.) I’ll take the risk. When using a host with a dedicated graphics card, you could also create a VM and pass the card through, so you don’t have to mess around on the host like I do.
As simple as always: add a user and give it root permissions via visudo.
adduser newusername
visudo
add the following line to the end
newusername ALL=(ALL:ALL) ALL
Afterwards you have to start the login manager:
systemctl start lightdm
Now you’re ready to login.
3. Install software
As I took the plain KDE Plasma Desktop, there is nearly no other software besides the necessary programs for the DE. I installed just a few things on top.
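Installing the basics from the list above is a one-liner; the package names are my assumptions for Debian Buster (Firefox is packaged as firefox-esr there):

```shell
apt update
# Browser, media player and KDE's music player (package names assumed for Debian Buster)
apt install -y firefox-esr vlc juk
```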
4. Set up YouTube Leanback

In September 2019 YouTube announced the end of YouTube Leanback TV (a web interface that could simply be opened in any browser via youtube.com/tv). But it still exists and can be used with a simple workaround I found on reddit. Simply install the Firefox add-on User Agent Switcher and add the following line as userAgent:
Mozilla/5.0 (SMART-TV; Linux; Tizen 4.0.0.2) AppleWebkit/605.1.15 (KHTML, like Gecko)
When browsing to youtube.com/tv you should get the Leanback interface, which you can easily navigate via keyboard. Now just press F11 to go into full-screen mode.
Of course you can also connect the YouTube app on your smartphone and cast videos to it, just like with a Chromecast or the native YouTube Smart TV app. I would recommend the Vanced app if you want to see fewer ads.
5. Jellyfin
I tried two ways of consuming media from my Jellyfin server (which runs in an LXC on the same host) and both work fine. First I used Kodi plus the Jellyfin for Kodi plugin. If you are already using Kodi for other stuff, integrating your Jellyfin content this way is probably the best option. The second option, and what I’m still using today, is simply the Jellyfin web version via browser in full-screen mode. Just activate the TV mode in the Jellyfin settings. There are some minor bugs when navigating via keyboard, but most of the time it runs perfectly. But because Firefox still doesn’t play MKV files (see bug 1422891), I had to install Chromium for proper use of Jellyfin.
apt install chromium chromium-l10n
Just enter full-screen mode with F11 and it looks pretty good on your TV.
6. Plasma Activities
When using KDE Plasma you can simply create Activities for each of your full-screen applications and easily switch between them. In my case I created three Activities: one for YouTube Leanback, one for Kodi and one for Jellyfin. And of course there is the default Activity, which is just my normal desktop for viewing other content like photos or playing a Steam game. This way I can cycle through all my full-screen applications via Super + Tab (or backwards with Super + Shift + Tab).
7. KDE Connect
If you don’t want to use the keyboard the whole time to control the media on your HTPC, you should try KDE Connect with your smartphone. You’ll get the app from the F-Droid store: KDE Connect. Next, install the application on your host with:
apt install kdeconnect
and pair the two devices. By default, whenever media is played on your HTPC the app will present an interface to control it (play, pause, next etc.). You are also able to control the mouse via touch on your smartphone, and there are some other functions you should check out as well.
I’m really enjoying this new setup. It’s much more powerful, flexible and easier to handle than my old RaspberryPi 3. I will keep an eye on whether there will be problems with a system update in the future.
I’m running a Proxmox cluster with PVE1 and PVE2. On PVE2 a VM is running Debian Buster, which mounts a ZFS NFS share from PVE1. Inside the VM a script runs as root, saving a backup on this NFS share. If I create a file locally (Test1) on PVE1, the owner is of course root. But for a few weeks now, the script running inside the VM has been creating all files as nobody (Test2).
This is because root users are mapped to a different user ID and group when changing files on an NFS share (root squashing). But until now this was no problem when enabling NFS on a dataset via
zfs set sharenfs=on zpool/data
because the no_root_squash option was set by default. But it looks like this changed in ZFS on Linux 0.8.3, and the no_root_squash option isn’t set by default anymore. To enable it again use:
zfs set sharenfs='rw,no_root_squash' zpool/data
Another way is exporting the folder via /etc/exports and adding the no_root_squash option.
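For reference, a sketch of such an export line; the dataset path and subnet are placeholders for my setup and need to be adjusted:

```
# /etc/exports
/zpool/data 192.168.178.0/24(rw,no_root_squash,no_subtree_check)
```

After editing /etc/exports, apply the change with `exportfs -ra`.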
root@pve:~# cat /etc/apt/sources.list.d/pve-no-subscription.list
# PVE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb https://download.proxmox.com/debian/pve buster pve-no-subscription
# security updates
deb https://security.debian.org/debian-security buster/updates main contrib
Recently I saw this tutorial about monitoring Nginx with Netdata and tried it myself. I have Netdata running on my Proxmox host and Nginx inside an LXC, so I could skip steps 1 and 2 of the tutorial. Since I’m using the super simple nginx-proxy-manager, which comes as a Docker deployment, it took me a few minutes to figure out how to enable the Nginx ‘stub_status‘ module (step 3 of the tutorial). Here’s what I did.
SSH into the LXC where the Nginx Docker container is running. Look up the container name (root_app_1 in my case) and open a shell in the running container.
docker ps
docker exec -it root_app_1 /bin/bash
Check if the ‘stub_status‘ module is already enabled. The check should return: with-http_stub_status_module
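The check itself can be done by listing nginx’s compile-time options and grepping for the module flag; a minimal sketch:

```shell
# nginx -V prints version and configure arguments to stderr,
# so redirect it before grepping for the stub_status flag
nginx -V 2>&1 | grep -o with-http_stub_status_module
```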
Next, add a location to the nginx ‘server {}‘ block in the default config to make it reachable for Netdata. The tutorial edits ‘/etc/nginx/sites-available/default‘, another tutorial edits ‘/etc/nginx/nginx.conf‘, but I found the default config in ‘/etc/nginx/conf.d/default.conf’.
nano /etc/nginx/conf.d/default.conf
If nano is not installed (bash: nano: command not found), just install it.
apt update
apt install nano -y
Insert the new location in the server { listen 80; … } block. In my case Netdata runs on my Proxmox host, so I allowed localhost and my Proxmox IP.
location /nginx_status {
    stub_status;
    allow 192.168.178.100;  # only allow requests from pve
    allow 127.0.0.1;        # only allow requests from localhost
    deny all;               # deny all other hosts
}
Save, exit your docker container and restart it.
docker restart root_app_1
SSH into Proxmox and check with curl if you are able to reach the new nginx location.
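A hedged example of that check; the LXC’s IP address is an assumption from my network and needs to be adjusted:

```shell
# Request the stub_status page from the Proxmox host
curl http://192.168.178.101/nginx_status
```

If it works, the response is a few lines of plain text starting with “Active connections:”.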
For the last step, Configure Netdata to Monitor Nginx (step 4), just follow the Netdata wiki. Place a new file called nginx.conf on your Netdata host.
nano /etc/netdata/python.d/nginx.conf
Because Netdata is not running locally, name the job ‘remote‘ and point the url at the LXC’s address instead of using local and localhost.
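A minimal job definition for that file, assuming the Nginx LXC is reachable at 192.168.178.101 (a placeholder IP for my network):

```yaml
# /etc/netdata/python.d/nginx.conf
remote:
  name : 'remote'
  url  : 'http://192.168.178.101/nginx_status'
```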
Both Nextcloud and Collabora recommend the Docker installation of Collabora Office. But I wasn’t able to get the Collabora Docker image running successfully inside a Debian Buster LXC. Some errors appeared, and as far as I understand it has something to do with running an LXC on ZFS. After spending about 3 hours I gave up and did a manual installation.
Installation
For a current installation guide, have a look at the Collabora website. Install HTTPS support for apt and add the Collabora CODE repository (CODE = Collabora Online Development Edition).
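At the time of writing, the steps looked roughly like this on Debian Buster; the repository URL and signing key change between CODE releases, so treat them as placeholders and copy the current ones from the Collabora install page:

```shell
apt install -y apt-transport-https ca-certificates gnupg
# Import the Collabora signing key as described on the install page,
# then add the CODE repository (URL below is a placeholder):
echo 'deb https://collaboraoffice.com/repos/CollaboraOnline/CODE-debian10 ./' \
    > /etc/apt/sources.list.d/collabora.list
apt update
apt install -y loolwsd code-brand
```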
You have to edit three sections in the config: SSL handling, your Nextcloud domain as a WOPI client, and credentials for the web interface. So open the config with:
nano /etc/loolwsd/loolwsd.xml
If you are using a reverse proxy (I run nginx in Docker) which manages all SSL certificates, you don’t need local certificates for your Collabora Office. So scroll down to the SSL settings, disable SSL and enable SSL termination.
<ssl desc="SSL settings">
<enable type="bool" desc="Controls whether SSL encryption is enable (do not disable for production deployment). If default is false, must first be compiled with SSL support to enable." default="true">false</enable>
<termination desc="Connection via proxy where loolwsd acts as working via https, but actually uses http." type="bool" default="true">true</termination>
2. Next, add your Nextcloud domain in the WOPI storage section.
<storage desc="Backend storage">
<filesystem allow="false" />
<wopi desc="Allow/deny wopi storage. Mutually exclusive with webdav." allow="true">
<host desc="Regex pattern of hostname to allow or deny." allow="true">localhost</host>
<host desc="Regex pattern of hostname to allow or deny." allow="true">nextcloud\.domain\.org</host>
3. Add your credentials for the web interface.
<admin_console desc="Web admin console settings.">
<enable desc="Enable the admin console functionality" type="bool" default="true">true</enable>
<enable_pam desc="Enable admin user authentication with PAM" type="bool" default="false">false</enable_pam>
<username desc="The username of the admin console. Ignored if PAM is enabled.">user_name</username>
<password desc="The password of the admin console. Deprecated on most platforms. Instead, use PAM or loolconfig to set up a secure password.">super_secret_password</password>
Now restart loolwsd and check the status.
systemctl restart loolwsd.service
systemctl status loolwsd.service
Check if the HTTPS connection is working, via browser (https://ipaddress:9980) or curl:
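For the curl check, something like this should do; -k skips certificate verification since no local certificates are configured, and the exact response may vary by CODE version:

```shell
# Basic reachability check against loolwsd
curl -k https://127.0.0.1:9980
# The WOPI discovery XML is another good smoke test
curl -k https://127.0.0.1:9980/hosting/discovery
```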
Go to your reverse proxy, in my case the nginx-proxy-manager web UI, and add another subdomain for Collabora with an SSL certificate.
You also have to add a few custom locations. Look at the Collabora website for sample nginx configs; I used the second one (“SSL terminates at the proxy”) and added the given custom locations via the web UI.
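For reference, the custom locations roughly as given in Collabora’s “SSL terminates at the proxy” sample; the backend address is an assumption for the Collabora LXC and needs to be adjusted:

```nginx
# static files (loleaflet UI)
location ^~ /loleaflet {
    proxy_pass http://192.168.178.102:9980;
    proxy_set_header Host $http_host;
}

# WOPI discovery URL
location ^~ /hosting/discovery {
    proxy_pass http://192.168.178.102:9980;
    proxy_set_header Host $http_host;
}

# main websocket
location ~ ^/lool/(.*)/ws$ {
    proxy_pass http://192.168.178.102:9980;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $http_host;
    proxy_read_timeout 36000s;
}

# admin console websocket (for the admin web UI)
location ^~ /lool/adminws {
    proxy_pass http://192.168.178.102:9980;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Host $http_host;
    proxy_read_timeout 36000s;
}

# download, presentation and image upload
location ~ ^/lool {
    proxy_pass http://192.168.178.102:9980;
    proxy_set_header Host $http_host;
}
```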
You should now be able to reach Collabora through your new subdomain via HTTPS: https://collabora.your.domain.org/. And if you added /lool/adminws to your nginx config, you can also access the admin web UI: https://collabora.your.domain.org/loleaflet/dist/admin/admin.html
Install & configure Collabora Online App in Nextcloud
The easiest part is installing the Collabora Online app. Once done, go to Settings -> Collabora Online and set your Collabora domain (https://collabora.your.domain.org/). Apply, and edit your first spreadsheet in Nextcloud.