HDMI | High Definition Multimedia Interface |
HDMI-ARC | Audio Return Channel |
HDMI-CEC | Consumer Electronics Control. Vendor-specific names: Philips -> EasyLink, LG -> SimpLink, Samsung -> Anynet+, Sony -> BRAVIA Sync |
Category: Homelab
[Docker] Useful commands
Image Handling
docker image list | list downloaded images |
docker rmi image_name | delete image |
Administration
docker system df | show docker disk usage |
docker system prune | free space: remove stopped containers, unused networks, dangling images, and build cache |
systemctl restart docker.service | restarts the Docker service (and all your containers) |
ss -tulpn | list listening ports, e.g. to check whether docker containers are listening on any port |
docker exec container_id cat /etc/hosts or docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_id | check a container's IP address |
Container Handling
docker ps | list running containers |
docker ps -a | list all the docker containers (running and stopped) |
docker stop container_id | stop container |
docker rm container_id | delete stopped container |
docker update --restart=unless-stopped container_id | make sure the container restarts automatically, unless manually stopped |
docker --log-level=debug run image_name | run a container with debug-level CLI logging |
docker logs -f container_id | display log |
docker exec -it container_id /bin/sh | open a shell in the running container |
docker commit container_id user/test_image | this command saves modified container state into a new image user/test_image |
docker run -ti --entrypoint=sh user/test_image | run with a different entrypoint |
docker run --volume-driver=nfs container_id | mount an NFS share |
Docker Compose
docker-compose -f ~/docker/docker-compose.yml up -d | create and start the containers; -d detaches them to run in the background |
docker-compose -f ~/docker/docker-compose.yml down | stop and remove the containers and networks created by up; add --volumes and --rmi all to also remove volumes and images |
docker-compose -f ~/docker/docker-compose.yml pull | Pull latest images |
docker-compose logs -f service_name | follow real-time logs of a service |
docker-compose stop service_name | stop a running service's container |
docker-compose config | validate the compose file and show it with .env variable substitution applied |
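The commands above assume a compose file at ~/docker/docker-compose.yml. A minimal sketch of such a file (the whoami service, image, and WHOAMI_PORT variable are purely illustrative) might look like this:

```yaml
# ~/docker/docker-compose.yml -- hypothetical minimal example
version: "3"
services:
  whoami:
    image: traefik/whoami:latest
    restart: unless-stopped
    ports:
      # WHOAMI_PORT is substituted from the .env file next to this compose file
      - "${WHOAMI_PORT:-8080}:80"
```

Running docker-compose config against it prints the file with the substitution already applied.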
[Proxmox] NFSv4 client saves files as “nobody” and “nogroup” on ZFS Share
I'm running a Proxmox cluster with PVE1 and PVE2. On PVE2 a VM running Debian Buster mounts a ZFS NFS share from PVE1. Inside the VM a script runs as root and saves backups to this NFS share. If I create a file locally (Test1) on PVE1, the owner is of course root. But for a few weeks now, the script running inside the VM has been creating all files as nobody (Test2).
# ls -all /mnt/nfs/data
drwxr-xr-x 2 root root 4096 Jul 5 07:19 Test1
drwxr-xr-x 2 nobody nogroup 4096 Jul 5 07:21 Test2
This happens because root users are mapped to a different user ID and group when writing files on an NFS share (root squashing). Until now, this was no problem when enabling NFS on a dataset via
zfs set sharenfs=on zpool/data
because the no_root_squash option was set by default. It looks like this changed in ZFS on Linux 0.8.3, and no_root_squash is no longer set by default. To enable it again use:
zfs set sharenfs='rw,no_root_squash' zpool/data
Another way is exporting the folder via /etc/exports and adding the no_root_squash option.
# sudo nano /etc/exports
/zpool/data/ *(rw,no_subtree_check,sync,insecure,no_root_squash)
Run sudo exportfs -a after editing the exports file to enable these changes immediately.
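To verify that the new options are actually active, you can re-export and inspect the live export table; a quick sketch, using the share path from this post:

```shell
# Re-export all entries from /etc/exports and list the active exports;
# the options column should now contain no_root_squash for /zpool/data
sudo exportfs -ra
sudo exportfs -v | grep zpool/data
```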
[Nextcloud] Moving my NC installation
About two years ago I installed Nextcloud via the NextcloudPi script in an LXC Debian Stretch container on my Proxmox host. Since last year there is a new Debian release called Buster, and I wanted to upgrade my container. But somehow it was not possible: something was broken, and every upgrade I tried failed with a swap error. I searched for hours but couldn't find a solution, so I had to move my whole Nextcloud installation to a new Debian Buster container. I took the chance to create the new container as an unprivileged container. Since I had no experience moving a complete Nextcloud instance, I first read the NC wiki and looked at some tutorials. Finally I followed C. Rieger's excellent guide on backing up and restoring a Nextcloud instance.
Everything went well until step 9.
root@nc:/var/www/nextcloud# sudo -u www-data php /var/www/nextcloud/occ maintenance:data-fingerprint
An unhandled exception has been thrown:
Doctrine\DBAL\DBALException: Failed to connect to the database: An exception occurred in driver: SQLSTATE[HY000] [1698] Access denied for user 'ncadmin'@'localhost' in /var/www/nextcloud/lib/private/DB/Connection.php:64
As I was restoring onto a brand new LXC Buster container, a few things were of course missing. I had restored my Nextcloud database, but I also had to recreate the "ncadmin" DB user and grant it the right permissions. I looked up the ncadmin password in my Nextcloud config.php and added the user:
mysql -u root -p
CREATE USER 'ncadmin'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES on nextcloud.* to ncadmin@localhost;
Next try with step 9.
root@nc:/var/www/nextcloud# sudo -u www-data php /var/www/nextcloud/occ maintenance:data-fingerprint
An unhandled exception has been thrown:
...nextcloud Redis server went away in /var/www/nextcloud/lib/private/Memcache/Redis.php:54
Still no success. Hitting Google brought me to this link. C. Rieger had already been there. 🙂
While checking /etc/redis/redis.conf, I noticed that my Nextcloud config.php used a different path for redis.sock.
redis.conf
unixsocket /var/run/redis/redis-server.sock
config.php
'host' => '/var/run/redis/redis.sock',
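Both values have to match. Since occ can read nested config keys, one way to compare the two sides (paths as in this post) is:

```shell
# Socket the Redis server actually listens on
grep '^unixsocket' /etc/redis/redis.conf

# Socket path Nextcloud is configured to connect to
sudo -u www-data php /var/www/nextcloud/occ config:system:get redis host
```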
After changing the path I rebooted the container and tried step 9 again, this time with success: my Nextcloud instance was back online. I only had to add the new hostname to the trusted domains and could log in again. The only thing I couldn't get working was the NextcloudPi functionality. Since I was only using the NextcloudPi auto-update scripts, I could live without it, so I disabled and uninstalled the app from the user interface.
[Proxmox] Adding the pve-no-subscription repo
To receive updates on Proxmox without a subscription, you have to add the pve-no-subscription repo.
First, find the current pve-enterprise repo:
nano /etc/apt/sources.list.d/pve-enterprise.list
Comment out the pve-enterprise repo:
root@pve:~# cat /etc/apt/sources.list.d/pve-enterprise.list
#deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise
To add the pve-no-subscription repo, create a new file called pve-no-subscription.list
nano /etc/apt/sources.list.d/pve-no-subscription.list
and insert the repo:
root@pve:~# cat /etc/apt/sources.list.d/pve-no-subscription.list
# PVE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb https://download.proxmox.com/debian/pve buster pve-no-subscription
# security updates
deb https://security.debian.org/debian-security buster/updates main contrib
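With the repo in place, updates are pulled the usual Debian way; for Proxmox a full dist-upgrade is the recommended form:

```shell
# Refresh the package lists and apply all pending Proxmox/Debian updates
apt update
apt dist-upgrade
```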
[Docker] Fefe on container technologies
https://blog.fefe.de/?ts=a0d07bd8
"Do you know what brings some cheer into my life in dark times like this season? Watching this spiral:
- Our software is too complex, we don't have the complexity under control! Look, let's turn it into a distributed system! Then the individual parts are less complex. Maybe we can get it under control that way.
- The distributed system needs much more administrative effort. Look, we'll automate that away! We'll do containers! Docker!
- Setting up Docker needs much more administrative effort. Look, we'll automate that away! We'll do Kubernetes!
- Kubernetes needs much more administrative effort. Look, we'll automate that away! We'll do Ansible!
- Ansible needs much more administrative effort. Look, we'll automate that away! We'll do Chef / Salt!
In the last Alternativlos episode, Frank introduced the wonderful word "Komplexitätsverstärker" (complexity amplifier). That is exactly what is happening here. In the end you have a fair-weather system. The first time the wind turns, you are left with a pile of shards. Nobody can see through all this complexity anymore."
[WordPress] SyntaxHighlighter Ampersand character
Recently I noticed that the character & is displayed in SyntaxHighlighter as its HTML entity &amp;.
To fix this, simply add this snippet by the user kaggdesign to /var/www/html/wp-content/plugins/syntaxhighlighter/syntaxhighlighter.php:
/**
* Filter to fix issue with & in SyntaxHighlighter Evolved plugin.
*
* @param string $code Code to format.
* @param array $atts Attributes.
* @param string $tag Tag.
*
* @return string
*/
function kagg_syntaxhighlighter_precode( $code, $atts, $tag ) {
	if ( 'code' === $tag ) {
		$code = wp_specialchars_decode( $code );
	}

	return $code;
}
add_filter( 'syntaxhighlighter_precode', 'kagg_syntaxhighlighter_precode', 10, 3 );
This can be done directly from the web interface: go to Plugins -> Plugin Editor -> select the SyntaxHighlighter Evolved plugin -> add the snippet at the end of the file.
[Proxmox] Scrub cronjob
Default scrub cronjob when installing Proxmox on ZFS:
nocin@pve:~$ cat /etc/cron.d/zfsutils-linux
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
# Scrub the second Sunday of every month.
24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] && /usr/lib/zfs-linux/scrub
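The day-of-month range 8-14 always contains exactly one of each weekday, so the extra `date +\%w` test (0 = Sunday) is what narrows the job down to the second Sunday. The check can be reproduced by hand with a date that was a second Sunday (2024-07-14 is an arbitrary example):

```shell
# 8-14 covers one full week, so exactly one day in that range is a Sunday.
# date +%w prints the weekday as a number, 0 meaning Sunday:
date -d 2024-07-14 +%w    # prints 0, so the scrub would run on this day
```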
[ZFS] Destroy snapshots
Snapshots in ZFS aren’t cumulative. They just include the difference between the filesystem at the time you took the snapshot and now.
Meaning if you have snapshots A, B and C, deleting A doesn’t impact the status of the remaining B and C. This is a common point of confusion when coming from other systems where you might have to consolidate snapshots to get to a consistent state.
This means you can delete a snapshot out of the middle of the list without affecting the snapshots before or after the one you deleted. So if you have:
pool/dataset@snap1
pool/dataset@snap2
pool/dataset@snap3
pool/dataset@snap4
pool/dataset@snap5
you can safely run sudo zfs destroy pool/dataset@snap3, and snap1, snap2, snap4, and snap5 will all be perfectly fine afterwards.
You can estimate the amount of space reclaimed by deleting multiple snapshots with a dry run (-n) on zfs destroy, like this:
sudo zfs destroy -nv pool/dataset@snap4%snap8
would destroy pool/dataset@snap4
would destroy pool/dataset@snap5
would destroy pool/dataset@snap6
would destroy pool/dataset@snap7
would destroy pool/dataset@snap8
would reclaim 25.2G
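Once the dry-run output looks right, the same range syntax (first%last) deletes the snapshots for real; a sketch, using the pool/dataset names from this post:

```shell
# Destroy the whole snapshot range snap4..snap8, verbosely
sudo zfs destroy -v pool/dataset@snap4%snap8
```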
List your snapshots (for a specific dataset simply use grep):
sudo zfs list -rt snapshot | grep pool/dataset
If you need to free some space, you can sort zfs snapshots by size:
zfs list -o name,used -s used -t snap