Homelab, Linux, JS & ABAP (~˘▾˘)~
 

[ABAP] Alpha conversion

DATA(lv_matnr) = CONV matnr( '0000000001' ).
DATA(character_string) = VALUE string( ).

character_string = |Your Material Number is { lv_matnr ALPHA = IN }|.     "Adds leading zeros up to the field length
character_string = |Your Material Number is { lv_matnr ALPHA = OUT }|.    "Removes leading zeros, e.g. '0000000001' becomes '1'

[ABAP Env] Create Data Model & OData Service

Recently I worked through the tutorial on creating a travel bookings app in the SAP Cloud Platform ABAP Environment.

You can find a good introduction and overview of this topic here: Getting Started with ABAP in the Cloud – Part I
And the travel bookings app tutorial here: Getting Started with ABAP in the Cloud – Part II

These are my notes on the steps needed to create the data model and publish it as an OData service.

1. Database Table (ZTABLE) – Place your raw data here first.
2. Data Definition / Interface View (ZI_) – Defines the relations between different tables (e.g. currency or text table).
3. Projection View / Consumption View (ZC_) – Configures the UI depending on your scenario. Use different projection views for different usages of the same interface view and the same physical table.
4. Service Definition (ZSD_) – Exposes the projection view (and underlying associations like currency, country…) as a service.
5. Service Binding (ZSB_) – Defines how we want to make the service available, i.e. the binding type (OData V2 / OData V4). Activate it with the “Activate” button within the editor window. Then select the entity and hit “Preview…” to see what we defined in our projection view.

If you’ve done this, you are able to view the data in a generated Fiori Elements app. But if you also want to create, edit or delete data, you’ll have to add some behavior functionality.

6. Behavior Definition on the Data Definition (ZI_) – Created on top of the data definition and gets the same name as the data definition. Implementation type: managed. Defines the operations create, update and delete.
7. Behavior Implementation (ZBP_I_) – The code for the behavior. For the travel app tutorial, this is some logic for a generated unique key and for field validation. The class inherits from cl_abap_behavior_handler.
8. Behavior Definition on the Projection View (ZC_) – Created on top of the projection view and gets the same name as the projection view. Defines the operations create, update and delete.

[Docker] Useful commands

Image Handling

docker image list – list downloaded images
docker rmi image_name – delete an image

Administration

docker system df – show docker disk usage
docker system prune – free space: remove stopped containers, dangling images and build cache
systemctl restart docker.service – restart the docker service (and all your containers)
ss -tulpn – check if docker containers listen on any port
docker exec container_id cat /etc/hosts
or
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_id – check a container's IP address

Container Handling

docker ps – list running containers
docker ps -a – list all docker containers (running and stopped)
docker stop container_id – stop a container
docker rm container_id – delete a stopped container
docker update --restart=unless-stopped container_id – make sure the container restarts automatically, unless it is manually stopped
docker run -l debug container_id – run a container with log
docker logs -f container_id – follow the log output
docker exec -it container_id /bin/sh – open a shell in the running container
docker commit container_id user/test_image – save the modified container state into a new image called user/test_image
docker run -ti --entrypoint=sh user/test_image – run with a different entrypoint
docker run --volume-driver=nfs container_id – mount an NFS share
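
For example, a typical debugging session combining some of these commands could look like this (the container name is just a placeholder):

# find the container id or name
docker ps

# follow its log output
docker logs -f my_container

# open a shell inside the running container
docker exec -it my_container /bin/sh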

Docker Compose

docker-compose -f ~/docker/docker-compose.yml up -d – create and start the containers; the -d option daemonizes them in the background
docker-compose -f ~/docker/docker-compose.yml down – completely stop and remove the containers and networks created by up (go back to how it was before running the compose file)
docker-compose -f ~/docker/docker-compose.yml pull – pull the latest images
docker-compose logs service_name – check real-time logs
docker-compose stop service_name – stop a running container
docker-compose config – test your .env file, which is used for variable substitution in the docker-compose.yml
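
Putting some of these together, a typical update routine for a compose project could look roughly like this (the compose file path is just the example from above):

# pull the latest images
docker-compose -f ~/docker/docker-compose.yml pull

# recreate the containers with the new images in the background
docker-compose -f ~/docker/docker-compose.yml up -d

# optionally clean up old images and build cache afterwards
docker system prune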

[Shell] User and Group management & File permissions

  • User and Group management
    • id
    • useradd
      • -c – Full name
      • -e – Expiration date
      • -s – Default shell
      • -d – Home directory
    • passwd
    • usermod
      • -l – rename
      • -L – Lock
      • -U – unlock
    • userdel
      • -r – remove user data
    • groupadd
    • groupmod
    • gpasswd [-a -d -A] [user1, user2] [group]
    • newgrp [group]
  • su vs. su - vs. sudo
    • visudo
  • File permissions
    • UGO – User, Group, Other
    • RWX – Read, Write, Execute
    • chmod -R g+x (grant recursive execute permission to the group – see the example sketch after this list)
      • r = 4
      • w = 2
      • x = 1
      • - = 0
      • rwxrwxrwx = 777
      • rw-rw-rw- = 666
      • rwxrwxr-- = 774
      • rw-rw---- = 660
      • rw-r----- = 640
    • chown
    • chgrp
    • umask
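
A small example sketch combining some of these commands (user, group and directory names are made up):

# create a user with full name, default shell and home directory
useradd -c "John Doe" -s /bin/bash -m -d /home/jdoe jdoe
passwd jdoe

# create a group and add the user to it
groupadd backup
gpasswd -a jdoe backup

# hand a directory over to that group and grant read/write/execute
chown -R root:backup /srv/backup
chmod -R 770 /srv/backup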

https://www.sluug.org/resources/presentations/2020/2020-02-12_permissions.pdf

[Proxmox] NFSv4 client saves files as “nobody” and “nogroup” on ZFS Share

I’m running a Proxmox cluster with PVE1 and PVE2. On PVE2, a VM is running Debian Buster, which mounts a ZFS NFS share from PVE1. Inside the VM, a script running as root saves a backup to this NFS share. If I create a file locally (Test1) on PVE1, the owner is of course root. But for a few weeks now, the script running inside the VM has been creating all files as nobody (Test2).

# ls -all /mnt/nfs/data
drwxr-xr-x  2 root  root       4096 Jul  5 07:19 Test1
drwxr-xr-x  2 nobody nogroup   4096 Jul  5 07:21 Test2

This is because the root user is mapped to a different user ID and group when writing files on an NFS share (root squashing). Until now, this was no problem when enabling NFS on a dataset via

zfs set sharenfs=on zpool/data

because no_root_squash was set by default. But it looks like this changed in ZFS on Linux 0.8.3, and the no_root_squash option isn’t set by default anymore. To enable it again, use:

zfs set sharenfs='rw,no_root_squash' zpool/data

Another way is exporting the folder via /etc/exports and adding the no_root_squash option.

# sudo nano /etc/exports
/zpool/data/ *(rw,no_subtree_check,sync,insecure,no_root_squash)

Run sudo exportfs -a after editing the exports file to enable these changes immediately.
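
To verify that the options are really active, you can check the share settings on PVE1, for example with:

# show the NFS options set on the ZFS dataset
zfs get sharenfs zpool/data

# list the currently active NFS exports including their options
exportfs -v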

[Nextcloud] Moving my NC installation

About two years ago I installed Nextcloud via the NextcloudPi script in an LXC Debian Stretch container on my Proxmox host. Since last year there is a new Debian release called Buster, and I wanted to upgrade my container. But somehow it was not possible… something was broken, and on every upgrade I tried, a swap error came up. I searched for hours but couldn’t find any solution to this error, so I had to move my whole Nextcloud installation to a new Debian Buster container. I took the chance to create the new container as an unprivileged container. Since I had no experience moving a complete Nextcloud instance, I first read the NC wiki and had a look at some tutorials. Finally I followed C. Rieger’s awesome guide on backing up and restoring a Nextcloud instance.
Everything went well until step 9.

root@nc:/var/www/nextcloud# sudo -u www-data php /var/www/nextcloud/occ maintenance:data-fingerprint
An unhandled exception has been thrown:
Doctrine\DBAL\DBALException: Failed to connect to the database: An exception occurred in driver: SQLSTATE[HY000] [1698] Access denied for user 'ncadmin'@'localhost' in /var/www/nextcloud/lib/private/DB/Connection.php:64

As I was restoring onto a brand new LXC Buster container, of course a few things were missing. I had restored my Nextcloud database, but I also had to recreate the “ncadmin” database user and grant it the right permissions. I looked up the ncadmin password in my Nextcloud config.php and added the user:

mysql -u root -p
CREATE USER 'ncadmin'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES on nextcloud.* to ncadmin@localhost;

Next try with step 9.

root@nc:/var/www/nextcloud# sudo -u www-data php /var/www/nextcloud/occ maintenance:data-fingerprint
An unhandled exception has been thrown:
...nextcloud Redis server went away in /var/www/nextcloud/lib/private/Memcache/Redis.php:54

Still no success. Hitting Google brought me to this link. C. Rieger had already been there. 🙂
While checking /etc/redis/redis.conf, I noticed that my Nextcloud config.php contained a different path for the Redis socket.

redis.conf

unixsocket /var/run/redis/redis-server.sock

config.php

'host' => '/var/run/redis/redis.sock',
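
Assuming the socket path from redis.conf above is the right one, a quick way to check that Redis actually answers on it:

# confirm the socket file exists
ls -l /var/run/redis/

# ping Redis through its unix socket
redis-cli -s /var/run/redis/redis-server.sock ping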

After changing the path, I rebooted the container and tried step 9 again, this time with success, and my Nextcloud instance was back online. I only had to add the new hostname to the trusted domains and could log in again. The only thing I couldn’t get to work was the NextcloudPi functionality. Since I was only using the NextcloudPi auto-upgrade scripts, I could live without that. I disabled and uninstalled the app from the user interface.

[Proxmox] Adding the pve-no-subscription repo

To receive updates on Proxmox without a subscription, you have to add the pve-no-subscription repo.
First, find the current pve-enterprise repo:

nano /etc/apt/sources.list.d/pve-enterprise.list

Comment out the pve-enterprise repo.

root@pve:~# cat /etc/apt/sources.list.d/pve-enterprise.list
#deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise

To add the pve-no-subscription repo, create a new file called pve-no-subscription.list

nano /etc/apt/sources.list.d/pve-no-subscription.list

and insert the repo:

root@pve:~# cat /etc/apt/sources.list.d/pve-no-subscription.list 
# PVE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb https://download.proxmox.com/debian/pve buster pve-no-subscription

# security updates
deb https://security.debian.org/debian-security buster/updates main contrib
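
After changing the repositories, refresh the package lists and upgrade as usual:

apt update
apt dist-upgrade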

[Docker] Fefe on container technologies

https://blog.fefe.de/?ts=a0d07bd8

“You know what brings some cheer into my life in dark times like this season? Watching this spiral:

  1. Our software is too complex, we can’t get the complexity under control! Look, let’s turn it into a distributed system! Then the individual parts are less complex. Maybe we can get that under control.
  2. The distributed system needs much more administrative effort. Look, let’s automate that away! We’ll do containers! Docker!
  3. Setting up Docker needs much more administrative effort. Look, let’s automate that away! We’ll do Kubernetes!
  4. Kubernetes needs much more administrative effort. Look, let’s automate that away! We’ll do Ansible!
  5. Ansible needs much more administrative effort. Look, let’s automate that away! We’ll do Chef / Salt!

In the last Alternativlos episode, Frank introduced the wonderful word “Komplexitätsverstärker” (complexity amplifier). That is exactly what is happening here. In the end you have a fair-weather system. The first time the wind turns, you’re left with a heap of broken pieces. Nobody can see through all of this complexity anymore.”

[WordPress] SyntaxHighlighter Ampersand character

Recently I noticed that the character & is displayed by the SyntaxHighlighter plugin like this: &amp;

To fix this, simply add this snippet by the user kaggdesign to /var/www/html/wp-content/plugins/syntaxhighlighter/syntaxhighlighter.php:

/**
 * Filter to fix issue with & in SyntaxHighlighter Evolved plugin.
 *
 * @param string $code Code to format.
 * @param array $atts Attributes.
 * @param string $tag Tag.
 *
 * @return string
 */
function kagg_syntaxhighlighter_precode( $code, $atts, $tag ) {
	if ( 'code' === $tag ) {
		$code = wp_specialchars_decode( $code );
	}
	return $code;
}
add_filter( 'syntaxhighlighter_precode', 'kagg_syntaxhighlighter_precode', 10, 3 );

This can be done directly from the web interface: just go to Plugins -> Plugin Editor -> select the plugin SyntaxHighlighter Evolved -> and add the snippet at the end.

[Proxmox] Scrub cronjob

Default scrub cronjob when installing Proxmox on ZFS:

nocin@pve:~$ cat /etc/cron.d/zfsutils-linux 
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# Scrub the second Sunday of every month.
24 0 8-14 * * root [ $(date +\%w) -eq 0 ] && [ -x /usr/lib/zfs-linux/scrub ] && /usr/lib/zfs-linux/scrub
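
A scrub can also be triggered manually, and the result of the last run checked, with (the pool name rpool is just an example):

# start a scrub on the pool
zpool scrub rpool

# show scrub progress and the result of the last scrub
zpool status rpool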