* as of ABAP 7.54
DATA field TYPE p DECIMALS 2.
field += 4.
field -= 2.
field *= 3.
field /= 2.
* obsolete: ADD, SUBTRACT, MULTIPLY, DIVIDE
[ABAP] Display a database table
cl_salv_gui_table_ida=>create( iv_table_name = 'SFLIGHT' )->fullscreen( )->display( ).
Example report in your system: SALV_IDA_DISPLAY_DATA_SIMPLE
[ABAP] Alpha conversion
DATA(lv_matnr) = CONV matnr( '0000000001' ).
DATA(character_string) = VALUE string( ).
character_string = |Your Material Number is { lv_matnr ALPHA = IN }|. "Adds leading zeros
character_string = |Your Material Number is { lv_matnr ALPHA = OUT }|. "Removes leading zeros
[ABAP Env] Create Data Model & OData Service
Recently I worked through the tutorial on creating a travel bookings app in the SAP Cloud Platform ABAP Environment.
Find a good introduction and overview on this topic here: Getting Started with ABAP in the Cloud – Part I
And the travel bookings app tutorial here: Getting Started with ABAP in the Cloud – Part II
These are my notes on the steps needed to create the data model and publish it as an OData service.
# | Layer | Nomenclature | Description |
---|---|---|---|
1 | Database Table | ZTABLE | Place your raw data first |
2 | Data Definition (Interface View) | ZI_ | Relation between different tables (e.g. currency or text table) |
3 | Projection View (Consumption View) | ZC_ | Configure the UI depending on your scenario. Use different projection views for different usages of the same interface view and the same physical table. |
4 | Service Definition | ZSD_ | Expose the projection view (and underlying associations like currency, country…) as a service |
5 | Service Binding | ZSB_ | How do we want to make the service available? Defines the binding type (OData V2 / OData V4). Activate it with the “Activate” button within the editor window. Select the entity and hit “Preview…” to see what we defined in our projection view. |
If you’ve done this, you are able to view the data in a generated Fiori Elements app. But if you also want to create, edit, and delete data, you’ll have to add some behavior functionality.
# | Layer | Nomenclature | Description |
---|---|---|---|
6 | Behavior Definition on Data Definition | ZI_ | Created on top of the Data Definition and gets the same name as the Data Definition. Implementation type: Managed. Defines the operations create, delete, edit. |
7 | Behavior Implementation on Definition View | ZBP_I_ | The code for the behavior. For the travel app tutorial: some logic for a generated unique key plus field validation. The class inherits from cl_abap_behavior_handler. |
8 | Behavior Definition on Projection View | ZC_ | Created on top of the Projection View and gets the same name as the Projection View. Defines the operations create, delete, edit. |
[Docker] Useful commands
Image Handling
Command | Description |
---|---|
docker image list | list downloaded images |
docker rmi image_name | delete an image |
Administration
Command | Description |
---|---|
docker system df | show Docker disk usage |
docker system prune | free space – remove stopped containers, dangling images and cache |
systemctl restart docker.service | restart the Docker service (and all your containers) |
ss -tulpn | check if Docker containers listen on any port |
docker exec container_id cat /etc/hosts or docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' container_id | check a container’s IP address |
Container Handling
Command | Description |
---|---|
docker ps | list running containers |
docker ps -a | list all containers (running and stopped) |
docker stop container_id | stop a container |
docker rm container_id | delete a stopped container |
docker update --restart=unless-stopped container_id | make sure the container restarts automatically, unless manually stopped |
docker run -l debug image_name | run a container with a debug label |
docker logs -f container_id | display the log |
docker exec -it container_id /bin/sh | open a shell in the running container |
docker commit container_id user/test_image | save the modified container state as a new image user/test_image |
docker run -ti --entrypoint=sh user/test_image | run with a different entrypoint |
docker run --volume-driver=nfs image_name | mount an NFS share (see the sketch below) |
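Note that --volume-driver=nfs requires a matching volume plugin. A more common approach is a local volume with NFS options – a minimal sketch, with a made-up server address and path:
# create a named volume backed by an NFS export (address and path are examples)
docker volume create --driver local \
  --opt type=nfs --opt o=addr=192.168.1.10,rw \
  --opt device=:/zpool/data nfs_data
# mount it into a container and list its contents
docker run --rm -v nfs_data:/data alpine ls /data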
Docker Compose
Command | Description |
---|---|
docker-compose -f ~/docker/docker-compose.yml up -d | start the services; the -d option daemonizes them in the background |
docker-compose -f ~/docker/docker-compose.yml down | stop and remove the containers and networks; add -v and --rmi all to also remove volumes and images (go back to how it was before running the compose file) |
docker-compose -f ~/docker/docker-compose.yml pull | pull the latest images |
docker-compose logs service_name | check real-time logs |
docker-compose stop service_name | stop a running service |
docker-compose config | test your .env file, which is used for variable substitution in the docker-compose.yml |
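A minimal sketch of the variable substitution that docker-compose config resolves (file contents and the port variable are made up):
# .env
NGINX_PORT=8080
# docker-compose.yml
version: "3"
services:
  web:
    image: nginx:alpine
    ports:
      - "${NGINX_PORT}:80"
Running docker-compose config in the same directory prints the rendered configuration with ${NGINX_PORT} replaced by 8080, so substitution errors show up before anything is started.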
[Shell] User and Group management & File permissions
- User and Group management
- id
- useradd
- -c – Full name
- -e – Expiration date
- -s – Default shell
- -d – Home directory
- passwd
- usermod
- -l – rename
- -L – Lock
- -U – unlock
- userdel
- -r – remove user data
- groupadd
- groupmod
- gpasswd [-a -d -A] [user1, user2] [group]
- newgrp [group]
- su vs. su - vs. sudo
- visudo
- File permissions
- UGO – User, Group, Other
- RWX – Read, Write, Execute
- chmod -R g+x (grant recursive execute permission to group)
- r = 4
- w = 2
- x = 1
- - = 0
- rwxrwxrwx = 777
- rw-rw-rw- = 666
- rwxrwxr-- = 774
- rw-rw---- = 660
- rw-r----- = 640
- chown
- chgrp
- umask
https://www.sluug.org/resources/presentations/2020/2020-02-12_permissions.pdf
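A short sketch tying several of these commands together (user, group and path names are made up):
# create a user with full name, expiry date, default shell and home directory
useradd -c "Jane Doe" -e 2030-12-31 -s /bin/bash -d /home/jane -m jane
passwd jane
# rename the account, then add it to an existing group
usermod -l jane2 jane
gpasswd -a jane2 developers
# share a directory with the group: rwxr-x--- = 750
chown -R root:developers /srv/shared
chmod -R 750 /srv/shared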
[Proxmox] NFSv4 client saves files as “nobody” and “nogroup” on ZFS Share
I’m running a Proxmox cluster with PVE1 and PVE2. On PVE2, a VM is running Debian Buster, which mounts a ZFS NFS share from PVE1. Inside the VM, a script running as root saves a backup to this NFS share. If I create a file locally (Test1) on PVE1, the owner is of course root. But for a few weeks now, the script running inside the VM has been creating all files as nobody (Test2).
# ls -all /mnt/nfs/data
drwxr-xr-x 2 root root 4096 Jul 5 07:19 Test1
drwxr-xr-x 2 nobody nogroup 4096 Jul 5 07:21 Test2
This is because root users are mapped to a different user ID and group (root squashing) when writing files on an NFS share. But until now, this was no problem when enabling NFS on a dataset via
zfs set sharenfs=on zpool/data
because the no_root_squash option was set by default. But it looks like this was changed in ZFS on Linux 0.8.3, and the no_root_squash option isn’t set by default anymore. To enable it again, use:
zfs set sharenfs='rw,no_root_squash' zpool/data
Another way is to export the folder via /etc/exports and add the no_root_squash option.
# sudo nano /etc/exports
/zpool/data/ *(rw,no_subtree_check,sync,insecure,no_root_squash)
Run sudo exportfs -a after editing the exports file to apply the changes immediately.
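To check that the option is active, the export list can be inspected on the host and a test file written from the VM (paths as in the example above):
# on PVE1: show the active exports and their options
exportfs -v
# inside the VM: files created by root should now belong to root again
touch /mnt/nfs/data/Test3
ls -al /mnt/nfs/data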
[Nextcloud] Moving my NC installation
About two years ago I installed Nextcloud via the NextcloudPi script in an LXC Debian Stretch container on my Proxmox host. Since last year there is a new Debian release called Buster, and I wanted to upgrade my container. But somehow that was not possible… something was broken, and every upgrade I tried ended with a swap error. I searched for hours but couldn’t find any solution to this error, so I had to move my whole Nextcloud installation to a new Debian Buster container. I took the chance to create the new container as an unprivileged container. Since I had no experience moving a complete Nextcloud instance, I first read the NC wiki and had a look at some tutorials. Finally, I followed C. Rieger’s excellent guide on backing up and restoring a Nextcloud instance.
Everything went well until step 9.
root@nc:/var/www/nextcloud# sudo -u www-data php /var/www/nextcloud/occ maintenance:data-fingerprint
An unhandled exception has been thrown:
Doctrine\DBAL\DBALException: Failed to connect to the database: An exception occurred in driver: SQLSTATE[HY000] [1698] Access denied for user 'ncadmin'@'localhost' in /var/www/nextcloud/lib/private/DB/Connection.php:64
As I was restoring onto a brand-new LXC Buster container, a few things were of course missing. I had restored my Nextcloud database, but I also had to recreate the “ncadmin” database user and grant it the right permissions. I looked up the ncadmin password in my Nextcloud config.php and added the user.
mysql -u root -p
CREATE USER 'ncadmin'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON nextcloud.* TO 'ncadmin'@'localhost';
Next try with step 9.
root@nc:/var/www/nextcloud# sudo -u www-data php /var/www/nextcloud/occ maintenance:data-fingerprint
An unhandled exception has been thrown:
...nextcloud Redis server went away in /var/www/nextcloud/lib/private/Memcache/Redis.php:54
Still no success. Hitting Google brought me to this link. C. Rieger had already been there. 🙂
While checking /etc/redis/redis.conf, I noticed that my Nextcloud config.php pointed to a different path for redis.sock.
redis.conf
unixsocket /var/run/redis/redis-server.sock
config.php
'host' => '/var/run/redis/redis.sock',
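The socket path in config.php has to match the one in redis.conf; changing it by hand, or with a one-liner like this (paths as above), fixes the mismatch:
# point Nextcloud at the socket Redis actually listens on
sed -i "s|/var/run/redis/redis.sock|/var/run/redis/redis-server.sock|" /var/www/nextcloud/config/config.php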
After changing the path, I rebooted the container and tried step 9 again. This time it succeeded, and my Nextcloud instance was back online. I only had to add the new hostname to the trusted domains (see the occ call below) and could log in again. The only thing I couldn’t get to work was the NextcloudPi functionality. Since I was only using the NextcloudPi auto-upgrade scripts, I could live without that and disabled and uninstalled the app from the user interface.
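For reference, the trusted domain can also be added via occ instead of editing config.php directly (the index and hostname here are examples):
sudo -u www-data php /var/www/nextcloud/occ config:system:set trusted_domains 1 --value=nc.example.lan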
[Proxmox] Adding the pve-no-subscription repo
To receive updates on Proxmox without a subscription, you have to add the pve-no-subscription repo.
First, find the current pve-enterprise repo:
nano /etc/apt/sources.list.d/pve-enterprise.list
Comment out the pve-enterprise repo.
root@pve:~# cat /etc/apt/sources.list.d/pve-enterprise.list
#deb https://enterprise.proxmox.com/debian/pve buster pve-enterprise
To add the pve-no-subscription repo, create a new file called pve-no-subscription.list
nano /etc/apt/sources.list.d/pve-no-subscription.list
and insert the repo:
root@pve:~# cat /etc/apt/sources.list.d/pve-no-subscription.list
# PVE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb https://download.proxmox.com/debian/pve buster pve-no-subscription
# security updates
deb https://security.debian.org/debian-security buster/updates main contrib
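After changing the repositories, refresh the package lists and upgrade; for Proxmox, use a full dist-upgrade rather than a plain upgrade:
apt update
apt dist-upgrade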
[Docker] Fefe on container technologies
https://blog.fefe.de/?ts=a0d07bd8
“You know what brings some cheer into my life in dark times like this season? Watching this spiral:
- Our software is too complex, we don’t have the complexity under control! Watch this, we’ll turn it into a distributed system! Then the individual parts will be less complex. Maybe we can get it under control that way.
- The distributed system needs much more administrative effort. Watch this, we’ll automate that away! We’ll do containers! Docker!
- Setting up Docker needs much more administrative effort. Watch this, we’ll automate that away! We’ll do Kubernetes!
- Kubernetes needs much more administrative effort. Watch this, we’ll automate that away! We’ll do Ansible!
- Ansible needs much more administrative effort. Watch this, we’ll automate that away! We’ll do Chef / Salt!
On the latest Alternativlos, Frank introduced the wonderful word “Komplexitätsverstärker” (complexity amplifier). That is exactly what is happening here. In the end you have a fair-weather system. The first time the wind turns, you’re left with a pile of shards. Nobody can see through all that complexity anymore.”