// requires sap/ui/model/Filter and sap/ui/model/FilterOperator
this.getModel().read("/Object", {
    filters: [
        new Filter({
            path: "firstName",
            operator: FilterOperator.EQ,
            value1: "Max"
        }),
        new Filter({
            path: "lastName",
            operator: FilterOperator.EQ,
            value1: "Mustermann"
        })
    ],
    success: oData => { },
    error: err => { }
});
[nodejs] read and write a file
https://nodejs.dev/learn/reading-files-with-nodejs
https://nodejs.dev/learn/writing-files-with-nodejs
const fs = require("fs")
try {
    // read from local folder
    const localPDF = fs.readFileSync('PDFs/myFile.pdf')
    // write back to local folder
    fs.writeFileSync('PDFs/writtenBack.pdf', localPDF)
} catch (err) {
    console.error(err)
}
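The same works asynchronously with the promise-based API; a minimal sketch using fs/promises (available under this path since Node 14):
const fsp = require("fs/promises")

async function copyPdf() {
    try {
        // read and write without blocking the event loop
        const localPDF = await fsp.readFile('PDFs/myFile.pdf')
        await fsp.writeFile('PDFs/writtenBack.pdf', localPDF)
    } catch (err) {
        console.error(err)
    }
}

copyPdf()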
Converting to Base64
try {
    // read from local folder
    const localPDF = fs.readFileSync('PDFs/myFile.pdf')
    // convert the binary buffer to a base64 string
    const localBase64 = localPDF.toString('base64')
    // write back to local folder, decoding the base64 string again
    fs.writeFileSync('PDFs/writtenBack.pdf', localBase64, { encoding: 'base64' })
} catch (err) {
    console.error(err)
}
Reading and writing using streams with pipe
// read and write local file
const reader = fs.createReadStream('PDFs/myFile.pdf')
const writer = fs.createWriteStream('PDFs/writtenBack.pdf')
reader.pipe(writer)
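Note that pipe() does not forward errors from the read stream to the write stream. For proper error handling and cleanup, stream.pipeline (available since Node 10) can be used instead; a minimal sketch:
const { pipeline } = require("stream")

pipeline(
    fs.createReadStream('PDFs/myFile.pdf'),
    fs.createWriteStream('PDFs/writtenBack.pdf'),
    (err) => {
        // called once when the transfer has finished or failed
        if (err) console.error(err)
    }
)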
[SAPUI5] Binding with filter on XML View
https://sapui5.hana.ondemand.com/sdk/#/topic/5338bd1f9afb45fb8b2af957c3530e8f.html
There are two ways to use a filter.
Option 1:
items="{
path: '/myItems',
parameters : {
$filter : 'itemName eq \'myItemName\'',
$orderby : 'createdAt desc'
},
}">
Option 2:
items="{
path: '/myItems',
parameters : {
$orderby : 'createdAt desc'
},
filters : {
path : 'itemName ',
operator : 'EQ',
value1 : 'myItemName'
},
}">
[ABAP] Read components of a dynamic structure
" get the type description of the dynamic structure via RTTI
DATA(lo_structdescr) = CAST cl_abap_structdescr( cl_abap_structdescr=>describe_by_data( p_data = <dynamic_structure> ) ).
DATA(components) = lo_structdescr->get_components( ).
" check whether the component exists before assigning it
IF line_exists( components[ name = 'FIELD1' ] ).
  ASSIGN COMPONENT 'FIELD1' OF STRUCTURE <dynamic_structure> TO FIELD-SYMBOL(<field1>).
  "do stuff...
ENDIF.
[SAP] Open a namespace for changes
Create a namespace
Tcode: SE03

Allow editing of objects of a namespace
Tcode: SE03

If necessary, the Modification Assistant must also be switched off in SE80.
SE80 -> Edit -> Modification Operations -> Switch Off Assistant
[SAPUI5] Toggle dark mode from the Shell Header
There are several different types of buttons you can add to the Shell Header: https://sapui5.hana.ondemand.com/sdk/#/api/sap.ushell.renderers.fiori2.Renderer%23methods/Summary
For my test I chose the "addHeaderEndItem" button. Add the following logic in the Component.js file to create the button and the logic for switching the theme:
_addHeaderButton: function () {
    const oRenderer = sap.ushell.Container.getRenderer("fiori2");
    oRenderer.addHeaderEndItem("sap.ushell.ui.shell.ShellHeadItem", {
        id: "toggleTheme",
        icon: "sap-icon://circle-task-2",
        visible: "{device>/orientation/landscape}",
        tooltip: "Switch Theme",
        press: (oEvent) => {
            // switch between light and dark theme based on the current icon
            const toggleButton = oEvent.getSource();
            if (toggleButton.getIcon() === "sap-icon://circle-task-2") {
                sap.ui.getCore().applyTheme("sap_fiori_3_dark");
                toggleButton.setIcon("sap-icon://circle-task");
            } else {
                sap.ui.getCore().applyTheme("sap_fiori_3");
                toggleButton.setIcon("sap-icon://circle-task-2");
            }
        }
    }, true);
},
Afterwards you need to call the method in the init() function of the component. Now reload the app and you will find the new button in the top right corner. Pressing it switches the theme to dark and back to light.
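A minimal sketch of that call, assuming the component extends sap/ui/core/UIComponent (imported as UIComponent):
init: function () {
    // call the init function of the parent UIComponent first
    UIComponent.prototype.init.apply(this, arguments);
    // then add the theme toggle button to the shell header
    this._addHeaderButton();
},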


[Docker] OCI runtime create failed on Ubuntu 18.04
Yesterday, after rebooting my server running Ubuntu 18.04, I couldn't run most of my Docker containers. Strangely, some worked and some did not. If they didn't, I always got OCI runtime error messages like this:
$ docker-compose up -d
ts3_teamspeak_1 is up-to-date
Creating ts3_teamspeak-db_1 ... error
ERROR: for ts3_teamspeak-db_1 Cannot start service teamspeak-db: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:402: getting the final child's pid from pipe caused: EOF: unknown
After googling a bit, I found the solution. I did an apt upgrade before rebooting and my Docker version was updated to 5:20 (i.e. Docker 20.x). It seems that Ubuntu 18.04 and Docker 20.x are not working well together. Therefore I had to downgrade Docker to 5:18 (Docker 18.09). Find more here.
apt install docker-ce=5:18.09.1~3-0~ubuntu-bionic
apt install containerd.io=1.2.2-1
[Proxmox] Installing Home Assistant
You can install Home Assistant (HA) as an LXC or a VM on Proxmox, or even run HA as a Docker container on top of an LXC or VM, but passing through hardware (like the ConBee II) will become much more complicated. There are many installation guides, e.g.
https://community.home-assistant.io/t/installing-home-assistant-using-proxmox/201835
https://www.x33u.org/docs/server/home-assistant_proxmox-vm/
https://www.juanmtech.com/install-proxmox-and-virtualize-home-assistant/
and there are a few scripts which automate the installation process. Unfortunately, some of them don't work anymore on PVE 7, e.g.
https://github.com/whiskerz007/proxmox_hassio_lxc
https://github.com/whiskerz007/proxmox_hassos_install/
The only script that is working (at the time of writing) is https://github.com/tteck/proxmox_haos_vm, which may be a fork of whiskerz007's previous scripts.
[Terminal] Using rsync with --backup and --delete together
I’m using rsync to create backups from my NAS to an external HDD. The command looks like this:
rsync -azP --delete --exclude=/.zfs -b --backup-dir=Backup /mnt/nfs/photos/ /media/nocin/externalBackup/photos/
| rsync parameter | description |
| -a, --archive | This is equivalent to -rlptgoD. It is a quick way of saying you want recursion and want to preserve almost everything. |
| -z, --compress | With this option, rsync compresses the file data as it is sent to the destination machine, which reduces the amount of data being transmitted. |
| -P | The -P option is equivalent to --partial --progress. Its purpose is to make it much easier to specify these two options for a long transfer that may be interrupted. |
| --delete | This tells rsync to delete extraneous files from the receiving side (ones that aren't on the sending side). |
| --exclude | Exclude files matching PATTERN. |
| -b (--backup), --backup-dir | With this option, preexisting destination files are renamed with a ~ suffix as each file is transferred. You can control where the backup file goes and what (if any) suffix gets appended using the --backup-dir and --suffix options. |
But somehow it always created the Backup folder recursively inside the Backup folder again: the first run created the /Backup folder, after the second run I got /Backup/Backup, after the third run /Backup/Backup/Backup, and so on.
The solution was to exclude the Backup directory using the --exclude parameter.
rsync -azP --delete --exclude=/.zfs --exclude=Backup -b --backup-dir=Backup /mnt/nfs/photos/ /media/nocin/externalBackup/photos/
I found a good explanation for this behaviour here: https://www.jveweb.net/en/archives/2011/02/using-rsync-and-cron-to-automate-incremental-backups.html
“If we are storing backups in the destination folder, or in a directory inside of the destination folder, the --delete parameter is going to delete old backups, as they are not in the source folder. Or attempt to as in the following situation:
Say, we already have a folder called backup inside of the destination directory, and we use rsync again, using --backup-dir=backup one more time. As rsync is going to attempt to delete every file and folder that is not in the source directory, it would backup the backup folder, which would create a backup folder inside our already existing backup folder, and then it would attempt to delete the backup folder and fail because it is using it to backup files.”
