Unfortunately, the app crashes on my Android phone. If this is your case as well, proceed as follows to set up this wifi:
network name: eduroam
identity: eduroam@hu-berlin.de
domain: hu-berlin.de
Note that other universities may require other setups.
I have created this certificate file with openssl x509 -inform PEM -outform DER -in CA.pem -out hu-ca-2024.crt, using the CA.pem extracted from the eduroam setup for my PC. ↩︎
This is my first blog post on the topic of LibreOffice. Let me quickly explain my link to LibreOffice. I work for a data protection authority in the EU and help navigate the digital transformation of our office with about 100 staff members. While many of our partner organisations adopt Microsoft 365, our office decided to pilot Nextcloud with Collabora Office Online.
In the future, I want to blog (in my personal capacity) about my thoughts related to the use of alternative word processing software in the public sector in general and in our specific case.
As there are no dedicated resources for training, the preparation of templates, etc. during the pilot of LibreOffice, the experience so far covers a large spectrum of user satisfaction. Generally, our staff have spent years of their lives using Microsoft Office and expect any other software to work the same way. If it does not, they send an email to me (best case) or switch back to Microsoft Office.
During the week, I again discussed some smaller LibreOffice bugs. This weekend, I then showed some FOSS Blender animated short videos to family members. It seems that Blender is more successful in its domain than LibreOffice. Is that possible? Or are animated short videos just more captivating due to their special effects? 😅
You can watch the one-minute Blender animated short movie “CREST” on Youtube, or the making-of, which you find below.
I find it very inspiring to see what talented artists can do with Blender. For my part, I once installed Blender and uninstalled it again. Back then, it was not easy to use for people not familiar with video animation software. Blender competes with proprietary software such as Maya or Cinema 4D. The latter costs about 60 USD per month on the annual subscription plan. Not exactly cheap.
Then, I read in the fediverse about people working with LibreOffice:
I just tried to use #LibreOffice #Draw to draw some arrows and boxes onto JPEG images for emphasizing stuff.
The UX is really bad for somebody not working with Draw all the time.
Whatever I do, instead of drawing onto the image, the image gets selected instead.
Could not find any layer-sidebar.
Could not scale text without starting the “Character …” menu, modifying font size blindly + confirming > just to see its effect and then start all over.
Dear #FOSS, we really should do better.
— Author Karl Voit (12 November 2023 at 14:51)
In the past, I have worked on online voting systems. They are not very good yet, despite years of effort. xkcd dedicated a comic to voting software.
Elections seem simple—aren’t they just counting? But they have a unique, challenging combination of security and privacy requirements. The stakes are high; the context is adversarial; the electorate needs to be convinced that the results are correct; and the secrecy of the ballot must be ensured. And they have practical constraints: time is of the essence, and voting systems need to be affordable and maintainable, and usable by voters, election officials, and pollworkers.
— Author Matthew Bernhard et al. in their paper Public Evidence from Secret Ballots from 2017
What is the unique challenge of developing word processing software? Happy to hear back from you in the blog comments or on the companion fediverse post!
Our starting point is the docker-compose.yml shipped with the Mastodon code. Why is it not enough? It assumes you set up a proxy with HTTPS endpoints yourself. So let’s integrate this into Docker as well.
A few remarks to start with:
apt install docker docker.io caddy jq git
create an unprivileged user account, e.g. mastodon
adduser --disabled-login mastodon
adduser mastodon docker
adduser mastodon sudo # optional, remove later
su mastodon # switch to that user
my docker compose: Docker Compose version v2.2.3 (based on go)
install docker compose in 3 lines:
mkdir -p ~/.docker/cli-plugins
curl -sSL https://github.com/docker/compose/releases/download/v2.2.3/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
chmod +x ~/.docker/cli-plugins/docker-compose
my testing domain (for this example): social.host
my dot-env file .env
for docker compose:
LETS_ENCRYPT_EMAIL=admin-mail@social.host
MASTODON_DOMAIN=social.host
FRONTEND_SUBNET="172.22.0.0/16"
# check the latest version here: https://hub.docker.com/r/tootsuite/mastodon/tags
MASTODON_VERSION=v3.4.6
I have commented out build: ., because I prefer to rely on the official images from Docker Hub.
With little effort, we also enable full-text search with Elasticsearch.
The setup places all databases and uploaded files in the folder mastodon.
We use a named volume mastodon-public to expose the static files from the mastodon-web container to the Caddy webserver. Caddy serves static files directly for improved speed. Awesome! :star2:
The setup comes with the Mastodon Twitter Crossposter. You need to set up an extra subdomain for it. Remove it from the docker-compose.yml in case you have no use for it.
Using extra_hosts with "host.docker.internal:host-gateway", we expose the Docker host to the Mastodon Sidekiq container, in case you configure Mastodon to use a mail transfer agent (e.g. postfix) running on the host. In that case, use SMTP_SERVER=host.docker.internal.
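For reference, the mail-related excerpt of mastodon.env.production could then look like this sketch (placeholder values; the variable names are Mastodon’s SMTP settings, but the exact values depend on your MTA):

```sh
# file: 'mastodon.env.production' (excerpt, placeholder values)
SMTP_SERVER=host.docker.internal
SMTP_PORT=25
SMTP_AUTH_METHOD=none
SMTP_OPENSSL_VERIFY_MODE=none
SMTP_FROM_ADDRESS=notifications@social.host
```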
# file: 'docker-compose.yml'
version: "3.7"
services:
caddy:
image: caddy:2-alpine
restart: unless-stopped
container_name: caddy
ports:
- "80:80"
- "443:443"
volumes:
- ./caddy/etc-caddy:/etc/caddy
- ./caddy/data:/data # Optional
- ./caddy/config:/config # Optional
- ./caddy/logs:/logs
- mastodon-public:/srv/mastodon/public:ro
env_file: .env
# helps crossposter resolve the mastodon server internally
hostname: "${MASTODON_DOMAIN}"
networks:
frontend:
aliases:
- "${MASTODON_DOMAIN}"
networks:
- frontend
- backend
mastodon-db:
restart: always
image: postgres:14-alpine
container_name: "mastodon-db"
healthcheck:
test: pg_isready -U postgres
environment:
POSTGRES_HOST_AUTH_METHOD: trust
volumes:
- "./mastodon/postgres:/var/lib/postgresql/data"
networks:
- backend
mastodon-redis:
restart: always
image: redis:6.0-alpine
container_name: "mastodon-redis"
healthcheck:
test: redis-cli ping
volumes:
- ./mastodon/redis:/data
networks:
- backend
mastodon-elastic:
restart: always
image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
container_name: "mastodon-elastic"
healthcheck:
test: curl --silent --fail localhost:9200/_cluster/health || exit 1
environment:
ES_JAVA_OPTS: "-Xms512m -Xmx512m"
cluster.name: es-mastodon
discovery.type: single-node
bootstrap.memory_lock: "true"
volumes:
- ./mastodon/elasticsearch:/usr/share/elasticsearch/data
networks:
- backend
ulimits:
memlock:
soft: -1
hard: -1
mastodon-web:
restart: always
image: "tootsuite/mastodon:${MASTODON_VERSION}"
container_name: "mastodon-web"
healthcheck:
test: wget -q --spider --proxy=off localhost:3000/health || exit 1
env_file: mastodon.env.production
environment:
LOCAL_DOMAIN: "${MASTODON_DOMAIN}"
SMTP_FROM_ADDRESS: "notifications@${MASTODON_DOMAIN}"
ES_HOST: mastodon-elastic
ES_ENABLED: true
command: bash -c "rm -f /mastodon/tmp/pids/server.pid; bundle exec rails s -p 3000"
expose:
- "3000"
depends_on:
- mastodon-db
- mastodon-redis
- mastodon-elastic
volumes:
# https://www.digitalocean.com/community/tutorials/how-to-share-data-between-docker-containers
- mastodon-public:/opt/mastodon/public # map static files in volume for caddy
- ./mastodon/public/system:/opt/mastodon/public/system
networks:
- frontend
- backend
extra_hosts:
- "host.docker.internal:host-gateway"
mastodon-streaming:
restart: always
image: "tootsuite/mastodon:${MASTODON_VERSION}"
container_name: "mastodon-streaming"
healthcheck:
test: wget -q --spider --proxy=off localhost:4000/api/v1/streaming/health || exit 1
env_file: mastodon.env.production
environment:
LOCAL_DOMAIN: "${MASTODON_DOMAIN}"
SMTP_FROM_ADDRESS: "notifications@${MASTODON_DOMAIN}"
ES_HOST: mastodon-elastic
ES_ENABLED: true
command: node ./streaming
expose:
- "4000"
depends_on:
- mastodon-db
- mastodon-redis
networks:
- frontend
- backend
mastodon-sidekiq:
restart: always
image: "tootsuite/mastodon:${MASTODON_VERSION}"
container_name: "mastodon-sidekiq"
healthcheck:
test: ps aux | grep '[s]idekiq\ 6' || false
env_file: mastodon.env.production
environment:
LOCAL_DOMAIN: "${MASTODON_DOMAIN}"
SMTP_FROM_ADDRESS: "notifications@${MASTODON_DOMAIN}"
ES_HOST: mastodon-elastic
ES_ENABLED: true
command: bundle exec sidekiq
depends_on:
- mastodon-db
- mastodon-redis
volumes:
- ./mastodon/public/system:/mastodon/public/system
networks:
- frontend
- backend
extra_hosts:
- "host.docker.internal:host-gateway"
crossposter-db:
restart: always
image: postgres:14-alpine
container_name: "crossposter-db"
healthcheck:
test: pg_isready -U postgres
environment:
POSTGRES_HOST_AUTH_METHOD: trust
volumes:
- ./crossposter/postgres:/var/lib/postgresql/data
networks:
- backend
crossposter-redis:
restart: always
image: redis:6.0-alpine
container_name: "crossposter-redis"
healthcheck:
test: redis-cli ping
volumes:
- ./crossposter/redis:/data
networks:
- backend
crossposter-web:
restart: always
build: https://github.com/renatolond/mastodon-twitter-poster.git#main
image: mastodon-twitter-poster
container_name: "crossposter-web"
env_file: crossposter.env.production
environment:
CROSSPOSTER_DOMAIN: "https://crossposter.${MASTODON_DOMAIN}"
expose:
- "3000"
depends_on:
- crossposter-db
networks:
- frontend
- backend
crossposter-sidekiq:
restart: always
build: https://github.com/renatolond/mastodon-twitter-poster.git#main
image: mastodon-twitter-poster
container_name: "crossposter-sidekiq"
healthcheck:
test: ps aux | grep '[s]idekiq\ 6' || false
env_file: crossposter.env.production
environment:
ALLOWED_DOMAIN: "${MASTODON_DOMAIN}"
CROSSPOSTER_DOMAIN: "https://crossposter.${MASTODON_DOMAIN}"
command: bundle exec sidekiq -c 5 -q default
depends_on:
- crossposter-db
- crossposter-redis
networks:
- frontend
- backend
volumes:
mastodon-public:
networks:
frontend:
name: "${COMPOSE_PROJECT_NAME}_frontend"
ipam:
config:
- subnet: "${FRONTEND_SUBNET}"
backend:
name: "${COMPOSE_PROJECT_NAME}_backend"
internal: true
The web server Caddy is configured using a Caddyfile stored in ./caddy/etc-caddy. I started with a config I found on Github.
# file: 'Caddyfile'
# kate: indent-width 8; space-indent on;
{
# Global options block. Entirely optional, https is on by default
# Optional email key for lets encrypt
email {$LETS_ENCRYPT_EMAIL}
# Optional staging lets encrypt for testing. Comment out for production.
# acme_ca https://acme-staging-v02.api.letsencrypt.org/directory
# admin off
}
{$MASTODON_DOMAIN} {
log {
# format single_field common_log
output file /logs/access.log
}
root * /srv/mastodon/public
encode gzip
@static file
handle @static {
file_server
}
handle /api/v1/streaming* {
reverse_proxy mastodon-streaming:4000
}
handle {
reverse_proxy mastodon-web:3000
}
header {
Strict-Transport-Security "max-age=31536000;"
}
header /sw.js Cache-Control "public, max-age=0";
header /emoji* Cache-Control "public, max-age=31536000, immutable"
header /packs* Cache-Control "public, max-age=31536000, immutable"
header /system/accounts/avatars* Cache-Control "public, max-age=31536000, immutable"
header /system/media_attachments/files* Cache-Control "public, max-age=31536000, immutable"
handle_errors {
@5xx expression `{http.error.status_code} >= 500 && {http.error.status_code} < 600`
rewrite @5xx /500.html
file_server
}
}
crossposter.{$MASTODON_DOMAIN} {
log {
# format single_field common_log
output file /logs/access-crossposter.log
}
encode gzip
handle {
reverse_proxy crossposter-web:3000
}
}
With these files in place, create a few more folders and launch the setup of the instance. If the instance has been set up before, a database setup may be enough.
# mastodon
touch mastodon.env.production
sudo chown 991:991 mastodon.env.production
mkdir -p mastodon/public
sudo chown -R 991:991 mastodon/public
mkdir -p mastodon/elasticsearch
sudo chmod g+rwx mastodon/elasticsearch
sudo chgrp 0 mastodon/elasticsearch
# first time: setup mastodon
# https://github.com/mastodon/mastodon/issues/16353 (on RUBYOPT)
docker compose run --rm -v $(pwd)/mastodon.env.production:/opt/mastodon/.env.production -e RUBYOPT=-W0 web bundle exec rake mastodon:setup
# subsequent times: skip generation of config and only setup database
docker compose run --rm -v $(pwd)/mastodon.env.production:/opt/mastodon/.env.production web bundle exec rake db:setup
# crossposter
mkdir crossposter
docker compose run --rm crossposter-web bundle exec rake db:setup
# launch mastodon and crossposter
docker compose up -d
# look into the logs, -f for live logs
docker compose logs -f
I had a lot of trouble getting the Mastodon container to connect to a mail transfer agent (MTA) on my host. Eventually, I solved it with an extra firewall rule: ufw allow proto tcp from any to 172.17.0.1 port 25.
The mail issue can be avoided by a) using a SaaS (such as Mailgun, Mailjet or Sendinblue) or b) running another Docker container with postfix that is also in the frontend network. Look at Peertube’s docker-compose file for some inspiration.
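As a sketch for option b), a minimal postfix relay service modelled on Peertube’s docker-compose file could look as follows (the image mwader/postfix-relay and the POSTFIX_myhostname variable are taken from Peertube’s example; adapt before use):

```yaml
# hypothetical addition to docker-compose.yml, inspired by Peertube's setup
  postfix:
    image: mwader/postfix-relay
    restart: always
    environment:
      POSTFIX_myhostname: "${MASTODON_DOMAIN}"
    networks:
      - frontend
```

Mastodon would then reach it with SMTP_SERVER=postfix and SMTP_PORT=25.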
(Caddyfile for Mastodon on Github)

Our starting point is the docker-compose.yml shipped with the Mastodon code. Why is it not enough? It assumes you set up a proxy with HTTPS endpoints yourself. So let’s integrate this into Docker as well.
Consider also the compact setup with the Caddy webserver.
A few remarks to start with:
apt install docker docker.io jq git
create an unprivileged user account, e.g. mastodon
adduser --disabled-login mastodon
adduser mastodon docker
adduser mastodon sudo # optional, remove later
su mastodon # switch to that user
my docker compose: Docker Compose version v2.2.3 (based on go)
install docker compose in 3 lines:
mkdir -p ~/.docker/cli-plugins
curl -sSL https://github.com/docker/compose/releases/download/v2.2.3/docker-compose-linux-x86_64 -o ~/.docker/cli-plugins/docker-compose
chmod +x ~/.docker/cli-plugins/docker-compose
my testing domain (for this example): social.host
my dot-env file .env
for docker compose:
LETS_ENCRYPT_EMAIL=admin-mail@social.host
MASTODON_DOMAIN=social.host
I have commented out build: ., because I prefer to rely on the official images from Docker Hub.
With little effort, I also enable full-text search with Elasticsearch.
The support of VIRTUAL_PATH is brand-new in nginx-proxy. It is not yet in the main branch, so we rely on nginxproxy/nginx-proxy:dev-alpine.
The Mastodon code also ships an nginx configuration. However, nginx-proxy generates much of it as well, so I currently believe no further configuration is required here. Still, nginx-proxy allows adding custom elements to the generated configuration.
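As an example of such a custom element, nginx-proxy includes per-host snippets from /etc/nginx/vhost.d (mounted below from ./nginx/vhost). A hypothetical snippet to raise the upload limit for media attachments could be:

```nginx
# file: './nginx/vhost/social.host' (hypothetical; the filename must match the virtual host)
client_max_body_size 80m;
```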
# file: 'docker-compose.yml'
version: "3.7"
services:
nginx-proxy:
image: nginxproxy/nginx-proxy:dev-alpine
container_name: nginx-proxy
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx/conf:/etc/nginx/conf.d
- ./nginx/vhost:/etc/nginx/vhost.d
- html:/usr/share/nginx/html
- ./nginx/certs:/etc/nginx/certs:ro
- /var/run/docker.sock:/tmp/docker.sock:ro
- ./nginx/logs:/var/log/nginx
networks:
- external_network
- internal_network
acme-companion:
image: nginxproxy/acme-companion
container_name: nginx-proxy-acme
volumes_from:
- nginx-proxy
volumes:
- ./nginx/certs:/etc/nginx/certs:rw
- ./nginx/acme:/etc/acme.sh
- /var/run/docker.sock:/var/run/docker.sock:ro
environment:
DEFAULT_EMAIL: "${LETS_ENCRYPT_EMAIL}"
networks:
- external_network
db:
restart: always
image: postgres:14-alpine
shm_size: 256mb
networks:
- internal_network
healthcheck:
test: ["CMD", "pg_isready", "-U", "postgres"]
volumes:
- ./mastodon/postgres14:/var/lib/postgresql/data
environment:
POSTGRES_HOST_AUTH_METHOD: trust
redis:
restart: always
image: redis:6-alpine
networks:
- internal_network
healthcheck:
test: ["CMD", "redis-cli", "ping"]
volumes:
- ./mastodon/redis:/data
# elasticsearch
es:
restart: always
image: docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.10
environment:
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "cluster.name=es-mastodon"
- "discovery.type=single-node"
- "bootstrap.memory_lock=true"
networks:
- internal_network
healthcheck:
test: ["CMD-SHELL", "curl --silent --fail localhost:9200/_cluster/health || exit 1"]
volumes:
- ./mastodon/elasticsearch:/usr/share/elasticsearch/data
ulimits:
memlock:
soft: -1
hard: -1
web:
# build: .
image: tootsuite/mastodon:v3.4.6
restart: always
env_file: mastodon.env.production
command: bash -c "rm -f /mastodon/tmp/pids/server.pid; bundle exec rails s -p 3000"
networks:
- external_network
- internal_network
healthcheck:
test: ["CMD-SHELL", "wget -q --spider --proxy=off localhost:3000/health || exit 1"]
ports:
- "127.0.0.1:3000:3000"
depends_on:
- db
- redis
- es
volumes:
- ./mastodon/public/system:/mastodon/public/system
environment:
VIRTUAL_HOST: "${MASTODON_DOMAIN}"
VIRTUAL_PATH: "/"
VIRTUAL_PORT: 3000
LETSENCRYPT_HOST: "${MASTODON_DOMAIN}"
ES_HOST: es
ES_ENABLED: true
streaming:
# build: .
image: tootsuite/mastodon:v3.4.6
restart: always
env_file: mastodon.env.production
command: node ./streaming
networks:
- external_network
- internal_network
healthcheck:
test: ["CMD-SHELL", "wget -q --spider --proxy=off localhost:4000/api/v1/streaming/health || exit 1"]
ports:
- "127.0.0.1:4000:4000"
depends_on:
- db
- redis
environment:
VIRTUAL_HOST: "${MASTODON_DOMAIN}"
VIRTUAL_PATH: "/api/v1/streaming"
VIRTUAL_PORT: 4000
sidekiq:
# build: .
image: tootsuite/mastodon:v3.4.6
restart: always
env_file: mastodon.env.production
command: bundle exec sidekiq
depends_on:
- db
- redis
networks:
# - external_network
- internal_network
volumes:
- ./mastodon/public/system:/mastodon/public/system
volumes:
html:
networks:
external_network:
internal_network:
internal: true
With this file in place, create a few more folders and launch the setup of the instance. If the instance has been set up before, a database setup may be enough.
# mastodon
touch mastodon.env.production
sudo chown 991:991 mastodon.env.production
mkdir -p mastodon/public
sudo chown -R 991:991 mastodon/public
mkdir -p mastodon/elasticsearch
sudo chmod g+rwx mastodon/elasticsearch
sudo chgrp 0 mastodon/elasticsearch
# first time: setup mastodon
# https://github.com/mastodon/mastodon/issues/16353 (on RUBYOPT)
docker compose run --rm -v $(pwd)/mastodon.env.production:/opt/mastodon/.env.production -e RUBYOPT=-W0 web bundle exec rake mastodon:setup
# subsequent times: skip generation of config and only setup database
docker compose run --rm -v $(pwd)/mastodon.env.production:/opt/mastodon/.env.production web bundle exec rake db:setup
# launch mastodon
docker compose up -d
# look into the logs, -f for live logs
docker compose logs -f
To set up the Mastodon Twitter Poster for crossposting, add the following services to the docker-compose.yml:
create a file crossposter.env.production with content adapted from https://github.com/renatolond/mastodon-twitter-poster/blob/main/.env.example
create a directory crossposter
add also RAILS_LOG_TO_STDOUT=enabled to crossposter.env.production (Github issue)
crossposter-db:
restart: always
image: postgres:14-alpine
container_name: "crossposter-db"
healthcheck:
test: pg_isready -U postgres
environment:
POSTGRES_HOST_AUTH_METHOD: trust
volumes:
- ./crossposter/postgres:/var/lib/postgresql/data
networks:
- internal_network
crossposter-redis:
restart: always
image: redis:6.0-alpine
container_name: "crossposter-redis"
healthcheck:
test: redis-cli ping
volumes:
- ./crossposter/redis:/data
networks:
- internal_network
crossposter-web:
restart: always
build: https://github.com/renatolond/mastodon-twitter-poster.git#main
image: mastodon-twitter-poster
container_name: "crossposter-web"
env_file: crossposter.env.production
environment:
ALLOWED_DOMAIN: "${MASTODON_DOMAIN}"
DB_HOST: crossposter-db
REDIS_URL: "redis://crossposter-redis"
networks:
- internal_network
- external_network
expose:
- "3000"
depends_on:
- crossposter-db
crossposter-sidekiq:
restart: always
build: https://github.com/renatolond/mastodon-twitter-poster.git#main
image: mastodon-twitter-poster
container_name: "crossposter-sidekiq"
env_file: crossposter.env.production
environment:
ALLOWED_DOMAIN: "${MASTODON_DOMAIN}"
REDIS_URL: "redis://crossposter-redis"
DB_HOST: crossposter-db
command: bundle exec sidekiq -c 5 -q default
healthcheck:
test: ps aux | grep '[s]idekiq\ 6' || false
networks:
# - external_network
- internal_network
depends_on:
- crossposter-db
- crossposter-redis
The crossposter requires a database setup before the containers can be launched:
docker compose run --rm crossposter-web bundle exec rake db:setup
those travelling in first class, most of them the wealthiest passengers on board, included prominent members of the upper class, businessmen, politicians, high-ranking military personnel, industrialists, bankers, entertainers, socialites, and professional athletes. Second-class passengers were predominantly middle-class travellers and included professors, authors, clergymen, and tourists. Third-class or steerage passengers were primarily immigrants moving to the United States and Canada.
Much has changed since then.
The purpose and passengers of such mega ships (rather kilo ships :wink:) have changed dramatically since then. The ship is no longer a means of transport. Passengers from Central Europe fly 5000 km to the Orient for a cruise of 500 km to a nearby harbour and back to the point of departure. Many passengers are in the age group 50+ and have already cruised around quite a lot1. Then, there are a few younger families and couples as well. Other single travellers fall rather into the category of widows2.
My ship features 1267 twin cabins for 2534 passengers but, if need be, can host up to 2700 passengers, the 1030 crew members excluded. The other ship in the harbour, the Costa Firenze, has 2116 cabins for up to 5078 passengers (twice the Titanic) and provides for a crew of about 1300 members.
Due to Covid-19, the ships are far from fully booked. In my case, the occupancy rate was about 40%, a bit more than 1000 people.
Life at sea on this German-operated ship is best compared to Club holidays in Germany3:
Consequently, a cruise on this ship is the perfect fit for all those who would like to hang out with Germans, have German bread and bread rolls for every meal, enjoy Sauerkraut, Klöße, Currywurst and Döner Kebab, but at the same time rather prefer a more Mediterranean climate than what Germany can typically offer! Kind of German holidays outside of Germany.
With 1000 German passengers on board, it is easy to take pictures of the scenic locations without people: at 8 PM, everyone is at dinner! Let me take you on a tour.
I have a few more impressions taken at daytime.
I know because, during a show on deck, the moderator asked who had been on a cruise before, and many hands were raised. ↩︎
There was a meetup of single travellers on the ship. However, I didn’t use the occasion to ask them whether they were really widowed. :see_no_evil: ↩︎
Not that I have ever done club holidays in Germany–but that’s how I imagine it! ↩︎
Recently, I removed the comments provided by Disqus from this blog, because Disqus introduced too much data sharing with many third parties. This year, Norway fined Disqus 2.5 million euros for tracking without a legal basis.
Please find hereafter some tips on how to export comments from Disqus and display them in a privacy-friendly way in your Jekyll blog.
Navigate to http://disqus.com/admin/discussions/export/ to export your comments to XML format.
The XML has principally three parts: metadata, a list of webpages, and a list of comments, each linked to a webpage (via a Disqus identifier) and possibly to a parent comment in case the comment is a reply.
For use within Jekyll, I need to restructure the data and have a list of comments for each webpage by my own identifier (e.g. post slug) and convert everything to a format that Jekyll can handle, hence YAML, JSON, CSV, or TSV. I choose YAML.
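The restructuring step can be sketched in plain Ruby with made-up sample data (the slugs, ids and messages below are hypothetical; the real script further down works on the actual Disqus export):

```ruby
#!/usr/bin/env ruby
# Sketch: regroup a flat list of comments into one list per post slug
# and serialise the result as YAML for consumption by Jekyll.
require 'yaml'

# hypothetical sample data standing in for the parsed Disqus export
comments = [
  { 'id' => 'disqus-1', 'slug' => 'first-post',  'message' => 'Nice article!' },
  { 'id' => 'disqus-2', 'slug' => 'first-post',  'message' => 'Thanks!' },
  { 'id' => 'disqus-3', 'slug' => 'second-post', 'message' => 'Interesting.' }
]

# group_by yields { slug => [comments...] }, the shape Jekyll can look up
# via site.data.comments[page.slug]
by_slug = comments.group_by { |c| c['slug'] }

puts by_slug.to_yaml
```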
Install the Linux tool xq to manipulate XML files and export to JSON, as well as the tool jq. xq is basically a wrapper around jq.
pip install xq
Download the jq binaries here: https://stedolan.github.io/jq/download/
I then convert the Disqus XML export into a JSON file with the code in export-disqus-xml2json.sh and use import-json-yaml.rb to split the list of comments into individual files for easy consumption by Jekyll.
# file: 'export-disqus-xml2json.sh'
#!/usr/bin/env sh
xq '.disqus | .thread as $threads | .post | map(select(.isDeleted == "false")) | map(.thread."@dsq:id" as $id | ($threads[] | select(."@dsq:id" == $id)) as $thread | {id: ("disqus-"+."@dsq:id"), date: .createdAt, slug: ($thread.id | tostring | gsub("/$";"") | split("/") | last), name: (if .author.name == "Robert" then "Robert Riemann" else .author.name end), avatar: .author | (if has("username") and .username != "rriemann" then "https://disqus.com/api/users/avatars/"+.username+".jpg" else null end), email: .author | (if has("username") and .username == "rriemann" then "my@mail.com" else null end), message, origin: ($thread.link | tostring | gsub("^https://blog.riemann.cc";"")), replying_to: (if has("parent") then ("disqus-"+.parent."@dsq:id") else null end)})' "$@"
Example comment from the JSON list:
{
"id": "disqus-4145062197",
"date": "2018-10-14T22:14:58Z",
"slug": "versioning-of-openoffice-libreoffice-documents-using-git",
"name": "Robert Riemann",
"avatar": null,
"email": "my@mail.com",
"message": "<p>I agree, it is not perfect. I have no solution how to keep the noise out of git.</p>",
"origin": "/2013/04/23/versioning-of-openoffice-libreoffice-documents-using-git/",
"replying_to": "disqus-4136593561"
}
The script import-json-yaml.rb takes each comment and puts it in YAML format with a unique filename in a folder named after the slug.
# file: 'import-json-yaml.rb'
#!/usr/bin/env ruby
require 'json'
require 'yaml'
require 'fileutils'
require 'date'
data = if ARGV.length > 0 then
JSON.load_file(ARGV[0])
else
JSON.parse(ARGF.read)
end
data.each do |comment|
FileUtils.mkdir_p comment['slug']
File.write "#{comment['slug']}/#{comment['id']}-#{Date.parse(comment['date']).strftime('%s')}.yml", comment.to_yaml
end
The output with tree
looks like:
_data
├── comments
│ ├── announcing-kubeplayer
│ │ ├── disqus-113988522-1292630400.yml
│ │ └── disqus-1858985256-1424044800.yml
│ ├── requires-owncloud-serverside-backend
│ │ ├── disqus-41270666-1269302400.yml
│ │ ├── disqus-41273219-1269302400.yml
...
Those comments are accessible in Jekyll posts/pages via site.data.comments[page.slug].
Most helpful for the integration of comments to Jekyll was the post https://mademistakes.com/mastering-jekyll/static-comments-improved/.
<!-- file: 'my-comments.html' -->
{% assign comments = site.data.comments[page.slug] | sort %}
{% for comment in comments %}
{% assign index = forloop.index %}
{% assign replying_to = comment[1].replying_to | to_integer %}
{% assign avatar = comment[1].avatar %}
{% assign email = comment[1].email %}
{% assign name = comment[1].name %}
{% assign url = comment[1].url %}
{% assign date = comment[1].date %}
{% assign message = comment[1].message %}
{% include comment index=index replying_to=replying_to avatar=avatar email=email name=name url=url date=date message=message %}
{% endfor %}
<!-- file: 'comment' -->
<article id="comment{% unless include.r %}{{ index | prepend: '-' }}{% else %}{{ include.index | prepend: '-' }}{% endunless %}" class="js-comment comment {% if include.name == site.author.name %}admin{% endif %} {% unless include.replying_to == 0 %}child{% endunless %}">
<div class="comment__avatar">
{% if include.avatar %}
<img src="{{ include.avatar }}" alt="{{ include.name | escape }}">
{% elsif include.email %}
<img src="https://www.gravatar.com/avatar/{{ include.email | md5 }}?d=mm&s=60" srcset="https://www.gravatar.com/avatar/{{ include.email | md5 }}?d=mm&s=120 2x" alt="{{ include.name | escape }}">
{% else %}
<img src="/assets/img/avatar-60.jpg" srcset="/assets/img/avatar-120.jpg 2x" alt="{{ include.name | escape }}">
{% endif %}
</div>
<div class="comment__inner">
<header>
<p>
<span class="comment__author-name">
{% unless include.url == blank %}
<a rel="external nofollow" href="{{ include.url }}">
{{ include.name }}
</a>
{% else %}
{{ include.name }}
{% endunless %}
</span>
wrote on
<span class="comment__timestamp">
{% if include.date %}
{% if include.index %}<a href="#comment{% if r %}{{ index | prepend: '-' }}{% else %}{{ include.index | prepend: '-' }}{% endif %}" title="link to this comment">{% endif %}
<time datetime="{{ include.date | date_to_xmlschema }}">{{ include.date | date: '%B %d, %Y' }}</time>
{% if include.index %}</a>{% endif %}
{% endif %}
</span>
</p>
</header>
<div class="comment__content">
{{ include.message | markdownify }}
</div>
</div>
</article>
As explained in https://mademistakes.com/mastering-jekyll/static-comments/, the software https://staticman.net/ allows feeding HTTP POST requests into Github and Gitlab pull requests, so that comments can be added automatically. Of course, the website requires a rebuild each time.
I had much trouble setting up Staticman. Eventually, I decided to use a Ruby CGI program that emails me the new comment as an attachment. I like Ruby very much. :wink: Once I figure out how to use the Gitlab API wrapper, I may also use pull requests instead of email attachments.
# file: 'index.rb'
#!/usr/bin/env ruby
Gem.paths = { 'GEM_PATH' => '/var/www/virtual/rriemann/gem' }
require 'cgi'
require 'yaml'
require 'date'
require 'mail'
cgi = CGI.new
# rudimentary validation
unless ENV['HTTP_ORIGIN'] == 'https://blog.riemann.cc' and
ENV['CONTENT_TYPE'] == 'application/x-www-form-urlencoded' and
ENV['REQUEST_METHOD'] == 'POST' and
cgi.params['email']&.first&.strip =~ URI::MailTo::EMAIL_REGEXP and
cgi.params['age']&.first == '' then # age is a bot honeypot
print cgi.http_header("status" => "FORBIDDEN")
print "<p>Error: 403 Forbidden</p>"
exit
end
output = Hash.new
date = DateTime.now
output['id'] = ENV['UNIQUE_ID']
output['date'] = date.iso8601
output['updated'] = date.iso8601
output['origin'] = cgi.params['origin']&.first
output['slug'] = cgi.params['slug']&.first&.gsub(/[^\w-]/, '') # some sanitizing
output['name'] = cgi.params['name']&.first
output['email'] = cgi.params['email']&.first&.downcase&.strip
output['url'] = cgi.params['url']&.first
output['message'] = cgi.params['message']&.join("\n").encode(universal_newline: true)
output['replying_to'] = cgi.params['replying_to']&.first
#Mail.defaults do
# delivery_method :sendmail
#end
Mail.defaults do
delivery_method :smtp, address: "smtp.domain", port: 587, user_name: "smtp_user", password: "smtp_password", enable_starttls_auto: true
end
mail = Mail.new do
from 'no-reply@domain' # 'rriemann'
to 'comments-recipient@domain' # ENV['SERVER_ADMIN']
reply_to output['email']
header['X-Blog-Comment'] = output['slug']
subject "New Comment from #{output['name']} for #{cgi.params['title']&.first}"
body <<~BODY
Hi blog author,
a new comment from #{output['name']} for https://blog.riemann.cc#{output['origin']}:
#{output['message']}
BODY
add_file(filename: "#{output['id']}-#{date.strftime('%s')}.yml", content: output.to_yaml)
end
mail.deliver
if mail.error_status then
print cgi.http_header("status" => "SERVER_ERROR")
cgi.print <<~RESPONSE
<p><b>Error: </b> #{mail.error_status}</p>
<p>An error occurred. Please try again later.</p>
<p><a href="javascript:history.back()">Go back</a></p>
RESPONSE
else
print cgi.http_header
cgi.print <<~RESPONSE
<p><b>Thank you</b> for your feedback! Your comment is published after review.</p>
<p><a href="#{output['origin']}">Back to the previous page</a></p>
RESPONSE
end
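As a quick sanity check of the two validation helpers used above — Ruby's built-in URI::MailTo::EMAIL_REGEXP for the email field and the gsub whitelist for the slug — here is a minimal sketch (the sample values are made up):

```ruby
require 'uri'

# same email check as in the guard clause: =~ returns 0 on a match, nil otherwise
puts('jane@example.org'.strip =~ URI::MailTo::EMAIL_REGEXP ? 'valid' : 'invalid')  # valid
puts('not-an-email'.strip =~ URI::MailTo::EMAIL_REGEXP ? 'valid' : 'invalid')      # invalid

# same slug sanitizing: everything except word characters and dashes is stripped
puts 'my-post/../../etc'.gsub(/[^\w-]/, '')  # my-postetc
```

Note that the regexp is anchored, so a path-traversal attempt in the slug collapses to harmless word characters before it is ever used in a filename.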
To make it work with Apache, you may need to add these lines to the Apache configuration (this could be a .htaccess file):
DirectoryIndex index.html index.rb
Options +ExecCGI
SetHandler cgi-script
AddHandler cgi-script .rb
Also in 2021, holiday plans fell victim to yet another Covid-19 wave. Eventually, I waited until mid December and booked, in the face of the emerging Covid-19 variant Omicron, the least adventurous holidays of my adulthood: a week-long all-inclusive cruise in the Orient, with every aspect handled with due care by a world-class, global-scale tour operator.
For the sake of completeness, let me quickly recap the outbound connection to the destination. I got a Rail&Fly ticket to a nearby airport in Germany. The Airbus A380 to Dubai, a huge airplane with two floors, was only about 30% occupied, maybe less. I use Atmosfair.de for CO2 compensation. Not sure who would need to pay for all those empty seats around me. The time difference between Germany and Dubai is 3 hours. After about 6 hours of flight and a short sleep of about 3 hours, I arrive at 6 o’clock in the morning. On top of the mandatory PCR tests to get on the flight, it seems that Dubai simply tests all tourists again upon arrival. No need to wait for the result – it is delivered to you about 3 hours later via SMS. I wonder what happens in the case of a positive test. I didn’t find out, as my test was fortunately negative. 😁
The tour operator takes care of the transfer to the cruise ship. They also handle all subsequent paperwork to enter the destination countries, here the United Arab Emirates (UAE) and Oman. For this purpose, they require all passengers to hand over their passports before getting on board of the ship. After some hesitation and some questioning, I give in and hand over my passport, too.
Once on board the ship, I drop off my luggage, eat something and head out to explore Dubai on my own.
I have made no effort to prepare my first day in Dubai. I only downloaded a city map. The cruise ship harbour of Dubai is a 30 min car ride from the lively city centre. So I join two other passengers, who happen to know Dubai very well, for a shared taxi ride to their destination, the Dubai Mall. I am amazed by all those skyscrapers on the way. Every street seems to be a highway with 3 lanes minimum.
Before the other passengers leave, they point me to the famous aquarium in the mall. The aquarium spans several floors and is indeed impressive. Otherwise, the mall has everything you would expect: fashion store H&M, bakery PAUL, L’Occitane en Provence, Birkenstock, Decathlon, the best of the best!
I consider buying a camera lens for landscape photography. I find an electronics store that has one for 1500€. Now, the deal I found earlier on the Internet to rent this lens for 100€ per week appears in a totally different, much better light. So I head towards the rental office in the south of Dubai. Long tubes hanging 6 metres over the highways bring you from the mall to the metro. The metro is in fact an aerial railway. And it is packed. The people seem to be from all over the world – later I learn that about 80% of Dubai’s population are immigrants.
The photography rental shop is on the 8th floor of a skyscraper. Fortunately, they still have the lens in stock. Unfortunately, I cannot get it, because as a deposit they ask for a) my passport, b) an amount blocked on a local credit card, or c) 1200€ in cash in local currency. The tour operator has my passport, I don’t have a local credit card, and I feel uneasy about withdrawing and handing over that much cash. I am frustrated. 😩 I envy all those people who have a spare passport thanks to their second nationality. People sometimes ask me why I also wanted to become a French citizen. Now I have one more argument. I decide to find another store that possibly rents lenses. I end up in the Dubai Marina Mall and eventually buy an entry-level lens for about 280€.1
On my way out, I discover a city e-bike self-rental station. I quickly sign up for a day plan (4€) and cycle to the Dubai Marina. I take a lot of photos.
Then, I head towards the artificial island The Palm Jumeirah. Unfortunately, I am on the wrong side of the highway, and after half an hour of searching for a path, I realise that there seems to be just no way to cross it with a bike. This happens two more times later that day.
Eventually, I give up the search and get on the Monorail panorama train to access The Palm. The third stop is integrated in, guess what, the Nakheel Mall. The mall features a hotel with a restaurant on top called The View. The ticket for the lift costs 40% more after 4:00 PM (sunset time). It is now 4:03 PM. I decide to keep it for next time (:wave: Vincent, Sara), hop on the Monorail, and get to the next stop: Atlantis Aquaventure. It turns out that the freely accessible part is mostly a mall. Again! They also have an aquapark, a hotel and dolphins.
I leave The Palm, find a city e-bike and head back to the harbour. After 90 minutes of cycling without a break, I check the map. This city is huge and I am nowhere close to the harbour. On the way to return the bike to a rental station, I discover the Dubai Canal with its newly constructed canalfront promenade and bridges. Though I am quite exhausted, I spend another hour taking photos. Eventually, I get a taxi that brings me back to the harbour. At midnight, the ship leaves Dubai for the next stop in Abu Dhabi.
For the curious: I got the Nikon AF-P DX Nikkor 10-22mm f/4.5-5.6G VR. Basically all subsequent photos are shot either with that lens on a Nikon D7100 body or with my OnePlus 7 Pro phone. ↩︎
Some companies offer their employees remote access to their corporate computer workspace via a remote desktop connection. The company Citrix provides software for such connections. To connect, employees need the software Citrix Workspace on their terminal devices. The Citrix download page also provides files for Linux, including openSUSE. Unfortunately, their version 1912 from 12 December 2019 did not work out of the box on my openSUSE Tumbleweed 64bit computer (nor did earlier versions I tried).
First, I tried to install the software package from the vendor.
zypper in ICAClient-suse-19.12.0.19-0.x86_64.rpm
/usr/lib64/ICAClient/wfica -icaroot /opt/Citrix/ICAClient configuration-file.ica
segmentation fault (core dumped)
Then, I tried to install somebody’s own software package, which ships the old library libcrypto.so.1.0.0. Note that this requires trust or a review of the package. Afterwards, the application did not segfault any longer. However, it produced an error due to a missing certificate from the GlobalSign Root CA. To fix it:

1. Export the GlobalSign Root CA certificate to /tmp. There is more than one. Look for the one in the tree GlobalSign nv-sa.
2. The Citrix certificate store is /usr/lib64/ICAClient/keystore/cacerts. Navigate in the terminal to this folder and copy the certificate file into it.
3. Then use chown root:root [file.crt] and chmod 444 [file.crt] to adapt file ownership and permissions.

Afterwards, Citrix Workspace worked for me. If I have too much time, I will try the vendor package again and see if I still get the segfault, considering that I now have openssl 1.0.0 installed.
There are two options to get data from your host OS to your Citrix client. One of them is folder mapping: mappings are configured in the tab File access of the tool /usr/lib64/ICAClient/util/configmgr -icaroot /usr/lib64/ICAClient.

At some point, the setup broke, due to an expiring SSL certificate I believe. After some time trying, I ended up with the following easy setup:
zypper rm ICAClient
# download the newer vendor package ICAClient-suse-21.1.0.14-0.x86_64.rpm, then:
zypper install [your folder]/ICAClient-suse-21.1.0.14-0.x86_64.rpm
mv /opt/Citrix/ICAClient/keystore/cacerts{,~}
ln -sv /etc/ssl/certs /opt/Citrix/ICAClient/keystore/cacerts
This did the trick!
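The last two commands rely on a Bash idiom: path{,~} brace-expands to the path itself plus a ~-suffixed backup name, so the bundled store is renamed away and replaced by a symlink to the system-wide certificates. A small demonstration in a scratch directory (the /tmp paths are made up; on the real system the target is the Citrix keystore):

```shell
mkdir -p /tmp/citrix-demo/keystore/cacerts /tmp/citrix-demo/system-certs
# rename the bundled store away as a backup (expands to: mv .../cacerts .../cacerts~)
mv /tmp/citrix-demo/keystore/cacerts{,~}
# replace it with a symlink to the system-wide certificate store
ln -sv /tmp/citrix-demo/system-certs /tmp/citrix-demo/keystore/cacerts
readlink /tmp/citrix-demo/keystore/cacerts
```

The upside of the symlink is that certificate renewals handled by the distribution automatically reach Citrix Workspace, too.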
If the mountain won’t come to the prophet, then the prophet must go to the mountain.1
If the mountain is very far away, the prophet is well advised to rather ride a horse, take an Uber or even a plane. I was lucky to get the chance to use the family car. Unfortunately, this car was not in Brussels, but in North Rhine-Westphalia in Germany. Though a car is much smaller and more mobile than a mountain, it was again me who had to go and get it.
I avoid cars whenever possible. I fly more often than I drive a car. So when I get to Germany for the hand-over of the car, we first drive together to the gas station, so I can verify that I can still manage to fill the tank, which I have not done for years.
The estimated total driving time to the Alps amounts to 8 hours. Considering that my longest drive so far was 4 hours from Berlin to the Baltic Sea, I decide to split the ride in two and stay overnight halfway. On my way to the Alps I make two stops in Germany for grocery shopping: Fritz-Kola, Bionade, different kinds of nuts, Klöße, Spätzle and a few other specialities that are either very expensive in France or not available at all. In the end, little space is left in the car.
I am on the Autobahn. I listen to the album A Night At The Opera by Queen. I listen to it twice. When I reach the last song, God Save The Queen, for the second time, there is still much Autobahn ahead of me. I have to concentrate at every motorway junction. In between, at 120 km/h, time seems to halt. Nothing happens. FLASH. Yikes! Apparently, I did not slow down fast enough and got caught by a speed camera. This would be the first administrative fine of my life.
For the next minutes, I observe the real-time fuel consumption per 100 km. I once heard that car engines are designed to be most fuel-efficient at an engine-specific speed. Without the non-conservative friction force, I would only need fuel for the initial acceleration and to climb the mountains. On level ground, my consumption should then be close to 0. Hence, the majority of the consumption is required to balance friction and keep the speed. During my studies, we learned that friction is a function of the speed and includes significant higher-order contributions in the speed v. That means, roughly,

F(v) = c₁v + c₂v² + …

so the force to balance, and with it the fuel burned per kilometre, grows faster than linearly with speed. Consequently, the fuel consumption per 100 km is optimal for a speed v → 0. :thinking: I keep on driving.
I listen to the German audio book Ich bin dann mal weg (English: I’m Off Then) on Spotify. Hape Kerkeling reads his account of his pilgrimage on the Camino de Santiago. I am also somehow on a trip to find rest. I wonder if I should walk the Camino, too. Eventually, I arrive at my planned stop, a youth hostel in the south of Germany.
The youth hostel is quite busy. It accommodates 200 high school students from an international school in Germany, a seminar group of frequency therapy enthusiasts led by their guru, who lives in Canada and tours Europe once a year, and a group of professional bicycle sport trainers. I get along best with the sports trainers and we end up discussing social justice. With the question of how the different salaries of public servants in hospitals and schools can be justified, I go to bed.
Next morning, I prepare myself for entering Switzerland. The use of their motorways requires a car vignette that can be bought at the border. I leave the hostel.
Proud of myself, I refill the car at the gas station all alone. Then, I pass through the city to get on the motorway to Switzerland. FLASH. :unamused: Most likely, I got caught by a speed camera again. The forest of street signs limiting the speed to 30 km/h in the evening has given way to ordinary 30 km/h signs, while the street with 3 lanes per direction resembles a motor road. I feel cheated.
Without any further issues, I reach the border between Germany and Switzerland and buy a car vignette. The sales person is very friendly and attaches the new vignette next to the collection of old ones, the oldest dating back to 2012. These vignettes bear witness to my father’s travels—like backpack travellers who put stickers of the countries they have visited on their backpacks.
My itinerary to the French Alps brings me to a few Swiss cities I do not know yet. I decide to leave the motorway and have a walk downtown in Basel. Not 5 minutes later, the police knock on my window while I wait at an intersection. We greet each other in a friendly manner. Then, they ask me to park close by for an inspection. This is the first time in my life that I am subject to police scrutiny as a car driver. They let me know that expired vignettes must be removed in Switzerland. The penalty is 265€. Fortunately, the police officers are in a good mood and propose that I remove the vignettes as soon as possible and do not pay the penalty. I am easily convinced of this plan and accept without further ado. Then, I discover how difficult it is to find a parking spot in Basel’s city centre. Eventually, I manage a short walk. All in all, I do not like Basel so much.
I continue my journey to the French Alps and leave the highway again for the city centre of Bern, the Swiss de facto capital. Given my experience in Basel, I give up on finding a free parking spot right away and take the central parking garage right next to the old town. The sun shines. Bern was founded in the Middle Ages and the centre still reflects this charm.
While strolling through the old city, I discover an Einstein museum installed in his former flat. In the museum, I learn that the fate of Einstein’s first child, a daughter, is unknown. I wonder if this daughter is still alive and knows of her famous parents. On the way out, I notice the guest book of the museum, in which people—many apparently greedy for significance after the recent impressions of Einstein’s life—let other people know of their visit. For many, this is just a guest book. For me, it is a filing system containing personal data subject to data protection laws. As I need to arrive in the Alps before sunset, I decide not to inform the only staff member present of this discovery and just leave.
Eventually, I leave Bern and head towards Martigny and then Chamonix. In between, the motorway becomes a thin black ribbon that winds up the mountains in sharp curves. The temperature drops below freezing. Fortunately, the road is dry and clean. I continue with caution—much to the regret of the presumably local drivers queuing with little distance behind me. I can only relax again once I find enough space on the side to let them pass.
On the last kilometres before my destination, I pick up a hitch-hiker who has just finished his shift with the snow rescue patrol. I get some tips on the best skiing spots before our paths part again. A quarter of an hour later, I arrive in my valley. Next time, I will think twice about whether I should not rather take the train.
According to Wiktionary, the prophet in the Turkish proverb retold by Francis Bacon is actually Muhammad. The form I know has been generalised to all prophets. They have a common problem here. Maybe they should have asked Atlas, who was used to carrying heavy stuff. :thinking: ↩︎
Save the following script as $HOME/bin/get-cookies.js with the executable bit set via chmod +x $HOME/bin/get-cookies.js. It relies on the library Puppeteer to control a browser instance of headless Chromium, which must be installed first via npm i puppeteer.

Then, you can call get-cookies.js https://google.com to get all cookies set upon request of the page given as a parameter (here: https://google.com). Note that Puppeteer creates its own Chromium user profile, which it cleans up on every run.
#!/usr/bin/env node
const puppeteer = require('puppeteer');

const url = process.argv[2];

(async () => {
  const browser = await puppeteer.launch({ headless: true, args: ['--disable-dev-shm-usage'] });
  try {
    const page = await browser.newPage();
    await page.goto(url, { waitUntil: 'networkidle2' });
    // query all cookies (across all domains) via the Chrome DevTools Protocol
    const client = await page.target().createCDPSession();
    let { cookies } = await client.send('Network.getAllCookies');
    cookies = cookies.map(cookie => {
      // convert the Unix timestamp to a human-readable date
      cookie.expiresUTC = new Date(cookie.expires * 1000);
      return cookie;
    });
    // keep only cookies that outlive the browser session
    const persistentCookies = cookies.filter(c => !c.session);
    console.log({
      persistentCookies: persistentCookies,
      persistentCookiesCount: persistentCookies.length,
    });
  } catch (error) {
    console.error(error);
  } finally {
    await browser.close();
  }
})();