Some unrelated thoughts - Enrico Tröger - https://www.pending.io/

Optimize LEDs on Freifunk routers
2019-01-26 - https://www.pending.io/blog/optimize-leds-on-freifunk-routers/

This is about the default configuration of the LEDs on router devices running the Freifunk / Gluon firmware (https://www.freifunk.net). For example, on my “TP-Link TL-WR841N/ND v9” the WAN and wifi LEDs blink constantly because there is constant traffic on the corresponding interfaces. As this is by design in the Freifunk network, I’m not interested in the blinking. It is just distracting.

After I noticed that it is very simple to control these LEDs with the Gluon firmware (actually thanks to the OpenWrt base), it was a bit of fun to write a script that lets the LEDs show the information I actually want to see:

  • LAN port LEDs are off
  • WAN LED shows whether fastd is running, by checking whether a Freifunk gateway is assigned; this is a very basic indicator of whether everything is working fine
  • QSS LED shows the health of the system (memory, disk space, CPU load)
  • Wifi LED shows whether any clients are connected

To reduce distraction, the WAN and QSS LEDs are off by default and are only set to blinking mode if something is wrong. The wifi LED glows constantly as long as there is at least one client connected to the Freifunk network.

The health checks for the QSS LED consist of:

  • CPU load below 0.8
  • At least three MB of memory available
  • NVRAM usage is not above 85%

Once any of these checks fails, the LED is set to blinking.
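
The script itself is linked below. As a rough illustration of the mechanism, the LEDs can be driven through the kernel's LED sysfs interface on OpenWrt/Gluon; this is only a minimal sketch, and the LED name used here is an example - look up the real names in /sys/class/leds/ on your device:

#!/bin/sh
# Example LED name - differs per device, see /sys/class/leds/
LED="tp-link:green:qss"

led_off() {
    echo none > "/sys/class/leds/$LED/trigger"
    echo 0 > "/sys/class/leds/$LED/brightness"
}

led_blink() {
    # the "timer" trigger makes the LED blink; delays are in milliseconds
    echo timer > "/sys/class/leds/$LED/trigger"
    echo 500 > "/sys/class/leds/$LED/delay_on"
    echo 500 > "/sys/class/leds/$LED/delay_off"
}

# blink if the 1-minute load average exceeds 0.8, otherwise stay dark
load=$(cut -d ' ' -f 1 /proc/loadavg)
if awk -v l="$load" 'BEGIN { exit !(l > 0.8) }'; then
    led_blink
else
    led_off
fi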

All in all, this results in a device with no blinking LEDs when everything is fine and, ideally, two LEDs glowing constantly (power and wifi).

The script can be found on github.com/eht16/freifunk-scripts/

To disable all controllable LEDs by default, use the following commands (need to be done only once):

uci set system.led_lan1.trigger='none'
uci set system.led_lan1.default=0
uci set system.led_lan2.trigger='none'
uci set system.led_lan2.default=0
uci set system.led_lan3.trigger='none'
uci set system.led_lan3.default=0
uci set system.led_lan4.trigger='none'
uci set system.led_lan4.default=0
uci set system.led_wlan.trigger='none'
uci set system.led_wlan.default=0
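
Note that uci set only stages the changes; to make them permanent they still need to be committed, and on OpenWrt-based firmware the LED configuration is re-applied by the led init script (a reboot works as well). Something along these lines:

uci commit system
/etc/init.d/led restart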

Finally, add a cronjob to run it periodically:

1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47,49,51,53,55,57,59 * * * * /bin/sh /etc/set_led_status.sh

Put this line into /usr/lib/micron.d/set_led_status (this is the only supported location for cronjobs on Gluon, and micrond does not support the */2 syntax).

Disclaimer: tested only on a TP-Link TL-WR841N/ND v9, but it should also work on similar devices.

Happy (no longer) blinking!

No more Compulsory Routers - Routerfreiheit bei NetCologne
2016-09-08 - https://www.pending.io/blog/no-more-compulsory-routers/

tl;dr: this post is about the free choice of routers on home broadband connections in Germany. Since August 1st, a new law entitles customers to receive the credentials for their DSL connection as well as for VoIP services, in order to use end devices (i.e. routers) of their choice. All of the following information is specific to one German local provider, NetCologne, and is therefore mainly relevant for their customers.

Step 1: Requesting the credentials

According to the new law, new customers are supposed to receive the credentials from their provider without having to ask. In my case it is an existing contract from 2010, so I explicitly requested the credentials from NetCologne. Two kinds of credentials are needed: one set for the DSL connection to the internet and one for VoIP telephony. Requesting the data worked easily via the contact form on netcologne.de. After one day I could view my VoIP credentials at https://einstellungen.netcologne.de/ and also change the password there.

However: for the DSL credentials, the old credentials were replaced and the new ones were sent by postal mail. I received the confirmation about this on a Wednesday, the credentials were changed on the following Thursday morning, but the letter only arrived on the Tuesday after that. According to the postmark, NetCologne sent the letter on that very Thursday, but the postal service provider “postcon” apparently needed several days to cover the few hundred metres between the provider and my address. During that time my connection was offline, since the old NetConnect box from NetCologne knew the new credentials just as little as I did. If you have some patience, you can also call the hotline and ask for the new credentials there.

Apart from that, you need to know that NetCologne runs the DSL connection and VoIP telephony in two separate VLANs: VLAN 10 for DSL and VLAN 20 for VoIP. These details can be found at: https://www.netcologne.de/selbsteinrichten/vdsl.

Step 2: Testing with a FRITZ!Box 7490

Setting up the FRITZ!Box 7490 is actually quite simple. Being a technology enthusiast, I enabled the “Advanced view” in the FRITZ!Box right at the start (top right). Some of the settings mentioned below may be missing in the normal view.

DSL

First, in the setup wizard, select “NetCologne / NetAachen” as the internet provider and “NetCologne / NetAachen VDSL-Anschluss” as the connection type.

Afterwards, under “Internet / Zugangsdaten”, enter the DSL credentials obtained above. Internet access should then already work. The VLAN specifics mentioned earlier are already preconfigured through the choice of internet provider.

[Screenshot: FRITZ!Box DSL settings]

Voice-over-IP / Telephony

For the telephony, first configure the line settings. Here I mostly kept the defaults and figured out the rest by trial and error. The screenshots show all relevant settings. Important here is VLAN ID 20, in case it is not already preselected.

[Screenshots: FRITZ!Box SIP line settings]

After that, a phone number needs to be set up. The settings here are all for “analogue telephony”, even though with VoIP that technically no longer exists. What I mean is: my line does not support ISDN, so the screenshots only show the settings for non-ISDN lines and therefore for a single number only.

As the “telephony provider” I selected “other provider”, since there was no matching entry in the list. Later, “sip.netcologne.de” is automatically shown there as the name.

[Screenshots: FRITZ!Box SIP number settings]

With that, nothing should stand in the way of making phone calls. If problems occur anywhere, the event log may contain useful hints.

Step 3: Final setup with a VDSL modem and a Raspberry Pi

The setup with a FRITZ!Box described above was only for testing, in particular regarding VoIP telephony.

My actual target setup is to get rid of NetCologne's NetConnect box and to use a Raspberry Pi as the router, which establishes the DSL connection to the internet itself (via PPPoE).

Either way, this requires a DSL modem, in my case a VDSL2 modem. New devices are hard to find on the market, but specifically in NetCologne's service area older devices from Zyxel still show up now and then. Concretely, you need the P-800 series model.

I was lucky and recently found such a device on a classified ads market and bought it for little money (five euros).

[Photos: VDSL2 modem Zyxel P-800 series]

My variant is actually intended for ISDN lines (recognisable by the “-I3” suffix in the model number on the back), but it works fine on my line without ISDN as well. Such devices can occasionally be found on eBay or in local classified ads. Other VDSL2 modems may also be suitable; the only important thing is that they support the appropriate VDSL profiles.

The Zyxel modem has one RJ45 port for the DSL line, i.e. towards the TAE socket in the wall, and one RJ45 port for connecting a router. As mentioned, in my case the router is a Raspberry Pi.

Since the Pi has only one Ethernet port, I take advantage of the fact that with PPPoE the PPP packets simply flow over regular Ethernet, so I connected the DSL modem and the Pi to each other through a switch. Both the PPP traffic and the normal Ethernet traffic to the LAN thus pass through the Pi's single Ethernet port. For my 25 Mbit line, the 100 Mbit capacity of the Pi's port as well as its processing power are easily sufficient. With a 50 or 100 Mbit line from NetCologne it would of course get tight, and you could effectively never use the full bandwidth.

Note that at NetCologne the PPPoE packets coming from the DSL modem travel in a tagged VLAN with ID 10, i.e. the normal, usually untagged Ethernet traffic does not see these packets at all. For the Pi to be able to receive them, a separate interface with that VLAN ID has to be brought up. On Raspbian this is easily done with the following addition to the file /etc/network/interfaces:

auto eth0.10
iface eth0.10 inet static
  address 10.0.2.1
  netmask 255.255.255.0
  vlan-raw-device eth0

This creates a new interface eth0.10 which lives in the correct VLAN. The DSL connection is then established over this interface by pppd. For the pppd configuration you can mostly use the standard values for DSL providers; the important settings are:

plugin rp-pppoe.so eth0.10
user "nc-username@netcologne.de"
+ipv6 ipv6cp-use-ipaddr

The first line specifies, at its end, the interface to use, here our previously created eth0.10. The last line enables IPv6 support (NetCologne offers native IPv6 on request).
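
For reference, a minimal sketch of a complete peers file; the file name and the additional options are assumptions based on common pppd defaults, not taken from the NetCologne documentation:

# /etc/ppp/peers/netcologne (hypothetical file name)
plugin rp-pppoe.so eth0.10
user "nc-username@netcologne.de"
+ipv6 ipv6cp-use-ipaddr
noauth          # do not require the peer to authenticate itself
defaultroute    # install the default route via the PPP link
persist         # redial automatically if the connection drops
maxfail 0       # never stop retrying
usepeerdns      # accept the DNS servers offered by the provider

Once the credentials below are in place, the connection can be brought up with "pon netcologne" on Raspbian/Debian, or directly via "pppd call netcologne".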

Then an entry for the credentials is needed in the file /etc/ppp/pap-secrets:

"nc-username@netcologne.de" * "password"

I will not go into further detail here regarding routing and the firewall setup with pppd and iptables; there are plenty of howtos and examples on the net for that.
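
For completeness, the core of such a setup usually boils down to enabling forwarding and masquerading the LAN behind the PPP interface. A minimal sketch, assuming the DSL link comes up as ppp0 and the LAN is on eth0:

# enable IPv4 forwarding
sysctl -w net.ipv4.ip_forward=1

# masquerade everything leaving through the DSL link
iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE

# clamp the TCP MSS to the PPPoE path MTU to avoid problems with large packets
iptables -t mangle -A FORWARD -o ppp0 -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu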

Since I do not make phone calls via VoIP, I did not configure anything for it on the Pi.

Conclusion - or why all this?

Fortunately, NetCologne provides all necessary credentials and configuration hints (keyword: VLANs) on request. With that you really do get something like “router freedom”.

Now one can rightly ask: why all this? Why invest so much effort and time just to be able to keep surfing as before? Here are three arguments:

  • only free software in use (except for the DSL modem, but that contains no logic)
  • no more remote control by the provider possible
  • full control over my internet connection (regarding routing, firewalling, IPv6, …)

By choosing my own router I am no longer tied to the piece of hardware provided by the provider. The supplied hardware is often not bad as such, but its feature set is severely limited by the provider's own firmware. In addition, the remote maintenance by the provider (nicely called “auto-provisioning” at NetCologne) no longer works. This is a mechanism to configure the supplied devices remotely and thus take work off the customer's hands. The standard is called TR-069. But it also allows the provider to read data from the remotely managed device and to change or reset its settings (which has happened to me). Even worse, the provider thereby already has a foot in my private network, because the device is inevitably connected directly to it.

I now feel a bit freer.

Many thanks to the FSFE and the other activists who campaigned for the new law with great patience and commitment.

Report generator for Logstash parse failures
2016-04-17 - https://www.pending.io/blog/report-generator-for-logstash-parse-failures/

For quite some time now I have been using Logstash (actually the whole ELK stack) for collecting, enriching and storing log events from various servers and applications.

While Logstash is great for this job, sometimes it cannot parse certain log events because they come in an unknown format or my parsing rules don’t match well enough.

I used to search manually for such parse failures in the stored events from time to time. While this basically worked, it required me to remember “ah, maybe I should have a look for parse failures”. That didn’t happen very often, so sometimes parse failures lingered for a long time even though they were easy to fix.

Finally, I wrote a simple script that generates a report of the parse failures that occurred in the last seven days. The script runs as a cronjob every seven days and sends the report to me via email.
Yay, now it’s really easy: I just need to read that tiny mail with the parse failures and decide whether I want to fix them or not.

In case anyone is interested, the script can be downloaded here: report_logstash_parse_failures.py
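
Conceptually, the script boils down to a query like the following against Elasticsearch. This is only a sketch using curl, assuming Elasticsearch listens on localhost:9200, the default logstash-* indices are used, and the events carry the same failure tags as in the saved search below:

curl -s -XPOST 'http://localhost:9200/logstash-*/_count' -d '{
  "query": {
    "filtered": {
      "filter": {
        "bool": {
          "must": [
            { "terms": { "tags": ["_grokparsefailure", "_jsonparsefailure", "_log_level_normalization_failed"] } },
            { "range": { "@timestamp": { "gte": "now-7d" } } }
          ]
        }
      }
    }
  }
}'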

Additionally, for my convenience, I set up a Saved Search in Kibana:

[
  {
    "_id": "Parse-Failures",
    "_type": "search",
    "_source": {
      "title": "Parse Failures",
      "description": "",
      "hits": 0,
      "columns": [
        "tags",
        "logsource",
        "program",
        "message"
      ],
      "sort": [
        "@timestamp",
        "desc"
      ],
      "version": 1,
      "kibanaSavedObjectMeta": {
        "searchSourceJSON": "{\"index\":\"logstash-*\",\"filter\":[{\"$state\":{\"store\":\"appState\"},\"meta\":{\"alias\":\"Parse Failures\",\"disabled\":false,\"index\":\"logstash-*\",\"key\":\"query\",\"negate\":false,\"value\":\"{\\\"filtered\\\":{\\\"filter\\\":{\\\"terms\\\":{\\\"tags\\\":[\\\"_grokparsefailure\\\",\\\"_jsonparsefailure\\\",\\\"_log_level_normalization_failed\\\"]}}}}\"},\"query\":{\"filtered\":{\"filter\":{\"terms\":{\"tags\":[\"_grokparsefailure\",\"_jsonparsefailure\",\"_log_level_normalization_failed\"]}}}}}],\"highlight\":{\"pre_tags\":[\"@kibana-highlighted-field@\"],\"post_tags\":[\"@/kibana-highlighted-field@\"],\"fields\":{\"*\":{}},\"require_field_match\":false,\"fragment_size\":2147483647},\"query\":{\"query_string\":{\"analyze_wildcard\":true,\"query\":\"*\"}}}"
      }
    }
  }
]

The Saved Search can be imported directly into Kibana after downloading it from here: kibana_saved_search_parse_failures.json

Happy Logging!

Easily check SSL certificates on websites
2016-01-23 - https://www.pending.io/blog/easily-check-ssl-certificates-on-websites/

Here is a simple script to quickly get an overview of the SSL certificates used on various websites, e.g. to check expiration or issuer. For me, this helped a lot while migrating website certificates to Let’s Encrypt.

The script is loosely based on “Zext ssl cert.sh” (see https://www.zabbix.org/wiki/Docs/howto/ssl_certificate_check) but slightly rewritten for bulk checking and for listing more details of the certificates. The openssl command is required for the script to work properly.

After you have downloaded the script, simply edit the domains at the top of the file to match your websites. DOMAINS is an array to be filled with strings in the format:

domain port [protocol]

Domain and port should be self-explanatory; for simple website SSL checks use port 443 (i.e. HTTPS). You can also check SMTP or IMAP servers: use smtp or imap as the protocol and 25 or 143 as the port, respectively. If you specify the protocol option, openssl will use STARTTLS for the connection attempt.
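
To illustrate what such a check typically boils down to, here are roughly the equivalent manual openssl s_client commands (example hosts only; this is not the script itself):

# plain HTTPS check: print issuer and expiration date of the certificate
echo | openssl s_client -servername www.example.org -connect www.example.org:443 2>/dev/null \
  | openssl x509 -noout -issuer -enddate

# SMTP server with STARTTLS
echo | openssl s_client -starttls smtp -connect mail.example.org:25 2>/dev/null \
  | openssl x509 -noout -issuer -enddate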

[Screenshot: ssl_website_check.sh in action]

Get the script on https://github.com/eht16/ssl_website_check.sh.

Windows binary for libgit2 0.22.2 (UPDATE: 0.23.2 available)
2015-06-05 - https://www.pending.io/blog/windows-binary-for-libgit2-0222/

I recently needed a Windows binary of libgit2 to build some Geany code against the library. Since I could not find any reliable and trustworthy source for Windows builds of libgit2, I had to compile it myself. In order to save others from this, I want to publish the result.

You can download the ZIP archive containing the Windows DLL library as well as necessary C header files from:

https://download.geany.org/contrib/libgit2-0.22.2.zip

The binary is digitally signed with my cacert.org code signing certificate. Fingerprint for validation: 93 09 b4 6c e5 da 01 8d d6 29 fe 6a b6 44 7a c0 f0 d1 28 81

Please note that the binary is provided as is, without warranty of any kind. Use it at your own risk.

UPDATE 2015-09-27: I uploaded a build of libgit2 0.23.2 to https://download.geany.org/contrib/libgit2-0.23.2.zip

"Memory Slots" (or: how to save memory in Python)https://www.pending.io/blog/python-slots/Enrico Trögerhttps://www.pending.io/2014-08-19T22:44:42+00:002014-08-19T22:44:42+00:00I recently sort of locked my workstation while I tried to query a SOAP webservice at work about for 24000 entities. While my request was as simple as “list all entities you know” to the server, its response was heavy, obviously.

The plain XML in the SOAP response of the service was just about 30 megabytes, not that much. However, while Suds (website; the SOAP client we are using) parses the received XML, it generates various objects for the elements and attributes found in the response. This is basically pretty cool because you get a clean object back from Suds representing the structure of the service response. The only downside is that probably nobody tested Suds with this much data: to process the 30 megabytes of XML data, it takes up to 1.7 gigabytes of RAM (RSS). This is quite a lot, even for 24000 entities.

So first I debugged my client code around Suds, but it turned out that Suds itself hogs the memory. Thanks to objgraph (and its awesome show_most_common_types() function) I found that Suds creates about 700000 suds.sax.element.Element objects and about 120000 suds.sax.attribute.Attribute objects. I assume references to these objects are kept somewhere, which prevents the garbage collector from freeing them. This is just a guess though; since I had only limited time to debug the issue, I could not find the real cause. However, thanks to a colleague and his great hint, I could reduce the (RSS) memory usage of Suds to about 740 megabytes by using __slots__ for the Element and Attribute classes, so more than half of the memory was saved.

This is cool, isn’t it?

If you don’t know what __slots__ does in Python, the documentation explains it in detail. In short: using __slots__ prevents Python from creating a dictionary for each instance to hold the instance attributes and instead reserves just the memory needed per instance to hold the values of those attributes.

However, as often with cool things, __slots__ also has downsides, as explained in the documentation and in a Stack Overflow discussion. One of the most obvious disadvantages is that you can’t easily pickle objects with __slots__, though it is probably still possible by using custom __getstate__ and __setstate__ methods. Another one is that you can’t add new attributes to instances of classes using __slots__.

At least in my case the given limitations were acceptable, and so, as a temporary solution until the real memory hog in Suds is found and fixed, __slots__ works quite well.

(I will send patches to Suds once I know for sure that it doesn’t break other things and when I find the time to do it.)

Update: I sent my changes to upstream together with some other improvements: https://fedorahosted.org/suds/ticket/445

Add CAcert root certificate to Firefox OS
2014-08-19 - https://www.pending.io/blog/add-cacert-root-certificate-to-firefox-os/

While I have been quite happy with my new Firefox OS phone so far, the biggest blocker for me was that, as in all Mozilla products, the CAcert root certificate is not included, so I could not access sites using certificates assured by CAcert.

Recent versions of Gaia allow accepting untrusted site certificates in the browser, but if you want to use an IMAP or CalDAV server with a CAcert-assured certificate, you are still stuck.

Based on a post by Carmen Jiménez Cabezas, I wrote a script that reads the certificate database from the phone (via adb), adds some certificates and then writes the database back to the phone. After this procedure, the CAcert root certificate (or any other) is known to the phone and can be used. This enabled me to access my own IMAP server via SSL from the Email app and also to use a self-hosted groupware as CalDAV server for the Calendar app via HTTPS.

Save the following script somewhere on your system (Download the script):

#!/bin/bash

CERT_DIR=certs
ROOT_DIR_DB=/data/b2g/mozilla
CERT=cert9.db
KEY=key4.db
PKCS11=pkcs11.txt
DB_DIR=`adb shell "ls -d ${ROOT_DIR_DB}/*.default 2>/dev/null" | sed "s/default.*$/default/g"`

if [ "${DB_DIR}" = "" ]; then
  echo "Profile directory does not exist. Please start the b2g process at
least once before running this script."
  exit 1
fi

function log
{
	GREEN="\E[32m"
	RESET="\033[00;00m"
	echo -e "${GREEN}$1${RESET}"
}

# cleanup
rm -f ./$CERT
rm -f ./$KEY
rm -f ./$PKCS11

# pull files from phone
log "getting ${CERT}"
adb pull ${DB_DIR}/${CERT} .
log "getting ${KEY}"
adb pull ${DB_DIR}/${KEY} .
log "getting ${PKCS11}"
adb pull ${DB_DIR}/${PKCS11} .

# clear password and add certificates
log "set password (hit enter twice to set an empty password)"
certutil -d 'sql:.' -N

log "adding certificates"
for i in ${CERT_DIR}/*
do
  log "Adding certificate $i"
  certutil -d 'sql:.' -A -n "`basename $i`" -t "C,C,TC" -i $i
done

# push files to phone
log "stopping b2g"
adb shell stop b2g

log "copying ${CERT}"
adb push ./${CERT} ${DB_DIR}/${CERT}
log "copying ${KEY}"
adb push ./${KEY} ${DB_DIR}/${KEY}
log "copying ${PKCS11}"
adb push ./${PKCS11} ${DB_DIR}/${PKCS11}

log "starting b2g"
adb shell start b2g

log "Finished."

Once done, create a sub directory named “certs” next to the script and place the certificates you want to add to the phone’s database there. For CAcert, this would be the class 3 root certificate in PEM format as found on the CAcert website.

Then simply run the script.

Note: before running the script you need to enable ‘Remote debugging’ in the Developer settings menu and connect your phone to your PC using a USB cable (or, more generally: get adb working).
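
As a quick usage sketch (the script file name and the certificate download URL here are assumptions; get the current class 3 root certificate from the CAcert website):

# verify adb sees the phone (Remote debugging enabled, USB connected)
adb devices

# place the certificates to import into the "certs" sub directory
mkdir -p certs
wget -O certs/cacert-class3.crt http://www.cacert.org/certs/class3.crt

# run the script (assuming it was saved as add_certs.sh)
bash add_certs.sh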

Update: mcnesium created a GIT repository to further maintain this script at https://github.com/mcnesium/b2g-certificates. Please get the latest version from there.

Build GIT version of Xfce4
2014-08-19 - https://www.pending.io/blog/build-xfce4-from-git/

This is a little build script for Xfce4 to fetch and compile the sources from GIT (master). Short instructions:

You should start with an empty directory where you put this script. Edit the script and modify the list of packages or modules you want to install.

Then run the script with:

./xfce4-build.sh init

this fetches the sources from the Xfce GIT server and:

./xfce4-build.sh build

configures, builds and installs the sources. For more information read the script (it’s really simple). The script can be downloaded at https://files.uvena.de/xfce4-build.sh.

df -h + mount = di
2014-08-19 - https://www.pending.io/blog/df-and-mount-is-di/

There is a command line tool called di which basically combines the information of df and mount.

I like this tool very much because it quickly prints lots of useful information about the filesystems existing on a system and about their type, usage and mountpoint.

From the author’s website:

‘di’ is a disk information utility, displaying everything (and more) that your ‘df’ command does.

This pretty much sums it up: di without any parameters already outputs human-readable filesystem sizes (like df -h), properly formatted. Additionally, by default it filters out pseudo mountpoints, like those with zero total space or a ‘none’ filesystem type.

It’s a great tool to quickly check the disk status of a system.

Easy to install on Debian/Ubuntu systems with:

apt-get install di

or on CentOS/RHEL systems:

yum install di
[Screenshot: di in action]

Website: www.gentoo.com/di/

Faking a browser with Mechanize
2014-08-19 - https://www.pending.io/blog/faking-a-browser-with-mechanize/

Some time ago I tried to get the current balance of my prepaid mobile phone plan in order to implement some kind of notification when it drops below 10 Euro. Unfortunately my ISP doesn’t offer any API to get this information automatically, so my first attempt was to send a POST request using cURL to log in to my ISP’s website and then parse the HTML to extract the current balance.

While this is easy on many sites, on this particular site it was a bit tricky because the site is built with JSF (JavaServer Faces) and the form field names, e.g. for the login form, seem to be auto-generated. So before sending the POST request to log in, you first need to guess the form field names (though this works quite easily using regular expressions on the raw HTML). But for some reason I couldn’t log in to the site successfully using cURL (and its cookie storage features), even though the requests cURL sent looked identical to what I saw in Firebug. Anyway, I decided to look for some other solution before wasting even more time fiddling with cURL.

I remembered that some time ago I had read an article about website scraping using a Perl module called Mechanize. And luckily, Mechanize is also available as a Python package. So after installing it, I just played with the examples on the Mechanize website and quickly got a result: mechanize.Browser is an awesomely simple interface to start requests and access the returned response. It is especially easy to iterate over all forms of the response using mechanize.Browser’s forms attribute, which is a simple generator. Then just pick the form you want to use, in my case the login form, fill the form fields with your data and call the submit() method of the Browser object. Mechanize then sends the POST request and receives the response, and if it is a redirect to another page, it also follows the redirect and presents you with the HTML of the new page. Not to mention it also handles cookies automatically without any need for further configuration.

The rest was rather simple: passing the HTML retrieved with Mechanize to BeautifulSoup and using find() to locate the HTML element with the data I was looking for.

To sum it up, if you want to do a little more than trivial GET requests in Python on arbitrary websites, have a look at Mechanize. It makes it very easy to perform browser tasks from within a script :). Yay.

Geany 1.22 is out!
2014-08-19 - https://www.pending.io/blog/geany-122-is-out/

If you didn’t notice it already, Geany 1.22 is out.

As usual, Geany got new features, more and updated translations and of course several fixes.

Read the release announcement on Geany’s website.

To get more detailed notes about changes, have a look at the Release Notes.

And in case someone is wondering about the version number, it was increased from 0.21 to 1.22 on purpose. Interested folks will probably find the relevant discussion(s) in the archives of the devel mailing list. For the impatient ones: people like to see the stability of a program reflected in the version number, and 1.x seems more stable than 0.x :).

Geany 1.23.1 has been released
2014-08-19 - https://www.pending.io/blog/geany-1.23.1-has-been-released/

A little heads up: Geany 1.23.1 has been released.

This is a bugfix release to address two regressions in the previous 1.23 release.

On Windows, after 1.23 it was no longer possible to open files from the command line, because we changed the working directory on Windows to Geany’s installation directory. This should have solved issues with plugins loading resources using relative paths. Unfortunately, it also broke opening files from the command line when relative paths were used.

The other fixed issue was that the colouring of file tabs stopped working: e.g. when a file is changed, the tab is normally coloured red, and when a file is opened in read-only mode, the tab goes green.

With Geany 1.23.1 these bugs have been fixed.

So, go and get your copy from www.geany.org!

Github ReadMe Preview
2014-08-19 - https://www.pending.io/blog/github-readme-preview/

Have you ever wondered how the ReadMe you are just writing will show up on the GitHub website?

I did, a couple of times actually.

And it is just annoying to make a change, commit, reload the website, check, make another change, commit, …

Right now, it was annoying enough for me to check the web because I was pretty sure I was not the only one thinking so. And I was right:

github-preview.herokuapp.com

This is a cool dynamic preview of your content, making it damn easy to write a fancy ReadMe which remains easily readable as plain text as well as a rendered page on GitHub.

Thank you kei-s (Kei Shiratsuchi).

gpg-update-key
2014-08-19 - https://www.pending.io/blog/gpg-update-key/

If you are using GnuPG, you may receive new signatures or make other changes to your GPG key and want to upload it to keyservers and/or your webserver to make it easier for other people to find it.

Since this is a tedious task, I wrote a little script “gpg-update-key” which does the job for me:

#!/bin/sh

KEY="CC03633F700990F2"
REMOTE_DIR="myserver.org:/var/www"

# upload the key to some key servers
gpg --keyserver subkeys.pgp.net --send-key ${KEY}
gpg --keyserver pool.sks-keyservers.net --send-key ${KEY}
gpg --keyserver pgp.uni-mainz.de --send-key ${KEY}
gpg --keyserver pgp.surfnet.nl --send-key ${KEY}

# export the key
gpg --armor --export ${KEY} > /tmp/pub.asc
gpg --export ${KEY} > /tmp/pub.key

scp /tmp/pub.asc /tmp/pub.key ${REMOTE_DIR}

Note that you should change the KEY and REMOTE_DIR variables, otherwise the script won’t help you that much :). Also, the listed key servers are just my personal favourites; adjust them to your needs.

Monitoring UBC failcounts with Zabbix: the efficient way
2014-08-19 - https://www.pending.io/blog/zabbix-monitoring-openvz/

A couple of times I searched for an efficient way to monitor the UBC failcounts of OpenVZ containers on a hardware node with Zabbix. Most solutions I found on the net used one item per UBC and per container and watched for changes in the failcounts. But I never liked having that many items; each item needs to read /proc/user_beancounters, process it and send the value back to the Zabbix server.

In the last months I used a rather sub-optimal monitoring setup: I had items for each UBC and for each container on the OpenVZ hardware node. The advantage was that once a UBC failcount increased, a trigger created an event telling me exactly which UBC of which container had increased, and I could start investigating. But these were many items with many checks, and I felt increasingly uncomfortable wasting resources on monitoring rather than on real processing.

Recently I upgraded my Zabbix server from 1.8 to 2.0 and used the occasion to renew my whole Zabbix setup, including new, more modular templates and reviewed items and triggers, and finally came up with something I quite like.
The last missing bit was the UBC topic.

And then I finally had the decisive idea: three years ago I wrote a very simple Python script (VzUbcMon) to periodically read /proc/user_beancounters and print a summary of all UBC failcounts that increased since the last check, including the UBC name and the container ID. At that time I didn’t yet use Zabbix for server monitoring, so this was my little poor man’s UBC monitoring :).
Now I can re-use this script with the Zabbix agent on the hardware node to check for increased UBC failcounts and let a trigger create an event when that happens. In README.zabbix I described in detail how it works and how to install it; a Zabbix template is also included.

The idea is to have one item on the Zabbix side which calls VzUbcMon (plus a little helper script to read /proc/user_beancounters as the unprivileged zabbix user). VzUbcMon evaluates the UBC failcounts and returns a summary of the changes to the Zabbix server, which then fires an event in case anything changed, so the admin gets informed. One item for all UBC failcounts, and since VzUbcMon returns text to the Zabbix server, detailed information about the increased failcount, the UBC and the container is included in the event.
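
As an illustration of how the “read /proc/user_beancounters as unprivileged zabbix user” part can be wired up - the names and paths below are assumptions, the authoritative setup is described in README.zabbix:

#!/bin/sh
# /usr/local/bin/read_user_beancounters.sh - hypothetical helper for the zabbix user
exec sudo /bin/cat /proc/user_beancounters

# /etc/sudoers.d/zabbix - allow exactly this one command without a password
zabbix ALL=(root) NOPASSWD: /bin/cat /proc/user_beancounters

The Zabbix item then runs VzUbcMon, which consumes this output and reports only the failcount changes back to the server.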
